Peter Singer’s ‘A Darwinian Left’

Peter Singer, ‘A Darwinian Left: Politics, Evolution and Cooperation’, London: Weidenfeld & Nicolson 1999.

Social Darwinism is dead. 

The idea that charity, welfare and medical treatment ought to be withheld from the poor, the destitute and the seriously ill, so that they perish in accordance with the process of natural selection and thereby facilitate further evolutionary progress, survives only as a straw man: sometimes attributed to conservatives by leftists in order to discredit them, and sometimes invoked by creationists as a form of guilt by association in order to discredit the theory of evolution.[1] 

However, despite the attachment of many American conservatives to creationism, there remains a perception that evolutionary psychology is somehow right-wing.

Thus, if humans are fundamentally selfish, as Richard Dawkins is taken, not entirely accurately, to have argued, then this surely confirms the underlying assumptions of classical economics. 

Of course, as Dawkins also emphasizes, we have evolved through kin selection to be altruistic towards our close biological relatives. However, this arguably only reinforces conservatives’ faith in the family, and their concerns regarding the effects of family breakdown and substitute parents.

Finally, research on sex differences surely suggests that at least some traditional gender roles – e.g. women’s role in caring for young children, and men’s role in fighting wars – do indeed have a biological basis, and also that patriarchy and the gender pay gap may be an inevitable result of innate psychological differences between the sexes.

Political scientist Larry Arnhart thus champions what he calls a new ‘Darwinian Conservatism’, which harnesses the findings of evolutionary psychology in support of family values and the free market. 

Against this, however, moral philosopher and famed animal liberation activist Peter Singer, in ‘A Darwinian Left’, seeks to reclaim Darwin, and evolutionary psychology, for the Left. His attempt is not entirely successful. 

The Naturalistic Fallacy 

At least since David Hume, it has been an article of faith among most philosophers that one cannot derive values from facts. To do otherwise is to commit what some philosophers refer to as the naturalistic fallacy.

Edward O. Wilson, in Sociobiology: The New Synthesis, was widely accused of committing the naturalistic fallacy by attempting to derive moral values from facts. However, those evolutionary psychologists who followed in his stead have generally taken a very different line. 

Indeed, recognition that the naturalistic fallacy is indeed a fallacy has proven very useful to evolutionary psychologists, since it has enabled them to investigate the possible evolutionary functions of such morally questionable (or indeed downright morally reprehensible) behaviours as infidelity, rape, warfare and child abuse, while at the same time denying that they are thereby providing a justification for the behaviours in question.[2] 

Singer, like most evolutionary psychologists, also reiterates the sacrosanct inviolability of the fact-value dichotomy.

Thus, in attempting to construct his ‘Darwinian Left’, Singer does not attempt to use Darwinism in order to provide a justification or ultimate rationale for leftist egalitarianism. Rather, he simply takes it for granted that equality is a good thing and worth striving for, and indeed implicitly assumes that his readers will agree. 

His aim, then, is not to argue that socialism is demanded by a Darwinian worldview, but rather simply that it is compatible with such a worldview and not contradicted by it. 

Thus, he takes leftist ideals as his starting-point, and attempts to argue only that accepting the Darwinian worldview should not cause one to abandon these ideals as either undesirable or unachievable. 

But if we accept that the naturalistic fallacy is indeed a fallacy, this raises the question: if moral values cannot be derived from scientific facts, whence can they be derived?  

Can they only be derived from other moral values? If so, how are our ultimate moral values, from which all other moral values are derived, themselves derived? 

Singer does not address this. However, precisely by failing to address it, he seems to implicitly assume that our ultimate moral values must simply be taken on faith. 

However, Singer also emphasizes that rejecting the naturalistic fallacy does not mean that the facts of human nature are irrelevant to politics. 

On the contrary, while Darwinism may not prescribe any particular political goals as desirable, it may nevertheless help us determine how to achieve those political goals that we have already decided upon. Thus, Singer writes: 

“An understanding of human nature in the light of evolutionary theory can help us to identify the means by which we may achieve some of our social and political goals… as well as assessing the possible costs and benefits of doing so” (p15). 

Thus, in a memorable metaphor, Singer observes: 

“Wood carvers presented with a piece of timber and a request to make wooden bowls from it do not simply begin carving according to a design drawn up before they have seen the wood. Instead they will examine the material with which they are to work and modify their design in order to suit its grain… Those seeking to reshape human society must understand the tendencies inherent within human beings, and modify their abstract ideals in order to suit them” (p40). 

Abandoning Utopia? 

In addition to suggesting how our ultimate political objectives might best be achieved, an evolutionary perspective also suggests that some political goals might simply be unattainable, at least in the absence of a wholesale eugenic reengineering of human nature itself. 

In watering down the utopian aspirations of previous generations of leftists, Singer seems to implicitly concede as much. 

Contrary to the crudest misunderstanding of selfish gene theory, humans are not entirely selfish. However, we have evolved to put our own interests, and those of our kin, above those of other humans. 

For this reason, communism is unattainable because: 

  1. People strive to promote themselves and their kin above others; 
  2. Only a coercive state apparatus can prevent them from doing so; 
  3. The individuals in control of this coercive apparatus themselves seek to promote their own interests and those of their kin, and corruptly use this apparatus to do so. 

Thus, Singer laments: 

“What egalitarian revolution has not been betrayed by its leaders?” (p39). 

Or, as H.L. Mencken put it:

“[The] one undoubted effect [of political revolutions] is simply to throw out one gang of thieves and put in another.” 

In addition, human selfishness suggests that, if complete egalitarianism were ever successfully achieved and enforced, it would likely be economically inefficient – because it would remove the incentive of self-advancement that lies behind the production of goods and services, not to mention works of art and scientific advances. 

Thus, as Adam Smith famously observed: 

“It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest.” 

And, again, the only other means of ensuring goods and services are produced besides economic self-interest is state coercion, which, given human nature, will always be exercised both corruptly and inefficiently. 

What’s Left? 

Singer’s pamphlet has been the subject of much controversy, with most of the criticism coming, not from conservatives, whom one might imagine to be Singer’s natural adversaries, but rather from other self-described leftists. 

These leftist critics have included both writers opposed to evolutionary psychology (e.g. David Stack in The First Darwinian Left) and writers claiming to be broadly receptive to the new paradigm but who are clearly uncomfortable with some of its implications (e.g. Marek Kohn in As We Know It: Coming to Terms with an Evolved Mind). 

In apparently rejecting the utopian transformation of society envisioned by Marx and other radical socialists, Singer has been accused by other leftists of conceding rather too much to the critics of leftism. In so doing, they claim, he has in effect abandoned leftism in all but name and become an apologist for, and sell-out to, capitalism. 

Whether Singer can indeed be said to have abandoned the Left depends, of course, on precisely how we define ‘the Left’, a rather more problematic matter than it is usually regarded as being.[3]

For his part, Singer certainly defines the Left in unusually broad terms.

For Singer, leftism need not necessarily entail taking the means of production into common ownership, nor even the redistribution of wealth. Rather, at its core, being a leftist is simply about being: 

“On the side of the weak, not the powerful; of the oppressed, not the oppressor; of the ridden, not the rider” (p8). 

However, this definition is obviously problematic. After all, few conservatives would admit to being on the side of the oppressor. 

On the contrary, conservatives and libertarians usually reject the dichotomous subdivision of society into ‘oppressed’ and ‘oppressor’ groups. They argue that the real world is more complex than this simplistic division of the world into black and white, good and evil, suggests. 

Moreover, they argue that mutually beneficial exchange and cooperation, rather than exploitation, is the essence of capitalism. 

They also usually claim that their policies benefit society as a whole, including both the poor and rich, rather than favouring one class over another.[4]

Indeed, conservatives claim that socialist reforms often actually inadvertently hurt precisely those whom they attempt to help. Thus, for example, welfare benefits are said to encourage welfare dependency, while introducing, or raising the level of, a minimum wage is said to lead to increases in unemployment. 

Singer declares that a Darwinian left would “promote structures that foster cooperation rather than competition” (p61).

Yet many conservatives would share Singer’s aspiration to create a more altruistic culture. 

Indeed, this aspiration seems more compatible with the libertarian notion of voluntary charitable donations replacing taxation than with the coercively-extracted taxes invariably favoured by the Left. 

Nepotism and Equality of Opportunity 

Yet selfish gene theory suggests humans are not entirely self-interested. Rather, kin selection makes us care also about our biological relatives.
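
The standard formalization of this idea, not spelled out by Singer himself but useful for following the argument, is Hamilton’s rule: a gene for altruism is favoured by selection whenever

$$ rb > c $$

where $r$ is the coefficient of relatedness between actor and recipient, $b$ is the reproductive benefit conferred on the recipient, and $c$ is the reproductive cost to the actor. The closer the kin, the larger $r$, and hence the greater the self-sacrifice that natural selection will sustain.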

But this is no boon for egalitarians. 

Rather, the fact that our selfishness is tempered by a healthy dose of nepotism likely makes equality of opportunity as unattainable as equality of outcome – because individuals will inevitably seek to aid the social, educational and economic advancement of their kin, and those individuals better placed to do so will enjoy greater success in so doing. 

For example, parents with greater resources will be able to send their offspring to exclusive fee-paying schools or obtain private tuition for them; parents with better connections may be able to help their offspring obtain better jobs; while parents with greater intellectual ability may be able to better help their offspring with their homework. 

However, since many conservatives and libertarians are as committed to equality of opportunity as socialists are to equality of outcome, this conclusion may be as unwelcome on the right as on the left. 

Indeed, the theory of kin selection has even been invoked to suggest that ethnocentrism is innate and ethnic conflict is inevitable in multi-ethnic societies, a conclusion unwelcome across the mainstream political spectrum in the West today, where political parties of all persuasions are seemingly equally committed to building multi-ethnic societies. 

Unfortunately, Singer does not address any of these issues. 

Animal Liberation After Darwin 

Singer is most famous for his advocacy on behalf of what he calls animal liberation.

In ‘A Darwinian Left’, he argues that the Darwinian worldview reinforces the case for animal liberation by confirming the evolutionary continuity between humans and other animals. 

This suggests that there are unlikely to be fundamental differences in kind as between humans and other animals (e.g. in the capacity to feel pain) sufficient to justify the differences in treatment currently accorded humans and animals. 

It contrasts sharply with the account of creation in the Bible and with the traditional Christian notion of humans as superior to other animals, occupying an intermediate position between beasts and angels. 

Thus, Singer concludes: 

“By knocking out the idea that we are a separate creation from the animals, Darwinian thinking provided the basis for a revolution in our attitudes to non-human animals” (p17). 

This makes our consumption of animals as food, our killing of them for sport, our enslavement of them as draft animals, or even pets, and our imprisonment of them in zoos and laboratories all ethically suspect, since these are not things generally permitted in respect of humans. 

Yet Singer fails to recognise that human-animal continuity cuts two ways. 

Thus, anti-vivisectionists argue that animal testing is not only immoral, but also ineffective, because drugs and other treatments often have very different effects on humans than they do on the animals used in drug testing. 

Our evolutionary continuity with non-human species makes this argument less plausible. 

Moreover, if humans are subject to the same principles of natural selection as other species, this suggests, not the elevation of animals to the status of humans, but rather the relegation of humans to just another species of animal. 

In short, we do not occupy a position midway between beasts and angels; we are beasts through and through, and any attempt to believe otherwise is mere delusion. 

This is, of course, the theme of John Gray’s powerful polemic Straw Dogs: Thoughts on Humans and Other Animals (which I have reviewed here, here, here and here). 

Finally, acceptance of the existence of human nature surely entails recognition of carnivory as a part of that nature. 

Of course, we must remember not to commit the naturalistic or appeal to nature fallacy.  

Thus, just because meat-eating may be natural for humans, in the sense that meat was a part of our ancestors’ diet in the EEA, this does not necessarily mean that it is morally right or even morally justifiable. 

However, the fact that meat is indeed a natural part of the human diet does suggest that, in health terms, vegetarianism is likely to be nutritionally sub-optimal. 

Thus, the naturalistic fallacy or appeal to nature fallacy is not always entirely fallacious, at least when it comes to human health. What is natural for humans is indeed what we are biologically adapted to and what our body is therefore best designed to deal with.[5]

Therefore, vegetarianism is almost certainly to some degree sub-optimal in nutritional terms. 

Moreover, given that Singer is an opponent of the view that there is a valid moral distinction between acts and omissions, then we must ask ourselves: If he believes it is wrong for us to eat animals, does he also believe we should take positive measures to prevent lions from eating gazelles? 

Economics 

Bemoaning the emphasis of neoliberals on purely economic outcomes, Singer protests:

“From an evolutionary perspective, we cannot identify wealth with self-interest… Properly understood self-interest is broader than economic self-interest” (p42). 

Singer is right. The ultimate currency of natural selection is not wealth, but rather reproductive success – and, in evolutionarily novel environments, wealth may not even correlate with reproductive success (Vining 1986). 

Thus, as discussed by Laura Betzig in Despotism and Differential Reproduction, a key difference between Marxism and sociobiology is the relative emphasis on production versus reproduction.

Whereas Marxists see societal conflict and exploitation as reflecting competition over control of the means of production, for Darwinians, all societal conflict ultimately concerns control over, not the means of production, but rather what we might term the means of reproduction – in other words, women, their wombs and vaginas.

Thus, sociologist-turned-sociobiologist Pierre van den Berghe observed: 

“The ultimate measure of human success is not production but reproduction. Economic productivity and profit are means to reproductive ends, not ends in themselves” (The Ethnic Phenomenon: p165). 

Production is ultimately, in Darwinian terms, merely a means by which to gain the resources necessary to permit successful reproduction. The latter is the ultimate purpose of life. 

Thus, for all his ostensible radicalism, Karl Marx, in his emphasis on economics (‘production’) at the expense of sex (‘reproduction’), was just another Victorian sexual prude.

Competition or Cooperation: A False Dichotomy? 

In Chapter Four, entitled “Competition or Cooperation?”, Singer argues that modern western societies, and many modern economists and evolutionary theorists, put too great an emphasis on competition at the expense of cooperation. 

Singer accepts that both competition and cooperation are natural and innate facets of human nature, and that all societies involve a balance of both. However, different societies differ in their relative emphasis on competition or cooperation, and it is therefore possible, he argues, to create a society that places a greater emphasis on the latter at the expense of the former. 

Thus, Singer declares that a Darwinian left would: 

“Promote structures that foster cooperation rather than competition” (p61). 

However, Singer is short on practical suggestions as to how a culture of altruism is to be fostered.[6]

Changing the values of a culture is not easy. This is especially so for a liberal democratic (as opposed to a despotic, totalitarian) government, let alone for a solitary Australian moral philosopher – and Singer’s condemnation of “the nightmares of Stalinist Russia” suggests that he would not countenance the sort of totalitarian interference with human freedom to which the Left has so often resorted in the past, and continues to resort to in the present (even in the West), with little ultimate success. 

But, more fundamentally, Singer is wrong to see competition as necessarily in conflict with cooperation. 

On the contrary, perhaps the most remarkable acts of cooperation, altruism and self-sacrifice are those often witnessed in wartime (e.g. kamikaze pilots, suicide bombers and soldiers who throw themselves on grenades). Yet war represents perhaps the most extreme form of competition known to man. 

In short, soldiers risk and sacrifice their lives, not only to save the lives of others, but also to take the lives of others. 

Likewise, trade is a form of cooperation, yet exchange is as fundamental to capitalism as is competition. Indeed, I suspect most economists would argue that exchange is even more fundamental to capitalism than competition. 

Thus, far from disparaging cooperation, neoliberal economists see voluntary exchange as central to prosperity. 

Ironically, then, popular science writer Matt Ridley also, like Singer, focuses on humans’ innate capacity for cooperation to justify political conclusions in his book, The Origins of Virtue.

But, for Ridley, our capacity for cooperation provides a rationale, not for socialism, but rather for free markets – because humans, as natural traders, produce efficient systems of exchange which government intervention almost always only distorts. 

However, whereas economic trade is motivated by self-interested calculation, Singer seems to envisage a form of reciprocity mediated by emotions such as compassion, gratitude and guilt.
 
However, sociobiologist Robert Trivers argues in his paper that introduced the concept of reciprocal altruism to evolutionary biology that these emotions themselves evolved through the rational calculation of natural selection (Trivers 1971). 

Therefore, while open to manipulation, especially in evolutionarily novel environments, they are necessarily limited in scope. 

Group Differences 

Singer’s envisaged ‘Darwinian Left’ would, he declares, unlike the contemporary left, abandon: 

“[The assumption] that all inequalities are due to discrimination, prejudice, oppression or social conditioning. Some will be, but this cannot be assumed in every case” (p61). 

Instead, Singer admits that at least some disparities in achievement may reflect innate differences between individuals and groups in abilities, temperament and preferences. 

This is probably Singer’s most controversial suggestion, at least for modern leftists, since it contravenes the contemporary dogma of political correctness.

Singer is, however, undoubtedly right.  

Moreover, his recognition that some differences in achievement as between groups reflect, not discrimination, oppression or even the lingering effect of past discrimination or oppression, but rather innate differences between groups in psychological traits, including intelligence, is by no means incompatible with socialism, or leftism, as socialism and leftism were originally conceived. 

Thus, it is worth pointing out that, while contemporary so-called ‘cultural Marxists’ may decry the notion of innate differences in ability and temperament as between different races, sexes, individuals and social classes as anathema, the same was not true of Marx himself.

On the contrary, in famously advocating ‘from each according to his ability, to each according to his need’, Marx implicitly recognized that people differed in “ability” – differences which, given the equalization of social conditions envisaged under communism, he presumably conceived of as innate in origin.[7]

As Hans Eysenck observes:

“Stalin banned mental testing in 1935 on the grounds that it was ‘bourgeois’—at the same time as Hitler banned it as ‘Jewish’. But Stalin’s anti-genetic stance, and his support for the environmentalist charlatan Lysenko, did not derive from any Marxist or Leninist doctrine… One need only recall The Communist Manifesto: ‘From each according to his ability, to each according to his need’. This clearly expresses the belief that different people will have different abilities, even in the communist heaven where all cultural, educational and other inequalities have been eradicated” (Intelligence: The Battle for the Mind: p85).

Thus, Steven Pinker, in The Blank Slate, points to the theoretical possibility of what he calls a “Hereditarian Left”, arguing for a Rawlsian redistribution of resources to the, if you like, innately ‘cognitively disadvantaged’.[8] 

With regard to group differences, Singer avoids the incendiary topic of race differences in intelligence, a question evidently too contentious for him to touch. 

Instead, he illustrates the possibility that not “all inequalities are due to discrimination, prejudice, oppression or social conditioning” with the marginally less incendiary case of sex differences.  

Here, it is sex differences, not in intelligence, but rather in temperament, preferences and personality that are probably more important, and likely explain occupational segregation and the so-called gender pay gap

Thus, Singer writes: 

“If achieving high status increases access to women, then we can expect men to have a stronger drive for status than women” (p18). 

This alone, he implies, may explain both the universality of male rule and the so-called gender pay gap.

However, Singer neglects to mention another biological factor that is also probably important in explaining the gender pay gap – namely, women’s attachment to infant offspring. This factor, also innate and biological in origin, also likely impedes career advancement among women. 

Thus, it bears emphasizing that never-married women with no children actually earn more, on average, than do never-married men without children of the same age in both Britain and America.[9]

For a more detailed treatment of the biological factors underlying the gender pay gap, see Biology at Work: Rethinking Sexual Equality by professor of law, Kingsley Browne, which I have reviewed here and here.[10] See also my review of Warren Farrell’s Why Men Earn More, which can be found here, here and here.

Dysgenic Fertility Patterns? 

It is often claimed by conservatives that the welfare system only encourages the unemployed to have more children so as to receive more benefits and thereby promotes dysgenic fertility patterns. In response, Singer retorts: 

“Even if there were a genetic component to something as nebulous as unemployment, to say that these genes are ‘deleterious’ would involve value judgements that go way beyond what the science alone can tell us” (p15). 

Singer is, of course, right that an extra-scientific value judgement is required in order to label certain character traits, and the genes that contribute to them, as deleterious or undesirable. 

Indeed, if single mothers on welfare do indeed raise more surviving children than do those who are not reliant on state benefits, then this indicates that they have higher reproductive success, and hence, in the strict biological sense, greater fitness than their more financially independent, but less fecund, reproductive competitors. 

Therefore, far from being ‘deleterious’ in the biological sense, genes contributing to such behaviour are actually under positive selection, at least under current environmental conditions.  

However, even if such genes are not ‘deleterious’ in the strict biological sense, this does not necessarily mean that they are desirable in the moral sense, or in the sense of contributing to successful civilizations and societal advancement. To suggest otherwise would, of course, involve a version of the very appeal to nature fallacy or naturalistic fallacy that Singer is elsewhere emphatic in rejecting. 

Thus, although regarding certain character traits, and the genes that contribute to them, as undesirable does indeed involve an extra-scientific “value judgement”, this is not to say that the “value judgement” in question is necessarily mistaken or unwarranted. On the contrary, it means only that such a value judgement is, by its nature, a matter of morality, not of science. 

Thus, although science may be silent on the issue, virtually everyone would agree that some traits (e.g. generosity, health, happiness, conscientiousness) are more desirable than others (e.g. selfishness, laziness, depression, illness). Likewise, it is self-evident that the long-term unemployed are a net burden on society, and that a successful society cannot be formed of people unable or unwilling to work. 

As we have seen, Singer also questions whether there can be “a genetic component to something as nebulous as unemployment”. 

However, in the strict biological sense, unemployment probably is indeed partly heritable. So, incidentally, are road traffic accidents and our political opinions – because each reflects personality traits that are themselves heritable (e.g. risk-takers and people with poor physical coordination and slow reactions probably have more traffic accidents; and perhaps more compassionate people are more likely to favour leftist politics). 

Thus, while it may be unhelpful and misleading to talk of unemployment as itself heritable, nevertheless traits of the sort that likely contribute to unemployment (e.g. intelligence, conscientiousness, mental and physical illness) are indeed heritable.
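
For clarity, ‘heritable’ here carries its standard quantitative-genetic meaning, which Singer does not spell out: the proportion of the variation in a trait within a population that is attributable to genetic variation,

$$ H^2 = \frac{V_G}{V_P} $$

where $V_G$ is the genetic variance and $V_P$ the total phenotypic variance (the narrow-sense version restricts $V_G$ to additive genetic variance). A trait like unemployment can therefore be partly ‘heritable’ in this statistical sense without there being any ‘gene for unemployment’, simply because variation in the underlying traits that predict it is itself partly genetic.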

Actually, however, the question of heritability, in the strict biological sense, is irrelevant. 

Thus, even if the reason that children from deprived backgrounds have worse life outcomes is entirely mediated by environmental factors (e.g. economic or cultural deprivation, or the bad parenting practices of low-SES parents), the case for restricting the reproductive rights of those people who are statistically prone to raise dysfunctional offspring remains intact. 

After all, children usually get both their genes and their parenting from the same set of parents – and this could be changed only by a massive, costly, and decidedly illiberal, policy of forcibly removing offspring from their parents.[11]

Therefore, so long as an association between parentage and social outcomes is established, the question of whether this association is biologically or environmentally mediated is simply beside the point, and the case for restricting the reproductive rights of certain groups remains intact.  

Of course, it is doubtful that welfare-dependent women do indeed financially benefit from giving birth to additional offspring. 

It is true that they may receive more money in state benefits if they have more dependent offspring to support and provide for. However, this may well be more than offset by the additional cost of supporting and providing for the dependent offspring in question, leaving the mother with less to spend on herself. 

However, even if the additional monies paid to mothers with dependent children are not sufficient to provide a positive financial incentive to bear additional children, they at least reduce the financial disincentives otherwise associated with rearing additional offspring.  

Therefore, given that, from an evolutionary perspective, women probably have an innate desire to bear offspring, it follows that a rational fitness-maximizer would respond to the changed incentives represented by the welfare system by increasing her reproductive rate.[12]

A New Socialist Eugenics

If we accept Singer’s contention that an understanding of human nature can help show us how to achieve, but not choose, our ultimate political objectives, then eugenics could be used to help us achieve the goal of producing better people and hence, ultimately, better societies. 

Indeed, given that Singer seemingly concedes that human nature is presently incompatible with communist utopia, perhaps then the only way to revive the socialist dream of equality is to eugenically re-engineer human nature itself so as to make it more compatible. 

Thus, it is perhaps no accident that, before World War Two, eugenics was a cause typically associated, not with conservatives, nor even, as today, with fascism, but rather with the political left

Thus, early twentieth century socialist-eugenicists like H.G. Wells, Sidney Webb, Margaret Sanger and George Bernard Shaw may then have tentatively grasped what eludes contemporary leftists, Singer very much included – namely that re-engineering society necessarily requires as a prerequisite re-engineering Man himself.[13]

_________________________

Endnotes

[1] Indeed, the view that the poor and ill ought to be left to perish so as to further the evolutionary process seems to have been a marginal one even in its ostensible late nineteenth century heyday (see Bannister, Social Darwinism: Science and Myth in Anglo-American Social Thought). The idea always seems, therefore, to have been largely, if not wholly, a straw man.

[2] In this, the evolutionary psychologists are surely right. Thus, no one accuses biomedical researchers of somehow ‘justifying disease’ when they investigate how infectious diseases, in an effort to maximize their own reproductive success, spread from host to host. Likewise, nobody suggests that dying of a treatable illness is desirable, even though this may have been the ‘natural’ outcome before such ‘unnatural’ interventions as vaccination and antibiotics were introduced.

[3] The conventional notion that we can usefully conceptualize the political spectrum on a single left-right axis is obviously preposterous. For one thing, there is, at the very least, a quite separate liberal-authoritarian dimension. However, even restricting our definition of the left-right axis to purely economic matters, it remains multi-factorial. For example, Hayek, in The Road to Serfdom, classifies fascism as a left-wing ideology, because it involved big government and a planned economy. However, most leftists would reject this definition, since the planned economy in question was designed, not to reduce economic inequalities, but rather, in the case of Nazi Germany at least, to fund and sustain an expanded military force, a war economy, external military conquest and grandiose vanity public works and architectural projects. The term ‘right-wing’ is even more problematic, encompassing everyone from fascists to libertarians to religious fundamentalists. Yet a Christian fundamentalist who wants to outlaw pornography and abortion has little in common with either a libertarian who wants to decriminalize prostitution and child pornography, or a eugenicist who wants to make abortions, for certain classes of person, compulsory. Yet all three are classed together as ‘right-wing’, even though they share no more in common with one another than any does with a raving unreconstructed Marxist.

[4] Thus, the British Conservative Party traditionally styled themselves one-nation conservatives, who looked to the interests of the nation as a whole, rather than what they criticized as the divisive ‘sectionalism’ of the trade union and labour movements, which favoured certain economic classes, and workers in certain industries, over others, just as contemporary leftists privilege the interests of certain ethnic, religious and culturally-defined groups (e.g. blacks, Muslims, feminists) over others (i.e. white males).

[5] Of course, some ‘unnatural’ interventions have positive health benefits. Obvious examples are modern medical treatments such as penicillin, chemotherapy and vaccination. However, these are the exceptions. They have been carefully selected and developed by scientists to have this positive effect, have gone through rigorous testing to ensure that their effects are indeed beneficial, and are generally beneficial only to people with certain diagnosed conditions. In contrast, recreational drug use almost invariably has a negative effect on health.

[6] It is certainly possible for more altruistic cultures to exist. For example, the famous (and hugely wasteful) potlatch feasts of some Native American cultures exemplify a form of competitive altruism, analogous to conspicuous consumption, and may be explicable as a form of status display in accordance with Zahavi’s handicap principle. However, recognizing that such cultures exist does not easily translate into working out how to create or foster such cultures, let alone transform existing cultures in this direction.

[7] Indeed, by modern politically-correct standards, Marx was a rampant racist, not to mention an anti-Semite.

[8] The term Rawlsian is a reference to political theorist John Rawls’s version of social contract theory, whereby he poses the hypothetical question as to what arrangement of political, social and economic affairs humans would favour if placed in what he called the original position, where they would be unaware of, not only their own race, sex and position in the socio-economic hierarchy, but also, most important for our purposes, their own level of innate ability. This Rawls referred to as the ‘veil of ignorance’. 

[9] As Warren Farrell documents in his excellent Why Men Earn More (which I have reviewed here, here and here), in the USA, women who have never married and have no children actually earn more than men who have never married and have no children, and have done so since at least the 1950s (Why Men Earn More: pxxi). More precisely, according to Farrell, never-married men without children earn, on average, only about 85% of what their childless never-married female counterparts earn (Ibid: pxxiii). The situation is similar in the UK. Thus, economist JR Shackleton reports:

“Women in the middle age groups who remain single earn more than middle-aged single males” (Should We Mind the Gap? p30).

The reasons unmarried, childless women earn more than unmarried childless men are multifarious and include:

  1. Married women can afford to work less because they appropriate a portion of their husband’s income in addition to their own; 
  2. Married men and men with children are thus obliged to earn even more so as to financially support, not only themselves, but also their wife, plus any offspring;
  3. Women prefer to marry richer men and hence poorer men are more likely to remain single;
  4. Childcare duties undertaken by women interfere with their earning capacity.

[10] Incidentally, Browne has also published a more succinct summary of the biological factors underlying the pay-gap in the same ‘Darwinism Today’ series as Singer’s ‘A Darwinian Left’, namely Divided Labors: An Evolutionary View of Women at Work. However, much though I admire Browne’s work, this represents a rather superficial popularization of his research on the topic, and I would recommend instead Browne’s longer Biology at Work: Rethinking Sexual Equality (reviewed here) for a more comprehensive treatment of the same, and related, topics. 

[11] A precedent for just such a programme, enacted in the name of socialism, albeit imposed consensually, was the communal rearing practices in Israeli Kibbutzim, since largely abandoned. Another suggestion along rather different lines comes from Adolf Hitler, who, believing that nature trumped nurture, is quoted as proposing: 

“The State must also teach that it is the manifestation of a really noble nature and that it is a humanitarian act worthy of all admiration if an innocent sufferer from hereditary disease refrains from having a child of his own but bestows his love and affection on some unknown child whose state of health is a guarantee that it will become a robust member of a powerful community” (quoted in: Parfrey 1987: p162). 

[12] Actually, it is not entirely clear that women do have a natural desire to bear offspring. Other species probably do not have any such natural desire. Since they almost certainly are not aware of the connection between sex and childbirth, such a desire would serve no adaptive purpose and hence would never evolve. All an organism requires is a desire for sex, combined perhaps with a tendency to care for offspring after they are born. (Indeed, in principle, a female does not even require a desire for sex, only a willingness to submit to the desire of a male for sex.) As Tooby and Cosmides emphasize: 

“Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers.” 

There is no requirement for a desire for offspring as such. Nevertheless, anecdotal evidence of so-called broodiness, and the fact that most women do indeed desire children, despite the costs associated with raising children, suggests that, in human females, there is indeed some innate desire for offspring. Curiously, however, the topic of broodiness is not one that has attracted much attention among evolutionists.

[13] However, there is a problem with any such case for a ‘Brave New Socialist Eugenics’. Before the eugenic programme is complete, the individuals controlling eugenic programmes (be they governments or corporations) would still possess a more traditional human nature, and may therefore have less than altruistic motivations themselves. This seems to suggest then that, as philosopher John Gray concludes in Straw Dogs: Thoughts on Humans and Other Animals (which I have reviewed here):  

“[If] human nature [is] scientifically remodelled… it will be done haphazardly, as an upshot of the struggles in the murky world where big business, organized crime and the hidden parts of government vie for control” (Straw Dogs: p6).

References  

Parfrey (1987) ‘Eugenics: The Orphaned Science’, in Parfrey (ed.) Apocalypse Culture (New York: Amok Press). 

Trivers (1971) ‘The evolution of reciprocal altruism’, Quarterly Review of Biology 46(1): 35-57. 

Vining (1986) ‘Social versus reproductive success: The central theoretical problem of human sociobiology’, Behavioral and Brain Sciences 9(1): 167-187.

The Decline of the Klan and of White (and Protestant) Identity in America

Wyn Craig Wade, ‘The Fiery Cross: The Ku Klux Klan in America’, New York: Simon and Schuster, 1987.

Given the infamy of the organization, it is surprising that there are so few books that cover the entire history of the Ku Klux Klan in America. 

Most seem to deal with only one period (usually, but not always, either the Reconstruction-era Klan or the Second Klan that reached its apotheosis during the twenties), a single locality, or indeed only a single time and place.

On reflection, however, this is not really surprising. 

For, though we habitually refer to the Ku Klux Klan, or the Klan (emphasis on ‘the’), as if it were a single organization that has been in continuous existence since its first formation in the Reconstruction-era, there have in fact been many different groups calling themselves ‘the Ku Klux Klan’, or some slight variant upon this name (e.g. ‘Knights of the Ku Klux Klan’, ‘United Klans of America’), that have emerged and disappeared over the century and a half since the name was first coined in the aftermath of the American Civil War.

Most of these groups had small memberships, recruited and were active in only a single locality and soon disappeared altogether. Yet even those incarnations of the Klan name that had at least some claim to a national, or at least a pan-Southern, membership invariably lacked effective centralized control over local klaverns.

Thus, Wade observes: 

“After the Klan had spread outwards from Tennessee, there wasn’t the slightest chance of central control over it – a problem that would characterize the Klan throughout its long career” (p58). 

It is perhaps for this reason that most historians authoring books about the Klan have focussed on Klan activity in only a single time-frame or geographic locality.

Indeed, it is notable that, besides Wade’s ‘The Fiery Cross’, the only other work of which I am aware that even purports to cover the entirety of the Klan’s history (apart from the recently published White Robes and Burning Crosses, which I have not yet read) is David Chalmers’ Hooded Americanism: The History of the Ku Klux Klan.

Yet even this latter work (‘Hooded Americanism’), though it purports in its blurb to be “The only work that treats Ku Kluxism for the entire period of it’s [sic] existence”, actually devotes only a single, short, cursory chapter to the Reconstruction-era Klan, when the group was first founded, arguably at its strongest, and certainly at its most violent.

Moreover, ‘Hooded Americanism’ is composed of separate chapters recounting the history of the Klan in different states in each time period, such that the book lacks an overall narrative structure and is difficult to read. 

In contrast, for those with an interest in the topic, Wade’s ‘The Fiery Cross’ is both readable and informative, and somehow manages to weave the story of the various Klan groups in different parts of the country into a single overall narrative. 

A College Fraternity Turned Terrorist? 

If, today, the stereotypical Klansman is an illiterate redneck, it might come as some surprise that the group’s name actually bears an impressively classical etymology. It derives from the ancient Greek kuklos, meaning ‘circle’. To this was added ‘Klan’, both for alliterative purposes, and in reference to the ostensible Scottish ancestry of the group’s founders.[1]

This classical etymology reflected the social standing and educational background of its founders, who, far from being illiterate rednecks, were, Wade reports, “well educated for their day” (p32). 

Thus, he reports, of the six founder members, two would go on to become lawyers, another would become editor of a local newspaper, and yet another a state legislator (p32). 

Neither, seemingly, was the group formed with any terroristic, or even any discernible political, aspirations in mind. Instead, one of these six founder members, the, in retrospect, perhaps ironically-named James Crow, claimed their intention was initially: 

“Purely social and for our amusement” (p34). 

Since, as a good white Southerner and Confederate veteran, Crow likely approved the politics with which the Klan later became associated, he had no obvious incentive to downplay a political motive. Certainly, Wade takes him at his word. 

Thus, if the various Klan titles – Grand GoblinImperial Wizard etc. – sound more like what one might expect in, say, a college fraternity than a serious political or terrorist group, then this perhaps reflects the fact that the organization was indeed conceived with just such adolescent tomfoolery in mind. 

Indeed, although it is not mentioned by Wade, it has even been suggested that a then-defunct nineteenth-century fraternity, Kuklos Adelphon, may have provided a partial model for the group. Thus, Wade writes: 

“It has been said that, if Pulaski had had an Elks Club, the Klan would never have been born” (p33). 

White Sheets and Black Victims 

However, from early on, the group’s practical jokes increasingly focussed on the newly-emancipated, and already much resented, black population of Giles County.

Yet, even here, intentions were initially jocular, if mean-spirited. Thus, the white sheets famously worn by Klansmen were, Wade informs us, originally conceived in imitation of ghosts, the wearers ostensibly posing as: 

“The ghosts of the Confederate dead, who had risen from their graves to wreak vengeance on [the blacks]” (p35). 

This accorded with the then prevalent stereotype of black people as being highly superstitious. 

However, it is likely that few black victims were taken in. Instead, the very real fear that the Klan came to inspire in its predominantly black victims reflected instead the also very real acts of terror and cruelty with which the group became increasingly associated. 

The sheets also functioned, of course, as a crude disguise.  

However, it was only when the Klan name was revived in the early twentieth century, and through the imagination of its reviver, William Joseph Simmons, that this crude disguise was transformed into a mysterious ceremonial regalia, the sale of which was jealously guarded, and an important source of revenue for the Klan leadership. 

Indeed, in the Reconstruction-era Klan, the sheets, though a crude disguise, would not even qualify as a uniform, as there was no standardization whatsoever. Instead:  

“Sheets, pillowcases, handkerchiefs, blankets, sacks… paper masks, blackened faces, and undershirts and drawers were all employed” (p60).  

Thus, Wade reports the irony whereby one: 

“Black female victim of the Klan was able to recognise one of her assailants because he wore a dress she herself had sewed for his wife” (p60). 

Chivalry – or Reproductive Competition? 

Representing perhaps the original white knights, Klansmen claimed to be acting in order to protect the ostensible virtue and honour of white women. 

However, at least in Wade’s telling, the rapes of white women by black males, upon which white Southern propaganda so pruriently dwelt (as prominently featured, for example, in the movie, Birth of a Nation, and the book upon which the movie was based, The Clansman: A Historical Romance of the Ku Klux Klan) were actually very rare. 

Indeed, he even quotes a former Confederate General, and alleged Klansman, seemingly admitting as much when, on being asked whether such assaults were common, he acknowledged: 

“Oh no sir, but one case of rape by a negro upon a white woman was enough to alarm the whole people of the state” (p20). 

Certainly, the Emmett Till case demonstrates that even quite innocuous acts could indeed invite grossly disproportionate responses in the Southern culture of honour, at least where the perceived malfeasors were black. Thus, Wade claims: 

“Sometimes a black smile or the tipping of a hat were sufficient grounds for prosecution for rape. As one southern judge put it, ‘I see a chicken cock drop his wings and take after a hen; my experience and observation assure me that his purpose is sexual intercourse, no other evidence is needed’” (p20). 

Likewise, such infamous cases as the Scottsboro boys and Groveland four illustrate that false allegations were not unknown in the American South. Indeed, false rape allegations remain common to this day

However, I remain skeptical of Wade’s claim that black-on-white rapes were quite as rare as he makes out. 

After all, American blacks have had high rates of violent crime ever since records began, and, as contemporary racists are fond of pointing out, today, black-on-white rape is actually quite common, at least as compared to other victim-offender dyads. 

Thus, in Paved with Good Intentions: The Failure of Race Relations in Contemporary America, published in 1992, Jared Taylor reports: 

“In a 1974 study in Denver, 40 percent of all rapes were of whites by blacks, and not one case of white-on-black rape was found. In general, through the 1970s, black-on-white rape was at least ten times more common than white-on-black rape… In 1988 there were 9,406 cases of black-on-white rape and fewer than ten cases of white-on-black rape. Another researcher concludes that in 1989, blacks were three or four times more likely to commit rape than whites and that black men raped white women thirty times as often as white men raped black women” (Paved with Good Intentions: p93). 

Indeed, the authors of one recent textbook on criminology even claim that: 

Some researchers have suggested, because of the frequency with which African Americans select white victims (about 55 percent of the time), it [rape] could be considered an interracial crime” (Criminology: A Global Perspective: p544).[2] 

At any rate, Southern chivalry was rather selectively accorded, and certainly did not extend to black women. 

Indeed, Wade claims that Klansmen themselves, employing a blatant double-standard and rank hypocrisy, actually themselves regularly raped black women during their raids: 

“The desire for group intercourse was sometimes sufficient reason for a den to go out on a raid…. Sometimes during a political raid, Klansmen would rape the female members of the household as a matter of course” (p76). 

As someone versed in sociobiological theory and evolutionary psychology, I am tempted to see these double standards in sociobiological terms, as a form of reproductive competition designed to maximize the reproductive success of the white males involved, and indeed of the white race in general.

Thus, for white men, it was open season on black women, but white women were strictly off-limits to black men: 

“In Southern white culture, the female was placed on a pedestal where she was inaccessible to blacks and a guarantee of purity of the white race. The black race, however, was completely vulnerable to miscegenation. White men soon learned that women placed on a pedestal acted like statues in bed, and they came to prefer the female slave whom they found open and uninhibited… The more white males turned to female slaves, the more they exalted their own women, who increasingly became a mere ornament and symbol of the Southern way of life” (p20). 

While it may not have extended to black women, the chivalry accorded white women did apparently extend to white women from Northern states, including even those who, as white Southerners saw it, came south to interfere with southern customs and traditions

Thus, among the groups targeted for intimidation by Klansmen were idealistic teachers from Northern states who had travelled south to educate black children as volunteer teachers. However, these women received better treatment than the men: 

“Overt violence was frequently used on male school teachers… [whereas] as a rule, women school teachers were safer than men from Ku Klux violence. The Klan preferred to scare female teachers into leaving by written warnings” (p63-4). 

Thus, one white northern teacher reported that, unlike white men, and blacks of either sex, “They treated me gentlemanly and quietly” (p64). 

Klan Success? 

The Klan came to stand for the reestablishment of white supremacy and the denial of voting rights to blacks. 

In the short-term, at least, these aims were to be achieved, with the establishment of segregation and effective disenfranchisement of blacks throughout much of the South. Wade, however, denies the Klan any part in this victory: 

“The Ku-Klux Klan… didn’t weaken Radical Reconstruction nearly as much as they nurtured it. So long as an organized secret conspiracy swore oaths and used cloak and dagger methods in the South, Congress was willing to legislate against it… Not until the Klan was beaten and the former confederacy turned to more open methods of preserving the Southern way of life did Reconstruction and its Northern support decline” (p109-110). 

Thus, it was, Wade reports, not the Klan, but rather other groups, today largely forgotten, such as Louisiana’s White League and South Carolina’s Red Shirts, that were responsible for successfully scaring blacks away from the polls and ensuring the return of white supremacy in the South. Moreover, he reports that they were able to do so only because the federal laws enacted to tackle the Klan had ceased to be enforced, precisely because the Klan itself had ceased to represent a serious threat. 

On this telling, then, the First Klan was, politically, a failure. In this respect, it was to set the model for later Klans, which would fight a losing rearguard action against Catholic immigration and the civil rights movement. 

Resurrection 

If the First Klan was a failure, why then was it remembered, celebrated and ultimately revived, while other groups, such as the White LeagueRed Shirts and Knights of the White Camelia, which employed similar terrorist tactics in pursuit of the same political objectives, are today largely forgotten? 

Wade does not address this, but one suspects the outlandishness of the group’s name and ceremonial titles contributed, as did the fact that the Klan seems to have been the only such group active throughout the entirety of the former Confederacy

The reborn Klan, founded in the early twentieth century, was the brainchild of William Joseph Simmons, a self-styled professional ‘fraternalist’, alumnus of countless other fraternal organizations, Methodist preacher, strict prohibitionist and rumoured alcoholic. 

It is to him that credit must go for inventing most of the ritualism (aka ‘Klancraft’) and terminology (including the very word ‘Klancraft’) that came to be associated with the Klan in the twentieth century. 

‘Birth of a Nation’ and the Rebirth of the Klan 

Two further factors contributed to the growth and success of the reborn Klan. First was the spectacularly successful 1915 release of the movie, The Birth of a Nation.

Both deplored for its message yet also grudgingly admired for its technical and artistic achievement, this film occupies a curious place in film history, roughly comparable to Leni Riefenstahl’s Nazi propaganda film, Triumph of the Will. (Sergei Eisenstein’s Communist and Stalinist propaganda films curiously, but predictably, receive a free pass.) 

In this movie, pioneering filmmaker DW Griffith is credited with largely inventing much of the grammar of modern moviemaking. If, today, it seems distinctly unimpressive, if not borderline unwatchable, this is, not only because of the obvious technological limitations of the time period, but also precisely because it invented many of the moviemaking methods that cinema-goers, and television viewers, have long previously learnt to take for granted (e.g. cross-cutting). 

Yet, if its technical and artistic innovations have won the grudging respect of film historians, its message is, of course, wholly anathema to modern western sensibilities. 

Thus, portraying the antebellum American South with the same pair of rose-tinted spectacles as those donned by the author of Gone with the Wind, ‘Birth of a Nation’ went even further, portraying blacks during the Reconstruction period as rampant rapists salivating after the flesh of white women, and Klansmen as heroic white knights who saved white womanhood, and indeed the South itself, from the ravages of both reconstruction and of Southern blacks. 

Yet, though it achieved unprecedented box-office success, even being credited as the first modern blockbuster, the movie was controversial even for its time. 

It even became the first movie to be screened in the White House, when, as a favour to Thomas Dixon, the author of the novel upon which the movie was based, the film received an advance, pre-release screening for the benefit of the then-President, Woodrow Wilson, a college acquaintance of Dixon – though what the President thought of it is a matter of dispute.[3]

Indeed, such was the controversy that the movie was to provoke that the nascent NAACP, itself formed only a few years earlier, even launched a campaign to have the film banned outright (p127-8). 

This, of course, gives the lie to the notion that the political left was, until recent times, wholly in favour of freedom of speech and artistic expression.

Actually, even then, the Left’s commitment to freedom of expression was, it seems, highly selective, just as it is today. Thus, it was one thing to defend the rights of raving communists, quite another to apply the same principle to racists. 

The Murders of Mary Phagan and Leo Frank 

The second factor in the successful resurrection of the Klan was a pair of killings that galvanized popular opinion in the South, and indeed the nation. 

First was the rape and murder of Mary Phagan, a thirteen-year-old factory girl in Atlanta, Georgia. Second was the lynching of Leo Frank, her boss and ostensible murderer, who was convicted of her murder and sentenced to death, only for his sentence to be commuted to life imprisonment, whereupon he was lynched by outraged locals. 

His lynching was carried out by a group styling themselves ‘The Knights of Mary Phagan’, many of whom would go on to become founder members of the newly reformed Klan. 

It was actually this group, not the Klan itself, which would establish a famous Klan ritual, namely the ascent of Stone Mountain to burn a cross, a ritual Simmons would repeat to inaugurate his nascent Klan a few months later.[4]

Yet, in the history of alleged miscarriages of justice in the American South, the lynching of Leo Frank stands very much apart. 

For one thing, most victims of such alleged miscarriages of justice were, of course, black. Yet Leo Frank was a white man. 

Moreover, most of his apologists insist that the real perpetrator was, in fact, a black man. They are therefore in the unusual position of claiming racism caused white Southerners to falsely convict a white man when they should have pinned the blame on a black instead.

It is true, of course, that Frank was also Jewish. However, there was little history of anti-Semitism in the South. Indeed, I suspect there was more prejudice against him as a wealthy Northerner who had come south for business purposes, and hence as, in Southern eyes, a ‘Yankee carpetbagger’.

Moreover, although his lynching was certainly unjustified, and his conviction possibly unsafe, it is still not altogether clear that Frank was indeed innocent of the murder of which he stood accused.[5]

Wade himself admits that there was some doubt as to his innocence at the time. However, he refers to a deathbed statement by an elderly witness some seventy years later in 1982 as finally proving his innocence: 

Not until 1982 would Frank’s complete innocence come to light as a result of a witness’s deathbed statement” (p143). 

However, a claim made, not in court under oath, but rather to the press for a headline, by an elderly, dying man, regarding things he had supposedly witnessed some seventy years earlier when he was himself little more than a child, is obviously open to question.

At any rate, it is interesting to note that Frank’s lynching played an important role, not only in the founding of the Second Klan, but also in the genesis of another political pressure group whose influence on American social, cultural and political life has far outstripped that of the Klan and which, unlike the Second Klan, survives to this day – namely the Anti-Defamation League of B’nai B’rith, or ADL.

The parallels abound. Just as the Second Klan was a fraternal organization for white protestants, so B’nai B’rith, the organization which birthed the ADL, was a fraternal order for Jews, and Frank himself, surely not coincidentally, was president of the Atlanta chapter of the group. 

The organizational efforts of B’nai B’rith to protect Frank, a local chapter president, from punishment can therefore be viewed as analogous to the way in which the Klan itself sought to protect its own members from successful prosecution through its own corrupt links in law enforcement and government and on juries. 

Moreover, just as the Klan was formed to defend and promote the interests of white Christian protestants, so the ADL was formed to protect the interests of Jews.

However, the ADL was to prove far more successful in this endeavour than the Klan had ever been.[6]

Klan Enemies 

Jews were not, however, the primary objects of Klan enmity during the twenties – and neither, perhaps surprisingly, were blacks. 

This was, after all, the period that later historians have termed ‘the nadir of American race relations’, when, throughout the South, blacks were largely disenfranchised, and segregation firmly entrenched. 

Yet, from a white racialist perspective, the era is misnamed.[7] Far from a nadir, for white racialists the period represented something like a utopia, lost Eden or Golden Age.[8] 

White supremacy was firmly entrenched and not, it seemed, under any serious threat. The so-called civil rights movement had barely begun.

Of course, then as now, race riots did periodically puncture the apparent peace – at Wilmington in 1898, Springfield in 1908, Tulsa in 1921, Rosewood in 1923, and throughout much of America in 1919.

However, unlike contemporary American race riots, these typically took the form of whites attacking blacks rather than vice versa, and, even when the latter did occur, white solidarity was such that the whites invariably gave at least as good as they got.[9]

Thus, in early-twentieth century America, unlike during Reconstruction, there was no need for a Klan to suppress ‘uppity’ blacks. On the contrary, blacks were already adequately suppressed.  

Thus, if the Second Klan was to have an enemy worthy of its enmity, and a cause sufficient to justify its resurrection, and, more important, sufficient to persuade prospective inductees to hand over their membership dues, it would have to look elsewhere. 

To some extent the enemy selected varied on a regional basis, depending on the local concerns of the population. The Klan thus sought, like Hitler’s later NSDAP, to be ‘all things to all men’, and, for some time before it hit upon a winning strategy, the Klan flitted from one issue to another, never really finding its feet. 

However, to the extent the Second Klan, at the national level, was organized in opposition to a single threat or adversary, it was to be found neither in Jews nor blacks, but rather in Catholics. 

Anti-Catholicism 

To modern readers, the anti-Catholicism of the Second Klan seems bizarre. Modern Americans may be racist and homophobic in ever decreasing numbers, but they at least understand racism and homophobia. However, anti-Catholicism of this type, especially in so relatively recent a time period, seems wholly incomprehensible.

Indeed, the anti-Catholicism of the Second Klan is now something of an embarrassment even to otherwise unreconstructed racists and indeed to contemporary Klansmen, and is something they very much disavow and try to play down. 

Thus, anti-Catholicism, at least of this kind, is now wholly obsolete in America, and indeed throughout the English-speaking world outside of Northern Ireland – and perhaps Ibrox Football stadium for ninety minutes on alternate Saturdays for the duration of the Scottish football season. 

It seems something more suited to cruel and barbaric times, such as England in the seventeenth century, or Northern Ireland in the 1970s… or, indeed, Northern Ireland today. But in twentieth century America? Surely not. 

How then can we make sense of this phenomenon? 

Partly, the Klan’s anti-Catholicism reflected the greater religiosity of the age. In particular, the rise of the Second Klan was, at least in Wade’s telling, intimately linked with the rise of Christian fundamentalism in the early twentieth century, in opposition to liberal, reforming tendencies within the churches (the so-called Social Gospel).

Indeed, under its first Imperial Wizard, William Joseph Simmons, a Methodist preacher, the new Klan was initially more of a religious organization than it was a political one, and Simmons himself was later to lament the Klan’s move into politics under his successor.[10]

There was, however, also a nativist dimension to the Klan’s rabid anti-Catholicism, since, although Catholics had been present among the first settlers of North America and numbered even among the founding fathers, Catholicism was still associated with recent immigrants to the USA, especially Italians, Irish and Poles, who had yet to fully assimilate into the American mainstream. 

Catholics were also seen as inherently disloyal, as the nature of their religious affiliation (supposedly) meant that they owed ultimate loyalty, not to America, but rather to the Pope in Rome.  

This idea seems to have been a cultural inheritance from the British Isles.[11] In England, Catholics had long been viewed as inherently disloyal, and as desirous of overthrowing the monarchy and restoring Britain to Catholicism, as, in an earlier age, many had indeed sought to do.

This view is, of course, directly analogous to the claim of many contemporary Islamophobes and counter-Jihadists today that the ultimate consequence of Muslim immigration into Europe will be the imposition of Shariah law across Europe.

However, even in the twenties, during the Second Klan’s brief apotheosis, their anti-Catholicism already seemed, in Wade’s words, “strangely anachronistic”, to the point of being “almost astounding” (p179).

Thus, as anti-Catholicism waned as a serious organizing force in American social and political (or even religious) life, it soon became clear that the Klan had nailed its colours to the mast of a sinking ship. As anti-Catholic sentiment declined among the American population at large, so the Klan attempted to distance itself from its earlier anti-Catholicism.[12]

First, anti-Catholicism was simply deemphasized by the Klan in favour of new enemies like communism, trade unionism and the burgeoning civil rights movement. 

Eventually, in the Sixties, the United Klans of America, the then dominant Klan faction in America, announced, during “an all-out crusade for new members”, that: 

Catholics were now welcome to join the Klan – the Communist conspiracy more than made up for the Klan’s former anti-Catholic fears of Americans loyal to a foreign power” (p328). 

Today, as noted above, the Second Klan’s anti-Catholicism is an embarrassment even to otherwise unreconstructed racists and contemporary Klansmen. 

The decline of anti-Catholicism provides, then, an optimistic case-study of the remarkable speed with which (some) prejudices can be overcome.[13]

It also points to an ironic side-effect of the gradual move towards greater tolerance and inclusivity in American society – namely, even groups ostensibly opposed to this process have nevertheless been affected by it. 

In short, even the Klan has become more tolerant and inclusive

Land Losses

For many nationalists, racial and ethnic conflict is ultimately a matter of competition for territory and land.

It is therefore of interest that the decline of the Klan, and of white protestant identity in the USA, was itself presaged and foreshadowed by two land sales, one in the early-twenties, when Klan membership was at a peak, and a second just over a decade later, when the decline was already well underway.

First, in the early-twenties, the Klan’s boldly envisaged Klan University had gone bankrupt. The land was sold and a synagogue was constructed on the site. 

Then, under financial pressure in the 1930s as the Depression set in, the Klan was forced to sell even its main headquarters in Atlanta. 

If selling a Klan university only to see a synagogue constructed on the same site was an embarrassment, then the eventual purchaser of the Klan headquarters was to be an even greater Klan enemy – the Catholic Church. 

Thus, the erstwhile site of the Klan’s grandly-titled Imperial Palace became a Catholic cathedral.

Perhaps surprisingly, and presumably in an effort at rapprochement and reconciliation, the new cathedral’s hierarchy reached out to the Klan by inviting the then Imperial Wizard, Hiram Evans, who had outmanoeuvred Simmons for control of the then-lucrative cash-cow during the Klan’s twenties heyday, to the new cathedral’s inaugural service. 

Perhaps even more surprisingly, Evans actually accepted the invitation. Afterwards, even more surprisingly still, he was quoted as observing: 

It was the most ornate ceremony and one of most beautiful services I ever saw” (p265). 

More beautiful even than a cross-burning!

Evans was forced to resign immediately afterwards. However, in deemphasizing anti-Catholicism, he had correctly gauged the public mood, and the Klan was later, if belatedly, to follow his lead. 

The Turn to Terror 

The Klan is seemingly preadapted to terror. However benign the intentions of its successive founders, each Klan descended into violence. 

If the First Klan was formed as a sort of college fraternity, the Second Klan seems to have been conceived primarily as a money-making venture, and hence, in principle, no more inherently violent than the Freemasons or the Elks.

Yet the turn to terror was perhaps, in retrospect, inevitable. After all, this new Klan had been modelled on what had been, or at least become, a terrorist group (namely, the First Klan), employed masks, and, through the lynching of Leo Frank, had associated itself with vigilantism from the very outset. 

Interestingly, although precise data is not readily available, one gets the distinct impression that, during this era of Klan activity, most of the victims of its violence were not blacks, nor even Catholics, but rather the very white protestant Christians whom the Klan ostensibly existed to protect – or, more specifically, those among this community who had somehow offended against its values, or simply offended Klansmen themselves. 

Of course, lynchings of blacks continued, at least in the South. But these were rarely conducted under the auspices of the Klan, since such lynchings were a longstanding tradition that long predated the Klan’s re-emergence, and the perpetrators of such acts rarely felt the need to wear masks to conceal their identities, let alone don the elaborate apparel, and pay the requisite membership dues, of the upstart Klan.[14]

But Klan violence per se did not always deter new members. On the contrary, some seem to have been attracted by it. Thus, Klan recruiters (‘Kleagles’) at first maintained that newspaper exposés amounted to free publicity and only helped them in their recruitment drive. 

Instead, Wade claims, more than violence, it was the perceived hypocrisy of Klan leaders which ultimately led to the group’s demise (p254).  

Thus, the Klan purported to champion prohibition, temperance and Christian values, yet had been founded by Simmons, a rumoured alcoholic, while its (hugely successful) marketing and recruitment campaign was headed by Edward Young Clarke and Mary Elizabeth Tyler of the Southern Publicity Association, who were openly engaged in an extra-marital affair with one another. 

However, the most damaging scandal to hit the Klan, which, as we have seen, purported to champion prohibition and the protection of the sanctity of white womanhood, combined violence, drunkenness and hypocrisy alike, and occurred when DC ‘Steve’ Stephenson, a hugely successful Indiana Grand Dragon, was convicted of the rape, kidnap and murder of Madge Oberholtzer, herself a white protestant woman, during a drunken binge. 

In fact, by the time of the assault, Stephenson had already split from the national Klan to form his own rival, exclusively Northern, Klan group. However, his former prominence in the organization meant that, though they might disclaim him, the Klan could never wholly disassociate themselves from him.  

It seems to have been this scandal more than any other which finally discredited the Klan in the minds of most Americans. Thus, Wade concludes: 

The Klan in the twenties began and ended with the death of an innocent young girl. The Mary Phagan-Leo Frank case had been the spark that ignited the Klan. And the Oberholtzer-Stephenson case had put out the fire” (p247). 

Decline 

Thenceforth, the Klan’s decline was as rapid and remarkable as its rise. Thus, Wade reports: 

In 1924 the Ku Klux Klan had boasted more than four million members. By 1930, that number had withered to about forty-five thousand… No other American movement has ever risen so high and fallen so low in such a short period” (p253). 

Indeed, in Wade’s telling, even its famous 1925 march on Washington “proved to be its most spectacular last gasp”, attracting “only half of the sixty thousand expected” (p249): 

The National gathering of thirty thousand was less than what [DC Stephenson] could have mustered in Indiana alone during the Klan’s heyday” (p250). 

Not only did numbers decline, but so too did the membership profile. 

Thus, initially, the new group had attracted members from across the socioeconomic spectrum of white protestant America, or at least from all those who could afford the membership dues. Indeed, analyses of surviving membership rolls suggest that the Klan in this era was, at first, a predominantly middle-class group representing what was then the heart of Middle America.

However, probably as a consequence of the revelations of violence, the respectable classes increasingly deserted the group.

Klan defections began with the prominent, the educated and the well-to-do, and proceeded down through the middle-class” (p252). 

Thus, the stereotype of the archetypal Klansman as an uneducated, semi-literate, tattooed, beer-swilling redneck gradually took hold. 

Indeed, from 1926 or so, the Klan even sought to reclaim this image as a positive attribute, portraying themselves as, in their own words, “a movement of plain people” (p252). 

But this marketing strategy, in Wade’s telling, badly backfired, since even less well-off, but ever aspirant, Americans hardly wanted to associate themselves with a group that admitted to being uneducated hicks (Ibid.). 

As well as narrowing in its socioeconomic profile, Klan membership also retreated geographically. 

Thus, in its brief heyday, the Second Klan, unlike its Reconstruction-era predecessor, had had a truly national membership. 

Indeed, the state with the largest membership was said to be Indiana, where DC ‘Steve’ Stephenson, in the few years before his dramatic downfall, built up a one-man political machine that briefly came to dominate politics in the Hoosier State. 

However, in the aftermath of the fall of Stephenson and his Indiana Klan, the Klan was to haemorrhage members in not just Indiana, but throughout the North. The result was that: 

By 1930, the Klan’s little strength was concentrated in the South. Over the next half-century the Klan would gradually lose its Northern members, regressing more and more closely towards its Reconstruction ancestor until, by the 1960s, it would stand as a near-perfect replica” (p252) 

Thenceforth, the Klan was to remain, once again, a largely Southern phenomenon, with what little numerical strength it retained overwhelmingly concentrated in the states of the former Confederacy. 

Death and Taxes – the Only Certainties in Life 

The Second Klan was finally destroyed, however, not by declining membership, violent atrocities, bad publicity and inept brand-management, nor even by government prosecution, though all these factors did indeed play a part.  

Rather, the final nail in the Klan’s coffin was hammered in by the taxman. 

In 1944, the US Internal Revenue Service demanded restitution in respect of unpaid taxes due on the profits earnt from subscription dues during the Klan’s brief but lucrative 1920s membership boom (p275). 

The Klan, which had been haemorrhaging members even before the 1930s Depression, and, unlike the economy as a whole, had yet to recover, was already in a dire financial situation. Therefore, it could never hope to pay the monies demanded by the government, and instead was forced to declare bankruptcy (p275). 

Thenceforth, the Klan was no more. 

Ultimately, then, the government destroyed the Klan the same way it had destroyed Al Capone – through failure to pay its taxes! 

The Klan and the Nazis – A Match Made in Hell? 

In between recounting the Klan’s decline, Wade also discusses its supposed courtship of, or by, the pro-Nazi German-American Bund

Actually, however, a careful reading of Wade’s account suggests that he exaggerates the extent of any such association. 

Thus, it is notable, if bizarre, that, in Wade’s own telling, the Bund’s leader, German-born Fritz Julius Kuhn, in seeking the “merging of the Bund with some native American organization who would shield it from charges of being a ‘foreign’ agency”, had first set his sights on that most native of “native American organizations” – namely, Native Americans (p269-70). 

When this quixotic venture inevitably ended in failure, if only due to “profound indifference on the Indians’ part”, only then did the rebuffed Kuhn turn his spurned attentions to the Klan (p270). 

Yet the Klan seemed to have been almost as resistant to Kuhn’s advances as the Native Americans had been. Thus, Wade quotes Kuhn as admitting, somewhat ambiguously:

The Southern Klans did not want to be known in it… So the negotiations were between representatives of the Klans in New Jersey and Michigan, but it was understood that the Southerners were in” (p270). 

Yet, by this time, in Wade’s own telling, the Klan was extremely weak in Northern states such as New Jersey and Michigan, and what little numerical strength it retained was concentrated in the Southern states of the former Confederacy. 

This suggests that it was only the already marginalized northern Klan groups who, bereft of other support, were willing to entertain the notion of an alliance with the Bund.

If the Southern Klan leadership was indeed aware of, and implicitly approved, the link, it was nevertheless clear that they wanted to keep any such association indirect and at arm’s length, hence maintaining plausible deniability.

This is perhaps the only way we can make sense of Kuhn’s acknowledgement, on the one hand, that “the Southern Klans did not want to be known in it”, while, on the other, that “it was understood that the Southerners were in” (p270). 

Thus, when negative publicity resulted from the joint Klan-Bund rally in New Jersey, the national (i.e. Southern) Klan leadership was quick to distance itself from and disavow any notion of an alliance, promptly relieving the New Jersey Grand Dragon of his office.

On reflection, however, this is little surprise.

For one thing, German-Americans, especially those willing to flagrantly flaunt their ‘dual loyalty’ by joining a group like the German-American Bund, were themselves exactly the type of hyphenated-Americans that the 100% Americans of the Klan affected to disparage.

Indeed, though they may have been white and (mostly) protestant, German-Americans’ own integration into the American mainstream was, especially after the anti-German sentiment aroused during the First World War, still very much incomplete. 

Today, of course, we might think of Nazis and the Klan as natural allies, both being, after all, that most reviled species of humanity – namely, white racists.

However, besides racialism, the Klan and the Nazis actually had surprisingly little in common. 

After all, the Klan was a Protestant fundamentalist group opposed to Darwinism and the teaching of evolutionary theory in schools.

Hitler, in contrast, was an ardent social Darwinist, who was reported by his confidants as harbouring a profound antipathy to the Christian faith, albeit one he kept out of his public pronouncements for reasons of political expediency, and some of whose followers even championed a return to Germanic paganism.[15]

Indeed, even their shared racialism was directed primarily towards different targets.

In Germany, blacks, though indeed persecuted by the Nazis, were few in number, and hence not a major target of Nazi propaganda or animosity – and nor were Catholics among the groups targeted for persecution by the Nazis, Hitler himself having been raised as a Catholic in his native Austria.[16]

Yet, if Catholics were not among the groups targeted for persecution by the Nazis, members of secret societies like the Klan very much were. 

Thus, among the less politically-fashionable targets for persecution by the Nazis were both the Freemasons and indeed the closest thing Germany had to a Ku Klux Klan. 

Thus, in 1923 a Klan-like group, “the German Order of the Fiery Cross”, had been founded in imitation of the Klan by an expatriate German on his return to the Fatherland from America (p266). 

Yet, ironically, it was Hitler himself who ultimately banned and suppressed this German Klan imitator (p267). 

The Third Klan(s) 

The so-called Third Klan was really not one Klan, but many different Klans, each not only independent of, but often in fierce competition with, one another for members and influence. 

They filled the vacuum left by the defunct Second Klan and competed to match its size, power and influence – though none were ever to succeed. 

From this point, it is no longer really proper to talk about the Klan, since there was not one Klan but rather many separate Klans, with few if any institutional connections to one another. 

Moreover, the different Klan groups varied more than ever in their ethos and activity. Thus, Wade reports: 

Some Klans were quietly ineffective, some were violent and some were borderline psychotic” (p302) 

With no one group maintaining a registered trademark over the Klan ‘brand’, inevitably the atrocities committed by one group ended up discrediting even other groups with no connection to them. The Klan ‘brand’ was irretrievably damaged, even among those who might otherwise be attracted to its ideology and ethos.[17] 

Indeed, the plethora of different groups was such that even Klansmen themselves were confused, one Dragon complaining: 

The old countersigns and passwords won’t work because all Klansmen are strangers to each other” (p302). 

Increasingly, opposition to the burgeoning Civil Rights Movement, rather than to Catholicism, now seems to have become the Klan’s chief preoccupation and the primary basis upon which Klaverns, and Kleagles, sought to attract recruits. 

However, respectable opposition to desegregation throughout the South was largely monopolized by the Citizens’ Councils.

Indeed, in Wade’s telling, “preventing a build-up of the Ku Klux Klan” was, quite as much as opposing desegregation, one of the principal objectives for which the Citizens Councils had been formed, since “violence was bad for business, and most of the council leaders were businessmen” (p299). 

If this is true, then perhaps the Citizens Councils were more successful in achieving their objectives than they are usually credited as having been. Segregation, of course, was gone and did not come back – but, then again, neither did the Klan. 

Yet, in practice, Wade reports, the main impact of the Citizens Councils on the Klan was: 

Not so much eliminating the Klan as leaving it with nothing but the violence-prone dregs of Southern white society” (p302). 

Thus, the Klan’s image, and the characteristic socioeconomic status of its membership profile, declined still further. 

The electoral campaigns of the notorious segregationist and governor of Alabama George Wallace also had a similar effect. Thus, Wade reports: 

Wallace’s campaigns… swallowed a lot of disaffected Klansmen. In fact, Wallace’s campaigns offered them the first really viable alternative to the Klan” (p364). 

Political Cameos and Reinventions 

Here in Wade’s narrative, the myriad disparate Klan groups inevitably fade into the background, playing a largely reactive, often violent, yet mostly ineffective, and often outright counterproductive, role in opposing desegregation. 

Instead, the starring role is taken, in Wade’s own words, by: 

Two men who were masters of the electronic media: an inspired black minister, Martin Luther King, and a pragmatic white politician, JFK, who would work in an uneasy but highly productive tandem” (p310). 

Actually, in my view, it would be more accurate to say that the starring role was taken by two figures who are today vastly overrated on account of their respective early deaths by assassination, and consequent elevation to martyr status. 

In fact, however, while Wade’s portrait of King is predictably hagiographic, that of Kennedy is actually refreshingly revisionist. 

Far from the liberal martyr of contemporary left-liberal imagining, Kennedy was, in Wade’s telling, only a “pragmatic white politician”, and moreover only a rather late convert to the African-American civil rights movement.

Indeed, before he first took office, Wade reports, Kennedy had actually endorsed the Dunning School of historiography regarding the Reconstruction era, was critical of Eisenhower’s sending of federal troops into Arkansas to enforce desegregation, and only reluctantly, when his hand was forced, himself sent the National Guard into Alabama (p317-22). 

Meanwhile, another political figure making a significant cameo appearance in Wade’s narrative, ostensibly on the opposite side of the desegregation debate, is George Wallace, the notorious segregationist governor of Alabama mentioned above.

Yet Wade’s take on Wallace is, in many respects, as revisionist as his take on Kennedy. Thus, far from a raving racist and staunch segregationist, Wade argues: 

In retrospect… no one used and manipulated the Klansmen more than Wallace. He gave them very few rewards for their efforts on his behalf: often his approval was enough. And in spite of his fiery cant and cries of ‘Never!’ that so thrilled Klansmen, Wallace was a former judge who well understood the law – especially how far he could bend it” (p322). 

Thus, Wade reports, while it is well known that Wallace famously blocked the entrance to the University of Alabama, preventing black students from entering, what is less well known is that: 

When the marshals asked for the black students to be admitted in the afternoon, Wallace quietly stepped aside. Instead of being recognized, at best, as a practical politician or, at worst, a pompous coward, Wallace was instead hailed by Klansmen as a dauntless hero” (p322). 

Thus, if Kennedy was, in Wade’s telling, “a pragmatic white politician”, then Wallace emerges as an outright political chameleon and shameless opportunist. 

As further evidence for this interpretation, what Wade does not get around to mentioning is that, in his first run for the governorship of Alabama in 1958, Wallace had actually spoken against the Klan and been backed by the NAACP, only after his defeat vowing, as he was memorably quoted as saying, ‘never to be outniggered again’, and hence reinventing himself as an (ostensible) arch-segregationist. 

Neither does Wade mention that, in his last run for governor in 1982, reinventing himself once again as a born-again Christian, Wallace actually managed to win over 90% of the black vote.

Yet even Wallace’s capacity for political reinvention is outdone by that of one of his supporters and speech-writers, former Klan leader Asa ‘Ace’ Carter, a man so notorious for his racism that even Wallace denied employing him, but who was supposedly responsible for penning the words to Wallace’s infamous “segregation now, segregation tomorrow, segregation forever” speech.

Expelled from a Citizens’ Council for extremism, Carter had then founded and briefly reigned as tin pot führer of one of the most violent Klan outfits – “the Original Ku Klux Klan of the Confederacy, which resembled a cell of Nazi storm troopers” (p303). 

This group was responsible for one of the worst Klan atrocities of the period, namely the literal castration of a black man, whom they: 

Castrated… with razor blades; and then tortured… by pouring kerosene and turpentine over his wounds” (p303). 

This gruesome act was, according to a Klan informant, performed for no better reason than as a “test of one of the members’ mettle before being elected ‘captain of the lair’” (p303). 

The group was also, it seems, too violent even for its own good. Thus, it subsequently broke up when, in a dispute over financing and the misappropriation of funds, Carter was to shoot two fellow members, yet, for whatever reason, never stood trial (Ibid.). 

Yet what Wade does not get around to mentioning is that Asa ‘Ace’ Carter was also, like Wallace, later to successfully reinvent himself, and achieve fame once again, this time as Forrest Carter, an ostensibly half-Native American author who penned such hugely successful novels as The Rebel Outlaw: Josey Wales (subsequently made into the successful motion picture, The Outlaw Josey Wales, directed by and starring Clint Eastwood) and The Education of Little Tree, an ostensible memoir of growing up on an Indian reservation, and a book so sickeningly sentimental that it was even recommended and championed by none other than Oprah Winfrey! 

“The David Duke Show” 

By the 1970s, open support for white supremacy and segregation was in decline, even among white Southerners. This, together with Klansmen’s involvement in such atrocities as the 16th Street Baptist Church bombing, might have made it seem that the Klan brand was irretrievably damaged and in terminal decline, never again to play a prominent role in American social or political life. 

Yet, perhaps surprisingly, the Klan brand did manage one last hurrah in the 1970s, this time through the singular talents of one David Duke

Duke was to turn the Klan’s infamy to his own advantage. Thus, his schtick was to use the provocative imagery of the Klan (white sheets, burning crosses) to attract media attention, but then, having attracted that attention, to come across as much more eloquent, reasonable, intelligent and clean-cut than anyone ever expected a Klansman to be – which, in truth, isn’t difficult. 

The result was a media circus that one disgruntled Klansman aptly dismissed as “The David Duke Show” (p373). 

It was the same trick that George Lincoln Rockwell had used a generation before, though, whereas Rockwell used Nazi imagery (e.g. swastikas, Nazi salutes) to attract media attention, Duke instead used the imagery of the Klan (e.g. white sheets, burning crosses).

If Duke was a successor to Rockwell, then Duke’s own contemporary equivalent, fulfilling a similar niche for the contemporary American media as the handsome, eloquent, go-to face of white nationalism, is surely Richard Spencer. Indeed, if rumours are to be believed, Spencer even shares Duke’s penchant for seducing the wives and girlfriends of his colleagues and supporters. 

Such behaviour, along with his lack of organizational ability, was among the reasons that Duke alienated much of his erstwhile support, haemorrhaging members almost as fast as he attracted them. 

Many such defectors would go on to form rival groups. Among them was Tom Metzger, a TV repairman, who split from Duke to form a more openly militant group calling itself White Aryan Resistance (known by the memorable backronym ‘WAR’), and who achieved some degree of media infamy by starring in multiple television documentaries and talk-shows, before being bankrupted by a legal verdict that held him liable for a murder in which he seems to have had literally no involvement.

However, for Wade, the most important defector was, not Metzger, but rather Bill Wilkinson, perhaps because, unlike Metzger, who, on splitting from Duke, abandoned the Klan name, Wilkinson was to set up a rival Klan group, successfully poaching members from Duke. 

However, lacking Duke’s eloquence and good looks, Wilkinson had instead to devise another strategy in order to attract media attention and members. The strategy he hit upon was that of “taking a public stance of unbridled violence” (p375). 

This, together with the fact that he was nevertheless able to evade prosecution, led to the allegation that he was a state agent and his Klan an FBI-sponsored honey trap, an allegation only reinforced by the recent revelation that he is now a multimillionaire in the multiracial utopia of Belize.

Besides openly advocating violence, Wilkinson also hit upon another means of attracting members. Thus, Wade reports, he “perfected a technique that other Klan leaders belittled as ‘ambulance chasing’” (p384): 

Wilkinson… traversed the nation seeking racial ‘hot spots’… where he can come into a community, collect a large amount of initiation fees, sell a few robes, sell some guns… collect his money and be on his way to another ‘hot spot’” (p384). 

This is, of course, ironically, the exact same tactic employed by contemporary black race-baiters like Al Sharpton and the Black Lives Matter movement.

Owing partly to the violent activities of rival Klan groups from whom he could never hope to wholly disassociate himself, Duke himself eventually came to see the Klan baggage as a liability. 

One by one, he jettisoned these elements, styling himself National Director rather than Imperial Wizard, wearing a suit rather than a white sheet and eventually giving up even the Klan name itself. Finally, in what was widely perceived as an act of betrayal, Duke was recorded offering to sell his membership rolls to Wilkinson, his erstwhile rival and enemy (p389-90). 

In place of the Klan, Duke sought to set up what he hoped would be a more mainstream and respectable group, namely the National Association for the Advancement of White People, or NAAWP, one of the many short-lived organizations to adopt this rather unimaginative name.[18]

Yet on abandoning the provocative Klan imagery that had first brought him to the attention of the media, Duke suddenly found media attention much harder to come by. Wade concludes:

Duke had little chance at making a go of any Klan-like organization without the sheets and ‘illuminated crosses’. Without the mumbo-jumbo the lure of the Klan was considerably limited. Five years later the National Association for the Advancement of White People hadn’t got off the ground” (p390). 

Duke was eventually to re-achieve some degree of notoriety as a perennial candidate for elective office, initially with some success, even briefly holding a seat in the Louisiana state legislature and winning a majority of the white vote in his 1991 run for the governorship of Louisiana.  

However, despite abandoning the Klan, Duke was never to escape its shadow. Thus, even forty years after giving up the Klan name, he was still to find his name forever prefixed with the title ‘former Klansman’ or ‘former Grand Wizard’, an image he was never able to jettison. 

Today, still railing against “the Jews” to anyone still bothering to listen, his former good looks having long since faded, he cuts a lonely, rather pathetic figure, marginal even among the already marginal alt-right, and in his most recent electoral campaign, an unsuccessful run for a Senate seat, he managed to pick up only a miserly three percent of the vote. 

Un-American Americanism 

Where once Klansmen could unironically claim to stand for 100% Americanism, now, were not the very word ‘un-American‘ so tainted by McCarthyism as to sound almost un-American in itself, the Klan could almost be described as a quintessentially un-American organization. 

Indeed, interestingly, Wade reports that there was pressure on the House Un-American Activities Committee to investigate the Klan from even before the committee was first formed. Thus, Wade laments: 

The creation of the Dies Committee had been urged and supported by liberals and Nazi haters who wanted it used as a congressional forum against fascism. But in the hands of chairman Martin Dies of Texas, an arch-segregationist and his reactionary colleagues… the committee instead had become an anachronistic pack of witch hunters who harassed labor leaders… and discovered ‘communists’ in every imaginable shape and place” (p272).[19]

Thus, Wade’s chief objection to the House Un-American Activities Committee seems to be, not that they became witch hunters, but that they chose to hunt, to his mind, the wrong coven of witches. Instead of going after the commies, they should have targeted the racists instead.

Ultimately, Wade was to have his wish, and the Klan did indeed fall victim to the same illiberal and sometimes illegal FBI Cointelpro programme of harassment as more fashionable victims on the left, such as Martin Luther King, the Nation of Islam, and the Black Panther Party (p361-3).[20]  

Licence to Kill?

The Klan formerly enjoyed a reputation something like that of the Mafia, namely as a violent and dangerous group whom a person crossed at their peril, since, again like the Mafia, they had a proven track record of committing violent acts and getting away with it, largely through their corrupt links with local law enforcement in the South, and the unwillingness of all-white Southern juries to hand down convictions.[21]

Today, however, this reputation is long lost.

Indeed, if today a suspect in a racist murder were outed as a Klansman, this would likely unfairly prejudice a jury of any ethnic composition, anywhere in the country, against him, arguably to the point of denying him any chance of a fair trial. 

Thus, when aging Klansmen, such as Edgar Ray Killen, Thomas Blanton and Bobby Frank Cherry, were belatedly put on trial and convicted in the 2000s for killings committed in the early 1960s, some forty years previously, I rather suspect that they received no fairer a trial then than they did, or would have had, when put on trial before all-white juries in the 1960s American South. The only difference was that now the prejudice was against them rather than in their favour. 

Thus, today, we have gone full circle. Quite when the turning point was reached is a matter of conjecture.

Arguably, the last incident of Klansmen unfairly getting away with murder was the so-called Greensboro massacre in 1979, when Klansmen and other white nationalist activists shot up an anti-Klan rally organized by radical left Maoist labour agitators in North Carolina. 

Here, however, if the all-white jury was indeed prejudiced against the victims of this attack, it was not because they were blacks (all but one of the five people killed were actually white), but rather that they were ‘reds’ (i.e. communists).[22] 

Today, then, the problem is not with all-white juries in the South refusing to convict Klansmen, but rather with majority-black juries in urban areas across America refusing to convict black defendants, especially on police evidence, no matter how strong the case against them, for example in the OJ case (see also Paved with Good Intentions: p43-4; p71-3). 

Klans Today 

Wade’s ‘The Fiery Cross’ was first published in 1987. It is therefore not, strictly speaking, a history of the Klan for the entirety of its existence right up to the present day, since Klan groups have continued to exist since this date, and indeed continue to exist in modern America even today. 

However, Wade’s book nevertheless seems complete, because such groups have long since ceased to have any real significance in American political, social and cultural life, save as media bogeymen and folk devils. 

In its brief 1920s heyday, the Second Klan could claim to play a key role in politics, even at the national level. 

Wade even claims, dubiously as it happens, that Warren G Harding was inducted into the organization in a special and secret White House ceremony while in office as President (p165).

Certainly, it helped defeat the candidacy of Al Smith, on account of his Catholicism, in 1924 and again in 1928 (p197-99). 

Some half-century later, during the 1980 presidential election campaign, the Klan again made a brief cameo, when each candidate sought to associate the Klan with his opponent, and thereby discredit him. Thus, Reagan was accused of insensitivity for praising “states’ rights”, to which he retorted by accusing his opponent, inaccurately as it happens, of opening his campaign in the city that “gave birth to and is the parent body of the Ku Klux Klan”. 

This led Grand Dragon Bill Wilkinson to declare triumphantly: 

We’re not an issue in this Presidential race because we’re insignificant” (p388). 

Yet what Wilkinson failed to grasp, or at least refused to publicly admit, was that the Klan’s role was now wholly negative. Neither candidate had any actual Klan links; each sought to link the Klan only with his opponent.

Whereas in the 1920s, candidates for elective office had actively and openly courted Klan votes, by the time of the 1980 Presidential election to have done so would have been electoral suicide. 

The Klan’s role, then, was as bogeymen and folk devils – roughly analogous to that played by Willie Horton in the 1988 presidential campaign; the role NAMBLA plays in the debate over gay rights; or, indeed, the role communists played during the First and Second Red Scares.[23] 

Indeed, although in modern America lynching has fallen into disfavour, one suspects that, if it were ever to re-emerge as a popular American pastime and application of participatory democracy to the judicial process, then, among the first contemporary folk devils to be hoisted from a tree, alongside paedophiles and other classes of sex offender, would surely be Klansmen and other unreconstructed white racists. 

Likewise, today, if a group of Klansmen attempts to march in any major city in America, then a police presence is required, not to protect innocent blacks, Jews and Catholics from rampaging Klansmen, but rather to protect the Klansmen themselves from angry assailants of all ethnicities, but mostly white. 

Indeed, the latter, styling themselves Antifa (an abbreviation of anti-fascist), despite their positively fascist opposition to freedom of speech, expression and assembly, have even taken, like Klansmen of old, to wearing masks to disguise their identities.

Perhaps anti-masking laws, first enacted to defeat the First Klan and later revived to tackle subsequent Klan resurgences, must be dusted off once again, but this time employed, without prejudice, against the contemporary terror, and totalitarianism, of the militant left. 

Endnotes

[1] The only trace of possible illiteracy in the name is found in the misspelling of ‘clan’ as ‘klan’, presumably, again, for alliterative purposes, or perhaps reflecting a legitimate spelling in the nineteenth century when the group was founded.

[2] The popular alt-right meme that there are literally no white-on-black rapes is indeed untrue, and reflects the misreading of a table in a government report that actually involved only a small sample; beyond this, the government does not release data on the prevalence of interracial rape. There is, however, no doubt that black-on-white rape is much more common than white-on-black rape. Similarly, in the US prison system, where male-male rape is endemic, such assaults disproportionately involve non-white assaults on white inmates, as discussed in a Human Rights Watch report.

[3] The then-president Woodrow Wilson (a noted historian of the Reconstruction period, of Southern background and sympathies, whose five-volume A History of the American People is actually quoted in one of the movie’s title cards) was later quoted as describing the movie, supposedly the first moving picture he had ever seen, as: 

History [writ] with lightning. My only regret is that it is all so terribly true” (p126). 

However, during the controversy following the film’s release, Wilson himself later issued a denial that he had ever uttered any such words, insisting that he had only agreed to the viewing as a “courtesy extended to an old acquaintance” and that:

The President was entirely unaware of the character of the play before it was presented and has at no time expressed his approbation of it” (p137).

[4] Like so many other aspects of what is today considered Klan ritual, there is no evidence that cross-burning, or cross-lighting as devout Christian Klansmen prefer to call it, was ever practised by the original Reconstruction-era Klan. However, unlike other aspects of Klan ritualism, it had been invented, not by Simmons, but by novelist Thomas Dixon (by way of Walter Scott’s The Lady of the Lake), in imitation of an ostensible Scottish tradition, for his book, The Clansman: A Historical Romance of the Ku Klux Klan, the novel upon which the movie Birth of a Nation was based. The new Klan was eventually granted an easement in perpetuity over Stone Mountain, allowing it to repeat this ritual.

[5] A conviction may be regarded as unsafe, and even as a wrongful conviction, even if we still believe the defendant might be guilty of the crime with which s/he is charged. After all, the burden is on the prosecution to prove that the defendant is guilty beyond reasonable doubt. If there remains reasonable doubt, then the defendant should not have been convicted. Steve Oney, who researched the case intensively for his book, And the Dead Shall Rise, concedes that “the case [against Frank] is not as feeble as most people say it is”, but nevertheless concludes that Frank was probably innocent, “but there is enough doubt to leave the door ajar” (Berger, Leo Frank Case Stirs Debate 100 Years After Jewish Lynch Victim’s Conviction, Forward, August 30, 2013).

[6] The ADL’s role in Wade’s narrative does not end here, since the ADL would go on to play a key role in fighting later incarnations of the Klan.

[7] Indeed, even from a modern racial egalitarian perspective, the era is arguably misnamed. After all, from a racial egalitarian perspective, the plantation era, when slavery was still practised, was surely worse, as was the period of bloody conflict between Native Americans and European colonists.

[8] Even among open racialists, support for slavery is rare. Therefore, few American racists openly pine for a return to the plantation era. Segregation is, then, the next best thing, short of the actual expulsion of blacks back to Africa. Thus, it is common to hear white American racialists hold up early twentieth-century America as a lost Eden. For example, many blame the supposed decline of the US public education system on desegregation.

[9] It is thus a myth that oppressed peoples invariably revolt against their oppressors. In reality, truly oppressed peoples, like blacks in the South in this period, tend to maintain a low profile precisely so as to avoid incurring the animosity of their oppressors. It is only when they sense weakness in their oppressors, or ostensible oppressors, that insurrections tend to occur. This then explains the paradox that black militancy in America seems to be inversely proportional to the actual extent of black oppression. Thus, the preeminent black leader in America at the height of the Jim Crow era was Booker T Washington, by modern standards a conservative, if not an outright Uncle Tom. Yet, today, when blacks are the beneficiaries, not the victims, of discrimination, in the form of what is euphemistically called affirmative action, and it is whites who are ‘walking on eggshells’ and in fear of losing their jobs if they say something offensive to certain protected groups, American blacks are seemingly more militant and belligerent than ever, as the recent BLM riots have shown only too well. 

[10] This disavowal may have been disingenuous and reflected the fact that, by this time, Simmons had lost control of the then-lucrative cash-cow.

[11] Thus, in Ireland, the Protestant minority opposed ‘Home Rule’ for Ireland (a form of devolution, or self-government, that fell short of full independence) on the grounds that it would supposedly amount, in effect, to Rome Rule, due to the Catholic majority in Ireland.

[12] Interestingly, unlike the Klan, another initially anti-Catholic fraternal order, the Junior Order of United American Mechanics, successfully jettisoned both its earlier anti-Catholicism and a similar association with violence, to reinvent itself as a respectable, non-sectarian beneficent group. However, the Klan was ultimately unable to achieve the same feat. 

[13] Of course, other forms of intergroup prejudice have been altogether more intransigent and long-lasting. Indeed, even anti-Catholicism itself had a long history. Pierre van den Berghe, in his excellent The Ethnic Phenomenon (which I have reviewed here and here), argues that assimilation is possible only in specific circumstances, namely when the group to be assimilated is: 

Similar in physical appearance and culture to the group to which it assimilates, small in proportion to the total population, of low status and territorially dispersed” (The Ethnic Phenomenon: p219). 

Thus, those hoping that other forms of intergroup prejudice (e.g. anti-black sentiment in the USA, or indeed the continuing animosity between Catholics and Protestants in Northern Ireland) can be similarly overcome in so short a time are well advised not to hold their breath.

[14] In the many often graphic images of lynchings of black victims accessible via the internet, I have yet to find one in which the lynch-mobs are dressed in the ceremonial regalia of the Klan. On the contrary, far from wearing masks, the perpetrators often proudly face the camera, evidently feeling no fear of retribution or legal repercussions for their vigilantism.

[15] The question of the religious beliefs, if any, of Hitler is one of some controversy. Certainly, many leading figures in the National Socialist regime, including Martin Bormann and Alfred Rosenberg, were hostile to Christianity. Likewise, Hitler is reported as making anti-Christian statements in private, both in Hitler’s Table Talk and by such confidants as Speer in his memoirs. Hitler talked of postponing his Kirchenkampf, or settling of accounts with the churches, until after the war, not wishing to fight enemies on multiple fronts.

[16] To clarify, it has been claimed that the Catholic Church faced persecution in National Socialist Germany. However, this persecution did not extend to individual Catholics, save those, including some priests, who opposed the regime and its policies, in which case the persecution reflected their political activism rather than their religion as such. Although Hitler was indeed hostile to Christianity, Catholicism very much included, Nazi conflict with the Church seems to have reflected primarily the fact that the Nazis, as a totalitarian regime, sought to control all aspects of society and culture in Germany, including those over which the Church had formerly claimed hegemony (e.g. education).

[17] In a later era, this was among the reasons given by David Duke in his autobiography for his abandonment of the Klan brand, since his own largely non-violent Klan faction was, he complained, invariably confused with, and tarred with the same brush as, other violent Klan factions through guilt by association.

[18] Duke later had a better idea for a name for his organization – namely, the National Organization For European American Rights, which he intended to be known by the memorable acronym, NO-FEAR. Unfortunately for him, however, the clothing company who had already registered this name as a trademark thought better of it and forced him to change the group’s name to the rather less memorable European-American Unity and Rights Organization (or EURO).

[19] What Wade does not mention is that perhaps the most prominent of the “liberals and nazi haters” who advocated for the formation of the HUAC in order to persecute fascists and Klansmen, and who, as the joint-chairman of the ‘Special Committee on Un-American Activities’, the precursor to the HUAC, from 1934 to 1937, did indeed use the Committee to target fascists, albeit mostly imaginary ones, was congressman Samuel Dickstein, who was himself a paid Soviet agent, hence proving that McCarthyist concerns regarding communist infiltration and subversion at the highest level of American public life were no delusion.

[20] Indeed, according to Wade, the Klan was the first victim of Cointelpro, the group for whom the programme was originally designed, with leftist groups being subjected to the same harassment only later. Thus, Wade writes:

After developing Cointelpro for the Klan, the FBI also used it against the Black Panthers, civil rights leaders, and antiwar demonstrators” (p363).

Certainly, the Klan was henceforth a major target of the FBI. Indeed, the FBI was even accused of provoking, in a sting operation apparently funded by the ADL, one Klan bombing in which a woman, Kathy Ainsworth, herself one of the bombers and an active, militant Klanswoman, was killed (p363). The FBI was also implicated in another Klan killing, namely that of civil rights campaigner Viola Liuzzo, since an FBI agent was present with the killers in the car from which the fatal shots were fired (p347-54). Indeed, Wade reports that “about 6 percent of all Klansmen in the late 1960s worked for the FBI” (p362).

[21] Thus, former Klan leader David Duke, in his autobiographical My Awakening, reports that, when he and other arrestees were outed as Klansmen in a Louisiana prison, the black prisoners, far from attacking them, were initially cowed by the revelation:

At first, it seemed my media reputation intimidated them. The Klan had a reputation, although undeserved, like that of the mafia. Some of the Black inmates obviously thought that if they did anything to harm me, a “Godfather” type of character, they might soon end up with their feet in cement at the bottom of the Mississippi.

[22] All but one of those killed, Wade reports, were leaders of the Maoist group responsible for the anti-Klan rally (p381). Wade uses this to show that the violence was premeditated, having been carefully planned and coordinated by the Klansmen and neo-Nazis. However, the fact that they were leading figures in this Maoist group would also likely mean that they were hardly innocent victims, at least in the eyes of conservative white jurors in North Carolina. In fact, the victims were indeed highly unsympathetic, not merely on account of their politics, but also on account of the fact that they had seemingly deliberately provoked the Klan attack, openly challenging the Klan to attend their provocatively titled ‘Death to the Klan’ rally (p379), and, though ultimately heavily outgunned, they themselves seem to have first initiated the violence by attacking the cars carrying Klansmen with placards (p381).

[23] This was the same role that the Klan was to play once again during the recent Trump presidential campaigns, as journalists trawled the South in search of grizzled, self-appointed Grand Dragons willing, presumably in return for a few drinks, to offer their unsolicited endorsement of the Trump candidature and thereby, in the journalists’ own minds, and that of some of their readers, discredit him through guilt-by-association.

‘Alas Poor Darwin’: How Stephen Jay Gould Became an Evolutionary Psychologist and Steven Rose a Scientific Racist

Steven Rose and Hilary Rose (eds.), Alas Poor Darwin: Arguments against Evolutionary Psychology, London: Jonathan Cape, 2000.

‘Alas Poor Darwin: Arguments against Evolutionary Psychology’ is an edited book composed of multiple essays by different authors, from different academic fields, brought together ostensibly to critique the emerging science of evolutionary psychology. This multiple authorship makes it difficult to provide an overall review, since the authors’ approaches to the topic differ markedly.

Indeed, the editors admit as much, conceding that the contributors “do not speak with a single voice” (p9). This seems to be a tacit admission that they frequently contradict one another.

Thus, for example, feminist biologist Anne Fausto-Sterling attacks evolutionary psychologists such as Donald Symons as sexist for arguing that the female orgasm is a mere by-product of the male orgasm and not an adaptation in itself, complaining that, according to Symons, women “did not even evolve their own orgasms” (p176).

Yet, on the other hand, scientific charlatan Stephen Jay Gould criticizes evolutionary psychologists for the precise opposite offence, namely for (supposedly) viewing all human traits and behaviours as necessarily adaptations and ignoring the possibility of by-products (p103-4).

Meanwhile, some chapters are essentially irrelevant to the project of evolutionary psychology.

For example, one, that of full-time ‘Dawkins-stalker’ (and part-time philosopher) Mary Midgley, critiques the quite separate approach of memetics.

Likewise, one singularly uninsightful chapter by ‘disability activist’ Tom Shakespeare and a colleague seems to say nothing with which the average evolutionary psychologist would likely disagree. Indeed, they seem to say little of substance at all. 

Only at the end of their chapter do they make the obligatory reference to just-so stories, and, more bizarrely, to the “single-gene determinism of the biological reductionists” (p203).

Yet, as anyone who has ever read any evolutionary psychology is surely aware, evolutionary psychologists, like other evolutionary biologists, emphasize to the point of repetitiveness that, while they may talk of ‘genes for’ certain characteristics as a form of scientific shorthand, nothing in their theories implies a one-to-one concordance between single genes and behaviours. 

Indeed, the irrelevance of some chapters to their supposed subject-matter (i.e. evolutionary psychology) makes one wonder whether some of the contributors to the volume have ever actually read any evolutionary psychology, or even any popularizations of the field – or whether their entire limited knowledge of the field was gained by reading critiques of evolutionary psychology by other contributors to the volume. 

Annette Karmiloff-Smith’s chapter, entitled ‘Why babies’ brains are not Swiss army knives’, is a critique of what she refers to as nativism, namely the belief that certain brain structures (or modules) are innately hardwired into the brain at birth.

This chapter, perhaps alone in the entire volume, may have value as a critique of some strands of evolutionary psychology.

Any analogy is imperfect; otherwise it would not be an analogy but rather an identity. However, given that even a modern micro-computer has been criticized as an inadequate model for the human brain, comparing the human brain to a Swiss army knife is obviously an analogy that should not be taken too far.

However, the nativist, massive modularity thesis that Karmiloff-Smith associates with evolutionary psychology, while indeed typical of what we might call the narrow ‘Tooby and Cosmides brand’ of evolutionary psychology, is rejected by many evolutionary psychologists (e.g. the authors of Human Evolutionary Psychology) and is not, in my view, integral to evolutionary psychology as a discipline or approach.

Instead, evolutionary psychology posits that behaviour has been shaped by natural selection to maximise the reproductive success of organisms in ancestral environments. It therefore allows us to bypass the proximate level of causation in the brain by recognising that, howsoever the brain is structured and produces behaviour in interaction with its environment, given that this brain evolved through a process of natural selection, it must be such as to produce behaviour which maximizes the reproductive success of its bearer, at least under ancestral conditions. (This is sometimes called the phenotypic gambit.)

Stephen Jay Gould’s Deathbed Conversion?

Undoubtedly the best known, and arguably the most prestigious, contributor to the Roses’ volume is the famed palaeontologist and popular science writer Stephen Jay Gould. Indeed, such is his renown that Gould evidently did not feel it necessary to contribute an original chapter for this volume, instead simply recycling, and retitling, what appears to be a book review, previously published in The New York Review of Books (Gould 1997). 

This is a critical review of Darwin’s Dangerous Idea: Evolution and the Meanings of Life, a book by philosopher Daniel Dennett that is itself critical of Gould; the review thus serves as a form of academic self-defence. Neither the book nor the review deals primarily with the topic of evolutionary psychology, but rather with more general issues in evolutionary biology.

Yet the most remarkable revelation of Gould’s chapter – especially given that it appears in a book ostensibly critiquing evolutionary psychology – is that the best-known and most widely-cited erstwhile opponent of evolutionary psychology is apparently no longer any such thing. 

On the contrary, he now claims in this essay: 

“‘Evolutionary psychology’… could be quite useful, if proponents would change their propensity for cultism and ultra-Darwinian fealty for a healthy dose of modesty” (p98).

Indeed, even more remarkably, Gould even acknowledges: 

“The most promising theory of evolutionary psychology [is] the recognition that differing Darwinian requirements for males and females imply distinct adaptive behaviors centred on male advantage in spreading sperm as widely as possible… and female strategy for extracting time and attention from males… [which] probably does underlie some different, and broadly general, emotional propensities of human males and females” (p102).

In other words, it seems that Gould now accepts the position of evolutionary psychologists in that most controversial of areas – innate sex differences.

In this context, I am reminded of John Tooby and Leda Cosmides’s observation that critics of evolutionary psychology, in the course of their attacks on evolutionary psychology, often make concessions that, if made in any context other than that of an attack on evolutionary psychology, would cause them to themselves be labelled (and attacked) as evolutionary psychologists (Tooby and Cosmides 2000). 

Nevertheless, Gould’s backtracking is a welcome development, notwithstanding his usual arrogant tone.[1]

Given that he passed away only a couple of years after the current volume was published, one might almost, with only slight hyperbole, characterise his backtracking as a deathbed conversion. 

Ultra-Darwinism? Hyper-Adaptationism?

On the other hand, Gould’s criticisms of evolutionary psychology have not evolved at all but merely retread familiar gripes which evolutionary psychologists (and indeed so-called sociobiologists before them) dealt with decades ago. 

For example, he accuses evolutionary psychologists of viewing every human trait as adaptive and ignoring the possibility of by-products (p103-4). 

However, this claim is easily rebutted by simply reading the primary literature in the field. 

Thus, for example, Martin Daly and Margo Wilson view the high rate of abuse perpetrated by stepparents, not as itself adaptive, but as a by-product of the adaptive tendency for stepparents to care less for their stepchildren than they would for their biological children (see The Truth about Cinderella: which I have reviewed here).  

Similarly, Donald Symons argued that the female orgasm is not itself adaptive, but rather is merely a by-product of the male orgasm, just as male nipples are a non-adaptive by-product of female nipples (see The Evolution of Human Sexuality: which I have reviewed here).  

Meanwhile, Randy Thornhill and Craig Palmer are divided as to whether human rape is adaptive or merely a by-product of men’s greater desire for commitment-free promiscuous sex (A Natural History of Rape: which I have reviewed here). 

However, unlike Gould himself, evolutionary psychologists generally prefer the term ‘by-product’ to Gould’s unhelpful coinage ‘spandrel’. The former term is readily intelligible to any educated person fluent in English. Gould’s preferred term is needless obfuscation.

As emphasized by Richard Dawkins, the invention of jargon to baffle non-specialists (e.g. referring to animal rape as “forced copulation” as the Roses advocate: p2) is the preserve of fields suffering from physics-envy, according to ‘Dawkins’ First Law of the Conservation of Difficulty’, whereby “obscurantism in an academic subject expands to fill the vacuum of its intrinsic simplicity”. 

Untestable? Unfalsifiable?

Gould’s other main criticism of evolutionary psychology is his claim that sociobiological theories are inherently untestable and unfalsifiable – i.e. what Gould calls Just So Stories.

However, one only has to flick through copies of journals like Evolution and Human Behavior, Human Nature, Evolutionary Psychology, Evolutionary Psychological Science, and many other journals that regularly publish research in evolutionary psychology, to see evolutionary psychological theories being tested, and indeed often falsified, every month.

As evidence for the supposed unfalsifiability of sociobiological theories, Gould cites, not such primary research literature, but rather a work of popular science, namely Robert Wright’s The Moral Animal.

Thus, he quotes Robert Wright as asserting in this book that our “sweet tooth” (i.e. taste for sugar), although maladaptive in the contemporary West because it leads to obesity, diabetes and heart disease, was nevertheless adaptive in ancestral environments (i.e. the EEA) where, as Wright put it, “fruit existed but candy didn’t” (The Moral Animal: p67). 

Yet, Gould protests indignantly, in support of this claim, Wright cites “no paleontological data about ancestral feeding” (p100). 

However, Wright is a popular science writer, not an academic researcher, and his book, The Moral Animal, for all its many virtues, is a work of popular science. As such, Wright, unlike someone writing a scientific paper, cannot be expected to cite a source for every claim he makes.

Moreover, is Gould, a palaeontologist, really so ignorant of human history that he seriously believes we really need “paleontological data” in order to demonstrate that fruit is not a recent invention but that candy is? Is this really the best example he can come up with? 

From ‘Straw Men’ to Fabricated Quotations 

Rather than arguing against the actual theories of evolutionary psychologists, contributors to ‘Alas Poor Darwin’ instead resort to the easier option of misrepresenting these theories, so as to make the task of arguing against them less arduous. This is, of course, the familiar rhetorical tactic of constructing a straw man.

In the case of co-editor Hilary Rose, this crosses the line from rhetorical deceit to outright defamation of character when, on p116, she falsely attributes to sociobiologist David Barash an offensive quotation that commits the naturalistic fallacy by purporting to justify rape by reference to its supposed adaptive function.

Yet Barash simply does not say the words she attributes to him on the page she cites (or any other page) in Whisperings Within, the book from which the quotation purports to be drawn. (I know, because I own a copy of said book.)

Rather, after a discussion of the adaptive function of rape in ducks, Barash merely tentatively ventures that, although vastly more complex, human rape may serve an analogous evolutionary function (Whisperings Within: p55). 

Is Steven Rose a Scientific Racist? 

As for Steven Rose, the book’s other editor, he, unlike Gould, does not repent his sins and convert to evolutionary psychology. However, in maintaining his evangelical crusade against evolutionary psychology, sociobiology and all related heresies, Rose inadvertently undergoes a conversion that is, in many ways, even more dramatic and far-reaching in its consequences.

To understand why, we must examine Rose’s position in more depth. 

Steven Rose, it goes almost without saying, is not a creationist. On the contrary, he is, in addition to his popular science writing and leftist political activism, a working neuroscientist who very much accepts Darwin’s theory of evolution. 

Rose is therefore obliged to reconcile his opposition to evolutionary psychology with the recognition that the brain is, like the body, a product of evolution. 

Ironically, this leads him to employ evolutionary arguments against evolutionary psychology. 

For example, Rose mounts an evolutionary defence of the largely discredited theory of group selection, whereby it is contended that traits sometimes evolve, not because they increase the fitness of the individual possessing them, but rather because they aid the survival of the group of which that individual is a member, even at a cost to the individual’s own fitness (p257-9).

Indeed, Rose goes further, even asserting:

“Selection can occur at even higher levels – that of the species for example” (p258).

Similarly, in the book’s introduction, co-authored with his wife Hilary, the Roses dismiss the importance of the evolutionary psychological concept of the ‘environment of evolutionary adaptedness’ (or ‘EEA’).[2]

This term refers to the idea that we evolved to maximise our reproductive success, not in the sort of contemporary Western societies in which we now so often find ourselves, but rather in the sorts of environments in which our ancestors spent most of our evolutionary history, namely as Stone Age hunter-gatherers. 

On this view, much behaviour in modern Western societies is recognized as maladaptive, reflecting a mismatch between the environment to which we are adapted and that in which we find ourselves, simply because we have not had sufficient time to evolve psychological mechanisms for dealing with such ‘evolutionary novelties’ as contraception, paternity tests and chocolate bars. 

However, the Roses argue that evolution can occur much faster than this. Thus, they point to: 

“The huge changes produced by artificial selection by humans among domesticated animals – cattle, dogs and… pigeons – in only a few generations. Indeed, unaided natural selection in Darwin’s own Islands, the Galapagos, studied over several decades by the Grants is enough to produce significant changes in the birds’ beaks and feeding habits in response to climate change” (p1-2).

Finally, Rose rejects the ‘modular’ model of the human mind championed by some evolutionary psychologists, whereby the brain is conceptualized as being composed of many separate ‘domain-specific modules’, each specialized for a particular class of adaptive problem faced by ancestral humans.

As evidence against this thesis, Rose points to the absence of a direct one-to-one relationship between the modules postulated by evolutionary psychologists and actual regions of the brain as identified by neuroscientists (p260-2). 

“Whether such modules are more than theoretical entities is unclear, at least to most neuroscientists. Indeed evolutionary psychologists such as Pinker go to some lengths to make it clear that the ‘mental modules’ they invent do not, or at least do not necessarily, map onto specific brain structures” (p260).

Thus, Rose protests: 

“Evolutionary psychology theorists, who… are not themselves neuroscientists, or even, by and large, biologists, show as great a disdain for relating their theoretical concepts to material brains as did the now discredited behaviorists they so despise” (p261).

Yet there is an irony here – namely, in employing evolutionary arguments against evolutionary psychology (i.e. emphasizing the importance of group selection and of recently evolved adaptations), Rose, unlike many of his co-contributors, actually implicitly accepts the idea of an evolutionary approach to understanding human behaviour and psychology. 

In other words, if Rose is indeed right about these matters (group selection, recently evolved adaptations and domain-general psychological mechanisms), this would suggest, not the abandonment of an evolutionary approach in psychology, but rather the need to develop a new evolutionary psychology that gives appropriate weight to such factors as group selection, recently evolved adaptations and domain-general psychological mechanisms.

Actually, however, as we will see, this ‘new’ evolutionary psychology may not be all that new and Rose may find he has unlikely bedfellows in this endeavour. 

Thus, group selection – which tends to imply that conflict between groups such as races and ethnic groups is inevitable – has already been defended by race theorists such as Philippe Rushton and Kevin MacDonald.

For example, Rushton, author of Race, Evolution and Behavior (which I have reviewed here), a notorious racial theorist known for arguing that black people are genetically predisposed to crime, promiscuity and low IQ, has also authored papers with titles like ‘Genetic similarity, human altruism and group-selection’ (Rushton 1989) and ‘Genetic similarity theory, ethnocentrism, and group selection’ (Rushton 1998), which defend and draw on the concept of group selection to explain such behaviours as racism and ethnocentrism.

Similarly, Kevin MacDonald, a former professor of psychology widely accused of anti-Semitism, has also championed the theory of group selection, and even developed a theory of cultural group selection to explain the survival and prospering of the Jewish people in diaspora in his book, A People That Shall Dwell Alone: Judaism as a Group Evolutionary Strategy (which I have reviewed here and here) and its more infamous, and theoretically flawed, sequel, The Culture of Critique (which I have reviewed here).

Similarly, the claim that sufficient time has elapsed for significant evolutionary change to have occurred since the Stone Age (our species’ primary putative environment of evolutionary adaptedness) necessarily also entails recognition that sufficient time has also elapsed for different human populations, including different races, to have significantly diverged in, not just their physiology, but also their psychology, behaviour and cognitive ability.[3]

Finally, rejection of a modular conception of the human mind is consistent with an emphasis on what is perhaps the ultimate domain-general factor in human cognition, namely the general factor of intelligence, as championed by psychometricians, behavioural geneticists, intelligence researchers and race theorists such as Arthur Jensen, Richard Lynn, Chris Brand, Philippe Rushton and the authors of The Bell Curve (which I have reviewed here, here and here), who believe that individuals and groups differ in intellectual ability, that some individuals and groups are more intelligent across the board, and that these differences are partly genetic in origin.

Thus, Kevin MacDonald specifically criticizes mainstream evolutionary psychology for its failure to give due weight to the importance of domain-general mechanisms, in particular general intelligence (MacDonald 1991).

Indeed, Rose himself elsewhere acknowledges that: 

“The insistence of evolutionary psychology theorists on modularity puts a strain on their otherwise heaven-made alliance with behaviour geneticists” (p261).[4]

Thus, in rejecting the tenets of mainstream evolutionary psychology, Rose inadvertently advocates, not so much a new form of evolutionary psychology, as rather an old form of scientific racism.

Of course, Steven Rose is not a racist. On the contrary, he has built a minor, if undistinguished, literary career smearing those he characterises as such.[5]

However, descending to Rose’s own level of argumentation (e.g. employing guilt by association and argumenta ad hominem), he is easily characterised as such. After all, his arguments against the concept of the EEA, and in favour of group-selectionism, directly echo those employed by the very scientific racists (e.g. Rushton) whom Rose has built a minor literary career out of attacking.

Thus, by rejecting many claims of mainstream evolutionary psychologists – about the environment of evolutionary adaptedness, about group-selectionism and about modularity – Rose ironically plays into the hands of the very ‘scientific racists’ whom he purportedly opposes.

Thus, if his friend and comrade Stephen Jay Gould, in his own recycled contribution to ‘Alas Poor Darwin’, underwent a surprising but welcome deathbed conversion to evolutionary psychology, then Steven Rose’s transformation proves even more dramatic but rather less welcome. He might, moreover, find his new bedfellows less good company than he expected.

Endnotes

[1] Throughout his essay, Gould never admits that he was wrong with respect to sociobiology, the then-emerging approach that came to dominate research in animal behaviour but which he and other leftist activists rashly rejected. Rather, he seems to imply, even if he does not directly state, that it was his constructive criticism of sociobiology which led to advances in the field and indeed to the development of evolutionary psychology from human sociobiology. Yet, as anyone who followed the controversies over sociobiology and evolutionary psychology, and read Gould’s writings on these topics, will be aware, this is far from the case.

[2] Actually, the term environment of evolutionary adaptedness was coined, not by evolutionary psychologists, but rather by psychoanalyst and attachment theorist, John Bowlby.

[3] This is a topic addressed in such controversial recent books as Cochran and Harpending’s The 10,000 Year Explosion: How Civilization Accelerated Human Evolution and Nicholas Wade’s A Troublesome Inheritance: Genes, Race and Human History. It is also a central theme of Vincent Sarich and Frank Miele’s Race: The Reality of Human Differences (which I have reviewed here, here and here). Papers discussing the significance of recent and divergent evolution in different populations for the underlying assumptions of evolutionary psychology include Winegard et al (2017) and Frost (2011). Evolutionary psychologists in the 1990s and 2000s, especially those affiliated with Tooby and Cosmides at UCSB, were perhaps guilty of associating the environment of evolutionary adaptedness too narrowly with Pleistocene hunter-gatherers on the African savanna. Thus, Tooby and Cosmides have written that “our modern skulls house a stone age mind”. However, while embracing this catchy if misleading soundbite, in the same article Tooby and Cosmides also write, more accurately:

“The environment of evolutionary adaptedness, or EEA, is not a place or time. It is the statistical composite of selection pressures that caused the design of an adaptation. Thus the EEA for one adaptation may be different from that for another” (Cosmides and Tooby 1997).

Thus, the EEA is not a single time and place that a researcher could visit with the aid of a map, a compass, a research grant and a time machine. Rather, it is a composite of a range of environments, and the relevant range of environments may differ in respect of different adaptations.

[4] This reference to the “otherwise heaven-made alliance” between evolutionary psychologists and behavioural geneticists, incidentally, contradicts Rose‘s own acknowledgement, made just a few pages earlier, that:

“Evolutionary psychologists are often at pains to distinguish themselves from behaviour geneticists and there is some hostility between the two” (p248).

As we have seen, consistency is not Steven Rose’s strong point. See Kanazawa (2004) for the alternative view that general intelligence is itself, paradoxically, a domain-specific module.

[5] I feel the need to emphasise that Rose is not a racist, not least for fear that he might sue me for defamation if I suggest otherwise. And if you think the idea of a professor suing some random, obscure blogger for a blog post is preposterous, then just remember – this is a man who once threatened legal action against the publishers of a comic book – yes, a comic book – and forced the publishers to append an apology to some 10,000 copies of the said comic book, for supposedly misrepresenting his views in a speech bubble, complaining “The author had literally [sic] put into my mouth a completely fatuous statement” (Brown 1999) – an ironic complaint given the fabricated quotation, of a genuinely defamatory nature, attributed to David Barash by Rose’s own wife Hilary in the current volume (see above), for which Rose himself, as co-editor, is vicariously responsible. Rose is an open opponent of free speech. Indeed, Rose even stands accused by the German scientist, geneticist and intelligence researcher Volkmar Weiss of actively instigating the infamously repressive communist regime in East Germany (Weiss 1991). This is, moreover, an allegation that Rose has, to my knowledge, never denied or brought legal action in respect of, despite his known penchant for threatening legal action against the publishers of comic books.

References 

Brown (1999) Origins of the specious, Guardian, November 30.
Frost (2011) Human nature or human natures? Futures 43(8): 740-748.
Gould (1997) Darwinian Fundamentalism, New York Review of Books, June 12.
Kanazawa (2004) General Intelligence as a Domain-Specific Adaptation, Psychological Review 111(2): 512-523.
MacDonald (1991) A perspective on Darwinian psychology: The importance of domain-general mechanisms, plasticity, and individual differences, Ethology and Sociobiology 12(6): 449-480.
Rushton (1989) Genetic similarity, human altruism and group-selection, Behavioral and Brain Sciences 12(3): 503-59.
Rushton (1998). Genetic similarity theory, ethnocentrism, and group selection. In I. Eibl-Eibesfeldt & F. K. Salter (Eds.), Indoctrinability, Ideology and Warfare: Evolutionary Perspectives (pp369-388). Oxford: Berghahn Books.
Tooby & Cosmides (1997) Evolutionary Psychology: A Primer, published at the Center for Evolutionary Psychology website, UCSB.
Tooby & Cosmides (2000) Unpublished Letter to the Editor of New Republic, published at the Center for Evolutionary Psychology website, UCSB.
Weiss (1991) It could be Neo-Lysenkoism, if there was ever a break in continuity! Mankind Quarterly 31: 231-253.
Winegard et al (2017) Human Biological and Psychological Diversity, Evolutionary Psychological Science 3: 159–180.

Edward O Wilson’s ‘Sociobiology: The New Synthesis’: A Book Much Read About, But Rarely Actually Read

Edward O Wilson, Sociobiology: The New Synthesis, Cambridge: Belknap, Harvard, 1975.

Sociobiology – The Field That Dare Not Speak its Name? 

From its first publication in 1975, the reception accorded Edward O Wilson’s ‘Sociobiology: The New Synthesis’ has been divided. 

On the one hand, among biologists, especially specialists in the fields of ethology, zoology and animal behaviour, the reception was almost universally laudatory. Indeed, my 25th Anniversary Edition even proudly proclaims on the cover that it was voted by officers and fellows of the Animal Behavior Society the most important book ever written on animal behaviour, supplanting even Darwin’s own seminal The Expression of the Emotions in Man and Animals.

However, on the other side of the university campus, in social science departments, the reaction was very different. 

Indeed, the hostility that the book provoked was such that ‘sociobiology’ became almost a dirty word in the social sciences, and ultimately throughout the academy, to such an extent that ultimately the term fell into disuse (save as a term of abuse) and was replaced by largely synonymous euphemisms like behavioral ecology and evolutionary psychology.[1]

Sociobiology thus became, in academia, ‘the field that dare not speak its name’. 

Similarly, within the social sciences, even those researchers whose work carried on the sociobiological approach in all but name almost always played down the extent of their debt to Wilson himself. 

Thus, books on evolutionary psychology typically begin with disclaimers acknowledging that the sociobiology of Wilson was, of course, crude and simplistic, and that their own approach is, of course, infinitely more sophisticated. 

Indeed, reading some recent works on evolutionary psychology, one could be forgiven for thinking that evolutionary approaches to understanding human behaviour began around 1989 with the work of Tooby and Cosmides.

Defining the Field 

What then does the word ‘sociobiology’ mean? 

Today, as I have mentioned, the term has largely fallen into disuse, save among certain social scientists who seem to employ it as a rather indiscriminate term of abuse for any theory of human behaviour that they perceive as placing too great a weight on hereditary or biological factors, including many areas of research only tangentially connected with sociobiology as Wilson originally conceived of it (e.g. behavioral genetics).[2]

The term ‘sociobiology’ was not Wilson’s own coinage. It had occasionally been used by biologists before, albeit rarely. However, Wilson was responsible for popularizing – and perhaps, in the long-term, ultimately unpopularizing it too, since, as we have seen, the term has largely fallen into disuse.[3] 

Wilson himself defined ‘sociobiology’ as: 

“The systematic study of the biological basis of all social behavior” (p4; p595).

However, as the term was understood by other biologists, and indeed applied by Wilson himself, sociobiology came to be construed more narrowly. Thus, it was associated in particular with the question of why behaviours evolved and the evolutionary function they serve in promoting the reproductive success of the organism (i.e. just one of Tinbergen’s Four Questions). 

The hormonal, neuroscientific, or genetic causes of behaviours are just as much a part of “the biological basis of behavior” as are the ultimate evolutionary functions of behaviour. However, these lie outside the scope of sociobiology as the term was usually understood.

Indeed, Wilson himself admitted as much, writing in ‘Sociobiology: The New Synthesis’ itself of how: 

“Behavioral biology… is now emerging as two distinct disciplines centered on neurophysiology and… sociobiology” (p6).

Yet, in another sense, Wilson’s definition of the field was also too narrow. 

Thus, behavioural ecologists have come to study all forms of behaviour, not just social behaviour.  

For example, optimal foraging theory is a major subfield within behavioural ecology (the successor field to sociobiology), but concerns feeding behaviour, which may be an entirely solitary, non-social activity. 

Indeed, even some aspects of an organism’s physiology (as distinct from behaviour) have come to be seen as within the purview of sociobiology (e.g. the evolution of the peacock’s tail). 

A Book Much Read About, But Rarely Actually Read 

‘Sociobiology: The New Synthesis’ was a massive tome, numbering almost 700 pages.

As Wilson proudly proclaims in his glossary, it was: 

“Written with the broadest possible audience in mind and most of it can be read with full understanding by any intelligent person whether or not he or she has had any formal training in science” (p577).

Unfortunately, however, the sheer size of the work alone was probably enough to deter most such readers long before they reached p577 where these words appear. 

Indeed, I suspect the very size of the book was a factor in explaining the almost universally hostile reception that the book received among social scientists. 

In short, the book was so large that the vast majority of social scientists had neither the time nor the inclination to actually read it for themselves, especially since a cursory flick through its pages showed that the vast majority of them seemed to be concerned with the behaviour of species other than humans, and hence, as they saw it, of little relevance to their own work. 

Instead, therefore, their entire knowledge of sociobiology was filtered through to them via the critiques of the approach authored by other social scientists, themselves mostly hostile to sociobiology, who presented a straw man caricature of what sociobiology actually represented.

Indeed, the caricature of sociobiology presented by these authors is so distorted that, reading some of these critiques, one often gets the impression that included among those social scientists not bothering to read the book for themselves were most of the social scientists nevertheless taking it upon themselves to write critiques of it. 

Meanwhile, the fact that the field was so obviously misguided (as indeed it often was in the caricatured form presented in the critiques) gave most social scientists yet another reason not to bother wading through its 700 or so pages for themselves. 

As a result, among sociologists, psychologists, anthropologists, public intellectuals, and other such ‘professional damned fools’, as well as the wider semi-educated reading public, ‘Sociobiology: The New Synthesis’ became a book much read about – but rarely actually read (at least in full).

As a consequence, as with other books falling into this category (e.g. the Bible and The Bell Curve) many myths have emerged regarding its contents which are quite contradicted on actually taking the time to read it for oneself. 

The Many Myths of Sociobiology 

Perhaps the foremost myth is that sociobiology was primarily a theory of human behaviour. In fact, as is revealed by even a cursory flick through the pages of Wilson’s book, sociobiology was, first and foremost, a theoretical approach to understanding animal behaviour. 

Indeed, Wilson’s decision to attempt to apply sociobiological theory to humans as well was, it seems, almost something of an afterthought, and necessitated by his desire to provide a comprehensive overview of the behaviour of all social animals, humans included. 
 
This is connected to the second myth – namely, that sociobiology was Wilson’s own theory. In fact, rather than a single theory, sociobiology is better viewed as a particular approach to a field of study, the field in question being animal behaviour. 
 
Moreover, far from being Wilson’s own theory, the major advances in the understanding of animal behaviour that gave rise to what came to be referred to as ‘sociobiology’ were made in the main by biologists other than Wilson himself.  
 
Thus, it was William Hamilton who first formulated inclusive fitness theory (which came to be known as the theory of kin selection); John Maynard Smith who first introduced economic models and game theory into behavioural biology; George C Williams who was responsible for displacing crude group-selectionism in favour of a new focus on the gene itself as the principal unit of selection; while Robert Trivers was responsible for such theories as reciprocal altruism, parent-offspring conflict and differential parental investment theory.
 
Instead, Wilson’s key role was to bring the various strands of the emerging field together, give it a name and, in the process, take far more than his fair share of the resulting flak. 
 
Thus, far from being a maverick theory of a single individual, what came to be known as ‘sociobiology’ was, if not based on accepted biological theory at the time of publication, then at least based on biological theory that came to be recognised as mainstream within a few years of its publication. 
 
Controversy attached almost exclusively to the application of these same principles to explain human behaviour. 

Applying Sociobiology to Humans 

In respect of Wilson’s application of sociobiological theory to humans, misconceptions again abound. 

For example, it is often asserted that Wilson only extended his theory to apply to human behaviour in his infamous final chapter, entitled, ‘Man: From Sociobiology to Sociology’. 

Actually, however, Wilson had discussed the possible application of sociobiological theory to humans several times in earlier chapters. 
 
Often, this was at the end of a chapter. For example, his chapter on “Roles and Castes” closes with a discussion of “Roles in Human Societies” (p312-3). Similarly, the final subsection of his chapter on “Aggression” is titled “Human Aggression” (p 254-5). 
 
Other times, however, humans get a mention in mid-chapter, as in Chapter Fifteen, which is titled ‘Sex and Society’, where Wilson discusses the association between adultery, cuckoldry and violent retribution in human societies, and rightly prophesies that “the implications for the study of humans” of Trivers’ theory of differential parental investment “are potentially great” (p327).
 
Another misconception is that, while he may not have founded the approach that came to be known as sociobiology, it was Wilson who courted controversy, and bore most of the flak, because he was the first biologist brave, foolish, ambitious, farsighted or naïve enough to attempt to apply sociobiological theory to humans. 
 
Actually, however, this is untrue. For example, a large part of Robert Trivers’ seminal paper on reciprocal altruism published in 1971 dealt with reciprocal altruism in humans and with what are presumably specifically human moral emotions, such as guilt, gratitude, friendship and moralistic anger (Trivers 1971). 
 
However, Trivers’ work was published in the Journal of Theoretical Biology and therefore presumably never came to the attention of any of the leftist social scientists largely responsible for the furore over sociobiology, who, being of the opinion that biological theory was wholly irrelevant to human behaviour, and hence to their own field, were unlikely to be regular readers of the journal in question. 

Yet this is perhaps unfortunate since Trivers, unlike Wilson, had impeccable left-wing credentials, which might have deflected some of the overtly politicized criticism (and pitchers of water) that later came Wilson’s way.

Reductionism vs Holism

Among the most familiar charges levelled against Wilson by his opponents within the social sciences, and by contemporary opponents of sociobiology and evolutionary psychology, alongside the familiar and time-worn charges of ‘biological determinism’ and ‘genetic determinism’, is that sociobiology is inherently reductionist, something which is, they imply, very much a bad thing. 
 
It is therefore something of a surprise to find, among the opening pages of ‘Sociobiology: The New Synthesis’, Wilson defending “holism”, as represented, in Wilson’s view, by the field of sociobiology itself, as against what he terms “the triumphant reductionism of molecular biology” (p7).
 
This passage is particularly surprising for anyone who has read Wilson’s more recent work Consilience: The Unity of Knowledge, where he launches a trenchant, unapologetic and, in my view, wholly convincing defence of “reductionism” as representing, not only “the cutting edge of science… breaking down nature into its constituent components” but moreover “the primary and essential activity of science” and hence at the very heart of the scientific method (Consilience: p59). 

Thus, in a quotable aphorism, Wilson concludes: 

“The love of complexity without reductionism makes art; the love of complexity with reductionism makes science” (Consilience: p59).

Of course, whether ‘reductionism’ is a good or bad thing, as well as the extent to which sociobiology can be considered ‘reductionist’, ultimately depends on precisely how we define ‘reductionism’. Moreover, ‘reductionism’, however defined, is surely a matter of degree.

Thus, philosopher Daniel Dennett, in his book Darwin’s Dangerous Idea, distinguishes what he calls “greedy reductionism”, which attempts to oversimplify the world (e.g. Skinnerian behaviourism, which seeks to explain all behaviours in terms of conditioning), from “good reductionism”, which attempts to understand it in all its complexity (i.e. good science).

On the other hand, ‘holistic’ is a word most often employed in defence of wholly unscientific approaches, such as so-called holistic medicine, and, for me, the word itself is almost always something of a red flag. 

Thus, the opponents of sociobiology, in using the term ‘reductionist’ as a criticism, are rejecting the whole notion of a scientific approach to understanding human behaviour. In its place, they offer only a vague, wishy-washy, untestable and frankly anti-scientific obscurantism, whereby any attempt to explain behaviour in terms of causes and effects is dismissed as reductionism and determinism

Yet explaining behaviour, whether the behaviour of organisms, atoms, molecules or chemical substances, in terms of causes and effects is the very essence, if not the very definition, of science. 

In other words, determinism (i.e. the belief that events are determined by causes) is not so much a finding of science as its basic underlying assumption.[4]

Yet Wilson’s own championing of “holism” in ‘Sociobiology: The New Synthesis’ can be made sense of in its historical context. 

In other words, just as Wilson’s defence of reductionism in ‘Consilience’ was a response to the so-called sociobiology debates of the 1970s and 80s, in which the charge of ‘reductionism’ was wielded indiscriminately by the opponents of sociobiology, so Wilson’s defence of holism in ‘Sociobiology: The New Synthesis’ itself must be understood in the context, not of the controversy that this work itself provoked (which Wilson was, at the time, unable to foresee), but rather of a controversy that preceded its publication.

In particular, certain molecular biologists at Harvard, and perhaps elsewhere, led by the brilliant but abrasive molecular biologist James Watson, had come to the opinion that molecular biology was to be the only biology, and that traditional biology, fieldwork and experiments were positively passé.

This controversy is rather less familiar to anyone outside of Harvard University’s biology department than the sociobiology debates, which not only enlisted many academics from outside of biology (e.g. psychologists, sociologists, anthropologists and even philosophers), but also spilled over into the popular media and even became politicized. 

However, within the ivory towers of Harvard University’s department of biology, this controversy seems to have been just as fiercely fought over.[5]

As is clear from ‘Sociobiology: The New Synthesis’, Wilson’s own envisaged “holism” was far from the wishy-washy obscurantism which one usually associates with those championing a ‘holistic approach’, and was thoroughly scientific.

Thus, in On Human Nature, Wilson’s follow-up book to ‘Sociobiology: The New Synthesis’, in which he first concerned himself specifically with the application of sociobiological theory to humans, Wilson gives perhaps his most balanced description of the relative importance of reductionism and holism, and indeed of the nature of science, writing:

“Raw reduction is only half the scientific process… the remainder consist[ing] of the reconstruction of complexity by an expanding synthesis under the control of laws newly demonstrated by analysis… reveal[ing] the existence of novel emergent phenomena” (On Human Nature: p11).

It is therefore in this sense, and in contrast to the reductionism of molecular biology, that Wilson saw sociobiology as ‘holistic’. 

Group Selection? 

One of the key theoretical breakthroughs that formed the basis for what came to be known as sociobiology was the discrediting of group-selectionism, largely thanks to the work of George C Williams, whose ideas were later popularized by Richard Dawkins in The Selfish Gene (which I have reviewed here).[6] 
 
A focus on the individual, or even the gene, as the primary, or indeed the only, unit of selection came to be viewed as an integral component of the sociobiological worldview. Indeed, it was once seriously debated on the pages of the newsletter of the European Sociobiological Society whether one could truly be both a ‘sociobiologist’ and a ‘group-selectionist’ (Price 1996).

It is therefore something of a surprise to discover that the author of ‘Sociobiology: The New Synthesis’, responsible for christening the emerging field, was himself something of a group-selectionist. 

Wilson has recently ‘come out’ as a group-selectionist by co-authoring a paper concerning the evolution of eusociality in ants (Nowak et al 2010). However, reading ‘Sociobiology: The New Synthesis’ leads one to suspect that Wilson had been a closet, or indeed a semi-out, group-selectionist all along. 

Certainly, Wilson repeats the familiar arguments against group-selectionism popularised by Richard Dawkins in The Selfish Gene (which I have reviewed here), but first articulated by George C Williams in Adaptation and Natural Selection (see p106-7). 

However, although he offers no rebuttal to these arguments, this does not prevent Wilson from invoking, or at least proposing, group-selectionist explanations for behaviours elsewhere in the remainder of the book (e.g. p275). 

Moreover, Wilson concludes: 

“Group selection and higher levels of organization, however intuitively implausible… are at least theoretically possible under a wide range of conditions” (p30).
Thus, it is clear that, unlike, say, Richard Dawkins, Wilson did not view group-selectionism as a terminally discredited theory. 

Man: From Sociobiology to Sociology… and Perhaps Evolutionary Psychology 

What then of Wilson’s final chapter, entitled ‘Man – From Sociobiology to Sociology’? 

It was, of course, the only one to focus exclusively on humans, and, of course, the chapter that attracted by far the lion’s share of the outrage and controversy that soon ensued. 

Yet, reading it today, over forty years after it was first written, it is, I feel, rather disappointing. 

Let me be clear, I went in very much wanting to like it. 

After all, Wilson’s general approach was basically right. Humans, like all other organisms, have evolved through a process of natural selection. Therefore, their behaviour, no less than their physiology, or the physiology or behaviour of non-human organisms, must be understood in the light of this fact. 

Moreover, not only were almost all of the criticisms levelled at Wilson misguided, wrongheaded and unfair, but they often bordered upon persecution as well.

The most famous example of this leftist witch-hunting came when, during a speech at the annual meeting of the American Association for the Advancement of Science, he was drenched with a pitcher of water by leftist demonstrators.

However, this was far from an isolated event. For example, an illustration from the book The Moral Animal shows a student placard advising protesters to “bring noisemakers” in order to deliberately disrupt one of Wilson’s speaking engagements (The Moral Animal: illustration p341). 

In short, Wilson seems to have been an early victim of what would today be called ‘deplatforming’ and ‘cancel culture’, phenomena that long predated the coining of these terms.

Thus, one is tempted to see Wilson in the role of a kind of modern Galileo, being, like Galileo, persecuted for his scientific theories, which, like those of Galileo, turned out to be broadly correct. 

Moreover, Wilson’s views were, in some respects, analogous to those of Galileo. Both disputed prevailing orthodoxies in such a way as to challenge the view that humans were somehow unique or at the centre of things, Galileo by suggesting the earth was not at the centre of the solar system, and Wilson by showing that human behaviour was not all that different from that of other animals.[7]

Unfortunately, however, the actual substance of Wilson’s final chapter is rather dated.

Inevitably, any science book will be dated after forty years. However, while this is also true of the book as a whole, it seems especially true of this last chapter, which bears little resemblance to the contents of a modern textbook on evolutionary psychology

This is perhaps inevitable. While the application of sociobiological theory to understanding and explaining the behaviour of other species was already well underway, the application of sociobiological theory to humans was, the pioneering work of Robert Trivers on reciprocal altruism notwithstanding, still very much in its infancy.

Yet, while the substance of the chapter is dated, the general approach was spot on.

Indeed, even some of the advances claimed by evolutionary psychologists as their own were actually anticipated by Wilson. 

Thus, Wilson recognises:

“One of the key questions [in human sociobiology] is to what extent the biogram represents an adaptation to modern cultural life and to what extent it is a phylogenetic vestige” (p458).

He thus anticipates the key evolutionary psychological concept of the Environment of Evolutionary Adaptedness or EEA, whereby it is theorized that humans are evolutionarily adapted, not to the modern post-industrial societies in which so many of us today find ourselves, but rather to the ancestral environments in which our behaviours first evolved.

Wilson proposes to examine human behavior from the disinterested perspective of “a zoologist from another planet”, and concludes:

“In this macroscopic view the humanities and social sciences shrink to specialized branches of biology” (p547).

Thus, for Wilson: 

“Sociology and the other social sciences, as well as the humanities, are the last branches of biology waiting to be included in the Modern Synthesis” (p4).

Indeed, the idea that the behaviour of a single species is alone exempt from principles of general biology, to such an extent that it must be studied in entirely different university faculties by entirely different researchers, the vast majority with little or no knowledge of general biology, nor of the methods and theory of researchers studying the behaviour of all other organisms, reflects an indefensible anthropocentrism

However, despite the controversy these pronouncements provoked, Wilson was actually quite measured in his predictions and even urged caution, writing:

“Whether the social sciences can be truly biologicized in this fashion remains to be seen” (p4).

The evidence of the ensuing forty years suggests, in my view, that the social sciences can indeed be, and are well on the way to being, as Wilson puts it, ‘biologicized’. The only stumbling block has proven to be social scientists themselves, who have, in some cases, proven resistant. 

‘Vaunting Ambition’? 

Yet, despite these words of caution, the scale of Wilson’s intellectual ambition can hardly be exaggerated. 

First, he sought to synthesize the entire field of animal behavior under the rubric of sociobiology and in the process produce the ‘New Synthesis’ promised in the subtitle, by analogy with the Modern Synthesis of Darwinian evolution and Mendelian genetics that forms the basis for the entire field of modern biology. 

Then, in a final chapter, apparently as almost something of an afterthought, he decided to add human behaviour into his synthesis as well. 

This meant, not just providing a new foundation for a single subfield within biology (i.e. animal behaviour), but for several whole disciplines formerly virtually unconnected to biology – e.g. psychology, cultural anthropology, sociology, economics. 

Oh yeah… and moral philosophy and perhaps epistemology too. I forgot to mention that. 

From Sociobiology to… Philosophy?

Indeed, Wilson’s forays into philosophy proved even more controversial than those into social science. Though limited to a few paragraphs in his first and last chapters, they were among the most widely quoted, and critiqued, passages in the whole book.

Not only were opponents of sociobiology (and philosophers) predictably indignant, but even those few researchers bravely taking up the sociobiological gauntlet, and even applying it to humans, remained mostly skeptical. 

In proposing to reconstruct moral philosophy on the basis of biology, Wilson was widely accused of committing what philosophers call the naturalistic fallacy or appeal to nature fallacy

This refers to the principle that, if a behaviour is natural, this does not necessarily make it right, any more than the fact that dying of tuberculosis is natural means that it is morally wrong to treat tuberculosis with such ‘unnatural’ interventions as vaccination or antibiotics. 

In general, evolutionary psychologists have been only too happy to reiterate the sacrosanct inviolability of the fact-value chasm, not least because it allowed them to investigate the evolutionary function of such morally dubious, or indeed morally reprehensible, behaviours as infidelity, rape, war and child abuse, while denying that they are thereby providing a justification for the behaviours in question.

Yet this begs the question: if we cannot derive values from facts, whence can values be arrived at? Can they be derived only from other values? If so, then whence are our ultimate moral values, from which all others are derived, themselves ultimately derived? Must they be simply taken on faith? 

Wilson has recently controversially argued, in his excellent Consilience: The Unity of Knowledge, that, in this context: 

“The posing of the naturalistic fallacy is itself a fallacy” (Consilience: p273).

Leaving aside this controversial claim, it is clear that his point in ‘Sociobiology’ is narrower. 

In short, Wilson seems to be arguing that, in contemplating the appropriateness of different theories of prescriptive ethics (e.g. utilitarianism, Kantian deontology), moral philosophers consult “the emotional control centers in the hypothalamus and limbic system of the brain” (p3). 

Yet these same moral philosophers take these emotions largely for granted. They treat the brain as a “black box” rather than a biological entity the nature of which is itself the subject of scientific study (p562). 

Yet, despite the criticism Wilson’s suggestion provoked among many philosophers, the philosophical implications of recognising that moral intuitions are themselves a product of the evolutionary process have since become a serious and active area of philosophical enquiry. Indeed, among the leading pioneers in this field of enquiry has been the philosopher of biology Michael Ruse, not least in collaboration with Wilson himself (Ruse & Wilson 1986). 

Yet if moral philosophy must be rethought in the light of biology and the evolved nature of our psychology, then the same is also surely true of arguably the other main subfield of contemporary philosophy – namely epistemology.  

Yet Wilson’s comments regarding the relevance of sociobiological theory to epistemology are even briefer than the few sentences he devotes in his opening and closing chapters to moral philosophy, being restricted to less than a sentence – a mere five-word parenthesis in a sentence primarily discussing moral philosophy and philosophers (p3). 

However, what humans are capable of knowing is, like morality, ultimately a product of the human brain – a brain which is itself a biological entity that evolved through a process of natural selection. 

The brain, then, is designed not for discovering ‘truth’, in some abstract, philosophical sense, but rather for maximizing the reproductive success of the organism whose behaviour it controls and directs. 

Of course, for most purposes, natural selection would likely favour psychological mechanisms that produce, if not ‘truth’, then at least a reliable model of the world as it actually operates, so that an organism can modify its behaviour in accordance with this model, in order to produce outcomes that maximize its inclusive fitness under these conditions. 

However, it is at least possible that there are certain phenomena that our brains are, through the very nature of their wiring and construction, incapable of fully understanding (e.g. quantum mechanics or the hard problem of consciousness), simply because such understanding was of no utility in helping our ancestors to survive and reproduce in ancestral environments. 

The importance of evolutionary theory to our understanding of epistemology and the limits of human knowledge is, together with the relevance of evolutionary theory to moral philosophy, a theme explored in philosopher Michael Ruse’s book, Taking Darwin Seriously, and is also the principal theme of such recent works as The Case Against Reality: Why Evolution Hid the Truth from Our Eyes by Donald D Hoffman. 

Dated? 

Is ‘Sociobiology: The New Synthesis’ worth reading today? At almost 700 pages, it represents no idle investment of time. 

Wilson is a wonderful writer even in a purely literary sense, and has the unusual honour, for a working scientist, of being a two-time Pulitzer Prize winner. However, apart from a few provocative sections in the opening and closing chapters, ‘Sociobiology: The New Synthesis’ is largely written in the form of a student textbook, and is not a book one is likely to read on account of its literary merits alone. 

As a textbook, Sociobiology is obviously dated. Indeed, the extent to which it has dated is an indication of the success of the research programme it helped inspire. 

Thus, one of the hallmarks of true science is the speed at which cutting-edge work becomes obsolete.  

Religious believers still cite holy books written millennia ago, while adherents of pseudo-sciences like psychoanalysis and Marxism still pore over the words of Freud and Marx. 

However, the scientific method is a cumulative process based on falsificationism and is moreover no respecter of persons.

Scientific works become obsolete almost as fast as they are published. Modern biologists only rarely cite Darwin. 

If you want a textbook summary of the latest research in sociobiology, I would instead recommend the latest edition of Animal Behavior: An Evolutionary Approach or An Introduction to Behavioral Ecology; or, if your primary interest is human behavior, the latest edition of David Buss’s Evolutionary Psychology: The New Science of the Mind. 

The continued value of ‘Sociobiology: The New Synthesis’ lies not in science, but in the history of science. In that field, it will remain a landmark work in the history of human thought, for both the controversy, and the pioneering research, that followed in its wake. 

Endnotes

[1] Actually, ‘evolutionary psychology’ is not quite a synonym for ‘sociobiology’. Whereas the latter field sought to understand the behaviour of all animals, if not all organisms, the term ‘evolutionary psychology’ is usually employed only in relation to the study of human behaviour. It would be more accurate, then, to say ‘evolutionary psychology’ is a synonym, or euphemism, for ‘human sociobiology’.

[2] Whereas behavioural geneticists focus on heritable differences between individuals within a single population, evolutionary psychologists largely focus on behavioural adaptations that are presumed to be pan-human and universal. Indeed, it is often argued that there is likely to be minimal heritable variation in human psychological adaptations, precisely because such adaptations have been subject to such strong selection pressure as to weed out suboptimal variation, such that only the optimal genotype remains. On this view, substantial heritable variation is found only in respect of traits that have not been subject to intense selection pressure (see Tooby & Cosmides 1990). However, this fails to take into account such phenomena as frequency dependent selection and other forms of polymorphism, whereby different individuals within a breeding population adopt, for example, quite different reproductive strategies. It is also difficult to reconcile with the finding of behavioural geneticists that there is substantial heritable variation in intelligence as between individuals, despite the fact that the expansion of human brain-size over the course of evolution suggests that intelligence has been subject to strong selection pressures.

[3] For example, in 1997, the journal Ethology and Sociobiology, which had by then become, and remains, the leading scholarly journal in the field of what would then have been termed ‘human sociobiology’, and now usually goes by the name of ‘evolutionary psychology’, changed its name to Evolution and Human Behavior.

[4] An irony is that, while science is built on the assumption of determinism, namely the assumption that observed phenomena have causes that can be discovered by controlled experimentation, one of the findings of science is that, at least at the quantum level, determinism is actually not true. This is among the reasons why quantum theory is paradoxically popular among people who don’t really like science (and who, like virtually everyone else, don’t really understand quantum theory). Thus, Richard Dawkins has memorably parodied quantum mysticism as based on the reasoning that: 

“Quantum mechanics, that brilliantly successful flagship theory of modern science, is deeply mysterious and hard to understand. Eastern mystics have always been deeply mysterious and hard to understand. Therefore, Eastern mystics must have been talking about quantum theory all along.”

[5] Indeed, Wilson and Watson seem to have shared a deep personal animosity for one another, Wilson describing how he had once considered Watson, with whom he later reconciled, “the most unpleasant human being I had ever met” – see Wilson’s autobiography, Naturalist. A student of Watson’s describes how, when Wilson was granted tenure at Harvard before Watson:

“It was a big, big day in our corridor” as “Watson could be heard coming up the stairwell… shouting ‘fuck, fuck, fuck’” (Watson and DNA: p98)  

Wilson’s description of Watson’s personality in his memoir is interesting in the light of the later controversy regarding the latter’s comments regarding the economic implications of racial differences in intelligence, with Wilson writing: 

“Watson, having risen to historic fame at an early age, became the Caligula of biology. He was given license to say anything that came to his mind and expect to be taken seriously. And unfortunately, he did so, with a casual and brutal offhandedness.” 

In contrast, geneticist David Reich suggests that Watson’s abrasive personality predated his scientific discoveries and may even have been partly responsible for them, writing: 

“His obstreperousness may have been important to his success as a scientist” (Who We Are and How We Got Here: p263).

[6] Group selection has recently, however, enjoyed something of a resurgence in the form of multi-level selection theory. Wilson himself is very much a supporter of this trend.

[7] Of course, it goes without saying that the persecution to which Wilson was subjected was as nothing compared to that to which Galileo was subjected (see my post, A Modern McCarthyism in Our Midst). 

References 

Nowak et al (2010) The evolution of eusociality. Nature 466: 1057–1062. 

Price (1996) ‘In Defence of Group Selection’. European Sociobiological Society Newsletter, No. 42, October 1996. 

Ruse & Wilson (1986) Moral Philosophy as Applied Science. Philosophy 61(236): 173-192. 

Tooby & Cosmides (1990) On the Universality of Human Nature and the Uniqueness of the Individual: The Role of Genetics and Adaptation. Journal of Personality 58(1): 17-67. 

Trivers (1971) The evolution of reciprocal altruism. Quarterly Review of Biology 46: 35–57.

Donald Symons’ ‘The Evolution of Human Sexuality’: A Founding Work of Modern Evolutionary Psychology

The Evolution of Human Sexuality by Donald Symons (Oxford University Press 1980). 

Research over the last four decades in the field that has come to be known as evolutionary psychology has focused disproportionately on mating behaviour. Geoffrey Miller (1998) has even argued that it is the theory of sexual selection rather than that of natural selection which, in practice, guides most research in this field. 

This does not reflect merely the prurience of researchers. Rather, given that reproductive success is the ultimate currency of natural selection, mating behaviour is, perhaps along with parental investment, the form of behaviour most directly subject to selective pressures.

Almost all of this research traces its ancestry ultimately to Donald Symons’ ‘The Evolution of Human Sexuality’. Indeed, much of it was explicitly designed to test claims and predictions formulated by Symons himself in this very book.

Age Preferences 

For example, in his discussion of the age at which women are perceived as most attractive by males, Symons formulated two alternative hypotheses. 

First, if human evolutionary history were characterized by fleeting one-off sexual encounters (i.e. one-night stands, casual sex and hook-ups), then, he reasoned, men would have evolved to find women most attractive when the latter are at the age of their maximum fertility. 

Female fertility is said to peak around a woman’s mid-twenties since, although women still in their teens have high pregnancy rates, they also face a greater risk of birth complications. 

However, if human evolutionary history were characterized instead by long-term pair bonds, then men would have evolved to be maximally attracted to somewhat younger women (i.e. those at the beginning of their reproductive careers), so that, by entering a long-term relationship with the woman at this time, a male is potentially able to monopolize her entire lifetime reproductive output (p189). 

More specifically, males would have evolved to prefer females, not of maximal fertility, but rather of maximal reproductive value, a term borrowed from demography and population genetics which refers to a person’s expected future reproductive output given their current age. Unlike fertility, a woman’s reproductive value peaks around her mid- to late-teens.  
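For readers who want the definition made precise, the standard demographic formulation (Fisher’s, in conventional textbook notation, rather than anything spelled out in Symons’s own text) is:

\[
v_x \;=\; \frac{e^{rx}}{l_x} \sum_{y \ge x} e^{-ry}\, l_y\, m_y
\]

where l_y is the probability of surviving from birth to age y, m_y is the expected number of offspring produced at age y, and r is the population growth rate. In a stationary population (r = 0) this reduces to the expected number of future offspring conditional on having survived to age x. Fertility tracks m_y itself, which for women peaks in the mid-twenties, whereas reproductive value, which sums over all remaining reproduction, peaks around the onset of the reproductive years, which is why Symons’s two hypotheses predict different ages of peak attractiveness.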

On the basis of largely anecdotal evidence, Symons concludes that human males have evolved to be most attracted to females of maximal reproductive value rather than maximal fertility.  

Subsequent research designed to test between Symons’s rival hypotheses has largely confirmed his speculative hunch that it is younger females in their mid- to late-teens who are perceived by males as most attractive (e.g. Kenrick and Keefe 1992). 

Why Average is Attractive 

Symons is also credited as the first person to recognize that a major criterion of attractiveness is, paradoxically, averageness, or at least the first to recognize the significance of, and possible evolutionary explanation for, this discovery.[1] Thus, Symons argues that: 

“[Although] health and status are unusual in that there is no such thing as being too healthy or too high ranking… with respect to most anatomical traits, natural selection produces the population mean” (p194). 

On this view, deviations from the population mean are interpreted as the result of deleterious mutations or developmental instability, and hence bad genes.[2]

Concealed Ovulation

Support has even emerged for some of Symons’ more speculative hunches. 
 
For example, one of Symons’ two proposed scenarios for the evolution of concealed ovulation, in which he professed “little confidence” (p141), was that this had evolved so as to impede male mate-guarding and enable females to select a biological father for their offspring different from their husbands (p139-141). 
 
Consistent with this theory, studies have found that women’s mate preferences vary throughout their menstrual cycle in a manner compatible with a so-called ‘dual mating strategy’, preferring males evidencing a willingness to invest in offspring at most times, but, when at their most fertile, preferring characteristics indicative of genetic quality (e.g. Penton-Voak et al 1999). 

Meanwhile, a questionnaire distributed via a women’s magazine found that women engaged in extra-marital affairs do indeed report engaging in ‘extra-pair copulations’ (EPCs) at times likely to coincide with ovulation (Bellis and Baker 1990).[3]

The Myth of Female Choice

Interestingly, Symons even anticipated some of the mistakes evolutionary psychologists would be led into. 
 
Thus, he warns that researchers in modern western societies may be prone to overestimate the importance of female choice as a factor in human evolution, because, in their own societies, this is a major factor, if not the major factor, in determining marriage and sexual and romantic relationships (p203).[4]
 
However, in ancestral environments (i.e. what evolutionary psychologists now call the Environment of Evolutionary Adaptedness or EEA) arranged marriages were likely the norm, as they are in most premodern cultures around the world today (p168).[5] 
 
Thus, Symons concludes: 

“There is no evidence that any features of human anatomy were produced by intersexual selection [i.e. female choice]. Human physical sex differences are explained most parsimoniously as the outcome of intrasexual selection (the result of male-male competition)” (p203). 

Thus, human males have no obvious analogue of the peacock’s tail, but they do have substantially greater levels of upper-body strength and violent aggression as compared to females.[6]
 
This was a warning almost entirely ignored by subsequent generations of researchers before being forcefully reiterated by Puts (2010). 

Homosexuality as a ‘Test-Case’ 

An idea of the importance of Symons’s work can be ascertained by comparing it with contemporaneous works addressing the same subject-matter. 
 
Edward O Wilson’s  On Human Nature was first published in 1978, only a year before Symons’s ‘The Evolution of Human Sexuality’. 

However, whereas Symons’s book set out much of the theoretical basis for what would become the modern science of evolutionary psychology, Wilson’s chapter on “Sex” has dated rather less well, and a large portion of the chapter is devoted to introducing a now faintly embarrassing theory of the evolution of homosexuality which has subsequently received no empirical support (see Bobrow & Bailey 2001).[7] 
 
In contrast, Symons’s own treatment of homosexuality is innovative. It is also characteristic of his whole approach and illustrates why ‘The Evolution of Human Sexuality‘ has been described by David Buss as “the first major treatise on evolutionary psychology proper” (Handbook of Evolutionary Psychology: p251). 
 
Rather than viewing all behaviours as necessarily adaptive (as critics of evolutionary psychology, such as Stephen Jay Gould, have often accused sociobiologists of doing),[8] Symons instead focuses on admittedly non-adaptive (or, indeed, even maladaptive) behaviours, not because he believes them to be adaptive, but rather because they provide a unique window on the nature of human sexuality.
 
Accordingly, Symons does not concern himself with how homosexuality evolved, implicitly viewing it as a rare and maladaptive malfunctioning of normal sexuality. Yet the behaviour of homosexuals is of interest to Symons because it provides a window on the nature of male and female sexuality as it manifests itself when freed from the constraints imposed by the conflicting desires of the opposite sex. 
 
On this view, the rampant promiscuity manifested by many homosexual men (e.g. cruising and cottaging in bathhouses and public lavatories, or Grindr hookups) reflects the universal male desire for sexual variety when freed from the constraints imposed by the conflicting desires of women. 

This desire for sexual variety is, of course, reproductively unproductive among homosexual men themselves. However, it evolved because it enhanced the reproductive success of heterosexual men by motivating them to attempt to mate with multiple females and thereby father multiple offspring. 
 
In contrast, burdened with pregnancy and lactation, women’s potential reproductive rate is more tightly constrained than that of men. They therefore have little to gain reproductively by mating with multiple males, since they can usually gestate, and nurse, only one offspring at a time. 
 
It is therefore notable that, among lesbians, there is little evidence of the sort of rampant promiscuity common among gay men. Instead, lesbian relationships seem to be characterized by much the same features as heterosexual coupling (i.e. long-term pair-bonds).
 
The similarity of heterosexual coupling to that of lesbians, and the striking contrast with that of male homosexuals, suggests that it is women, not men, who exert decisive influence in dictating the terms of heterosexual coupling.[9] 
 
Thus, Symons reports:

“There is enormous cross-cultural variation in sexual customs and laws and the extent of male control, yet nowhere in the world do heterosexual relations begin to approximate those typical of homosexual men. This suggests that, in addition to custom and law, heterosexual relations are structured to a substantial degree by the nature and interests of the human female” (p300). 

This conclusion is, of course, diametrically opposite to the feminist contention that it is men who dictate the terms of heterosexual coupling and for whose exclusive benefit such relationships are structured. 
 
It also suggests, again contrary to feminist assumptions of male dominance, that most men are ultimately frustrated in achieving their sexual ambitions to a far greater extent than are most women. 

Thus, Symons concludes: 

“The desire for sexual variety dooms most human males to a lifetime of unfulfilled longing” (p228). 

Here, Symons anticipates Camille Paglia, who was later to famously observe: 

“Men know they are sexual exiles. They wander the earth seeking satisfaction, craving and despising, never content. There is nothing in that anguished motion for women to envy” (Sexual Personae: p19). 

Criticisms of Symons’s Use of Homosexuality as a Test-Case

There is, however, a potential problem with Symons’s use of homosexual behaviour as a window onto the nature of male and female sexuality as they manifest themselves when freed from the conflicting desires of the opposite sex. The whole analysis rests on a questionable premise – namely that homosexuals are, their preference for same-sex partners aside, otherwise similar, if not identical, to heterosexuals of their own sex in their psychology and sexuality. 
 
Symons defends this assumption, arguing: 

“There is no reason to suppose that homosexuals differ systematically from heterosexuals in any way other than their sexual object choice” (p292). 

Indeed, in some respects, Symons seems to see even “sexual object choice” as analogous among homosexuals and heterosexuals of the same sex. 
 
For example, he observes that, unlike women, both homosexual and heterosexual men tend to evaluate prospective mates primarily on the basis of their physical appearance and youthfulness (p295). 

Thus, in contrast to the failure of periodicals featuring male nudes to attract a substantial female audience (see below), Symons notes the existence of a market for gay pornography parallel in most respects to heterosexual porn – i.e. featuring young, physically attractive models in various states of undress (p301). 
 
This, of course, contradicts the feminist notion that men are led to ‘objectify’ women only due to the sexualized portrayal of the latter in the media. 
 
Instead, Symons concludes: 

“That homosexual men are at least as likely as heterosexual men to be interested in pornography, cosmetic qualities and youth seems to me to imply that these interests are no more the result of advertising than adultery and alcohol consumption are the result of country and western music” (p304).[10] 

However, this assumption of the fundamental similarity of heterosexual and homosexual male psychology has been challenged by David Buller in his book, Adapting Minds: Evolutionary Psychology and the Persistent Quest for Human Nature. 
 
Buller cites evidence that male homosexuals are ‘feminized’ in many aspects of their behaviour.

For example, one recent study found that male homosexuals have more female-typical occupational interests than do heterosexual males (Ellis & Ratnasingam 2012).

Thus, one of the few consistent early correlates of homosexuality is gender non-conformity in childhood, and some evidence (e.g. digit ratios, the fraternal birth order effect) has been interpreted to suggest that the level of prenatal exposure to masculinizing androgens (e.g. testosterone) in utero affects sexual orientation (see Born Gay: The Psychobiology of Sex Orientation).

As Buller notes, although gay men seem, like heterosexual men, to prefer youthful sexual partners, they also appear to prefer sexual partners who are, in other respects, highly masculine.[11]

Thus, Buller observes: 

“The males featured in gay men’s magazines embody very masculine, muscular physiques, not pseudo-feminine physiques” (Adapting Minds: p227).

Indeed, the models in such magazines seem in most respects similar in physical appearance to the male models, pop stars, actors and other ‘sex symbols’ and celebrities fantasized about by heterosexual women and girls.
 
How then are we to resolve this apparent paradox? 
 
One possible explanation is that some aspects of the psychology of male homosexuals are feminized but not others – perhaps because different parts of the brain are formed at different stages of prenatal development, at which stages the levels of masculinizing androgens in the womb may vary. 
 
Indeed, there is even some evidence that homosexual males may be hyper-masculinized in some aspects of their physiology.

For example, it has been found that homosexual males report larger penis-sizes than heterosexual men (Bogaert & Hershberger 1999). 
 
This, researchers Glenn Wilson and Qazi Rahman propose, may be because: 

“If it is supposed that the barriers against androgens with respect to certain brain structures (notably those concerned with homosexuality) lead to increased secretion in an effort to break through, or some sort of accumulation elsewhere… then there may be excess testosterone left in other departments” (Born Gay: The Psychobiology of Sex Orientation: p80). 

Another possibility is that male homosexuals actually lie midway between heterosexual men and women in their degree of masculinization.  

On this view, homosexual men come across as relatively feminine only because we naturally tend to compare them to other men (i.e. heterosexual men). However, as compared to women, they may be relatively masculine, as reflected in the male-typical aspects of their sexuality focused upon by Symons. 
 
Interestingly, this latter interpretation suggests the slightly disturbing possibility that, freed from the restraints imposed by women, heterosexual men would be even more indiscriminately promiscuous than their homosexual counterparts.

Evidence consistent with this interpretation is provided by one study from the 1980s, which found that, when male students were approached on a university campus by a female stranger (also a student) with a request to go to bed with her, fully 72% agreed (Clark and Hatfield 1989). 

In contrast, in the same study, not a single one of the 96 females approached by male strangers with the same request on the same university campus agreed to go to bed with the male stranger.

Yet what percentage of the female students subsequently sued the university for sexual harassment was not reported.

Pornography as a “Natural Experiment” 

For Symons, fantasy represents another window onto sexual and romantic desires. Like homosexuality, fantasy is, by its very nature, unconstrained by the conflicting desires of the opposite sex (or indeed by anything other than the imagination of the fantasist). 

Symons later collaborated in an investigation into sexual fantasy by means of a questionnaire (Ellis and Symons 1990). 

However, in the present work, he investigates fantasy indirectly by focusing on what he calls “the natural experiment of commercial periodical publishing” – i.e. pornographic magazines (p182). 
 
In many respects, this approach is preferable to a survey because, even in an anonymous questionnaire, individuals may be less than honest when dealing with a sensitive topic such as their sexual fantasies. On the other hand, they are unlikely to regularly spend money on a magazine unless they are genuinely attracted by its contents. 
 
Before the internet age, softcore pornographic magazines, largely featuring female nudes, commanded sizeable circulations. However, their readership (if indeed ‘readership’ is the right word, since there was typically little reading involved) was almost exclusively male. 
 
In contrast, there was little or no female audience for magazines containing pictures of naked males. Instead, magazines marketed towards women (e.g. fashion magazines) contain, mostly, pictures of other women. 
 
Indeed, when, in the 1970s, attempts were made, in the misguided name of feminism and ‘women’s liberation’, to market magazines featuring male nudes to a female readership, the results were telling. One such title, Viva, abandoned publishing male nudes after just a few years due to lack of interest or demand, and went bust a few years after that; the other, Playgirl, although it did not entirely abandon male nudes, became notorious, as a consequence, for attracting a readership composed in large part of homosexual men. 
 
Symons thus concludes forcefully and persuasively: 

“The notion must be abandoned that women are simply repressed men waiting to be liberated” (p183). 

Indeed, though it has been loudly and enthusiastically co-opted by feminists, this view of women, and of female sexuality – namely, women as “repressed men waiting to be liberated” – represents a quintessentially male viewpoint. 

Indeed, taken to extremes, it has even been used as a justification for rape.

Thus, the curious, sub-Freudian notion that female rape victims actually secretly enjoy being raped seems to rest ultimately on the assumption that female sexuality is fundamentally the same as that of men (i.e. indiscriminately enjoying promiscuous sex) and that it is only women’s sexual ‘repression’ that prevents them from admitting as much.

Romance Literature 

Unfortunately, however, there is a notable omission in Symons’s discussion of pornography as a window into male sexuality – namely, he omits to consider whether there exists any parallel artistic genre that offers equivalent insight into the female psyche. 
 
Later writers on the topic have argued that romance novels (e.g. Mills and Boon, Jane Austen), whose audience is as overwhelmingly female as pornography’s is male, represent the female equivalent of pornography, and that analysis of the content of such works provides insights into female mate preferences parallel to those provided into male psychology by pornography (e.g. Kruger et al 2003; Salmon 2004; see also Warrior Lovers: Erotic Fiction, Evolution and Female Sexuality, co-authored by Symons himself). 

Female Orgasm as Non-Adaptive

An entire chapter of ‘The Evolution of Human Sexuality’, namely Chapter Three (entitled, “The Female Orgasm: Adaptation or Artefact”), is devoted to rejecting the claim that the female orgasm represents a biological adaptation. 
 
This is perhaps excessive. However, it does at least conveniently contradict the claim of some critics of evolutionary psychology, and of sociobiology, such as Stephen Jay Gould, that the field is ‘ultra-Darwinian’ or ‘hyper-adaptationist’ and committed to the misguided notion that all traits are necessarily adaptive.[12]
 
In contrast, Symons champions the thesis that the female capacity for orgasm is simply a non-adaptive by-product of the male capacity for orgasm, the latter of which is of course adaptive. 
 
On this view, the female orgasm (and clitoris) is, in effect, the female equivalent of male nipples (only more fun). 
 
Certainly, Symons convincingly critiques the romantic notion, popularized by Desmond Morris among others, that the female orgasm functions as a mechanism designed to enhance ‘pair-bonding’ between couples. 
 
However, subsequent generations of evolutionary psychologists have developed less naïve models of the adaptive function of female orgasm. 
 
For example, Geoffrey Miller argues that the female orgasm functions as an adaptation for mate choice (The Mating Mind: p239-241). 
 
Of course, at first glance, experiencing orgasm during coitus may appear to be a bit late for mate choice, since, by the time coitus has occurred, the choice in question has already been made. However, given that, among humans, most sexual intercourse is non-reproductive (i.e. does not result in conception), the theory is not altogether implausible. 
 
On this view, the very factors which Symons views as suggesting female orgasm is non-adaptive – such as the relative difficulty of stimulating female orgasm during ordinary vaginal sex – are positive evidence for its adaptive function in carefully discriminating between suitors/lovers to determine their desirability as a father for a woman’s offspring. 
 
Nevertheless, at least according to the stringent criteria set out by George C Williams in his classic Adaptation and Natural Selection, as well as the more general principle of parsimony (also known as Occam’s Razor), the case for female orgasm as an adaptation remains unproven (see also Sherman 1989; The Case of the Female Orgasm: Bias in the Science of Evolution).

Out-of-Date?

Much of Symons’ work is dedicated to challenging the naïve group-selectionism of Sixties ethologists, especially Desmond Morris. Although scientifically now largely obsolete, Morris’s work still retains a certain popular resonance and therefore this aspect of Symons’s work is not entirely devoid of contemporary relevance. 
 
In place of Morris’s rather idyllic notion that humans are a naturally monogamous ‘pair-bonding’ species, Symons advocates instead an approach rooted in the individual-level (or even gene-level) selection championed by Richard Dawkins in The Selfish Gene (reviewed here). 
 
This leads to some decidedly cynical conclusions regarding the true nature of sexual and romantic relations among humans. 
 
For example, Symons argues that it is adaptive for men to be less sexually attracted to their wives than they are to other women – because they are themselves liable to bear the cost of raising offspring born to their wives but not those born to other women with whom they mate (e.g. those mated to other males). 
 
Another cynical conclusion is that the primary emotion underlying the institution of marriage, both cross-culturally and in our own society, is neither love nor even lust, but rather male sexual jealousy and proprietariness (p123). 

Marriage, then, is an institution born not of love, but of male sexual jealousy and the behaviour known to biologists as mate-guarding. 
 
Meanwhile, in his excellent chapter on ‘Copulation as a Female Service’ (Chapter Eight), Symons suggests that many aspects of heterosexual romantic relationships may be analogous to prostitution
 
As well as its excessive focus on debunking sixties ethologists like Morris, ‘The Evolution of Human Sexuality’ is also out-of-date in a more serious respect. Namely, it fails to incorporate the vast amount of empirical research on human sexuality from a sociobiological perspective which has been conducted since the first publication of his work. 
 
For a book first published some four decades ago, this is inevitable – not least because much of this empirical research was inspired by Symons’ own ideas and specifically designed to test theories formulated in this very work. 
 
In addition, potentially important new factors in human reproductive behaviour that even Symons did not foresee have been identified, for example, the role of fluctuating asymmetry as a criterion for, or at least a correlate of, physical attractiveness. 
 
For an updated discussion of the evolutionary psychology of human sexual behaviour, complete with the latest empirical data, readers should consult the latest edition of David Buss’s The Evolution Of Desire: Strategies of Human Mating. 
 
In contrast, in support of his theories Symons relies largely on classical literary insight, anecdote and, most importantly, a review of the ethnographic record. 
 
However, this latter focus ensures that, in some respects, the work remains of more than merely historical interest. 
 
After all, one of the more legitimate criticisms levelled against recent research in evolutionary psychology is that it is insufficiently cross-cultural and, with several notable exceptions (e.g. Buss 1989), relies excessively on research conducted among convenience samples of students at western universities. 
 
Given costs and practicalities, this is inevitable. However, for a field that aspires to understand a human nature presumed to be universal, such a method of sampling is highly problematic. 
 
‘The Evolution of Human Sexuality’ therefore retains its importance for two reasons. 

First, it is the founding work of modern evolutionary psychological research into human sexual behaviour, and hence of importance as a landmark and classic text in the field, as well as in the history of science more generally. 

Second, it also remains of value to this day for the cross-cultural and ethnographic evidence it marshals in support of its conclusions. 

Endnotes

[1] Actually, the first person to discover this, albeit inadvertently, was the great Victorian polymath, pioneering statistician and infamous eugenicist Francis Galton, who, attempting to discover abnormal facial features possessed by the criminal class, succeeded in morphing the faces of multiple convicted criminals. The result was, presumably to his surprise, an extremely attractive facial composite, since all the various minor deformities of the many convicted criminals whose faces he morphed actually balanced one another out to produce a face with few if any abnormalities or disproportionate features.

[2] More recent research in this area has focused on the related concept of fluctuating asymmetry.

[3] However, recent meta-analyses have called into question the evidence for cyclical fluctuations in female mate preferences (Wood et al 2014; cf. Gildersleeve et al 2014), and it has been suggested that such findings may represent casualties of the so-called replication crisis in psychology. It has also been questioned whether ovulation in humans is indeed concealed, or is actually detectable by subtle cues (e.g. Miller et al 2007), for example, changes in face shape (Oberzaucher et al 2012), breast symmetry (Scutt & Manning 1996) and body scent (Havlicek et al 2006).

[4] Another factor leading recent researchers to overestimate the importance of female choice in human evolution is their feminist orientation, since female choice gives women an important role in human evolution, even, paradoxically, in the evolution of male traits.

[5] Actually, in most cultures, only a girl’s first marriage is arranged on her behalf by her parents. Second and third marriages are usually negotiated by the woman herself. However, since female fertility peaks early, it is a girl’s first marriage that is usually of the most reproductive, and hence Darwinian, significance.

[6] Indeed, the anatomical trait in humans that perhaps shows the most evidence of being a product of intersexual selection is a female one, namely the female breasts, since the latter are, unlike the mammary glands of most other mammals, permanently present from puberty on, not only during lactation, and composed primarily of fatty tissue, not milk (Møller 1995; Manning et al 1997; Havlíček et al 2016). 

[7] Wilson terms his theory “the kin selection theory hypothesis of the origin of homosexuality” (p145). However, a better description might be the ‘helper at the nest theory of homosexuality’, the basic idea being that, like sterile castes in some insects, and like older siblings in some bird species where new nest sites are unavailable, homosexuals, rather than reproducing themselves, direct their energies towards assisting their collateral kin in successfully raising, and provisioning, their own offspring (p143-7). The main problem with this theory is that there is no evidence that homosexuals do indeed devote any greater energies towards assisting their kin in this respect. On the contrary, homosexuals instead seem to devote much of their time and resources towards their own sex life, much as do heterosexuals (Bobrow & Bailey 2001).

[8] As we will see, contrary to the stereotype of evolutionary psychologists as viewing all traits as necessarily adaptive, as they are accused of doing by the likes of Gould, Symons also argued that the female orgasm and menopause are not adaptations, but rather by-products of other adaptations.

[9] This is not necessarily to say that rampant, indiscriminate promiscuity is a male utopia, or the ideal of any man, be he homosexual or heterosexual. On the contrary, the ideal mating system for any individual male is harem polygyny in which the chastity of his own partners is rigorously policed (see Despotism and Differential Reproduction, which I have reviewed here and here). However, given an equal sex ratio, this would condemn other males to celibacy. Similarly, Symons reports that “Homosexual men, like most people, usually want to have intimate relationships”. However, he observes:

“Such relationships are difficult to maintain, largely owing to the male desire for sexual variety; the unprecedented opportunity to satisfy this desire in a world of men, and the male tendency towards sexual jealousy” (p297).  

It does indeed seem to be true that homosexual relationships, especially those of gay males, are, on average, of shorter duration than are heterosexual relationships. However, Symons’ claim regarding “the male tendency towards sexual jealousy” is questionable. Actually, subsequent research in evolutionary psychology has suggested that men are no more prone to jealousy than women, but rather that it is the sorts of behaviours which most intensely provoke such jealousy that differentiate the sexes (Buss 1992). However, many gay men practice open relationships, which seems to suggest a lack of jealousy – or perhaps this simply reflects a recognition of the difficulty of maintaining relationships given, as Symons puts it, “the male desire for sexual variety [and] the unprecedented opportunity to satisfy this desire in a world of men”. 

[10] Indeed, far from men being led to objectify women due to the portrayal of women in a sexualized manner in the media, Symons suggests:

“There may be no positive feedback at all; on the contrary, constant exposure to pictures of nude and nearly nude female bodies may to some extent habituate men to these stimuli” (p304).

[11] Admittedly, some aspects of body-type typically preferred by gay males (especially the twink) do reflect apparently female traits, especially a relative lack of body-hair. However, lack of body-hair is also obviously indicative of youth. Moreover, a relative lack of body-hair also seems to be a trait favoured in men by heterosexual women. For a discussion of the relative preference on the part of (heterosexual) females for masculine versus feminine traits in male sex partners, see the final section of this review.

[12] Incidentally, Symons also rejects the theory that the female menopause is adaptive, a theory which has subsequently become known as the grandmother hypothesis (p13). Also, although it does not directly address the issue, Symons’ discussion of human rape (p276-85) has also been interpreted as implicitly favouring the theory that rape is a by-product of the greater male desire for commitment-free promiscuous sex, rather than the product of a specific rape adaptation in males (see Palmer 1991; and A Natural History of Rape: reviewed here). 

References 

Bellis & Baker (1990). Do females promote sperm competition?: Data for humans. Animal Behavior, 40: 997-999 
Bobrow & Bailey (2001). Is male homosexuality maintained via kin selection? Evolution and Human Behavior, 22: 361-368 
Bogaert & Hershberger (1999) The relation between sexual orientation and penile size. Archives of Sexual Behavior 28(3): 213-221. 
Buss (1989). Sex differences in human mate preferences: Evolutionary hypotheses tested in 37 cultures. Behavioral and Brain Sciences 12: 1-49
Ellis & Ratnasingam (2012) Gender, Sexual Orientation, and Occupational Interests: Evidence of Androgen Influences. Mankind Quarterly  53(1): 36–80
Ellis & Symons (1990) Sex differences in sexual fantasy: An evolutionary psychological approach, Journal of Sex Research 27(4): 527-555.
Gildersleeve, Haselton & Fales (2014) Do women’s mate preferences change across the ovulatory cycle? A meta-analytic review. Psychological Bulletin 140(5):1205-59.
Havlíček, Dvořáková, Bartoš & Flegr (2006) Non‐Advertized does not Mean Concealed: Body Odour Changes across the Human Menstrual Cycle. Ethology 112(1): 81-90.
Havlíček et al (2016) Men’s preferences for women’s breast size and shape in four cultures. Evolution and Human Behavior 38(2): 217–226 
Kenrick & Keefe (1992). Age preferences in mates reflect sex differences in human reproductive strategies. Behavioral and Brain Sciences, 15: 75-133. 
Kruger et al (2003) Proper and Dark Heroes as Dads and Cads. Human Nature 14(3): 305-317 
Manning et al (1997) Breast asymmetry and phenotypic quality in women. Ethology and Sociobiology 18(4): 223–236 
Miller (1998). How mate choice shaped human nature: A review of sexual selection and human evolution. In C. Crawford & D. Krebs (Eds.), Handbook of Evolutionary Psychology: Ideas, Issues, and Applications (pp. 87-129). Mahwah, NJ: Lawrence Erlbaum
Miller, Tybur & Jordan (2007). Ovulatory cycle effects on tip earnings by lap dancers: economic evidence for human estrous? Evolution and Human Behavior. 28(6):375–381 
Møller et al (1995) Breast asymmetry, sexual selection, and human reproductive success. Ethology and Sociobiology 16(3): 207-219 
Palmer (1991) Human Rape: Adaptation or By-Product? Journal of Sex Research 28(3): 365-386 
Penton-Voak et al (1999) Menstrual cycle alters face preferences. Nature 399: 741-742. 
Puts (2010) Beauty and the Beast: Mechanisms of Sexual Selection in Humans. Evolution and Human Behavior 31: 157-175. 
Salmon (2004) The Pornography Debate: What Sex Differences in Erotica Can Tell Us About Human Sexuality. In Evolutionary Psychology, Public Policy and Personal Decisions (London: Lawrence Erlbaum Associates, 2004) 
Scutt & Manning (1996) Symmetry and ovulation in women. Human Reproduction 11(11):2477-80
Sherman (1989) The clitoris debate and levels of analysis, Animal Behaviour, 37: 697-8
Wood et al (2014) Meta-analysis of menstrual cycle effects on women’s mate preferences. Emotion Review 6(3): 229–249.

Judith Harris’s ‘The Nurture Assumption’: By Parent or Peers

Judith Harris, The Nurture Assumption: Why Children Turn Out the Way They Do. Free Press, 1998.

Almost all psychological traits on which individual humans differ, from personality and intelligence to mental illness, are now known to be substantially heritable. In other words, individual differences in these traits are, at least in part, a consequence of genetic differences between individuals. 

This finding is so robust that it has even been termed by Eric Turkheimer the First Law of Behaviour Genetics and, although once anathema to most psychologists save a marginal fringe of behavioural geneticists, it has now, under the sheer weight of evidence produced by the latter, belatedly become the new orthodoxy. 

On reflection, however, this transformation is not entirely a revelation. 

After all, it was only in the mid-twentieth century that the curious notion that individual differences were entirely the product of environmental differences first arose, and, even then, this delusion was largely restricted to psychologists, sociologists, feminists and other such ‘professional damned fools’, along with those among the semi-educated public who seek to cultivate an air of intellectualism by aping the former’s affectations. 

Before then, poets, peasants and laypeople alike had long recognized that ability, insanity, temperament and personality all tended to run in families, just as physical traits like stature, complexion, hair and eye colour also do.[1]

However, while the discovery of a heritable component to character and ability merely confirms the conventional wisdom of an earlier age, another behavioural genetic finding, far more surprising and counterintuitive, has passed relatively unreported. 

This is the discovery that the so-called shared family environment (i.e. the environment shared by siblings, or non-siblings, raised in the same family home) actually has next to no effect on adult personality and behaviour. 

This we know from such classic study designs in behavioural genetics as twin studies, adoption studies and family studies.  

In short, individuals of a given degree of relatedness, whether identical twins, fraternal twins, siblings, half-siblings or unrelated adoptees, are, by the time they reach adulthood, no more similar to one another in personality or IQ when they are raised in the same household than when they are raised in entirely different households. 
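
The logic of these designs can be made explicit. In the standard ‘ACE’ variance decomposition used in twin research (a textbook formulation, not one taken from Harris’s book), the variance of a trait is partitioned into additive genetic (A), shared-environmental (C) and non-shared-environmental (E) components, each expressed as a proportion of the total:

\[
A + C + E = 1, \qquad r_{MZ} = A + C, \qquad r_{DZ} = \tfrac{1}{2}A + C
\]

so that, by Falconer’s formula,

\[
A = 2\,(r_{MZ} - r_{DZ}), \qquad C = 2\,r_{DZ} - r_{MZ}, \qquad E = 1 - r_{MZ}.
\]

The finding that the shared family environment contributes next to nothing is thus equivalent to the empirical observation that, for most adult traits, the identical-twin correlation is roughly double the fraternal-twin correlation, which drives the estimate of C towards zero.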

The Myth of Parental Influence 

Yet parental influence has long loomed large in virtually every psychological theory of child development, from the Freudian Oedipus complex and Bowlby’s attachment theory to the whole literary genre of books aimed at instructing anxious parents on how best to raise their children so as to ensure that the latter develop into healthy, functional, successful adults. 

Indeed, not only is the conventional wisdom among psychologists overturned, but so is the conventional wisdom among sociologists – for one aspect of the shared family environment is, of course, household income and social class

Thus, if the family that a person is brought up in has next to no impact on their psychological outcomes as an adult, then this means that the socioeconomic status of the family home in which they are raised also has no effect. 

Poverty, or a deprived upbringing, then, has no effect on IQ, personality or the prevalence of mental illness, at least by the time a person has reached adulthood.[2]

Neither is it only leftist sociologists who have proved mistaken. 

Thus, just as leftists use economic deprivation as an indiscriminate, catch-all excuse for all manner of social pathology (e.g. crime, unemployment, educational underperformance), so conservatives are apt to place the blame on divorce, family breakdown, having children out of wedlock and the consequential increase in the prevalence of single-parent households. 

However, all these factors are, once again, part of the shared family environment – and according to the findings of behavioural genetics, they have next to no influence on adult personality or intelligence. 

Of course, chaotic or abusive family environments do indeed tend to produce offspring with negative life outcomes. 

However, none of this proves that it was the chaotic or abusive family environment that caused the negative outcomes. 

Rather, another explanation is at hand – perhaps the offspring simply biologically inherit the personality traits of their parents, the very personality traits that caused their family environment to be so chaotic and abusive in the first place.[3] 

For example, parents who divorce or bear offspring out-of-wedlock likely differ in personality from those who first get married then stick together, perhaps being more impulsive or less self-disciplined and conscientious (e.g. less able to refrain from having children from a relationship that was destined to be fleeting, or less able to persevere and make the relationship last). 

Their offspring may, then, simply biologically inherit these undesirable personality attributes, which then themselves lead to the negative social outcomes associated with being raised in single-parent households or broken homes. The association between family breakdown and negative outcomes for offspring might, then, reflect simply the biological inheritance of personality. 

Similarly, as leftists are fond of reminding us, children from economically-deprived backgrounds do indeed have lower recorded IQs and educational attainment than those from more privileged family backgrounds, as well as other negative outcomes as adults (e.g. lower earnings, higher rates of unemployment). 

However, this does not prove that coming from a deprived family background necessarily itself depresses your IQ, educational attainment or future salary. 

Rather, an equally plausible possibility is that offspring simply biologically inherit the low intelligence of their parents – the very low intelligence which was likely a factor causing the low socioeconomic status of their parents, since intelligence is known to correlate strongly with educational and occupational advancement.[4]

In short, the problem with this body of research purporting to demonstrate the influence of parents and family background on psychological and behavioural outcomes for offspring is that it fails to control for the heritability of personality and intelligence, an obvious confounding factor. 
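
The confound is easy to demonstrate with a toy simulation (a hypothetical sketch of my own, not an analysis from Harris’s book): even when family socioeconomic status is given no causal effect whatsoever on offspring IQ, a naive correlation between the two still emerges, simply because parental IQ influences both.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Parental IQ varies across families.
    parent_iq = rng.normal(100, 15, n)

    # Family SES is partly caused by parental IQ, plus luck.
    family_ses = 0.5 * (parent_iq - 100) / 15 + rng.normal(0, 1, n)

    # Offspring IQ is inherited (imperfectly) from parents, plus noise.
    # By construction, family_ses has NO causal effect on offspring_iq.
    offspring_iq = 100 + 0.5 * (parent_iq - 100) + rng.normal(0, 10, n)

    # Yet the naive correlation between family SES and offspring IQ is clearly non-zero.
    print(round(np.corrcoef(family_ses, offspring_iq)[0, 1], 2))

With these (arbitrary) parameters the printed correlation comes out at roughly 0.27, even though family background does nothing at all in the model; a researcher who failed to control for parental IQ would mistake this association for an environmental effect of the family home.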

The Non-Shared Environment

However, not everything is explained by heredity. As a crude but broadly accurate generalization, only about half the variation for most psychological traits is attributable to genes. This leaves about half of the variation in intelligence, personality and mental illness to be explained by environmental factors.  

What are these environmental factors if they are not to be sought in the shared family environment

The obvious answer is, of course, the non-shared family environment – i.e. the ways in which even children brought up in the same family-home nevertheless experience different micro-environments, both within the home and, perhaps more importantly, outside it. 

Thus, even the fairest and most even-handed parents inevitably treat their different offspring differently in some ways.  

Indeed, among the principal reasons that parents treat their different offspring differently is precisely because the different offspring themselves differ in their own behaviour.  

Corporal punishment 

Rather than differences in the behaviour of different children resulting from differences in how their parents treat them, it may be that differences in how parents treat their children may reflect responses to differences in the behaviour of the children themselves. 

In other words, the psychologists have the direction of causation precisely backwards. 

Take, for example, one particularly controversial issue, namely the physical chastisement of children by their parents as a punishment for bad behaviour (e.g. spanking). 

Thus, some psychologists have argued that physical chastisement actually causes misbehaviour. 

As evidence, they cite the fact that children who are spanked more often by their parents or caregivers on average actually behave worse than those whose caregivers only rarely or never spank the children entrusted to their care.  

This, they claim, is because, in employing spanking as a form of discipline, caregivers are inadvertently imparting the message that violence is a good way of solving your problems. 

Actually, however, I suspect children are more than capable of working out for themselves that violence is often an effective means of getting your way, at least if you have superior physical strength to your adversary. Unfortunately, this is something that, unlike reading, arithmetic and long division, does not require explicit instruction by teachers or parents. 

Instead, a more obvious explanation for the correlation between spanking and misbehaviour in children is not that spanking causes misbehaviour, but rather that misbehaviour causes spanking. 

Indeed, once one thinks about it, this is in fact rather obvious: If a child never seriously misbehaves, then a parent likely never has any reason to spank that child, even if the parent is, in principle, a strict disciplinarian; whereas, on the other hand, a highly disobedient child is likely to try the patience of even the most patient caregiver, whatever his or her moral opposition to physical chastisement in principle. 

In other words, causation runs in exactly the opposite direction to that assumed by the naïve psychologists.[5] 

Another factor may also be at play – namely, offspring biologically inherit from their parents the personality traits that cause both the misbehaviour and the punishment. 

In other words, parents with aggressive personalities may be more likely to lose their temper and physically chastise their children, while children who inherit these aggressive personalities are themselves more likely to misbehave, not least by behaving in an aggressive or violent manner. 

However, even if parents treat their different offspring differently owing to the different behaviour of the offspring themselves, this is not the sort of environmental factor capable of explaining the residual non-shared environmental effects on offspring outcomes. 

After all, this merely raises the question of what caused these differences in offspring behaviour in the first place. 

If the differences in offspring behaviour exist prior to differences in parental responses to this behaviour, then these differences cannot be explained by the differences in parental responses.  

Peer Groups 

This brings us back to the question of the environmental causes of offspring outcomes – namely, if about half the differences among children’s IQs and personalities are attributable to environmental factors, but these environmental factors are not to be found in the shared family environment (i.e. the environment shared by children raised in the same household), then where are these environmental factors to be sought? 

The search for environmental factors affecting personality and intelligence has, thus far, been largely unsuccessful. Indeed, some behavioural geneticists have almost gone as far as conceding scholarly defeat in identifying correlates for the environmental portion of the variance. 

Thus, leading contemporary behavioural geneticist Robert Plomin in his recent book, Blueprint: How DNA Makes Us Who We Are, concludes that those environmental factors that affect cognitive ability, personality, and the development of mental illness are, as he puts it, ‘unsystematic’ in nature. 

In other words, he seems to be saying that they are mere random noise. This is tantamount to accepting that the null hypothesis is true. 

Judith Harris, however, has a quite different take. According to Harris, environmental causes must be sought, not within the family home, but rather outside it – in a person’s interactions with their peer-group and the wider community.[6]

Environment ≠ Nurture 

Thus, Harris argues that the so-called nature-nurture debate is misnamed, since the word ‘nurture’ usually refers to deliberate care and moulding of a child (or of a plant or animal). But many environmental effects are not deliberate. 

Thus, Harris repeatedly references behaviourist John B. Watson’s infamous boast: 

Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select—doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.” 

Yet what strikes me as particularly preposterous about Watson’s boast is not its radical environmental determinism, nor even its rather convenient unfalsifiability.[7] 

Rather, what strikes me as most preposterous about Watson’s claim is its frankly breath-taking arrogance. 

Thus, Watson not only insisted that it was environment alone that entirely determined adult personality. In this same quotation, he also proclaimed that he already fully understood the nature of these environmental effects to such an extent that, given omnipotent powers to match his evidently already omniscient understanding of human development, he could produce any outcome he wished. 

Yet, in reality, environmental effects are anything but clear-cut. Pushing a child in a certain direction, or into a certain career, may sometimes have the desired effect, but other times have the exact opposite effect to that desired, provoking the child to rebel against parental dictates. 

Thus, even to the extent that environment does determine outcomes, the precise nature of the environmental factors implicated, and their interaction with one another, and with the child’s innate genetic endowment, is surely far more complex than the simple mechanisms proposed by behaviourists like Watson (e.g. reinforcement and punishment). 

Language Acquisition 

The most persuasive evidence for Harris’s theory of the importance of peer groups comes from an interesting and widely documented peculiarity of language acquisition. 

The children of immigrants, whose parents speak a different language inside the family home, and may even themselves be monolingual, nevertheless typically grow up to speak the language of their host culture rather better than they do the language to which they were first exposed in the family home. 

Indeed, while their parents may never achieve fluency in the language of their host culture, having missed out on the Chomskian critical period for language acquisition, their children often actually lose the ability to speak their parents’ language, much to the consternation of parents and grandparents. 

Yet, from a sociobiological or evolutionary psychological perspective, such an outcome is obviously adaptive. 

If a child is to succeed in wider society, they must master its language, whereas, if their parents’ first language is not spoken anywhere in their host society except in their family, then it is of limited utility, and, once their parents themselves become proficient in the language of the host culture, becomes entirely redundant (see The Ethnic Phenomenon (reviewed here, here and here): p258). 

Code-Switching 

Harris suggests that the same applies to personality. Just as children of immigrants switch between one language and another at home and at school, so they also adopt different personalities in each setting. 

Thus, many parents are surprised to be told by their children’s teachers at parents’ evenings that their offspring is quiet and well-behaved at school, since, as they themselves report, he or she isn’t at all like that at home. 

Yet, at home, a child has only, at most, a sibling or two with whom to compete for his parents’ attention. In contrast, at school, he or she has a whole class with whom to compete for their teacher’s attention.

It is therefore unsurprising that most children are less outgoing at school than they are at home with their parents. 

For example, an older sibling might be able to push his little brother around at home. But, if he is small for his age, he is unlikely to be able to get away with the same behaviour among his peers at school. 

Children therefore adopt two quite different personalities – one for interactions with family and siblings, and another for among their peers.

This then, for Harris, explains why, perhaps surprisingly, birth-order has generally been found to have little if any effect on personality, at least as personality manifests itself outside the family home. 

An Evolutionary Theory of Socialization? 

Interestingly, even evolutionary psychologists have not been immune from the delusion of parental influence. Thus, in one influential paper, anthropologists Patricia Draper and Henry Harpending argued that offspring calibrate their reproductive strategy by reference to the presence or absence of a father in their household (Draper & Harpending 1982). 

On this view, being raised in a father-absent household is indicative of a social environment where low male parental investment is the norm, and hence offspring adjust their own reproductive strategy accordingly, adopting a promiscuous, low-investment mating strategy characterized by precocious sexual development and an inability to maintain lasting long-term relationships (Draper & Harpending 1982; Belsky et al 1991). 

There is indeed, as these authors amply demonstrate, a consistent correlation between father-absence during development and both earlier sexual development and more frequent partner-switching in later life. 

Yet there is also another, arguably more obvious, explanation readily at hand to explain this association. Perhaps offspring simply inherit biologically the personality traits, including sociosexual orientation, of their parents. 

On this view, offspring raised in single-parent households are more likely to adopt a promiscuous, low-investment mating strategy simply because they biologically inherit the promiscuous sociosexual orientation of their parents, the very promiscuous sociosexual orientation that caused the latter to have children out-of-wedlock or from relationships that were destined to break down and hence caused the father-absent childhood of their offspring. 

Moreover, even on a priori theoretical grounds, Draper, Harpending and Belsky’s reasoning is dubious. 

After all, whether you personally were raised in a one- or two-parent family is obviously a very unreliable indicator of the sorts of relationships prevalent in the wider community into which you are born, since it represents a sample size of just one. 

Instead, therefore, it would be far more reliable to calibrate your reproductive strategy in response to the prevalence of one-parent households in the wider community at large, rather than the particular household type into which you happen to have been born.  

This, of course, directly supports Harris’s own theory of ‘peer group socialization’. 

In short, to the extent that children do adapt to the environment and circumstances of their upbringing (and they surely do), they must integrate into, adopt the norms of, and a reproductive strategy to maximize their fitness within, the wider community into which they are born, rather than the possibly quite idiosyncratic circumstances and attitudes of their own family. 

Absent Fathers, from Upper-Class to Under-Class 

Besides language-acquisition among the children of immigrants, another example cited by Harris in support of her theory of ‘peer group socialization’ is the culture, behaviours and upbringing of British upper-class males.

Here, boys were, and, to some extent, still are, reared primarily, not by their parents, but rather by nannies, governesses and, more recently, in exclusive fee-paying all-male boarding schools. 

Yet, despite having next to no contact with their fathers throughout most of their childhood, these boys nevertheless managed somehow to acquire manners, attitudes and accents similar, if not identical, to those of their upper-class fathers, and not at all those of the middle-class nannies, governesses and masters with whom they spent most of their childhood being raised. 

Yet this phenomenon is by no means restricted to the British upper-classes. On the contrary, rather than citing the example of the British upper-classes in centuries gone by, Harris might just as well have cited that of the contemporary underclass in Britain and elsewhere, since what was once true of the British upper-classes is now equally true of the underclass. 

Just as the British upper-classes were once raised by governesses and nannies and in private schools with next to no contact with their fathers, so contemporary underclass males are similarly raised in single-parent households, often born to unwed mothers, and typically have little if any contact with their biological fathers. 

Here, as Warren Farrell observes in his seminal The Myth of Male Power (which I have reviewed here and here), there is now “a new nuclear family: woman, government and child”, what Farrell terms “Government as a Substitute Husband”. 

Yet, once again, these underclass males, raised by single parents with the assistance of the state, typically turn out much like their absent fathers with whom they have had little if any contact, often going on to promiscuously father a succession of offspring themselves, with whom they likewise have next to no contact. 

Abuse 

But what of actual abuse? Surely this has a long-term devastating psychological impact on children. This, at any rate, is the conventional wisdom, and questioning this wisdom is tantamount to contemporary heresy, with attendant persecution. 

Take, for example, what is perhaps the form of child abuse that provokes the most outrage and disgust – namely, sexual abuse. Here, it is frequently asserted that paedophiles were almost invariably themselves abused as children, which creates a so-called ‘cycle of abuse’. 

However, there are at least three problems with this claim. 

First, it cannot explain how the first person in this cycle became a paedophile. 

Second, we might doubt whether it is really true that paedophiles are disproportionately likely to have themselves been abused as children. After all, abuse is something that almost invariably happens surreptitiously ‘behind closed doors’ and is therefore difficult to verify or disprove. 

Thus, even if most paedophiles claim to have been victims of abuse, it is possible that they are simply lying in order to elicit sympathy or excuse or shift culpability for their own offending. 

Finally, even if paedophiles can be shown to be disproportionately likely to have themselves been victimized as children, this by no means proves that their victimization caused their sexual orientation. 

Rather, since most abuse is perpetrated by parents or other close family members, an alternative possibility is that victims simply biologically inherit the sexual orientation of their abuser. After all, if homosexuality is partially heritable, as is now widely accepted, then why not paedophilia as well? 

However, the finding that the shared family environment accounts for hardly any of the variance in outcomes among adults does not preclude the possibility that severe abuse may indeed have an adverse effect on adult outcomes. 

After all, adoption studies can only tell us what percent of the variance is caused by heredity or by shared or unshared environments within a specific population as a whole. 

Perhaps the shared family environment accounts for so little of the variance precisely because the sort of severe abuse that does indeed have a devastating long-term effect on personality and mental health is, thankfully, so very rare in modern societies. 

Indeed, it may be especially rare within the families used in adoption studies precisely because adoptive families are carefully screened for suitability before being allowed to adopt. 

Moreover, Harris emphasizes an important caveat: Even if abuse does not have long-term adverse psychological effects, this does not mean that abuse causes no harm, and nor does it in any way excuse such abuse. 

On the contrary, the primary reason we shouldn’t mistreat children (and should severely punish those who do) is not on account of some putative long-term psychological effect on the adults whom the children subsequently become, but rather because of the very real pain and suffering inflicted on a child at the time the abuse takes place. 

Race Differences in IQ 

Finally, Harris even touches upon that most vexed area of the (so-called) nature-nurture debate – race differences in intelligence. 

Here, the politically-correct claim that differences in intelligence between racial groups, as recorded in IQ tests, are of purely environmental origin runs into a problem: the environmental factors usually posited by environmental determinists as accounting for the black-white test score gap in America (e.g. differences in rates of poverty and socioeconomic status) have been shown to be inadequate, since, even after controlling for these factors, a substantial gap in test scores remains unaccounted for. 

Thus, as Arthur R. Jensen laments: 

This gives rise to the hypothesizing of still other, more subtle environmental factors that either have not been or cannot be measured—a history of slavery, social oppression, and racial discrimination, white racism, the ‘black experience,’ and minority status consciousness [etc]” (Straight Talk About Mental Tests: p223). 

The problem with these explanations, however, is that none of these factors has yet been demonstrated to have any effect on IQ scores. 

Moreover, some of the factors proposed as explanations are formulated in such a vague form (e.g. “white racism, the ‘black experience’”) that it is difficult to conceive of how they could ever be subjected to controlled testing in the first place.[8] 

Jensen has termed this mysterious factor the ‘X-factor’. 

In coining this term, Jensen was emphasizing its vague, mysterious and unfalsifiable nature. Jensen did not actually believe that this posited ‘X-factor’, whatever it was, really did account for the test-score gap. Rather, he thought heredity explained most, if not all, of the remaining test-score gap. 

However, Harris takes Jensen at his word. Thus, she announces: 

I believe I know what this X factor is… I can describe it quite clearly. Black kids and white kids identify with different groups that have different norms. The differences are exaggerated by group contrast effects and have consequences that compound themselves over the years. That’s the X factor” (p248-9). 

Interestingly, although she does not develop it, Harris’s claim is actually compatible with, and potentially reconciles, the conflicting findings of two of the most widely-cited studies in this vexed area of research and debate. 

First, in the more recent of these two studies, the Minnesota Transracial Adoption Study, the same differences in IQ were observed among black, white and mixed-race children adopted into upper-middle-class white families as are found among the respective black, white and mixed-race populations in society at large (Scarr & Weinberg 1976). 

Moreover, although, when tested during childhood, the children’s adoptive households did seem to have had a positive effect on their IQ scores, by the time they reached the cusp of adulthood, the black teenagers who had been adopted into upper-middle-class white homes actually scored no higher in IQ than did blacks in the wider population not raised in upper-middle class white families (Weinberg, Scarr & Waldman 1992). 

This study is often cited by hereditarians as evidence for innate racial differences (e.g. Levin 1994; Lynn 1994; Whitney 1996). 

However, in the light of the findings of the behavioural genetics studies discussed by Harris in ‘The Nurture Assumption’, the fact that white upper-middle-class adoptive homes had no effect on the adult IQs of the black children adopted into them is, in fact, hardly surprising. 

After all, as we have seen, the shared family environment generally has no effect on IQ, at least by the time the person being tested has reached adulthood. One would therefore not expect adoptive homes, howsoever white and upper-middle-class, to have any effect on adult IQs of the black children adopted into them, or indeed of the white or mixed-race children adopted into them. 

In short, adoptive homes have no effect on adult IQ, whether or not the adoptees, or adoptive families, are black, white, brown, yellow, green or purple! 

But, if race differences in intelligence are indeed entirely environmental in origin, then where are these environmental causes to be found, if not in the family environment? 

Harris has an answer – black culture. 

According to her, the black adoptees, although raised in white adoptive families, nevertheless still come to identify as black, and to identify with the wider black culture and social norms. In addition, they may, on account of their racial identification, come to socialize with other blacks in school and elsewhere. 

As a result of this acculturation to African-American norms and culture, they therefore come to score lower in IQ than their white peers and adoptive siblings. 

But how can we test this theory? Perhaps we could look at the IQ scores of black children raised in white families where there is no wider black culture with which to identify, and few if any black peers with whom to socialize?  

This brings us to the second of the two studies which Harris’s theory potentially reconciles, namely the Eyferth study.  

Here, it was found that the mixed-race children fathered by black American servicemen who had had sexual relationships with German women during the Allied occupation of Germany after World War Two had almost exactly the same average IQ scores as a control group of offspring fathered by white US servicemen during the same time period (Eyferth 1959). 

The crucial difference from the Minnesota study may be that these children, raised in monoracial Germany in the mid-twentieth century, had no wider African-American culture with which to identify or whose norms to adopt, and few if any black or mixed-race peers in their vicinity with whom to socialize. 

This then is perhaps the last lifeline for the radical environmentalist theory of race differences in intelligence – namely the theory that African-American culture somehow depresses intelligence. 

Unfortunately, however, this proposition is likely almost as politically unpalatable to politically-correct liberals as is the notion that race differences in intelligence reflect innate genetic differences.[9] 

Endnotes

[1] Thus, this ancient wisdom is reflected, for example, in many folk sayings, such as ‘the apple does not fall far from the tree’, ‘a chip off the old block’ and ‘like father, like son’, many of which long predate Darwin’s theory of evolution and Mendel’s work on heredity, let alone the modern work of behavioural geneticists.

[2] It is important to emphasize here that this applies only to psychological outcomes, and not, for example, economic outcomes. For example, a child raised by wealthy parents is indeed likely to be wealthier than one raised in poverty, if only because s/he is likely to inherit (some of) the wealth of his/her parents. It is also possible that s/he may, on average, obtain a better job as a consequence of the opportunities opened up by his/her privileged upbringing. However, his/her IQ will be no higher than had s/he been raised in relative poverty, and neither will s/he be any more or less likely to suffer from a mental illness. 

[3] Similarly, it is often claimed that children raised in care homes, or in foster care, tend to have negative life-outcomes. However, again, this by no means proves that it is care homes or foster care that causes these negative life-outcomes. On the contrary, since children who end up in foster care are typically either abandoned by their biological parents, or forcibly taken from their parents by social services on account of the inadequate care provided by the latter, or sometimes outright abuse, it is obvious that their parents represent an unrepresentative sample of society as a whole. An obvious alternative explanation, then, is that the children in question simply inherit the dysfunctional personality attributes of their biological parents, namely the very dysfunctional personality attributes that caused the latter to either abandon their children or have them removed by the social services.

[4] Likewise, the heritability of such personality traits as conscientiousness and self-discipline, in addition to intelligence, likely also partly account for the association between parental income and academic attainment among their offspring, since both academic attainment, and occupational success, require the self-discipline to work hard to achieve success. These factors, again in addition to intelligence, likely also contribute to the association between parental income and the income and socioeconomic status ultimately attained by their offspring.

[5] This possibility could, of course, be ruled out by longitudinal studies, which investigate whether the spanking preceded the misbehaviour, or vice versa. However, this is easier said than done, since, unless relying on reports by the caregivers or children themselves, which depend on their memory and honesty, it would have to involve intensive, long-term and continued observation in order to establish which came first, namely the pattern of misbehaviour or the adoption of physical chastisement as a method of discipline. This would, presumably, require continuous observation from birth onwards, so as to ensure that the very first instance of spanking or serious misbehaviour were recorded. To my knowledge, no such careful and intensive long-term study has yet been conducted, if it is even possible.

[6] The fact that the relevant environmental variables must be sought outside the family home is one reason why the terms ‘between-family environment’ and ‘within-family environment’, sometimes used as synonyms or alternatives for ‘shared’ and ‘non-shared family environment’ respectively, are potentially misleading. Thus, the ‘within-family environment’ refers to those aspects of the environment that differ for different siblings even within a single family. However, these factors may differ within a single family precisely because they occur outside, not within, the family itself. The terms ‘shared’ and ‘non-shared family environment’ are therefore to be preferred, so as to avoid any potential confusion these alternative terms could cause.

[7] Both practical and ethical considerations, of course, prevent Watson from actually creating his “own specified world” in which to bring up his “dozen healthy infants”. No one is therefore able to put his claim to the test. It is thus unfalsifiable, and Watson was free to make such boasts, safe in the knowledge that there was no danger of his actually being made to make good on his claims or being proven wrong.

[8] Actually, at least some of these theories are indeed testable and potentially falsifiable. With regard to the factors quoted by Jensen (namely, “a history of slavery, social oppression, and racial discrimination, white racism… and minority status consciousness”), one way of testing these theories is to look at test scores in those countries where there is no such history. For example, in sub-Saharan Africa, as well as in Haiti and Jamaica, blacks are not a minority, and are moreover in control of the government. Yet the IQ scores of the indigenous population of Africa are actually even lower than among blacks in the USA (see Richard Lynn’s Race Differences in Intelligence: reviewed here). True, most such countries still have a history of racial oppression and discrimination, albeit in the form of European colonialism rather than racial slavery or segregation in the American sense. However, lower scores for black Africans are found even in those few sub-Saharan African countries that were never colonized by western powers, or only briefly colonized (e.g. Ethiopia). Moreover, this merely begs the question as to why Africa was so easily colonized by Europeans in the first place. Also, other minority groups ostensibly subject to racial discrimination and oppression (e.g. Jews, Overseas Chinese) actually score very high in IQ, and are economically successful. As for “the ‘black experience’”, this merely begs the question as to why the ‘black experience’ has been so similar, and resulted in the same low IQs, in so many different parts of the world, something implausible unless the ‘black experience’ itself reflects innate aspects of black African psychology. 

[9] Thus, ironically, the recently deceased James Flynn, though always careful, throughout his career, to remain on the politically-correct, radical environmentalist side of the debate with regard to the causes of race differences in intelligence, nevertheless recently found himself taken to task by the leftist, politically-correct British Guardian newspaper for a sentence in his recent book, Does Your Family Make You Smarter?, where he described American blacks as coming “from a cognitively restricted subculture” (Wilby 2016). Thus, whether one attributes lower black IQs to biology or to culture, either answer is certain to offend leftists, and the power of political correctness can, it seems, never be appeased.

References 

Belsky, Steinberg & Draper (1991) Childhood Experience, Interpersonal Development, and Reproductive Strategy: An Evolutionary Theory of Socialization. Child Development 62(4): 647-670 

Draper & Harpending (1982) Father Absence and Reproductive Strategy: An Evolutionary Perspective. Journal of Anthropological Research 38(3): 255-273 

Eyferth (1959) Eine Untersuchung der Neger-Mischlingskinder in Westdeutschland. Vita Humana 2: 102–114 

Levin (1994) Comment on Minnesota Transracial Adoption Study. Intelligence 19: 13–20 

Lynn (1994) Some reinterpretations of the Minnesota Transracial Adoption Study. Intelligence 19: 21–27 

Scarr & Weinberg (1976) IQ test performance of black children adopted by White families. American Psychologist 31(10): 726–739 

Weinberg, Scarr & Waldman (1992) The Minnesota Transracial Adoption Study: A follow-up of IQ test performance at adolescence. Intelligence 16: 117–135 

Whitney (1996) Shockley’s experiment. Mankind Quarterly 37(1): 41-60

Wilby (2016) Beyond the Flynn effect: New myths about race, family and IQ? Guardian, September 27.

A Modern McCarthyism in our Midst

Anthony Browne, The Retreat of Reason: Political Correctness and the Corruption of Public Debate in Modern Britain (London: Civitas, 2006) 

Western civilization has progressed. Today, unlike in earlier centuries, we no longer burn heretics at the stake. 

Instead, according to sociologist Steven Goldberg, himself no stranger to contemporary heresy, these days: 

“All one has to lose by unpopular arguments is contact with people one would not be terribly attracted to anyway” (Fads and Fallacies in the Social Sciences: p222). 

Unfortunately, however, Goldberg underplays, not only the psychological impact of ostracism, but also the more ominous consequences that sometimes attach to contemporary heresy. 
 
Thus, bomb and death threats were issued repeatedly to women such as Erin Pizzey and Suzanne Steinmetz for pointing out that women were just as likely, or indeed somewhat more likely, to perpetrate acts of domestic violence against their husbands and boyfriends as their husbands and boyfriends were to perpetrate acts of domestic violence against them – a finding now replicated in literally hundreds of studies (see also Domestic Violence: The 12 Things You Aren’t Supposed to Know). 
 
Similarly, in the seventies, Arthur Jensen, a psychology professor at the University of California, had to be issued with an armed guard on campus after suggesting, in a sober and carefully argued scientific paper, that it was a “not unreasonable” hypothesis that the IQ difference between blacks and whites in America was partly genetic in origin. 
 
Political correctness has also cost people their jobs. 

Academics like Chris Brand, Helmuth Nyborg, Lawrence Summers, Frank Ellis, Noah Carl and, most recently, Bo Winegard have been forced to resign or have lost their academic positions as a consequence of researching, or, in some cases, merely mentioning, politically incorrect theories such as the possible social consequences of, or innate basis for, sex and race differences in intelligence. 

Indeed, even the impeccable scientific credentials of James Watson, a figure jointly responsible for one of the most important scientific discoveries of the twentieth century, did not spare him this fate when he was reported in a newspaper as making some controversial but eminently defensible comments regarding population differences in cognitive ability and their likely impact on prospects for economic development.  

At the time of (re-)writing this piece, the most recent victim of this process of purging in academia is the celebrated historian, and long-term controversialist, David Starkey, excommunicated for some eminently sensible, if crudely expressed, remarks about slavery. 

Meanwhile, as proof of the one-sided nature of the witch-hunt, during the very same month in which Starkey was excommunicated from public life, a non-white leftist female academic, Priyamvada Gopal, posted the borderline genocidal tweet: 

“White lives don’t matter. As white lives.”[1]

Yet the only repercussion the latter faced from her employer, Cambridge University, was to be almost immediately promoted to a full professorship. 

Cambridge University also, in response, issued a defence of its employees’ right to academic freedom, tweeting that: 

“[Cambridge] University defends the right of its academics to express their own lawful opinions which others might find controversial”

This is indeed an admirable and principled stance – if applied consistently. 

Unfortunately, however, although this tweet was phrased in general terms, and actually included no mention of Gopal by name, it was evidently not of general application. 

For Cambridge University is not only among the institutions from which Starkey was forced to tender his resignation this very same year, but also the very same institution that, only a year before, had denied a visiting fellowship to Jordan Peterson, the eminent public intellectual, for his controversial stances and statements on a range of topics, and which, only two years before, had denied an academic fellowship to researcher Noah Carl, after a letter calling for his dismissal was signed by, among others, none other than the loathsome Priyamvada Gopal herself. 

The inescapable conclusion is that the freedom of “academics to express lawful opinions which others might find controversial” at Cambridge University applies, despite the general wording of the tweet from which these words are taken, only to those controversial opinions of which the leftist academic and cultural establishment currently approves. 

Losing Your Livelihood 

If I might be accused here of focusing excessively on freedom of speech in an academic context, this is only because academia is among the arenas where freedom of expression is most essential, as it is only if all ideas, however offensive to certain protected groups, are able to freely circulate, and compete, in the marketplace of ideas that knowledge is able to progress through a selective process of testing and falsification.[2]

However, although the university environment is, today, especially intolerant, nevertheless similar fates have also befallen non-academics, many of whom have been deprived of their livelihoods on account of their politics. 

For example, in The Retreat of Reason, first published in 2006, Anthony Browne points to the case of a British headmaster sacked for saying Asian pupils should be obliged to learn English, a policy that was then, only a few years later, actually adopted as official government policy (p50). 

In the years since the publication of ‘The Retreat of Reason’, such examples have only multiplied. 

Indeed, today it is almost taken for granted that anyone caught saying something controversial and politically incorrect on the internet in his own name, or even under a pseudonym if subsequently ‘doxed’, is liable to lose his job.

Likewise, Browne noted that police and prison officers in the UK were then barred from membership of the BNP, a legal and constitutional political party, but not from membership of Sinn Fein, who until quite recently had supported domestic terror against the British state, including the murder of soldiers, civilians and the police themselves, nor of various Marxist groups that advocate the violent overthrow of the whole capitalist system (p51-2). 

Today, meanwhile, even believing that a person cannot change their biological sex is said to be a bar to admission into the British police.

Moreover, employees sacked on account of their political views cannot always even turn to their unions for support. 
 
Instead, trade unions have themselves expelled members for their political beliefs (p52) – then successfully defended this action in the European Court of Human Rights by citing the right to freedom of association (see ASLEF v UK [2007] ECHR 184). 

Yet, ironically, freedom of association is not only the precise freedom denied to employers by anti-discrimination laws, but also the very same freedom that surely guarantees a person’s right to be a member of a constitutional, legal political party, or express controversial political views outside of their work, without being at risk of losing their job. 

Browne concludes:

One must be very disillusioned with democracy not to find it at least slightly unsettling that in Europe in the twenty-first century government employees are being banned from joining certain legal political parties but not others, legal democratic party leaders are being arrested in dawn raids for what they have said and political parties leading the polls are being banned by judges” (p57). 

Of course, racists and members of parties like the BNP hardly represent a fashionable cause célèbre for civil libertarians. But, then, neither did other groups targeted for persecution at the time of their persecution. This is, of course, precisely what rendered them so vulnerable to persecution. 
 
Political correctness is often dismissed as a trivial issue, which only bigots and busybodies bother complaining about, when there are so many more serious problems in the world. 

Yet free speech is never trivial. When people lose their jobs and livelihoods because of currently unfashionable opinions, what we are witnessing is a form of modern McCarthyism. 
 
Indeed, as American conservative David Horowitz observes: 

“The era of the progressive witch-hunt has been far worse in its consequences to individuals and freedom of expression than was the McCarthy era… [not least because] unlike the McCarthy era witch-hunt, which lasted only a few years, the one enforced by left-wing ‘progressives’ is now entering its third decade and shows no signs of abating” (Left Illusions: An Intellectual Odyssey).[3] 

Yet, while columnists, academics, and filmmakers delight in condemning, without fear of reprisals, a form of McCarthyism that ran out of steam over half a century ago (i.e. anti-communism during the Second Red Scare), few dare to incur the wrath of the contemporary inquisition by exposing a modern McCarthyism right here in our midst.  

Recent Developments 

Browne’s ‘The Retreat of Reason’ was first published in 2006. Unfortunately, however, in the intervening decade and a half, despite Browne’s wise counsel, the situation has only worsened. 

Thus, in 2006, Browne rightly championed New Media facilitated by the internet age, such as blogs, for disseminating controversial, politically-incorrect ideas and opinion, and thereby breaking the mainstream media monopoly on the dissemination of information and ideas (p85). 

Here, Browne was surely right. Indeed, new media, such as blogs, have not only been responsible for disseminating ideas that are largely taboo in the mainstream media, but even for breaking news stories that had been suppressed by the mainstream media, such as the racial identity of those responsible for the 2015-2016 New Year’s Eve sexual assaults in Germany. 

However, in the decade and a half since ‘The Retreat of Reason’ was published, censorship has become increasingly restrictive even in the virtual sphere. 

Thus, internet platforms like YouTube, Patreon, Facebook and Twitter increasingly deplatform content providers with politically incorrect viewpoints, and, in a particularly disturbing move, some websites have even been, at least temporarily, forced offline, or banished to the darkweb, by their web hosting providers.

Doctrinaire libertarians respond that this is not a free speech issue, because these are private businesses with the right to deny service to anyone with whom they choose not to contract.

In reality, however, platforms like Facebook and Twitter are far more than private businesses. As virtual market monopolies, they are part of the infrastructure of everyday life in the twenty-first century.

To be banned from communicating on Facebook is tantamount to being barred from communication in a public place.

Moreover, the problem is only exacerbated by the fact that the few competitors seeking to provide an alternative to these Big Tech monopolies with a greater commitment to free speech are themselves de-platformed by their hosting providers as a direct consequence of that very commitment.

Likewise, the denial of financial services, such as banking or payment processing, to groups or individuals on the basis of their politics is particularly troubling, making it all but impossible for those affected to remain financially viable. The result is tantamount to being made an ‘unperson’.

Moreover, far from remaining a hub of free expression, social media has increasingly provided a rallying and recruiting ground for moral outrage and repression, not least in the form of so-called twittermobs, intent on publicly shaming, harassing and denying employment opportunities to anyone of whose views they disapprove.

In short, if the internet has facilitated free speech, it has also facilitated political persecution, since today, it seems, one can enjoy all the excitement and exhilaration of joining a witch-hunt without ever straying from the comfort of one’s computer screen.

Explaining Political Correctness 

For Browne, PC represents “the dictatorship of virtue” (p7) and replaces “reason with emotion” and subverts “objective truth to subjective virtue” (xiii). 

Political correctness is an assault on both reason and… democracy. It is an assault on reason, because the measuring stick of the acceptability of a belief is no longer its objective, empirically established truth, but how well it fits in with the received wisdom of political correctness. It is an assault on… democracy because [its] pervasiveness… is closing down freedom of speech” (p5). 

Yet political correctness is not wholly unprecedented. 
 
On the contrary, every age has its taboos. Thus, in previous centuries, it was compatibility with religious dogma rather than leftist orthodoxy that represented the primary “measuring stick of the acceptability of a belief” – as Galileo, among others, was to discover for his pains. 
 
Although, as a conservative, Browne might be expected to be favourably disposed to traditional religion, he nevertheless acknowledges the analogy between political correctness and the religious dogmas of an earlier age: 

Christianity… has shown many of the characteristics of modern political correctness and often went far further in enforcing its intolerance with violence” (p29). 

Indeed, this intolerance is not restricted to Christianity. Thus, whereas Christianity, in an earlier age, persecuted heresy with even greater intolerance than the contemporary left, in many parts of the world Islam still does.  

As well as providing an analogous justification for the persecution of heretics, political correctness may also, Browne suggests, serve a similar psychological function to religion, in representing: 

A belief system that echoes religion in providing ready, emotionally-satisfying answers for a world too complex to understand fully and providing a gratifying sense of righteousness absent in our otherwise secular society” (p6).

Defining Political Correctness 

What, then, do we mean by ‘political correctness’? 

Political correctness evaluates a claim, not on its truth, but on its offensiveness to certain protected groups. Some views are held to be not merely false (indeed, sometimes their truth or falsity is beside the point), but unacceptable, unsayable and beyond the bounds of acceptable opinion. 

Indeed, for the enforcers of the politically correct orthodoxy, the truth or falsehood of a statement is ultimately of little interest. 

Browne provides a useful definition of political correctness as: 

An ideology which classifies certain groups of people as victims in need of protection from criticism and which makes believers feel that no dissent should be tolerated” (p4). 

Refining this, I would say that, for an opinion to be politically incorrect, two criteria must be met:

1) The existence of a group to whom the opinion in question is regarded as ‘offensive’
2) The group in question must be perceived as ‘oppressed’

Thus, it is perfectly acceptable to disparage and offend supposedly ‘privileged’ groups (e.g. males, white people, Americans or the English), but groups with ‘victim-status’ are deemed sacrosanct and beyond reproach, at least as a group. 
 
Victim-status itself, however, is rather arbitrarily bestowed. 
 
Certainly, actual poverty or deprivation has little to do with it. 

Thus, it is perfectly acceptable to denigrate the white working class. Indeed, pejorative epithets aimed at the white working class, such as redneck, chav and ‘white trash’, are widely employed and considered socially acceptable in polite conversation (see Jim Goad’s The Redneck Manifesto: How Hillbillies, Hicks, and White Trash Became America’s Scapegoats).

Yet the use of comparably derogatory terms in respect of, say, black people, is considered wholly beyond the pale, and sufficient to end media careers in Britain and America.

However, multi-millionaires who happen to be black, female or homosexual are permitted to perversely pose as ‘oppressed’, and wallow in their own ostensible victimhood. 
 
Thus, in the contemporary West, the Left has largely abandoned its traditional constituency, namely the working class, in favour of ethnic minorities, homosexuals and feminists.

In the process, the ‘ordinary working man’, once the quintessential proletarian, has found himself recast in leftist demonology as a racist, homophobic, wife-beating bigot.

Likewise, men are widely denigrated in popular culture. Yet, contrary to the feminist dogma which maintains that men have disproportionate power and are privileged, it is in fact men who are overwhelmingly disadvantaged by almost every sociological measure.

Thus, Browne writes: 

Men were overwhelmingly underachieving compared with women at all levels of the education system, and were twice as likely to be unemployed, three times as likely to commit suicide, three times as likely to be a victim of violent crime, four times as likely to be a drug addict, three times as likely to be alcoholic and nine times as likely to be homeless” (p49). 

Indeed, overt discrimination against men, such as the different ages at which men and women were then eligible for state pensions in the UK (p25; p60; p75) and the higher insurance premiums demanded of men (p73), is widely tolerated.[4]

The demand for equal treatment only goes as far as it advantages the [ostensibly] less privileged sex” (p77). 

The arbitrary way in which recognition as an ‘oppressed group’ is accorded, together with the massive benefits accruing to demographics that have secured such recognition, has created a perverse process that Browne aptly terms “competitive victimhood” (p44). 

Few things are more powerful in public debate than… victim status, and the rewards… are so great that there is a large incentive for people to try to portray themselves as victims” (p13-4) 

Thus, groups currently campaigning for ‘victim status’ include, he reports, “the obese, Christians, smokers and foxhunters” (p14). 

The result is what economists call perverse incentives. 

By encouraging people to strive for the bottom rather than the top, political correctness undermines one of the main driving forces in society, the individual pursuit of self-improvement” (p45) 

This outcome can perhaps even be viewed as the ultimate culmination of what Nietzsche called the transvaluation of values. 

Euroscepticism & Brexit

Unfortunately, despite his useful definition of the phenomenon of political correctness, Browne goes on to use the term political correctness in a broader fashion that goes beyond this original definition, and, in my opinion, extends the concept beyond its sphere of usefulness. 

For example, he classifies Euroscepticism – i.e. opposition to the further integration of the European Union – as a politically incorrect viewpoint (p60-62). 

Here, however, there is no obvious ‘oppressed group’ in need of protection. 
 
Moreover, although widely derided as ignorant and jingoistic, Eurosceptical opinions have never been actually deemed ‘offensive’ or beyond the bounds of acceptable opinion.

On the contrary, they are regularly aired in mainstream media outlets, and even on the BBC, and recently scored a final victory in Britain with the Brexit campaign of 2016.  

Browne’s extension of the concept of political correctness in this way is typical of many critics of political correctness, who succumb to the temptation to define as ‘political correctness’ any view with which they themselves happen to disagree. 
 
This enables them to tar any views with which they disagree with the pejorative label of ‘political correctness’. 
 
It also, perhaps more importantly, allows ostensible opponents of political correctness to condemn the phenomenon without ever actually violating its central taboos by discussing any genuinely politically incorrect issues. 

They can therefore pose as heroic opponents of the inquisition while never actually themselves incurring its wrath. 

The term ‘political correctness’ therefore serves a similar function for conservatives as the term ‘fascist’ does for leftists – namely a useful catchall label to be applied to any views with which they themselves happen to disagree.[5]

Jews, Muslims and the Middle East 

Another example of Browne’s extension of the concept of political correctness beyond its sphere of usefulness is his characterization of any defence of the policies of Israel as ‘politically incorrect’. 
 
Yet, here, the ad hominem and guilt-by-association methods of debate (or rather of shutting down debate), which Browne rightly describes as characteristic of political correctness (p21-2), are more often used by defenders of Israel than by her critics – though, here, the charge of ‘anti-Semitism’ is substituted for the usual refrain of ‘racism’.[6]
 
Thus, in the US, any suggestion that the US’s small but disproportionately wealthy and influential Jewish community influences US foreign policy in the Middle East in favour of Israel is widely dismissed as anti-Semitic and roughly tantamount to proposing the existence of a world Jewish conspiracy led by the elders of Zion. 
 
Admittedly, Browne acknowledges: 

The dual role of Jews as oppressors and oppressed causes complications for PC calculus” (p12).  

In other words, the role of the Jews as victims of persecution in National Socialist Germany conflicts with, and weighs against, their current role as perceived oppressors of the Palestinians in the Middle East. 

However, having acknowledged this complication, Browne immediately dismisses its importance, all too hastily going on to conclude in the very same sentence that: 

PC has now firmly transferred its allegiance from the Jews to Muslims” (p12). 

However, in many respects, the Jews retain their ‘victim-status’ despite their hugely disproportionate wealth and political power. 

Indeed, perhaps the best evidence of this is the taboo on referring to this disproportionate wealth and power. 
 
Thus, while the political Left never tires of endlessly recycling statistics demonstrating the supposed overrepresentation of ‘white males’ in positions of power and privilege, to cite similar statistics demonstrating the even greater per capita overrepresentation of Jews in these exact same positions of power and privilege is somehow deemed beyond the pale, and evidence, not of leftist sympathies, but rather of being ‘far right’. 
 
This is despite the fact that the average earnings of American-Jews and their level of overrepresentation in influential positions in government, politics, media and business relative to population size surely far outstrips that of any other demographic – white males, and indeed White Anglo-Saxon Protestants, very much included.

The Myth of the Gender Pay Gap 

One area where Browne claims that the “politically correct truth” conflicts with the “factually correct truth” is the causes of the gender pay-gap (p8; p59-60). 
 
This is also included by philosopher David Conway as one of six issues, raised by Browne in the main body of the text, for which Conway provides supportive evidence in an afterword entitled ‘Commentary: Evidence supporting Anthony Browne’s Table of Truths Suppressed by PC’, included as a sort of appendix in later editions of Browne’s book. 
 
It is regrettable that Browne himself offers no sources to back up the statistics he cites in his text, although this was still standard practice in mainstream journalism at the time his book was written.

This commentary section therefore represents the only real effort to provide sources or citations for many of Browne’s claims. Unfortunately, however, it covers only a few of the many issues addressed by Browne in the preceding pages. 
 
In support of Browne’s contention that “different work/life choices” and “career breaks” underlie the gender pay gap (p8), Conway cites the work of sociologist Catherine Hakim (p101-103). 
 
Actually, more comprehensive expositions of the factors underlying the gender pay gap are provided by Warren Farrell in Why Men Earn More (which I have reviewed here, here and here) and Kingsley Browne in Biology at Work: Rethinking Sexual Equality (which I have reviewed here and here). 
 
Moreover, while it is indeed true that the pay-gap can largely be explained by what economists call ‘compensating differentials’ – e.g. the fact that men work longer hours, in more unpleasant and dangerous working conditions, and for a greater proportion of their adult lives – Browne fails to factor in the final and decisive feminist fallacy regarding the gender pay gap, namely the assumption that, because men earn more money than women, this necessarily means they have more money than women and are wealthier.

In fact, however, although men earn more money than women, much of this money is then redistributed to women via such mechanisms as marriage, alimony, maintenance, divorce settlements and the culture of dating.

Indeed, as I have previously written elsewhere:

The entire process of conventional courtship is predicated on prostitution, from the social expectation that the man will pay for dinner on the first date, to the legal obligation that he continue to provide for his ex-wife through alimony and maintenance for anything up to ten or twenty years after he has belatedly rid himself of her.

Therefore, much of the money earnt by men is actually spent by, or on, their wives, ex-wives and girlfriends (not to mention daughters), such that, although women earn less than men, women have long been known to researchers in the marketing industry to dominate about 80% of consumer spending. 
 
Browne does usefully address another area in which the demand for equal pay has resulted in injustice – namely the demand for equal prizes for male and female athletes at the Wimbledon Tennis Championships (a demand since cravenly capitulated to). Yet, as Browne observes: 

Logically, if the prize doesn’t discriminate between men and women, then the competition that leads to those prizes shouldn’t either… Those who insist on equal prizes, because anything else is discrimination, should explain why it is not discrimination for men to be denied an equal right to compete for the women’s prize.” (p77) 

Thus, Browne perceptively observes: 

It would currently be unthinkable to make the same case for a ‘white’s only’ world athletics championship… [Yet] it is currently just as pointless being a white 100 metres sprinter in colour-blind sporting competitions as it would be being a women 100 metres sprinter in gender-blind sporting competitions” (p77). 

International Aid 

Another topic addressed by both Browne (p8) and Conway (p113-115) is the reasons for African poverty. 

The politically correct explanation, according to Browne, is that African poverty results from inadequate international aid (p8). However, Browne observes: 

No country has risen out of poverty by means of international aid and cancelling debts” (p20).[7]

Moreover, Browne points out that fashionable policies such as “writing off Third World debt” produce perverse incentives by “encourag[ing] excessive and irresponsible borrowing by governments” (p48), while international aid encourages economic dependence, bureaucracies and corruption (p114).

Actually, in my experience, the usual explanation given for African underdevelopment is not, as Conway suggests, inadequate international aid as such. After all, this explanation only begs the question as to how Western countries such as those in Europe achieved First World status back when there were no other wealthy First World countries around to provide them with international aid to assist with their development.

Instead, in my experience, most leftists blame African poverty and underdevelopment on the supposed legacy of European colonialism. Thus, it is argued that European nations, and indeed white people in general, are themselves to blame for the poverty of Africa. International aid is then reimagined as a form of recompense for past wrongs. 

Unfortunately, however, this explanation for African poverty fares little better. 
 
For one thing, it merely begs the question as to why it was Africa that was colonized by Europeans, rather than vice versa.

The answer, of course, is that much of sub-Saharan Africa was ‘underdeveloped’ (i.e. socially and technologically backward) even before colonization. This was indeed precisely what allowed Africa to be so easily and rapidly conquered and colonized during the late-nineteenth and early-twentieth centuries. 
 
Moreover, if European colonization is really to blame for the poverty of so much of sub-Saharan Africa, then why is it that those few African countries largely spared European colonization, such as Liberia and Ethiopia, are among the most dysfunctional and worst-off in the whole sad and sorry continent? 

The likely answer is that they are worse off than their African neighbours precisely because they lack the infrastructure (e.g. roads, railroads) that the much-maligned European colonial overlords were responsible for bequeathing to other African states.

In other words, far from holding Africa back, European colonizers often built what little infrastructure and successful industry sub-Saharan Africa still has, and African countries are poor despite colonialism rather than because of it.

This is also surely why, prior to the transition to black-majority rule, South Africa and Rhodesia (now Zimbabwe) enjoyed some of the highest living-standards in Africa, with South Africa long regarded as the only ‘developed economy’ in the entire continent during the apartheid-era.

The assumption that European colonialism invariably impeded the economic development of regions formerly subject to colonial rule is further undermined by the experience of former European colonies in parts of the world other than Africa.

Here, there have been many notable success stories, including Malaysia, Singapore, Hong Kong, even India, not to mention Canada, Australia and New Zealand, all of which are former European colonies, and many of which gained their independence at around the same time as the African polities.

The experience of European colonization is, it seems, no bar to economic development outside of Africa. Why, then, has the experience in Africa itself been so different?

Browne and Conway place the blame firmly on Africans themselves – but on African rulers rather than the mass of African people. The real reason for African poverty is simply “bad governance” on the part of Africa’s post-colonial rulers (p8).

“Poverty in Africa has been caused by misrule rather than insufficient aid” (p113).

Unfortunately, however, this is hardly a complete explanation, since it merely begs the question as to why Africa has been so prone to “misrule” and “bad governance” in the first place.

It also begs the question as to why regions outside of Africa, but nevertheless populated by people of predominantly sub-Saharan African ancestry, such as Haiti and Jamaica (or even Baltimore and Detroit), are seemingly beset by just the same problems (e.g. chronic violent crime, poverty).[8]

This latter observation, of course, suggests that the answer lies, not in African soil or geography, but rather in differences between races in personality, intelligence and behaviour.[8]

However, this is, one suspects, a conclusion too politically incorrect even for Browne himself to consider.

Is Browne a Victim of Political Correctness Himself? 

The foregoing discussion converges in suggesting a single overarching problem with Browne’s otherwise admirable dissection of the nature and effects of political correctness – namely that Browne, although ostensibly an opponent of political correctness, is, in reality, neither immune to the infection nor ever able to effect a full recovery. 
 
Browne himself observes: 

“Political correctness succeeds, like the British Empire, through divide and rule… The politically incorrect often end up appeasing political correctness by condemning fellow travellers” (p37). 

This is indeed a characteristic feature of witch-hunts, from Salem to McCarthy, whereby victims were able to partially absolve themselves by ‘outing’ fellow-travellers to be persecuted in their place. 
 
However, Browne himself provides a neat illustration of this very phenomenon when, having deplored the treatment of BNP supporters deprived of employment on account of their political views, he nevertheless issues the almost obligatory disclaimer, condemning the party as “odious” (p52).

In doing so, he ironically provides a perfect illustration of the very appeasement of political correctness that he has himself identified as central to its power. 
 
Similarly, it is notable that, in his discussion of the suppression of politically incorrect facts and theories, Browne nevertheless fails to address any of the most incendiary such facts and theories, such as those that resulted in death threats to the likes of Jensen, Pizzey and Steinmetz.
 
After all, to discuss the really taboo topics would not only bring upon him even greater opprobrium than that which he already faced, but also likely deny him a mainstream platform in which to express his views altogether. 
 
Browne therefore provides his ultimate proof of the power of political correctness, not through the topics he addresses, but rather through those he conspicuously avoids. 
 
In failing to address these issues, either out of fear of the consequences or genuine ignorance of the facts due to the media blackout on their discussion, Browne provides the definitive proof of his own fundamental thesis, namely that political correctness corrupts public debate and subverts free speech.

Endnotes

[1] After the resulting outcry, Gopal insisted that she stood by her tweets, which, she claimed, “were very clearly speaking to a structure and ideology, not about people”. This is actually not at all clear from how she expressed herself, and is arguably inconsistent with it, given that it is only people, not institutions or ideologies, who have, and lose, “lives”, and indeed only people who can properly be described as “white”.

At best, her tweets were incendiary and grossly irresponsible in a time of increasing anti-white animosity, violence and rioting. At worst, they could be interpreted as a coded exhortation to genocide. Similarly, as far-right philosopher Greg Johnson points out: 

“When the Soviets spoke of ‘eliminating the kulaks as a class’, that was simply a euphemism for mass murder” (The White Nationalist Manifesto: p21). 

Similarly, the Nazis typically referred to the genocide of European Jewry only by such coded euphemisms as resettlement in the East and the Final Solution to the Jewish Question. In this light, it is notable that those leftists like Noel Ignatiev who talk of “abolishing the white race” but insist they are only talking of deconstructing the concept of ‘whiteness’, which is, they argue, a social construct, strangely never talk about ‘abolishing the black race’, or indeed any other race than whites, even though, according to their own ideology, all racial categories are social constructs invented to justify oppression and hence similarly artificial and malignant.

[2] Thus, according to the sort of evolutionary epistemology championed by, among others, Karl Popper, it is only if different theories are tested and subjected to falsification that we are able to assess their merits and thereby choose between them, and scientific knowledge is able to progress. If some theories are simply deemed beyond the pale a priori, then clearly this process of testing and falsification cannot properly occur.

[3] The book in which Horowitz wrote these words was published in 2003. Yet, today, some seventeen years later, “the era of the progressive witch-hunt”, far from abating, seems to be going into overdrive. By Horowitz’s reckoning, then, “the era of the progressive witch-hunt” is now approaching its fourth decade.

[4] Discrimination against men in the provision of insurance policies remains legal in most jurisdictions (e.g. the USA). However, sex discrimination in the provision of insurance policies was belatedly outlawed throughout the European Union at the end of 2012, due to a ruling of the European Court of Justice. This was many years after other forms of sex discrimination had been outlawed in most member-states.

For example, in the UK, most other forms of gender discrimination were outlawed almost forty years previously under the 1975 Sex Discrimination Act. However, section 45 of this Act explicitly exempted insurance companies from liability for sex discrimination if they could show that the discriminatory practice they employed was based on actuarial data and was “reasonable”. Yet actuarial data could also be employed to justify other forms of discrimination, such as employers deciding not to employ women of childbearing age; this, however, remained unlawful. The exemption was preserved by Section 22 of Part 5 of Schedule 3 of the new Equality Act 2010. As a result, as recently as 2010, insurance providers routinely charged young male drivers double the premiums demanded of young female drivers.

Yet, curiously, the only circumstance in which insurance providers were barred from discriminating on the grounds of sex was where the difference resulted from costs associated with pregnancy or with a woman’s having given birth, under section 22(3)(d) of Schedule 3 – in other words, the only readily apparent circumstance in which insurance providers might be expected to discriminate against women rather than men. Interestingly, even after the ECJ ruling, there is evidence that indirect discrimination against males continues, simply by using occupation as a marker for gender.

[5] Actually, the term ‘fascist’ is sometimes employed in this way by conservatives as well, as when they refer to certain forms of Islamic fundamentalism as Islamofascism or indeed when they refer to the stifling of debate, and of freedom of expression, by leftists as a form of ‘fascism’. 

[6] This use of the phrase ‘anti-Semitism’ in the context of criticism of Israel’s policies towards the Palestinians is ironic, at least from a pedantic etymological perspective, since the Palestinian people actually have a rather stronger claim to being a ‘Semitic people’, in both a racial and a linguistic sense, than do either Ashkenazi or Sephardi (if not Mizrahi) Jews.

[7] Actually, international aid may sometimes be partially successful. For example, the Marshall Plan for post-WWII Europe is sometimes credited as a success story, though some economists disagree. The success, or otherwise, of foreign aid seems, then, to depend, at least in part, on the identity of the recipients.

[8] For more on this plausible but incendiary theory, see IQ and the Wealth of Nations by Richard Lynn and Tatu Vanhanen and Understanding Human History by Michael Hart.

Richard Lynn’s ‘Race Differences in Intelligence’: Useful as a Reference Work, But Biased as a Book

[Warning: Vastly overlong book review. Casual reader beware.]

Race Differences in Intelligence: An Evolutionary Analysis, by Richard Lynn (Augusta, GA: Washington Summit, 2006) 

Richard Lynn’s ‘Race Differences in Intelligence’ is structured around his massive database of IQ studies conducted among different populations. This collection seems to be largely recycled from his earlier IQ and the Wealth of Nations, and subsequently expanded, revised and reused again in IQ and Global Inequality, The Global Bell Curve, and The Intelligence of Nations (as well as a newer edition of Race Differences in Intelligence, published in 2015). 

Thus, despite its subtitle, “An Evolutionary Analysis”, the focus is very much on documenting the existence of race differences in intelligence, not explaining how or why they evolved. The “Evolutionary Analysis” promised in the subtitle is actually almost entirely confined to the last three chapters. 

The choice of this as a subtitle is therefore misleading and presumably represents an attempt to cash in on the recent rise in, and popularity of, evolutionary psychology and other sociobiological explanations for human behaviours. 

However, whatever the inadequacies of Lynn’s theory of how and why race differences in intelligence evolved (discussed below), his documentation of the existence of these differences is indeed persuasive. The sheer number of studies, and their relative consistency over time and place, suggest that the differences are indeed real and that there is therefore something to be explained in the first place. 

In this respect, it aims to do something similar to what was achieved by Audrey Shuey’s The Testing of Negro Intelligence, first published in 1958, which brought together a huge number of studies, and a huge amount of data, regarding the black-white test score gap in the US. 

However, whereas Shuey focused almost exclusively on the black-white test score gap in North America, Lynn’s aim is much broader and more ambitious – namely, to review data relating to the intelligence of all racial groups everywhere on earth. 

Thus, Lynn declares that: 

“The objective of this book [is] to broaden the debate from the local problem of the genetic and environmental contributions to the difference between whites and blacks in the United States to the much larger problem of the determinants of the global differences between the ten races whose IQs are summarised” (p182). 

Therefore, his book purports to be: 

“The first fully comprehensive review… of the evidence on race differences in intelligence worldwide” (p2). 

Racial Taxonomy

Consistent with this, Lynn includes in his analysis data for many racial groups that rarely receive much if any coverage in previous works on the topic of race differences in intelligence. 

Relying on both morphological criteria and genetic data gathered by Cavalli-Sforza et al in The History and Geography of Human Genes, Lynn identifies ten separate human races. These are: 

1) “Europeans”; 
2) “Africans”; 
3) “Bushmen and Pygmies”; 
4) “South Asians and North Africans”; 
5) “Southeast Asians”; 
6) “Australian Aborigines”; 
7) “Pacific Islanders”; 
8) “East Asians”; 
9) “Arctic Peoples”; and 
10) “Native Americans”.

Each of these racial groups receives a chapter of its own, and, in each chapter, Lynn reviews published (and occasionally unpublished) studies that provide data on the group’s: 

  1. IQs
  2. Reaction times when performing elementary cognitive tasks; and
  3. Brain size

Average IQs 

The average IQs reported by Lynn are, he informs us, corrected for the Flynn Effect – i.e. the rise in IQs over the last century (p5-6).  

However, the Flynn Effect has occurred at different rates in different regions of the world. Likewise, the various environmental factors that have been proposed as possible explanations for the phenomenon (e.g. improved nutrition and health as well as increases in test familiarity, and exposure to visual media) have varied in the extent to which they are present in different places. Correcting for the Flynn Effect is therefore easier said than done. 
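To give a sense of what such a correction involves, the sketch below applies the simplest possible adjustment, assuming a constant secular gain of roughly 0.3 IQ points per year, a figure often cited for Western countries but by no means established for every region or subtest. Lynn’s own corrections are more involved; the function, the rate and the worked example here are purely illustrative.

```python
# Illustrative sketch only: a naive Flynn-effect correction, assuming a constant
# secular gain (~0.3 IQ points per year, a commonly cited Western average).
# The rate, the reference year and the assumption of linearity are all
# assumptions made for illustration, not figures taken from Lynn.

def flynn_adjust(raw_iq: float, year_tested: int, year_normed: int,
                 points_per_year: float = 0.3) -> float:
    """Adjust a raw IQ score for norm obsolescence.

    If the test norms are older than the testing date, raw scores are inflated
    by roughly points_per_year * (year_tested - year_normed), so that amount is
    subtracted to express the score against up-to-date norms.
    """
    norm_lag = year_tested - year_normed
    return raw_iq - points_per_year * norm_lag

# Example: a score of 88 on a test normed 20 years before testing
print(flynn_adjust(88, year_tested=1990, year_normed=1970))  # -> 82.0
```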

IQs of “Hybrid populations”

Lynn also discusses the average IQs of racially-mixed populations, which are, he reports, consistently intermediate between the average IQs of the two (or more) parent populations. 

However, both hybrid vigour (heterosis), on the one hand, and hybrid incompatibility or outbreeding depression, on the other, could potentially complicate the assumption that racial hybrids should have average IQs intermediate between those of the two (or more) parent populations. 

Yet Lynn alludes to the possible effect of hybrid vigour only in relation to biracial people in Hawaii, not in relation to the other hybrid populations whose IQs he discusses, and he never discusses the possible effect of hybrid incompatibility or outbreeding depression at all. 

Genotypic IQs 

Finally, Lynn also purports to estimate what he calls the “genotypic IQ” of at least some of the races discussed. This is a measure of genetic potential, distinguished from their actual realized phenotypic IQ. 

He defines the “genotypic IQ” of a population as the average score its members would obtain if they were raised in environments identical to those of the group with whom they are being compared. 

Thus, he writes: 

“The genotypic African IQ… is the IQ that Africans would have if they were raised in the same environment as Europeans” (p69). 

The fact that lower-IQ groups generally provide their offspring with inferior environmental conditions is therefore irrelevant for determining their “genotypic IQ”. However, as Lynn himself later points out: 

“It is problematical whether the poor nutrition and health that impair the intelligence of many third world peoples should be regarded as a purely environmental effect or as to some degree a genetic effect arising from the low intelligence of the populations that makes them unable to provide good nutrition and health for their children” (p193). 

Also, Lynn does not explain why he uses Europeans as his comparison group – i.e. why the African genotypic IQ is “the IQ that Africans would have if they were raised in the same environment as Europeans”, as opposed to, say, if they were raised in the same environments as East Asians or Middle Eastern populations, or indeed in their own environments. 

Presumably this reflects historical reasons – namely, Europeans were the first racial group to have their IQs systematically measured – the same reason that European IQs are arbitrarily assigned an average score of 100. 

Reaction Times 

Reaction times refer to the time taken to perform so-called elementary cognitive tasks. These are tests where everyone can easily work out the right answer, but where the speed with which different people get there correlates with IQ. 

Arthur Jensen has championed reaction time as a (relatively more) direct measure of one key cognitive process underlying IQ, namely speed of mental processing. 

Yet individuals with quicker reaction times would presumably have an advantage in sports, since reacting to, say, the speed and trajectory of a ball in order to strike or catch it is analogous to an elementary cognitive task. 

However, despite lower IQs, African-Americans, and blacks resident in other western economies, are vastly overrepresented among elite athletes. 
 
To explain this paradox, Lynn distinguishes “reaction time proper” – i.e. when one begins to move one’s hand towards the correct button to press – from “movement time” – how long one’s hand takes to get there. 

Whereas whites generally react faster, Lynn reports that blacks have faster movement times (p58-9).[1] Thus, Lynn concludes: 

“The faster movement times of Africans may be a factor in the fast sprinting speed of Africans shown in Olympic records” (p58). 

However, psychologist Richard Nisbett reports that: 

“Across a host of studies, movement times are just as highly correlated with IQ as reaction times” (Intelligence and How to Get It: p222). 

Brain Size

Lynn also reviews data regarding the brain-size of different groups. 

The correlation between brain-size and IQ as between individuals is well-established (Rushton and Ankney 2009). 
 
As between species, brain-size is also thought to correlate with intelligence, at least after controlling for body-size. 

Indeed, since brain tissue is highly metabolically expensive, increases in brain-size would surely never have evolved without conferring some countervailing selective advantage. 

Thus, in the late-1960s, biologist HJ Jerison developed an equation to estimate an animal’s intelligence from its brain- and body-size alone. This is called the animal’s encephalization quotient.
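The formula itself is straightforward: the encephalization quotient is the ratio of an animal’s observed brain mass to the brain mass expected for an animal of its body mass, with the expected value following an allometric power law. The sketch below uses the constants usually attributed to Jerison’s work on mammals (expected brain mass of roughly 0.12 times body mass to the two-thirds power, masses in grams); the constants and the worked figures are illustrative and are not taken from Lynn.

```python
# Rough sketch of Jerison's encephalization quotient (EQ), using the allometric
# constants commonly attributed to his work on mammals. These constants are an
# assumption for illustration, not figures quoted by Lynn.

def encephalization_quotient(brain_mass_g: float, body_mass_g: float) -> float:
    expected_brain_mass = 0.12 * body_mass_g ** (2 / 3)
    return brain_mass_g / expected_brain_mass

# Example: a ~1350 g human brain in a ~65 kg body gives an EQ of roughly 7,
# i.e. a brain several times larger than expected for a mammal of that size.
print(round(encephalization_quotient(1350, 65_000), 1))
```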
 
However, comparing the intelligence of different species poses great difficulties.[2]

In short, if you think a ‘culture fair’ IQ test is an impossibility, then try designing a ‘species fair’ test! 
 
Moreover, dwarves have smaller absolute brain-sizes, though usually larger brains relative to body-size, yet usually have normal IQs. 

Sex differences in IQ, meanwhile, are smaller than those between races even though differences in brain-size are greater, at least before one introduces controls for body-size. 
 
Also, Neanderthals had larger brains than modern humans, despite a shorter, albeit more robust, stature.

One theory has it that population differences in brain-size reflect a climatic adaptation that evolved in order to regulate temperature, in accordance with Bergmann’s Rule. This seems to be the dominant view among contemporary biological anthropologists, at least those who deign (or dare) to even discuss this politically charged topic.[3] 

Thus, in one recent undergraduate textbook in biological anthropology, authors Mielke, Konigsberg and Relethford contend: 

“Larger and relatively broader skulls lose less heat and are adaptive in cold climates; small and relatively narrower skulls lose more heat and are adaptive in hot climates” (Human Biological Variation: p285). 

On this view, head size and shape represents a means of regulating the relative ratio of surface-area-to-volume, since this determines the proportion of a body that is directly exposed to the elements.

Thus, Stephen Molnar, the author of a competing undergraduate textbook in biological anthropology, observes: 

“The closer a structure approaches a spherical shape, the lower will be the surface-to-volume ratio. The reverse is true as elongation occurs—a greater surface area to volume is formed, which results in more surface to dissipate heat generated within a given volume. Since up to 80 percent of our body heat may be lost through our heads on cold days, one can appreciate the significance of shape” (Human Variation: Races, Types and Ethnic Groups, 5th Ed: p188).
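The geometry underlying this argument is elementary: for a sphere, surface area grows with the square of the radius while volume grows with the cube, so the surface-to-volume ratio is simply 3/r and falls as size increases (and as shape approaches a sphere). The toy calculation below, with purely illustrative figures, makes the point.

```python
# Minimal illustration of the surface-area-to-volume argument: for a sphere,
# SA/V = 3/r, so larger (and rounder) bodies expose relatively less surface
# through which to lose heat. The radii are purely illustrative.
from math import pi

def sphere_sa_to_volume(radius: float) -> float:
    surface_area = 4 * pi * radius ** 2
    volume = (4 / 3) * pi * radius ** 3
    return surface_area / volume  # simplifies to 3 / radius

for r in (5.0, 10.0):
    print(r, round(sphere_sa_to_volume(r), 2))  # 0.6 vs 0.3: doubling the radius
                                                # halves the relative surface area
```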

The Bergmann and Allen rules likely also explain at least some of the variation in body-size and stature as between racial groups. 

For example, Eskimos tend to be short and stocky, with short arms and legs and flat faces. This minimizes the ratio of surface-area-to-volume, ensures only a minimal proportion of the body is directly exposed to the elements, and also minimizes the extent of extremities (e.g. arms, legs, noses), which are especially vulnerable to the cold. 

In contrast, populations from tropical climates, such as African blacks and Australian Aboriginals, tend to have relatively long arms and legs as compared to trunk size, a factor which likely contributes towards their success in some athletic events. 

Yet, interestingly, Beals et al report that:

“Braincase volume is more highly correlated with climate than any of the summative measures of body-size” (Beals et al 1984: p305).

Yet, contrary to popular wisdom, we do not lose an especially high proportion of our body heat through our heads, certainly not “up to 80 percent of our body heat”, as claimed in the anthropology textbook quoted above – a preposterous figure, given that the head comprises only about 10% of the body’s overall surface area.

Indeed, the amount of heat lost through our head is relatively higher than that lost through other parts of the body only because other parts of the body are typically covered by clothes.

At any rate, it is surely implausible that an increase in brain tissue, which is metabolically highly expensive, would have evolved solely for the purpose of regulating temperature, when the same result could surely have been achieved by modifying only the external shape of the skull. 
 
Conversely, even if race differences in brain-size did evolve purely for temperature regulation, differences in intelligence could still have emerged as a by-product of such selection.

In other words, if larger brains did evolve among populations inhabiting colder latitudes solely for the purposes of temperature regulation, the extra brain tissue that resulted may still have resulted in greater levels of cognitive ability among these populations, even if there was no direct selection for increased cognitive ability itself.

Europeans

The first racial group discussed by Lynn are those he terms “Europeans” (i.e. white Caucasians). He reviews data on IQ both in Europe and among diaspora populations elsewhere in the world (e.g. North America, Australia). 

The results are consistent, almost always giving an average IQ of about 100 – though this figure is, of course, arbitrary and reflects the fact that IQ tests were first normed by reference to European populations. This is what James Thompson refers to as the ‘Greenwich mean IQ’ and the IQs of all other populations in Lynn’s book are calculated by reference to this figure. 
 
Southeast Europeans, however, score slightly lower. This, Lynn argues, is because: 

“Balkan peoples are a hybrid population or cline, comprising a genetic mix between the Europeans and South Asians in Turkey” (p18). 

Therefore, as a hybrid population, their IQs are intermediate between those of the two parent populations, and, according to Lynn, South Asians score somewhat lower in IQ than do white European populations (see below).[4]

In the newer 2015 edition, Lynn argues that IQs are somewhat lower elsewhere in southern Europe too, specifically in southern Spain and Italy, for much the same reason, namely because: 

“The populations of these regions are a genetic mix of European people with those from the Near East and North Africa, with the result that their IQs are intermediate between the parent populations” (Preface, 2015 Edition).[5]

An alternative explanation is that these regions (e.g. Balkan countries, Southern Italy) have lower living-standards. 

However, instead of viewing differences in living standards as causing differences in recorded IQs as between populations, Lynn argues that differences in innate ability themselves cause differences in living standards, because, according to Lynn, more intelligent populations are better able to achieve high levels of economic development (see IQ and the Wealth of Nations).[6]

Moreover, Lynn observes that in Eastern Europe, living standards are substantially below elsewhere in Europe as a consequence of the legacy of communism. However, populations from Eastern Europe score only slightly below those from elsewhere in Europe, suggesting that even substantial differences in living-standards may have only a minor impact on IQ (p20). 

Portuguese 

The Portuguese also, Lynn claims, score lower than elsewhere in Europe. 

However, he cites just two studies. These give average IQs of 101 and 88 respectively, which Lynn averages to 94.5 (p19). 

Yet these two results are actually highly divergent, the former being slightly higher than the average for north-west Europe. This seems an inadequate basis on which to posit a genetic difference in ability. 

However, Lynn provocatively concludes: 

“Intelligence in Portugal has been depressed by the admixture of sub-Saharan Africans. Portugal was the only European country to import black slaves from the fifteenth century onwards” (p19). 

This echoes Arthur de Gobineau’s infamous theory that empires decline because, through their conquests, they absorb large numbers of inferior peoples, who then inevitably interbreed with their conquerors, which, according to de Gobineau, results in the dilution of the very qualities that permitted their imperial glories in the first place. 

In support of Lynn’s theory, mitochondrial DNA studies have indeed found a higher frequency of sub-Saharan African Haplogroup L in Portugal than elsewhere in Europe (e.g. Pereira et al 2005). 

Ireland and ‘Selective Migration’ 

IQs are also, Lynn reports, somewhat lower than elsewhere in Europe in Ireland. 

Lynn cites four studies of Irish IQs which give average scores of 87, 97, 93 and 91 respectively. Again, these are rather divergent but nevertheless consistently below the European average, all but one substantially so. 
 
Of course, in England, in less politically correct times, the supposed stupidity of the Irish was once a staple of popular humour, Irish jokes being the English equivalent of Polish jokes in America.[7]
 
This seems anomalous given the higher average IQs recorded elsewhere in North-West Europe, especially the UK, Ireland’s next-door neighbour, whose populations are closely related to those in Ireland. 
 
Of course, historically Ireland was, until relatively recently, quite poor by European standards. 

It is also sparsely populated and a relatively high proportion of the population live in rural areas, and there is some evidence that people from rural areas have lower average IQs than those from urban areas.

However, economic deprivation cannot explain the disparity. Today, despite the 2008 economic crash and the inevitable British bailout, Ireland enjoys, according to the UN, a higher Human Development Index than does the UK, and has done for some time. Indeed, by this measure, Ireland enjoys one of the highest standards of living in the world.

Moreover, although formerly Ireland was much poorer, the studies cited by Lynn were published from 1973 to 1993, yet show no obvious increase over time.[8] 
 
Lynn himself attributes the depressed Irish IQ to what he calls ‘selective migration’, claiming: 

“There has been some tendency for the more intelligent to migrate, leaving less intelligent behind” (p19). 

Of course, this would suggest, not only that the remaining Irish would have lower average IQs, but also that the descendants of Irish émigrés in Britain, Australia, America and other diaspora communities would have relatively higher IQs than other white people. 

In support of this, Americans reporting Irish ancestry do indeed enjoy higher relative incomes as compared to other white American ethnicities. 

Interestingly, Lynn also invokes “selective migration” to explain the divergences in East Asian IQs. Here, however, it was supposedly the less intelligent who chose to migrate (p136; p138; p169).[9]

Meanwhile, other hereditarians have sought to explain away the impressive academic performance of recent African immigrants to the West, and their offspring, by reference to selective immigration of high IQ Africans, an explanation which is wholly inadequate on mathematical grounds alone (see Chisala 2015b; 2019).
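Without rehearsing Chisala’s argument in full, the flavour of the arithmetic can be conveyed with a back-of-the-envelope sketch. Assuming, as hereditarians do, a roughly normal distribution with a mean of 70 and a standard deviation of 15 in the source population, only a sliver of that population clears the thresholds that would be required, and that is before any regression of the immigrants’ children toward the source-population mean is taken into account. The figures below are illustrative only and are not Chisala’s actual calculation.

```python
# Back-of-the-envelope sketch of the 'selective immigration' arithmetic
# (illustrative only; not Chisala's actual calculation). Assume a normal
# distribution with mean 70 and SD 15 for the source population, and ask
# what fraction of that population clears various IQ thresholds.
from statistics import NormalDist

source = NormalDist(mu=70, sigma=15)

for threshold in (100, 115, 130):
    share = 1 - source.cdf(threshold)
    print(f"IQ > {threshold}: {share:.4%} of the source population")

# Roughly 2.3% exceed 100, ~0.13% exceed 115 and ~0.003% exceed 130, so
# explaining the observed performance of immigrant children purely by
# selection from this thin tail (before any regression toward the source
# mean) would require implausibly extreme selectivity.
```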

It certainly seems plausible that migrants differ in personality from those who choose to remain at home. It is likely that they are braver, have greater determination, drive and willpower than those who choose to stay behind. They may also perhaps be less ethnocentric, and more tolerant of foreign cultures.[10]

However, I see no obvious reason they would differ in intelligence.

As Chanda Chisala writes:

“Realizing that life is better in a very rich country than in your poor country is never exactly the most g-loaded epiphany among Africans” (Chisala 2015b).

Likewise, it likely didn’t take much brain-power for Irish people to realize during the Irish Potato Famine that they were less likely to starve to death if they emigrated abroad.

Of course, wealth is correlated with intelligence and may affect the decision to migrate.

The rich usually have little economic incentive to migrate, while the poor may be unable to afford the often-substantial costs of migration (e.g. transportation).

However, without actual historical data showing certain socioeconomic classes or intellectual ability groups were more likely to migrate than others, Lynn’s claims regarding ‘selective migration’ represent little more than a post-hoc rationalization for IQ differences that are otherwise anomalous and not easily explicable in terms of heredity. 

Ireland, Catholicism and Celibacy

Interestingly, in the 2015 edition of ‘Race Differences in Intelligence’, Lynn also proposes a further explanation for the low IQs supposedly found in Ireland, namely the clerical celibacy demanded under Catholicism. Thus, Lynn argues:

“There is a dysgenic effect of Roman Catholicism, in which clerical celibacy has reduced the fertility of some of the most intelligent, who have become priests and nuns” (2015 Edition; see also Lynn 2015). 

Of course, this theory presupposes that it was indeed the most intelligent among the Irish people who became priests. However, this is a questionable assumption, especially given the well-established inverse correlation between intelligence and religiosity (Zuckerman et al 2013).

However, it is perhaps arguable that, in an earlier age, when religious dogmas were relentlessly enforced, religious scholarship may have been the only form of intellectual endeavour that it was safe for intellectually-minded people to engage in.

Anyone investigating more substantial matters, such as whether the earth revolved around the sun or vice versa, was liable to be burnt at the stake if he reached the wrong (i.e. the right) conclusion.

However, such an effect would surely also apply in other historically Catholic countries as well.

Yet there is little if any evidence of depressed IQs in, say, France or Austria, although the populations of both these countries were, until recently, like that of Ireland, predominantly Catholic.[11]

Africans 

The next chapter is titled “Africans”. However, Lynn uses this term to refer specifically to black Africans – i.e. those formerly termed ‘Negroes’. He therefore excludes from this chapter, not only the predominantly ‘Caucasoid’ populations of North Africa, but also African Pygmies and the Khoisan of southern Africa, who are considered separately in a chapter of their own. 

Lynn’s previous estimate of the average sub-Saharan African IQ as just 70 provoked widespread incredulity and much criticism. However, undeterred, Lynn now goes even further, estimating the average African IQ even lower, at just 67.[12]

Curiously, according to Lynn’s data, populations from the Horn of Africa (e.g. Ethiopia and Somalia) have IQs no higher than populations elsewhere in sub-Saharan Africa.[13]

Yet populations from the Horn of Africa are known to be partly, if not predominantly, Caucasoid in ancestry, having substantial genetic affinities with populations from the Middle East.[14]

Therefore, just as populations from Southern Europe have lower average IQs than other Europeans because, according to Lynn, they are genetically intermediate between Europeans and Middle Eastern populations, so populations from the Horn of Africa should score higher than those from elsewhere in sub-Saharan Africa because of intermixture with Middle Eastern populations.

However, Lynn’s data gives average IQs for Ethiopia and Somalia of just 68 and 69 respectively – no higher than elsewhere in sub-Saharan Africa (The Intelligence of Nations: p87; p141-2).

On the other hand, blacks resident in western economies score rather higher, with average IQs around 85. 

The only exception, strangely, is the Beta Israel, who also hail from the Horn of Africa but are now mostly resident in Israel, yet who score no higher than those blacks still resident in Africa. From this, Lynn concludes:

“These results suggest that education in western schools does not benefit the African IQ” (p53). 

However, why then do blacks resident in other western economies score higher? Are blacks in Ethiopia somehow treated differently than those resident in the UK, USA or France? 

For his part, Lynn attributes the higher scores of blacks resident in these other Western economies both to superior economic conditions and, more controversially, to racial admixture. 

Thus, African-Americans in particular are known to be a racially-mixed population, with substantial European ancestry (usually estimated at around 20%) in addition to their African ancestry.[15]

Therefore, Lynn argues that the higher IQs of African-Americans reflect, in part, the effect of the European portion of their ancestry. 

However, this explanation is difficult to square with the observation that recent African immigrants to the US, themselves presumably largely of unmixed African descent, actually consistently outperform African-Americans (and sometimes whites as well) both academically and economically (Chisala 2015a; 2015c; Anderson 2015).[16]

“Musical Ability” 

Lynn also reviews the evidence pertaining to one class of specific mental ability not covered in most previous reviews on the subject – namely, race differences in musical ability. 

The accomplishments of African-Americans in twentieth century jazz and popular music are, of course, much celebrated. To Lynn, however, this represents a paradox, since musical abilities are known to correlate with general intelligence and African-Americans generally have low IQs. 
 
In addressing this perceived paradox, Lynn reviews the results of various psychometric measures of musical ability. These tests include: 

  • Recognizing a change in pitch; 
  • Remembering a tune; 
  • Identifying the constituent notes in a chord; and 
  • Recognizing whether different songs have similar rhythm (p55). 

In relation to these sorts of tests, Lynn reports that African-Americans actually score somewhat lower in most elements of musical intelligence than do whites, and that their musical ability is generally commensurate with their generally low IQs. 

The only exception is for rhythmical ability. 

This is, of course, congruent with the familiar observation that black musical styles place great emphasis on rhythm. 

However, even with respect to rhythmical ability, blacks score no higher than whites. Their scores on measures of rhythmical ability are exceptional only in that this is the one form of musical ability on which blacks score equal to, rather than lower than, whites (p56). 

For Lynn, the low scores of African-Americans in psychometric tests of musical ability are, on further reflection, little surprise: 

“The low musical abilities of Africans… are consistent with their generally poor achievements in classical music. There are no African composers, conductors, or instrumentalists of the first rank and it is rare to see African players in the leading symphony orchestras” (p57). 

However, who qualifies as a composer, conductor or instrumentalist “of the first rank” is, ultimately, unlike the results of psychometric testing, a subjective assessment, as are all artistic judgements. 

Moreover, why is achievement in classical music, an obviously distinctly western genre of music, to be taken as the sole measure of musical accomplishment? 

Even if we concede that the ability required to compose and perform classical music is greater than that required for other genres (e.g. jazz and popular music), musical intelligence surely facilitates composition and performance in other genres too – and, given that the financial rewards offered by popular music often dwarf those enjoyed by players and composers of classical music, the more musically-gifted race would have every incentive to dominate this field too. 

Perhaps, then, these psychometric measures fail to capture some key element of musical ability relevant to musical accomplishment, especially in genres other than classical. 

In this context, it is notable that no lesser champion of standardized testing than Arthur Jensen has himself acknowledged that intelligence tests are incapable of measuring creativity (Langan & LoSasso 2002: p24-5). 

In particular, one feature common to many African-American musical styles, from rap freestyling to jazz, is improvisation.  

Thus, Dinesh D’Souza speculates tentatively that: 

“Blacks have certain inherited abilities, such as improvisational decision making, that could explain why they predominate in… jazz, rap and basketball” (The End of Racism: p440-1). 

Steve Sailer rather less tentatively expands upon this theme, positing an African advantage in: 

“Creative improvisation and on-the-fly interpersonal decision-making” (Sailer 1996). 

On this basis, Sailer concludes that: 

“Beyond basketball, these black cerebral superiorities in ‘real time’ responsiveness also contribute to black dominance in jazz, running with the football, rap, dance, trash talking, preaching, and oratory” (Sailer 1996). 

“Bushmen and Pygmies” 

Grouped together as the subjects of the next chapter are black Africans’ sub-Saharan African neighbours, namely San Bushmen and Pygmies

Quite why these two populations are grouped together by Lynn in a single chapter is unclear. 

He cites Cavalli-Sforza et al in The History and Geography of Human Genes as providing evidence that: 

“These two peoples have distinctive but closely related genetic characteristics and form two related clusters” (p73). 

However, although both groups are obviously indigenous to sub-Saharan Africa and quite morphologically distinct from the other black African populations who today represent the great majority of the population of sub-Saharan Africa, they share no especial morphological similarity to one another.[17]

Moreover, since Lynn acknowledges that they have “distinctive… genetic characteristics and form two… clusters”, they should presumably each have merited a chapter of their own.[18]

One therefore suspects that they are lumped together more for convenience than on legitimate taxonomic grounds. 

In short, both are marginal groups of hunter-gatherers, now few in number, few if any of whom have been exposed to the sort of standardized testing necessary to provide a useful estimate of their average IQs. Therefore, since his data on neither group alone is really sufficient to justify its own chapter, he groups them together in a single chapter.  

However, the lack of data on IQ for either group means that even this combined chapter remains one of the shorter chapters in Lynn’s book, and, as we will see, the paucity of reliable data on the cognitive ability of either group almost leads one to suspect that he might have been better off omitting both groups from his survey of race differences in cognitive ability altogether. 

San Bushmen 

It may be some meagre consolation to African blacks that, at least in Lynn’s telling, they no longer qualify as the lowest scoring racial group when it comes to IQ. Instead, this dubious honour is now accorded their sub-Saharan African neighbours, San Bushmen
 
In Race: The Reality of Human Differences (which I have reviewed here and here), authors Vincent Sarich and Frank Miele quote anthropologist and geneticist Henry Harpending as observing: 

“All of us have the impression that Bushmen are really quick and clever and are quite different from their [Bantu] neighbors… Bushmen don’t look like their black African neighbors either. I expect that there will soon be real data from the Namibian school system about the relative performance of Bushmen… and Bantu kids – or more likely, they will suppress it” (Race: The Reality of Human Differences: p227). 

Today, however, some fifteen or so years after Sarich and Miele published this quotation, the only such data I am aware of is that reported by Lynn in this book, which suggests, at least according to Lynn, a level of intelligence even lower than that of other sub-Saharan Africans. 

Unfortunately, however, the data in question is very limited and, in my view, inadequate to support Lynn’s controversial conclusions regarding Bushman ability.  

It consists of just three studies, none of which remotely resembles a full IQ test (p74-5). 

Yet, from this meagre dataset, Lynn does not hesitate to attribute to Bushmen an average IQ of just 52. 

If Lynn’s estimate of the average sub-Saharan African IQ at around 70 provoked widespread incredulity, then his much lower estimate for Bushmen is unlikely to fare better. 

Lynn anticipates such a reaction, and responds by pointing out:  

“An IQ of 54 represents the mental age of the average European 8-year-old, and the average European 8-year-old can read, write, and do arithmetic and would have no difficulty in learning and performing the activities of gathering foods and hunting carried out by the San Bushmen. An average 8-year-old can easily be taught to pick berries put them in a container and carry them home, collect ostrich eggs and use the shells for storing water and learn how to use a bow and arrow” (p76). 

Indeed, Lynn continues, other non-human animals survive in difficult, challenging environments with even lower levels of intelligence:  

“Apes with mental abilities about the same as those of human 4-year olds survive quite well as gatherers and occasional hunters and so also did early hominids with IQs around 40 and brain sizes much smaller than those of modern Bushmen. For these reasons there is nothing puzzling about contemporary Bushmen with average IQs of about 54” (p77). 

Here, Lynn makes an important point. Many non-human animals survive and prosper in ecologically challenging environments with levels of intelligence much lower than that of any hominid, let alone any extant human race. 

On the other hand, however, I suspect Lynn would not last long in the Kalahari Desert – the home environment of most contemporary Bushmen.

Pygmies 

Lynn’s data on the IQs of Pygmies is even more inadequate than his data for Bushmen. Indeed, it amounts to just one study, which again fell far short of a full IQ test. 

Moreover, the author of the study, Lynn reports, did not quantify his results, reporting only that Pygmies scored “much worse” than other populations tested using the same test (p78). 

However, while the other populations tested using the same test, and outperforming the Pygmies, included “Eskimos, Native American and Filipinos”, Lynn conspicuously does not claim that they included other black Africans, or indeed other very low-IQ groups such as Australian Aboriginals (p78). 

Thus, Lynn’s assumption that Pygmies are lower in cognitive ability than other black Africans is not supported even by the single study that he cites. 

Lynn also infers a low level of intelligence for Pygmies from their lifestyle and mode of sustenance: 

“Most of them still retain a primitive hunter-gatherer existence while many of the Negroid Africans became farmers over the last few hundred years” (p78). 

Thus, Lynn assumes that whether a population has successfully transitioned to agriculture is largely a product of their intelligence (p191). 

In contrast, most historians and anthropologists would emphasize the importance of environmental factors in explaining whether a group transitions to agriculture.[19]

Finally, Lynn also infers a low IQ from the widespread enslavement of Pygmies by neighbouring Bantus: 

“The enslavement of Pygmies by Negroid Africans is consistent with the general principle that the more intelligent races generally defeat and enslave the less intelligent, just as Europeans and South Asians have frequently enslaved Africans but not vice versa” (p78). 

However, while it may be a “general principle that the more intelligent races typically defeat and enslave the less intelligent” (p78), this is hardly a rigid rule. 

After all, Arabs often enslaved Europeans.[20] Yet, according to Lynn, the Arabs belong to a rather less intelligent race than do the Europeans whom they so often enslaved. 

It is notable that Pygmies are the only racial group included in Lynn’s survey for whom he does not provide an actual figure as an estimate of their average IQ, which presumably reflects a tacit admission of the inadequacy of the available data.[21] 

Curiously, unlike for all the other racial groups discussed, Lynn also fails to provide any data on Pygmy brain-size. 

Presumably, Pygmies have small brains as compared to other races, if only on account of their smaller body-size – but what about their brain-size relative to body-size? Is there simply no data available?

Australian Aborigines 

Another group who are barely mentioned at all in most previous discussions of the topic of race differences in intelligence are Australian Aborigines. Here, however, unlike for Bushmen and Pygmies, data from Australian schools are actually surprisingly abundant. 

These give, Lynn reports, an average Aboriginal IQ of just 62 (p104). 

Unlike his estimates for Bushmen and Pygmies, this figure seems to be reliable, given the number of studies cited and the consistency of their results. One might say, then, that Australian Aboriginals have the lowest recorded IQs of any human race for whom reliable data is available. 

Interestingly, in addition to his data on IQ, Lynn also reports the results of Piagetian measures of development conducted among Aboriginals. He reports, rather remarkably, that a large minority of Aboriginal adults fail to reach what Piaget called the concrete operational stage of development – or, more specifically, fail to recognize that a substance, transferred to a new container, necessarily remains the same in quantity (p105-7). 

Perhaps even more remarkable, however, are reports of Aborigine spatial memory (p107-8). This refers to the ability to remember the location of objects, and their locations relative to one another. 

Thus, he reports, one study found that, despite their low general cognitive ability, Aborigines nevertheless score much higher than Europeans in tests of spatial memory (Kearins 1981).  

Another study found no difference in the performance of whites and Aborigines (Drinkwater 1975). However, since Aborigines have much lower IQs overall, even equal performance on spatial memory as against Europeans is still out of sync with the performance of whites and Aborigines on other types of intelligence test (p108). 

Lynn speculates that Aboriginal spatial memory may represent an adaptation to facilitate navigation in a desert environment with few available landmarks.[22]

The difference, Lynn argues, seems to be innate, since it was found even among Aborigines who had been living in an urban environment (i.e. not a desert) for several generations (p108; but see Kearins 1986). 

Two other studies reported lower scores than for Europeans. However, one was an unpublished dissertation and hence must be treated with caution, while the other (Knapp & Seagrim 1981) “did not present his data in such a way that the magnitude of the white advantage can be calculated” (p108). 

Intriguingly, Lynn reports that this ability even appears to be reflected in neuroanatomy. Thus, despite smaller brains overall, Aborigines’ right visual cortex, implicated in spatial ability, is relatively larger than in Europeans (Klekamp et al 1987; p108-9).

New Guineans and Jared Diamond 

In his celebrated Guns, Germs and Steel, Jared Diamond famously claimed: 

“In mental ability New Guineans are probably genetically superior to Westerners, and they surely are superior in escaping the devastating developmental disadvantages under which most children in industrialized societies grow up” (Guns, Germs and Steel: p21). 

Diamond bases this claim on the fact that, in the West, survival, throughout most of our recent history, depended on who was struck down by disease, which was largely random. 

In contrast, in New Guinea, he argues, people had to survive on their wits, with survival depending on one’s ability to procure food and avoid homicide, activities in which intelligence was likely to be at a premium (Guns, Germs and Steel: p20-21). 

He also argues that the intelligence of western children is likely reduced because they spend too much time watching television and movies (Guns, Germs and Steel: p21). 

However, there is no evidence that television has a negative impact on children’s cognitive development. Indeed, given that the rise in IQs over the twentieth century has been concomitant with increases in television viewing, it has even been speculated that increasingly stimulating visual media may have contributed to rising IQs. 

On the basis of two IQ studies, plus three studies of Piagetian development, Lynn concludes that the average IQ of indigenous New Guineans is just 62 (p112-3). 

This is, of course, exactly the same as his estimate for the average IQ of Australian Aboriginals.  

It is therefore consistent with Lynn’s racial taxonomy, since, citing Cavalli-Sforza et al, he classes New Guineans as belonging to the same genetic cluster, and hence as part of the same race, as Australian Aboriginals (p101). 

Pacific Islanders 

Other Pacific Islanders, however, including Polynesians, Micronesians, Melanesians and Hawaiians, are grouped separately and hence receive a chapter of their own. 

They also, Lynn reports, score rather higher in IQ, with most such populations having average IQs of about 85 (p117). However, the Māoris of New Zealand score higher still, with an average IQ of about 90 (p116). 

Hawaiians and Hybrid Vigor 

For the descendants of the inhabitants of one particular Pacific island group, namely Hawaii, Lynn also reports data regarding the IQs of racially-mixed individuals, both those of part-Native-Hawaiian and part-East Asian ancestry, and those of part-Native-Hawaiian and part-European ancestry. 

These racial hybrids, as expected, score on average between the average scores for the two parent populations. However, Lynn reports: 

“The IQs of the two hybrid groups are slightly higher than the average of the two parent races. The average IQ of the Europeans and Hawaiians is 90.5, while the IQ of the children is 93. Similarly, the average IQ of the Chinese and Hawaiians is 90, while the IQ of the children is 91. The slightly higher than expected IQs of the children of the mixed race parents may be a hybrid vigor or heterosis effect” (p118). 

Actually, the difference between the “expected IQs” and the IQs actually recorded for the hybrid groups is so small (only one point for the Chinese-Hawaiians), that it could easily be dismissed as mere noise, and I doubt it would reach statistical significance. 
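A rough power calculation illustrates why. Taking the conventional test standard deviation of 15, and sample sizes that are assumptions on my part (Lynn does not quote them in the passage above), gaps of one to two and a half points are reliably detectable only with very large samples.

```python
# Rough sketch of why a 1-2.5 point 'heterosis' gap is hard to distinguish from
# noise. Sample sizes below are assumptions (Lynn's are not quoted above); the
# test SD is taken as the conventional 15 points.
from statistics import NormalDist

def min_detectable_gap(n_per_group: int, sd: float = 15.0,
                       alpha: float = 0.05, power: float = 0.80) -> float:
    """Smallest mean difference detectable in a two-sample z-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    return (z_alpha + z_power) * sd * (2 / n_per_group) ** 0.5

for n in (100, 500, 2000):
    print(n, round(min_detectable_gap(n), 1))
# With 100 per group the detectable gap is ~5.9 points, with 500 it is ~2.7,
# and only around 2000 per group does it fall to ~1.3, so gaps of 1-2.5 points
# could easily be sampling noise unless the samples were very large.
```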

Nevertheless, Lynn’s discussion begs the question as to why hybrid vigor has not similarly elevated the IQs of the other hybrid, or racially-mixed, populations discussed elsewhere in the book, and why Lynn does not address this issue when reporting their average IQs. 

Of course, while hybrid vigor is a real phenomenon, so are outbreeding depression and hybrid incompatibility. 

Presumably, then, which of these countervailing effects outweighs the other for different types of hybrid depends on the degree of genetic distance between the two parent populations. This, of course, varies for different races. 

It is therefore possible that some racial mixes may tend to elevate intelligence, whereas others, especially between more distantly-related populations, may tend, on average, to depress intelligence. 

For what it’s worth, Pacific Islanders, including Hawaiians, are thought to be genetically closer to East Asians than to Europeans. 

South Asians and North Africans

Another group rarely treated separately in earlier works are those whom Lynn terms “South Asians and North Africans”, though this group also includes populations from the Middle East. 

Physical anthropologists often lumped these peoples together with Europeans as collectively “Caucasian” or “Caucasoid”. However, while acknowledging that they are “closely related to the Europeans”, Lynn cites Cavalli-Sforza et al as showing they form “a distinctive genetic cluster” (p79). 

He also reports that they score substantially lower in IQ than do Europeans. Their average IQ in their native homelands is just 84 (p80), while South Asians resident in the UK score only slightly higher with an average IQ of just 89 (p82-4). 

This conclusion is surely surprising and should, in my opinion, be treated with caution. 

For one thing, all of the earliest known human civilizations – namely, Mesopotamia, Egypt and the Indus Valley civilization – surely emerged among these peoples, or at least in regions today inhabited primarily by people of this race.[23]

Moreover, people of Indian ancestry in particular are today regarded as a model minority in both Britain and America, whose overrepresentation in the professions, especially medicine, is widely commented upon.[24]

Indeed, according to some measures, British-Indians are now the highest earning ethnicity in Britain, or the second-highest earning after the Chinese, and Indians are also the highest earners in the USA.[25]

Interestingly, in this light, one study cited by Lynn showed a massive gain of 14 points for children from India who had been resident in the UK for more than four years as compared to those who had been resident for less than four years, the former scoring almost as high in IQ as the indigenous British, with an average IQ of 97 (p83-4; Mackintosh & Mascie-Taylor 1985).[26]

In the light of this study, it would be interesting to measure the IQs of a sample composed exclusively of people who traced their ancestry to India but who had been resident in the UK for the entirety of their lives (or even whose ancestors had been resident in the UK for several generations), since all of the other studies cited by Lynn of the IQs of Indian children in the UK presumably include both recent arrivals and long-term residents grouped together. 

Interestingly, the high achievement of immigrants, and their descendants, from India is not matched by those from neighbouring countries such as Bangladesh or Pakistan. Indeed, the same data suggesting that Indians are the highest earning ethnicity in Britain also show that British-Pakistanis and Bangladeshis are among the lowest earners. 

The primary divide between these three countries is, of course, not racial but rather religious. This suggests religion as a causal factor in the difference.[27]

Thus, one study found that Muslim countries tend to have lower average IQs than do non-Muslim countries (Templer 2010; see also Dutton 2020). 

Perhaps, then, cultural practices in Muslim countries are responsible for reducing IQs. 

For example, the prevalence of consanguineous (i.e. incestuous) marriage, especially cross-cousin marriage, may have an effect on intelligence due to inbreeding depression (Woodley 2009). 

Another cultural practice that could affect intelligence in Muslim countries is the practice of even pregnant women fasting during daylight hours during Ramadan (cf. Aziz et al 2004). 

However, Lynn’s own data show little difference between IQs in India and those in Pakistan and Bangladesh, nor indeed between IQs in India and those in Muslim countries in the Middle East or North Africa. Nor, according to Lynn’s data, do people of Indian ancestry resident in the UK score noticeably higher in IQ than do people who trace their ancestry to Bangladesh, Pakistan or the Middle East. 

An alternative suggestion is that Middle-Eastern and North African IQs have been depressed as a result of interbreeding with sub-Saharan Africans, perhaps as a result of the Islamic slave trade.[28]

This is possible because, although male slaves in the Islamic world were routinely castrated and hence incapable of procreation, female slaves outnumbered males and were often employed as concubines, a practice which, unlike in puritanical North America, was regarded as perfectly socially acceptable on the part of slave owners. 

This would be consistent with the finding that Arab populations from the Middle East show some evidence of sub-Saharan African ancestry in their mitochondrial DNA, which is passed down the female line, but not in their Y-chromosome ancestry, passed down the male line (Richards et al 2003). 

In contrast, in the United States, the use of female slaves for sexual purposes, although it certainly occurred, was, at least in theory, very much frowned upon. 

In addition, in North America, due to the one-drop rule, all mixed-race descendants of slaves with any detectable degree of black African ancestry were classed as black. Therefore, at least in theory, the white bloodline would have remained ‘pure’, though some mixed-race individuals may have been able to pass.

Therefore, sub-Saharan African genes may have entered the Middle Eastern and North African gene-pools in a way that they could not among whites in North America. 

This might explain why genotypic intelligence among North African and Middle Eastern populations may have declined in the period since the great civilizations of Mesopotamia and ancient Egypt and even since the Golden Age of Islam, when the intellectual achievements of Middle Eastern and North African peoples seemed so much more impressive.

Jews

Besides Indians, the Jews are another economically and intellectually overachieving model minority who derive, at least in part, from the race whom Lynn classes as “South Asians and North Africans”. 

Lynn has recently written a whole book on the topic of Jewish intelligence and achievement, titled The Chosen People: A Study of Jewish Intelligence and Achievement (review forthcoming). 

However, in ‘Race Differences in Intelligence’, Jews do not even warrant a chapter of their own. Instead, they are discussed only at the end of the chapter on “South Asians and North Africans”, although Ashkenazi Jews also have substantial European ancestry. 

The decision not to devote an entire chapter to the Jewish people is surely correct, because, although even widely disparate Jewish groups (e.g. Ashkenazim, Sephardim and Mizrahim, and even the Lemba) do indeed share genetic affinities, Jews are not racially distinct (i.e. reliably physically distinguishable on phenotypic criteria) from other peoples. 

However, the decision to include them in the chapter on “South Asians and North Africans” is potentially controversial, since, as Lynn readily acknowledges, the Ashkenazim in particular, who today constitute the majority of world Jewry, have substantial European as well as Middle Eastern ancestry. 

Lynn claims British and US Jews have average IQs of around 108 (p68). His data for Israel are not broken down by ethnicity, but give an average IQ for Israel as a whole of 95, from which Lynn, rather conjecturally, infers scores of 103 for Ashkenazi Jews, 91 for Mizrahi Jews and 86 for Palestinian-Arabs (p94). 

Lynn’s explanations for Ashkenazi intelligence, however, are wholly unpersuasive. 

First, he observes that, despite Biblical and Talmudic admonitions against miscegenation with Gentiles, Jews inevitably interbred to some extent with the host populations alongside whom they lived. From this, Lynn infers that: 

“Ashkenazim Jews in Europe will have absorbed a significant proportion of the genes for higher intelligence possessed by… Europeans” (p95). 

It is true that, if, as Lynn claims, Europeans are indeed a more intelligent race than populations from the Middle East, then interbreeding with Europeans may explain how Ashkenazim came to score higher in IQ than do other populations tracing their ancestry to the Middle East. 

However, interbreeding with Europeans can hardly explain how Ashkenazi Jews came to outscore, and outperform academically and economically, even the very Europeans with whom they are said to have interbred! 

This explanation therefore cannot account for why Ashkenazim have higher IQs than do Europeans themselves. 

Lynn’s second explanation for high Ashkenazi Jewish IQs is equally unpersuasive. He suggests that: 

“The second factor that has probably operated to increase the intelligence of Ashkenazim Jews in Europe and the United States as compared with Oriental Jews is that the Ashkenazim Jews have been more subject to persecution… Oriental Jews experienced some persecution sufficient to raise their IQ of 91, as compared with 84 among other South Asians and North Africans, but not so much as that experienced by Ashkenazim Jews in Europe.” (p95).[29]

On purely theoretical grounds, the idea that persecution selects for intelligence may seem reasonably plausible, if hardly compelling.[30] 

However, there is no evidence that persecution does indeed increase a population’s level of intelligence. On the contrary, other groups who have been subject to persecution throughout much of their histories – e.g. the Roma (i.e. Gypsies) and African-Americans – are generally found to have relatively low IQs. 

East and South-East Asians 

Excepting Jews, the highest average IQs are found among East Asians, who have, according to Lynn’s data, an average IQ of 105, somewhat higher than that of Europeans (p121-48). 

However, whereas Jews score relatively higher in verbal intelligence than spatio-visual ability, East Asians show the opposite pattern, with relatively higher scores for spatio-visual ability.[31]

However, it is important to emphasize that this relatively high figure applies only to East Asians – i.e. Chinese, Japanese, Koreans, Taiwanese etc. 

It does not apply to the related populations of Southeast Asia (i.e. Thais, Filipinos, Vietnamese, Malaysians, Cambodians, Indonesians etc.), who actually score much lower in IQ, with average scores of only around 87 in their indigenous homelands, but rising to 93 among those resident in the US. 

Thus, Lynn distinguishes the East Asians from Southeast Asians as a separate race, on the grounds that the latter, despite “some genetic affinity with East Asians”, form a distinct genetic cluster in data gathered and analyzed by Cavalli-Sforza et al, and also have distinct morphological features, with “the flattened nose and epicanthic eye-fold… [being] less prominent” than among East Asians (p97). 

This is an important point, since many previous writers on the topic have implied that the higher average IQs of East Asians applied to all ‘Asians’ or ‘Mongoloids’, which would presumably include South-East Asians.[32]

Yet, in Lynn’s opinion, it is just as misleading to group all these groups together as ‘Mongoloid’ or ‘Asian’ as it was to group “Europeans” and “South Asians and North Africans” together as ‘Caucasian’ or ‘Caucasoid’. 

However, it is unclear that the low scores recorded throughout South-East Asia are entirely genetic in origin. Thus, Vietnamese resident in the West have sometimes, but not always, scored considerably higher, and Jason Malloy suggests that Lynn exaggerates the overrepresentation of ethnic Chinese among Vietnamese immigrants to the West so as to attribute such results to East Asians rather than South-East Asians (Malloy 2014).[33]

Moreover, in relation to Lynn’s ‘Cold Winters Theory’ (discussed below), whereby populations that were exposed to colder temperatures during their evolution are claimed to have evolved higher levels of intelligence in order to cope with the adaptive challenges that surviving cold temperatures posed, it is notable that climate varies greatly across China, reflecting the geographic size of the country, with Southern China having a subtropical climate with mild winters.

However, perhaps East Asians, such as the Han Chinese, are to be regarded as only relatively recent arrivals in what is now Southern China. This would be consistent with the claim of some physical anthropologists that some aspects of the morphology of East Asians reflect adaptation to the extreme cold of Siberia and the Steppe, and also with the historical expansion of the Han Chinese.

More problematic for ‘Cold Winters Theory’ is the fact that, although Lynn classifies them as East Asian (p121), the higher average IQ scores of East Asians (as compared to whites) do not even extend to the people after whom the Mongoloid race was named – namely, the Mongols themselves.

According to Lynn, Mongolians score only around the same as whites, with an average IQ of 101 (Lynn 2007).

This report is based on just two studies. Moreover, it had not been published at the time the first edition of ‘Race Differences in Intelligence’ came off the presses.

However, Lynn infers a lower IQ for Mongolians from their lower level of cultural, technological and economic development (p240).

Yet, inhabiting the Mongolian-Manchurian grassland Steppe and Gobi Desert, Mongolians were subjected to an environment even colder and more austere than that of other East Asians.

Lynn’s explanation for this anomaly is that the low population-size of the Mongols, and their isolation from other populations, meant that the necessary mutations for higher IQ never arose (p240).[34]

This is the same explanation that Lynn provides for the related anomaly of why Eskimos (“Arctic Peoples”), with whom Mongolians share some genetic affinity, also score low in IQ, an explanation that is discussed in the final part of this review.

Native Americans

Another group sometimes subsumed with Asian populations as “Mongoloids” are the indigenous populations of the American continent, namely “Native Americans”. 

However, on the basis of both genetic data from Cavalli-Sforza et al and morphological differences (“darker and sometimes reddish skin, hooked or straight nose, and lack of the complete East Asian epicanthic fold”), Lynn classifies them as a separate race and hence accords them a chapter of their own. 

His data suggest average IQs of about 86, for both Native Americans resident in Latin America, and also for those resident in North America, despite the substantially higher living standards of the latter (p158; 162-3; p166). 

Mestizo populations, however, have somewhat higher scores, with average IQs intermediate between those of the parent populations (p160).[35]

Like the Asian populations with whom they share their ancestry, Native Americans score rather higher on spatio-visual intelligence than on verbal intelligence (p156). 

In particular, they also have especially high visual memory (p159-60). 

As he did for African-Americans, Lynn also discusses the musical abilities of Native Americans. Interestingly, psychometrical testing shows that their musical ability is rather higher than their general cognitive ability, giving a MQ (Musical Quotient) of approximately 92 (p160). 

They also show the same pattern of musical abilities as do African-Americans, with higher scores for rhythmical ability than for other forms of musical ability (p160). 

However, whereas blacks, as we have seen, score as high as Europeans for rhythmical ability but no higher, Native Americans, because of their higher IQs (and MQs) overall, actually outscore both Europeans and African-Americans when it comes to rhythmical ability. 

These results are curious. Unlike African-Americans, Native Americans are not, to my knowledge, known for their contribution to any genres of western music, and neither are their indigenous musical traditions especially celebrated. 

“Arctic Peoples” (i.e. Eskimos) 

Distinguished from other Native Americans are the inhabitants of the far north of the American landmass. These, together with other indigenous populations from the area around the Bering Strait, namely those from Greenland, the Aleutian Islands, and the far north-east of Siberia, form the racial group whom Lynn refers to as “Arctic Peoples”, though the more familiar, if less politically correct, term would be ‘Eskimos’.[36]

As well as forming a distinctive genetic cluster per Cavalli-Sforza et al, they are also morphologically distinct, not least in their extreme adaptation to the cold, with, Lynn reports: 

“Shorter legs and arms and a thick trunk to conserve heat, a more pronounced epicanthic eye-fold, and a nose well flattened into the face to reduce the risk of frostbite” (p149). 

As we will see, Lynn is a champion of what is sometimes called Cold Winters Theory – namely the theory that the greater environmental challenges, and hence cognitive demands, associated with living in colder climates selected for increased intelligence among those races inhabiting higher latitudes. 

Therefore, on the basis of this theory, one might imagine that Eskimos, who surely evolved in one of the most difficult, and certainly the coldest, environments of any human group, would also have the highest IQs. 

This conclusion would also be supported by the observation that, according to the data cited by Lynn himself, Eskimos also have the largest average brain-size of any race (p153). 

Interestingly, some early reports did indeed suggest that Eskimos had high levels of cognitive ability as compared to whites.[37] However, Lynn now reports that Eskimos actually have rather lower IQ scores than do whites and East Asians, with results from 15 different studies giving an average IQ of around 90. 

Actually, however, viewed in global perspective, this average IQ of 90 for Eskimos is not that low. Indeed, of the ten major races surveyed by Lynn, only Europeans and East Asians score higher.[38]

It is an especially high score for a population who, until recently, lived exclusively as hunter-gatherers. Other foraging groups, or descendants of peoples who, until recently, subsisted as foragers, tend, according to Lynn’s data, to have low IQs (e.g. Australian Aboriginals, San Bushmen, Pygmies). 

One obvious explanation for the relatively low IQs of Eskimos as compared to Europeans and East Asians would be their deprived living conditions.

However, Lynn is skeptical of the claim that environmental factors are entirely to blame for the difference in IQ between Eskimos and whites, since he observes: 

“The IQ of the Arctic Peoples has not shown any increase relative to that of Europeans since the early 1930s, although their environment has improved in so far as in the second half of the twentieth century they received improved welfare payments and education. If the intelligence of the Arctic Peoples had been impaired by adverse environmental conditions in the 1930s it should have increased by the early 1980s” (p153-4). 

He also notes that all the children tested in the studies he cites were enrolled in schools (since this was where the testing took place), and hence were presumably reasonably familiar with the procedure of test-taking (p154).

Lynn’s explanation for the relatively low scores of Eskimos is discussed below in the final part of this review.

Visual Memory, Spatial Memory and Hunter-Gathering 

Eskimos also score especially high on tests of visual memory, something not usually measured in standard IQ tests (p152-3). 

This is a proficiency they share in common with Native Americans (p159-60), to whom they are obviously closely related. 

However, as we have seen, Australian Aboriginals, who are not closely related to either group, also seem to possess a similar ability, though Lynn refers to this as “spatial memory” rather than “visual memory” (p107-8). 

These are, strictly speaking, somewhat different abilities, although they may not be entirely separate either, and may also be difficult to distinguish between in tests. 

If Aboriginals score high on spatial memory, they may then also score high on visual memory, and vice versa for Eskimos and Native Americans. However, since Lynn does not provide comparative data on visual memory among Aboriginals, or on spatial memory among Eskimos or Native Americans, this is not certain. 

Interestingly, one thing all these three groups share in common is a recent history of subsisting, at least in part, as hunter-gatherers.[39]

One is tempted, then, to attribute this ability to the demands of a hunter-gatherer lifestyle, perhaps reflecting the need to remember the location of plant foods which appear only seasonally, or to find one’s way home after a long hunting expedition.[40] 

It would then be interesting to test the visual and spatial memories of other groups who either continue to subsist as hunter-gatherers or only recently transitioned to agriculture or urban life, such as Pygmies and San Bushmen. However, since tests of spatial and visual memory are not included in most IQ tests, the data is probably not yet available.  

For his part, Lynn attributes Eskimo visual memory to the need to “find their way home after going out on long hunting expeditions” (p152-3). 

Thus, just as the desert environment of Australian Aboriginals provides few landmarks, so: 

“The landscape of the frozen tundra [of the Eskimos] provides few distinctive cues, so hunters would need to note and remember such few features as do exist” (p153). 

Proximate Causes: Heredity or Environment?

Chapter fourteen discusses the proximate causes of race differences in intelligence and the extent to which the differences observed can be attributed to either heredity or environmental factors, and, if partly the latter, which environmental factors are most important.  

Lynn declares at the beginning of the chapter that the objective of his book is “to broaden the debate” from an exclusive focus on the black-white test score gap in the US, to instead looking at IQ differences among all ten racial groups across the world for whom data on IQ or intelligence is presented in Lynn’s book (p182). 

Actually, however, in this chapter at least, Lynn does indeed focus primarily on black-white differences, if only because it is in relation to this difference that most research has been conducted, and hence to this difference that most available evidence relates. 

Downplaying the effect of schooling, Lynn identifies malnutrition as the major environmental influence on IQ (p182-7). 

However, he rejects malnutrition as an explanation for the low scores of American blacks, noting that there is no evidence of short stature in black Americans, nor have surveys found a greater prevalence of malnutrition (p185). 

As to global differences, he concludes that: 

“The effect of malnourishment on Africans in sub-Saharan Africa and the Caribbean probably explains about half of the low IQs, leaving the remaining half to genetic factors” (p185). 

However, it is unclear what is meant by “half of the low IQs”, as he has identified no comparison group.[41] 
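
To make explicit what such a claim would have to mean, here is a purely illustrative calculation. Assume, as endnote 41 suggests, that Lynn’s implicit baseline is the European mean of 100, and take, purely for the sake of the arithmetic, a measured sub-Saharan African IQ of 70 (a figure chosen for illustration, in the general region of those Lynn reports):

\[
\text{gap} = 100 - 70 = 30 \text{ points}, \qquad \text{implied genotypic IQ} \approx 70 + \tfrac{30}{2} = 85.
\]

On this reading, “half of the low IQs” would mean that 15 of the 30 points below the European mean are attributable to malnutrition and the remaining 15 to genetic factors; but the calculation is only defined once a comparison group, and hence a baseline, has been specified.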

He also argues that the study of racially mixed individuals further suggests a genetic component to observed IQ differences. Thus, he claims: 

“There is a statistically significant association between light skin and intelligence” (p190). 

As evidence he cites his own study (Lynn 2002) to claim: 

“When the amount of European ancestry in American blacks is assessed by skin color, dark-skinned blacks have an IQ of 85 and light-skinned blacks have an IQ of 92” (p190). 

However, he fails to explain how he managed to divide American blacks into two discrete groups by reference to a trait that obviously varies continuously. 

More importantly, he neglects to mention altogether two other studies that also investigated the relationship between IQ and degree of racial admixture among African-Americans, but used blood-groups rather than skin tone to assess ancestry (Loehlin et al 1973; Scarr et al 1977). 

This is surely a more reliable measure of ancestry than is skin tone, since the latter is affected by environmental factors (e.g. exposure to the sun darkens the skin), and could conceivably have an indirect psychological effect.[42]

However, both these studies found no association between ancestry and IQ (Loehlin et al 1973; Scarr et al 1977).[43] 

Meanwhile, Lynn mentions the Eyferth study (1961) of the IQs of German children fathered by black and white US servicemen in the period after World War II, only to report, “the IQ of African-Europeans [i.e. those fathered by the black US servicemen] was 94 in relation to 100 for European women” (p63). 

However, he fails to mention that the IQ of those German children fathered by black US servicemen (i.e. those of mixed race) was actually almost identical to that of those fathered by white US servicemen (who, with German mothers, were wholly white). This finding is, of course, evidence against the hereditarian hypothesis with respect to race differences. 
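
To spell out why this omission matters, consider a simple additive hereditarian model (the figures below are illustrative assumptions, not Eyferth’s own data). If the two paternal source populations differed genotypically by some gap \(g\), then children with one parent from each population, raised in broadly similar circumstances, would be expected to score roughly half that gap below children of two European parents:

\[
\text{IQ}_{\text{mixed}} \approx \text{IQ}_{\text{white-fathered}} - \tfrac{g}{2}.
\]

For an assumed genotypic gap of, say, 15 points, this predicts a deficit of around 7 to 8 points for the mixed-race children; the near-zero difference actually observed is precisely why this study is so often cited as evidence against the hereditarian hypothesis.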

Yet Lynn can hardly claim to be unaware of this finding, or its implications with respect to race differences, since this is actually among the studies most frequently cited by opponents of the hereditarian hypothesis with respect to the black-white test score gap for precisely this reason. 

Lynn’s presentation of the evidence regarding the relative contributions of heredity and environment to race differences in IQ is therefore highly selective and biased. 

An Evolutionary Analysis 

Only in the last three chapters does Lynn provide the belated “Evolutionary Analysis” promised in his subtitle. 

Lynn’s analysis is evolutionary in two senses. 

First, he presents a functionalist explanation of why race differences in intelligence (supposedly) evolved (Chapter 16). This is the sort of ultimate evolutionary explanation with which evolutionary psychologists are usually concerned. 

However, in addition, Lynn also traces the evolution of intelligence over evolutionary history, both in humans of different races (Chapter 17) and among our non-human and pre-human ancestors (Chapter 15). 

In other words, he addresses the questions of both adaptation and phylogeny, two of Niko Tinbergen’s famous Four Questions. 

In discussing the former of these two questions (namely, why race differences in intelligence evolved: Chapter 16), Lynn identifies climate as the ultimate environmental factor responsible for the evolution of race differences in intelligence. 

Thus, he claims that, as humans spread out beyond Africa towards regions further from the equator, and hence generally with colder temperatures, especially during winter, the climates that these pioneers encountered posed greater challenges in terms of feeding themselves and obtaining shelter etc., and that different human races therefore evolved different levels of intelligence in response to the adaptive challenges posed by such difficulties. 

Hunting vs. Gathering 

The greater problems supposedly posed by colder climates included not just difficulties of keeping warm (i.e. the need for clothing, fires, insulated homes), but also the difficulties of keeping fed. 

Thus, Lynn emphasizes the dietary differences between foragers inhabiting different regions of the world: 

“Among contemporary hunter-gatherers the proportions of foods obtained by hunting and by gathering varies according to latitude. Peoples in tropical and subtropical latitudes are largely gatherers, while peoples in temperate environments rely more on hunting, and peoples in arctic and sub-arctic environments rely almost exclusively on hunting and fishing and have to do so because plant foods are unavailable except for berries and nuts in the summer and autumn” (p227). 

I must confess that I was previously unaware of this dietary difference. However, in my defence, this is perhaps because many anthropologists seem all too ready to overgeneralize from the lifestyles of the most intensively studied tropical groups (e.g. the San of Southern Africa) to imply that what is true of these groups is true of all foragers, and was moreover necessarily also true of all our hunter-gatherer ancestors before they transitioned to agriculture. 

Thus, for example, feminist anthropologists seemingly never tire of claiming that it is female gatherers, not male hunters, who provide most of the caloric demands of foraging peoples. 

Actually, however, this is true only of tropical groups, where plant foods are easily obtainable all year round, not of hunter-gatherers in general (Ember 1978). 

It is certainly not true, for example, of Eskimos, among whom females are almost entirely reliant on male hunters to provision them for most of the year, since plant foods are hardly available at all except for during a few summer months. 

Similarly, radical-leftist anthropologist Marshall Sahlins famously characterized hunter-gatherer peoples as “The Original Affluent Society”, because, according to his data, they do not want for food and actually have more available leisure-time than do most agriculturalists, and even most modern westerners. 

Unfortunately, however, he relied primarily on data from tropical peoples such as the !Kung San to arrive at his estimates, and these findings do not necessarily generalize to other groups such as the Inuit or other Eskimos.

The idea that it was our ancestors’ transition to a primarily carnivorous diet that led to increases in hominid brain-size and intelligence was once a popular theory in paleoanthropology. 

However, it has now fallen into disfavour, if only because it accorded male hunters the starring role in hominid evolution, with female gatherers relegated to a supporting role, and hence offended the sensibilities of feminists, who have become increasingly influential in academia, even in science. 

Nevertheless, it seems to be true that, across taxa, carnivores tend to have larger brains than herbivores. 

Of course, non-human carnivores did not evolve the exceptional intelligence of humans.  

However, Desmond Morris in The Naked Ape argued that, because our hominid ancestors only adopted a primarily carnivorous diet relatively late in their evolution, they were unable to compete with such specialized hunters as lions and tigers in terms of fangs and claws. They therefore had to adopt a different approach, using intelligence instead of claws and fangs, hence inventing handheld weapons and cooperative group hunting. 

Lynn’s argument, however, is somewhat different to the traditional version of the Hunting Ape Hypothesis, as championed by popularizers like Desmond Morris and Robert Ardrey. 

Thus, in the traditional version, it is the intelligence of early hominids, the ancestors of all populations of contemporary humans, that increased as a result of the increasing cognitive demands that hunting placed upon them. 

However, Lynn argues that it is only certain races that were subject to such selection, as their dependence on hunting increased as they populated colder regions of the globe. 

Indeed, Lynn’s arguments actually cast some doubt on the traditional version of the Hunting Ape Theory

After all, anatomically modern humans are thought to have first evolved in Africa. Yet if African foragers actually subsisted primarily on a diet of wild plant foods, and only occasionally hunted or scavenged meat to supplement this primarily herbivorous diet, then the supposed cognitive demands of hunting can hardly be invoked to explain the massive increase in hominid brain-size that occurred during the period before our ancestors left Africa to colonize the remainder of the world.[44]

Indeed, Lynn is seemingly clear that he rejects the ‘Hunting Ape Hypothesis’, writing that the increases in hominid brain-size after our ancestors “entered a new niche of the open savannah in which survival was more cognitively demanding” occurred, not because of the cognitive demands of hunting, but rather that: 

“The cognitive demands of the new niche would have consisted principally of finding a variety of different kinds of foods and protecting themselves from predators” (p202).[45]

‘Cold Winters Theory’ 

There are several problems with so-called ‘Cold Winters Theory’ as an explanation for the race differences in IQ reported by Lynn. 

For one thing, other species have adapted themselves to colder climates without evolving a level of intelligence as high as that of any human population, let alone that of Europeans and East Asians. 

Indeed, I am not aware of any studies even suggesting a relationship, among non-human species, between brain-size or intelligence and the temperature or latitude of their species-ranges. However, one might expect to find an association between temperature and brain-size, if only because of Bergmann’s rule. 

Similarly, Neanderthals were ultimately displaced and driven to extinction throughout Eurasia by anatomically-modern humans, who, at least according to the conventional account, outcompeted Neanderthals due to their superior intelligence and tool-making ability. 

Yet, whereas anatomically modern humans are thought to have evolved in tropical Africa before spreading outwards to Eurasia, the Neanderthals were a cold-adapted species of hominid who had evolved and thrived in Eurasia during the last Ice Age. 

At any rate, even if the conditions were indeed less demanding in tropical Africa than in temperate or arctic latitudes, then, according to basic Darwinian (and Malthusian) theory, in the absence of some other factor limiting population growth (e.g. warfare, predation, homicide, disease), this would presumably mean that humans would respond to greater resource abundance in the tropics by reproducing until they reached the carrying capacity of the environment.   

By the time the carrying capacity of the environment was reached, however, the environment would no longer be so resource-abundant given the greater number of humans competing for its resources. 

This leads me to believe that the key factors selecting for increases in the intelligence of hominids were not ecological but rather social – i.e. not access to food and shelter etc., but rather competition with other humans. 

Also, I remain unconvinced that the environments inhabited by the two races that have, according to Lynn, the lowest average IQs, namely, San Bushmen and Australian Aborigines, are cognitively undemanding. 

These are, of course, the Kalahari Desert and Australian outback (also composed, in large part, of deserts) respectively, two notoriously barren and arid environments.[46]

Meanwhile, the Eskimos occupy what is certainly the coldest, and also undoubtedly one of the most demanding, environments anywhere in the world, and also have, according to Lynn’s own data, the largest brains. 

However, according to Lynn’s data, their average IQ is only about 90, high for a foraging group, but well below that of Europeans and East Asians.[47] 

For his part, Lynn attempts to explain away this anomaly by arguing that Arctic Peoples were precluded from evolving higher IQs by their small and dispersed populations, themselves a reflection of the harshness of the environment. This meant that the necessary mutations either never arose or never spread through the population (p153; p239-40; p221).[48]
 
On the other hand, he explains their large brains as reflecting visual memory rather than general intelligence, as well as a lack of mutations for neural efficiency (p153; p240). 
 
However, these seem like post-hoc rationalizations. 
 
After all, if conditions were harsher in Eurasia than in Africa, then this would presumably also have resulted in smaller and more dispersed populations in Eurasia than in Africa. However, this evidently did not prevent mutations for higher IQ spreading among Eurasians. 

Why then, when the environment becomes even harsher, and the population even more dispersed, would this pattern suddenly reverse itself? 
 
Likewise, if overall brain-size is related to general intelligence, it is inconsistent to invoke specific abilities such as visual memory in order to explain the large brains of the Inuit. 

Thus, according to Lynn, Australian Aborigines have high spatial memory, which is closely related to visual memory. However, also according to Lynn, only their right visual cortex is enlarged (p108-9) and they have small overall brain-size (p108-9; p210; p212). 

Endnotes

[1] Curiously, Lynn reports, this black advantage for movement-time does not appear in the simplest form of elementary task (simple reaction time), where the subject simply has to press a button when a light comes on, rather than having to press one particular button, out of several, depending on which of several lights comes on (p58). These latter forms of elementary cognitive test presumably involve a greater degree of cognitive processing. 

[2] First, there are the practical difficulties. Obviously, non-human animals cannot use written tests, or an interview format. Designing a maze for laboratory mice may be relatively straightforward, but building a comparable maze for elephants is rather more challenging. Second, and more important, different species likely have evolved different specialized abilities for dealing with specific adaptive problems. For example, migratory birds may have evolved specific spatio-visual abilities for navigation. However, this is not necessarily reflective of high general intelligence, and to assess their intelligence solely on the basis of their migratory ability, or even their general spatio-visual ability, would likely overestimate their general level of cognitive ability. In other words, it reflects a modular, domain-specific adaptation.

Admittedly, the same is true to some extent for human races. Thus, some races score relatively higher on certain types of intellectual ability. For example, East Asians tend to score higher on spatio-visual ability than on verbal ability; Ashkenazi Jews show the opposite pattern, scoring higher in verbal intelligence than in spatio-visual ability; while American blacks score relatively higher in tests involving rote memory than in those requiring abstract reasoning ability. Similarly, as discussed by Lynn, some races seem to have certain quite specific abilities not commensurate to their general intelligence (e.g. Aborigine visual memory). However, in general, both between and within races, most variation in human intelligence loads onto the ‘g-factor’ of general intelligence.

[3] American anthropologist Carleton Coon is credited as the first to propose that population differences in skull size reflect a thermoregulatory adaptation to climatic differences (Coon 1955). An alternative theory, less supported, is that it was differing levels of ambient light that resulted in differences in brain-size as between different populations tracing their ancestry to different parts of the globe (Pearce & Dunbar 2011). On this view, the larger brains of populations who trace their descent to areas of greater latitude presumably reflect only the demands of the visual system, rather than any differences in general intelligence. Yet another theory, less politically-correct than these, is so-called ‘Cold Winters Theory’, which posits that colder climates placed a greater premium on intelligence, which caused populations inhabiting colder regions of the globe to evolve larger brains and higher levels of intelligence. This is, of course, the theory championed by Lynn himself, and I will discuss the problems with this theory below.

[4] Conversely, Lynn also suggests that Turkish people score slightly higher than other Middle-Eastern populations, because they are somewhat intermixed with Europeans (p80).

[5] Lynn has recently published research regarding differences in IQ across different regions of Italy (Lynn 2010).

[6] Actually, Lynn acknowledges causation in both directions, possibly creating a feedback loop. He also acknowledges other factors as contributing to differences in economic development and prosperity, including the effects of the economic system adopted. For example, countries that adopted communism tend to be poorer than comparable countries that have capitalist economies (e.g. Eastern Europe is poorer than Western Europe, and North Korea poorer than South Korea).  

[7] Incidentally, Lynn cites two studies of Polish IQ, whose results are even more divergent than those of Portugal or Ireland, giving average IQs of 106 and 91 respectively. One of these scores is substantially below the European average, while the other is substantially above it. 

[8] Essayist Ron Unz has argued that IQs in Ireland have risen in concert with living standards in Ireland (Unz 2012a; Unz 2012b). However, judging from the dates when the studies cited by Lynn in ‘Race Differences in Intelligence’ were published, there is no obvious increase over time. True, the earliest study, an MA thesis published in 1973, gives the lowest figure, with an average IQ of just 87 (Gill and Byrt 1973). This rises to 97 in a study published in 1981 that provided few details on its methodology (Buj 1981). However, it declines again in the latest study of Irish IQs cited by Lynn, which was published in 1993 and gives average IQs of just 93 and 91 for two separate samples (Carr 1993). In the more recent 2015 edition, Lynn cites a few extra studies, eleven in total. Again, however, there is no obvious increase over time, the latest study cited by Lynn, which was published in 2012, giving an average IQ of just 92 (2015 edition).

[9] While this claim is made in reference to immigrants to America and the West, it is perhaps worth noting that East Asians in South-East Asia, namely the Overseas Chinese, largely dominate the economies of South-East Asia, and are therefore on average much wealthier than the average Chinese person still residing in China (see World on Fire by Amy Chua). Given the association of intelligence with wealth, this would suggest that Chinese immigrants to South-East Asia are not substantially less intelligent than those who remained in China. Did the more intelligent Chinese migrate to South-East Asia, while the less intelligent migrated to America? If so, why would this be?

[10] According to Daniel Nettle in Personality: What Makes You the Way You Are, in the framework of the five-factor model of personality, a liking for travel is associated primarily with extraversion. One study found that an intention to migrate was positively associated with both extraversion and openness to experience, but negatively associated with agreeableness, conscientiousness, and neuroticism (Fouarge et al 2019). A study of migration within the United States found a rather more complex set of relationships between migration and each of the big five personality traits (Jokela 2009).

[11] Other Catholic countries, namely those in Southern Europe, such as Italy and Spain, may indeed have slightly lower IQs, at least in the far south of these countries. However, as we have seen, Lynn explains this in terms of racial admixture from Middle-Eastern and North African populations. Therefore, there is no need to invoke priestly celibacy in order to explain it. The crucial test case, then, is Catholic countries other than Ireland from Northern Europe, such as Austria and France.

[12] In the 2015 edition, he returns to a slightly higher figure of 71.

[13] In the 2006 edition, Lynn cites no studies from the Horn of Africa. However, in the 2015 edition, he cites five studies from Ethiopia, and, in The Intelligence of Nations, he and co-author David Becker also cite a study on Somalian IQs.

[14] Indeed, physical anthropologist John Baker, in his excellent Race (which I have reviewed here, here and here) argues that:

“The ‘Aethiopid’ race of Ethiopia and Somaliland are an essentially Europid subrace with some Negrid admixture” (Race: p225).

This may be an exaggeration. However, recent genetic studies indeed show affinities between populations from the Horn of Africa and those from the Middle East (e.g. Ali et al 2020; Khan 2011a; Khan 2011b; Hodgson 2014).

[15] However, it is not at all clear that the same is true for black African minorities resident in other western polities, whose IQs are also, according to Lynn’s data, also considerably above those for indigenous Africans. Here, I suspect black populations are more diverse. For example, in Britain, Afro-Caribbean people, who emigrated to Britain by way of the West Indies, are probably mostly mixed-race, like African-Americans, since both descend from white-owned slave populations. However, Britain also plays host to many immigrants direct from Africa, most of whom are, I suspect, of relatively unmixed sub-Saharan African descent. Yet African immigrants to the UK outperform Afro-Caribbeans in UK schools (Chisala 2015a).

[16] Blogger John ‘Chuck’ Fuerst suggests that the higher scores for Somali immigrants might reflect the fact that the peoples of the Horn of Africa actually, as we have seen, have substantial Caucasoid ancestry, and genetic affinities with North African and Middle Eastern populations (Fuerst 2015). However, the problem with attributing the relatively high scores of Somali refugees and immigrants to Caucasoid admixture is that, according to the data collected by Lynn, IQs are no higher in the Horn of Africa than elsewhere in sub-Saharan Africa.

[17] If anything, “Bushmen” should presumably be grouped, not with Pygmies, but rather with the distinct but related Khoikhoi pastoralists. However, the latter are now all but extinct as an independent people and are not mentioned by Lynn.

[18] For example, Lynn also acknowledges that those whom he terms “South Asians and North Africans” are “closely related to the Europeans” (p79). However, they nevertheless merit a chapter of their own. Likewise, he acknowledges that “South-East Asians” share “some genetic affinity with East Asians with whom they are to some degree interbred” (p97). Nevertheless, he justifies considering these two ostensible races in separate chapters, partly on the basis that “the flattened nose and epicanthic eye-fold are less prominent” among the former (p97). Yet the morphological differences between Pygmies and Khoisan are even greater, but they are lumped together in the same chapter.

[19] There is indeed, as Lynn notes, a correlation between a group’s IQ and their lifestyle (i.e. whether they are foragers or agriculturalists). However, the direction of causation is unclear. Does high intelligence allow a group to transition to agriculture, or does an agriculturalist lifestyle somehow increase a group’s average IQ? And, if the latter, is this a genetic or a purely environmental effect?

[20] Indeed, the very word slave is thought to derive from the ethnonym Slav, because of the frequency with which Slavic peoples were enslaved during the Middle Ages.

[21] Indeed, Lynn could hardly have arrived at an actual figure for the average Pygmy IQ, since, as we have seen, he reports the results of only a single actual study of Pygmy intelligence, the author of which did not present his results in a quantitative format.

[22] Thus, he suggests that the lower performance of the Aboriginals tested by Drinkwater (1975), as compared to those tested by Kearins (1981), may reflect the fact that the former were the descendants of coastal populations of Aborigines, for whom the need to navigate in deserts without landmarks would have been less important. 

[23] The fact that the earliest civilizations emerged among Middle Eastern, North African and South Asian populations is attributed by Lynn to the sort of environmental factors that, elsewhere in his book, he largely discounts. Thus, Lynn writes: 

“[Europeans] were not able to develop early civilizations like those built by the South Asians and North Africans because Europe was still cold, was covered with forest, and had heavy soils that were difficult to plough unlike the light soils on which the early civilizations were built, and there were no river flood plains to provide annual highly fertile alluvial deposits from which agricultural surpluses could be obtained to support an urban civilization and an intellectual class” (p237).

[24] An interesting question is whether there exist differences in IQ as between different caste groups within the Indian subcontinent, since, at least in theory, these represented endogamous breeding populations between whom strict separation was maintained. Thus, it would be interesting to know the average IQ of Brahmins or of the high-achieving Parsi people (though the latter are not strictly a caste, since they are not Hindu).

[25] However, all of these comparisons, in both Britain and America, omit to include Jewish people as a separate ethnicity, instead grouping them with other whites. Jews earn more, on average, than any other religion in Britain and America, including Hindus.

[26] I assume that this is the study that Lynn is citing, since this is the only matching study included in his references. However, curiously, Lynn refers to this study here as “Mackintosh et al 1985” (p83-4), despite there being only two authors listed in his references, such that “Mackintosh & Mascie-Taylor 1985” would be the more usual citation. Indeed, Lynn uses this latter form of citation (i.e. “Mackintosh & Mascie-Taylor 1985”) elsewhere when citing what seems to be the same paper in his earlier chapter on Africans (p47; p49).

[27] In order to determine whether religion or national origin is the key determining factor, it would be interesting to have data on the incomes (and IQs) of Pakistani Hindus, Bangladeshi Hindus and Muslim Indians resident in the West.

[28] An alternative possibility is that it was the spread of Arab genes, as a result of the Arab conquests, and resulting spread of Islam, that depressed IQs in the Middle-East and North Africa, since Arabs were, prior to the rise of Islam, a relatively backward group of desert nomads, whose intellectual achievements were minimal compared to those of many of the groups whom they conquered (e.g. Persians, Mesopotamians, Assyrians, and Egyptians). Indeed, even the achievements of Muslim civilization during the Islamic Golden Age were disproportionately those of the Persians, not the Arabs. 

[29] One might, incidentally, question Lynn’s assumption that Oriental Jews were less subject to persecution than were the Ashkenazim in Europe. This is, of course, the politically correct view, which sees Islamic civilization as, prior to recent times, more tolerant than Christendom. On this view, anti-Jewish sentiment only emerged in the Middle East as a consequence of Zionism and the establishment of the Jewish state in what was formerly Palestine. However, for alternative views, see The Myth of the Andalusian Paradise. See also Robert Spencer’s The Truth About Muhammad (which I have reviewed here), in which he argues that Islam is inherently antisemitic (i.e. anti-Jewish). Interestingly, Kevin Macdonald, in A People That Shall Dwell Alone (which I have reviewed here and here) makes almost the opposite argument to that of Lynn. Thus, he argues that it was precisely because Jews were so discriminated against in the Muslim world that their culture, and ultimately their IQs, were to decline, as they were, according to Macdonald, largely excluded from high-status and cognitively-demanding occupations, which were reserved for Muslims (p301-4). Thus, Macdonald concludes: 

“The pattern of lower verbal intelligence, relatively high fertility, and low-investment parenting among Jews in the Muslim world is linked ultimately to anti-Semitism” (A People That Shall Dwell Alone (reviewed here): p304). 

[30] For example, one might speculate that only the relatively smarter Jews were able to anticipate looming pogroms and hence escape. Alternatively, since wealth is correlated with intelligence, perhaps only the relatively richer, and hence generally smarter, Jews could afford the costs of migration, including bribes to officials, in order to escape pogroms. These are, however, obviously speculative, post-hoc ‘just-so stories’ (in the negative Gouldian sense), and I put little stock in them.

[31] This pattern among East Asians of lower scores on the verbal component of IQ tests was initially attributed to a lack of fluency in the language of the test, since the first East Asians to be tested were among diaspora populations resident in the West. However, the same pattern has now been found even among East Asians tested in their first language, in both the West and East Asia.

[32] For example, Sarich and Miele, in Race: The Reality of Human Differences (which I have reviewed here and here) write that “Asians have a slightly higher IQ than do whites” (Race: The Reality of Human Differences: p196). However, in actuality, this applies only to East Asians, not to South-East Asians (nor to South Asians and West Asians, who are “Asian” in at least the geographical, and the British-English, sense.) Similarly, in his own oversimplified tripartite racial taxonomy in Race, Evolution and Behavior (which I have reviewed here), Philippe Rushton seems to imply that the traits he attributes to Mongoloids, including high IQs and large brain-size, apply to all members of this race, including South-East Asians and even Native Americans.

[33] Ethnic Chinese were overrepresented among Vietnamese boat people, though less so among later waves of immigrants. However, perhaps a greater problem is that they were disproportionately middle-class and drawn from the business elite, and hence unrepresentative of the Vietnamese as a whole, and likely of disproportionately high cognitive ability.

[34] In his paper on Mongolian IQs, Lynn also suggests that Mongolians have lower IQs than other East Asians because they are genetically intermediate between East Asians and Eskimos (“Arctic Peoples”), who themselves have lower IQs (Lynn 2007). However, this merely begs the question as to why Eskimos themselves have lower IQs than East Asians, another anomaly with respect to ‘Cold Winters Theory’, which is discussed in the final part of this review.

[35] With regard to the population of Colombia, Lynn writes: 

“The population of Colombia is 75 percent Native American and Mestizo, 20 percent European, and 5 percent African. It is reasonable to assume that the higher IQ of the Europeans and the lower IQ of the Africans will approximately balance out and that the IQ of 84 represents the intelligence of the Native Americans” (p58). 

However, this assumption that the African and European genetic contributions will balance out seems dubious since, by Lynn’s own reckoning, the European contribution to the Colombian gene-pool is four times as great as that of Africans.
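
The arithmetic can be made explicit (the 14-point figure below is merely an illustration drawn from Lynn’s reported European and Native American means, not a claim about Colombian data, and the calculation ignores the complication that the 75 per cent figure lumps part-European Mestizos in with Native Americans). Writing \(N\), \(E\) and \(A\) for the Native American, European and African group means, the national mean is

\[
\overline{\text{IQ}} = 0.75\,N + 0.20\,E + 0.05\,A,
\]

so the European and African contributions cancel, leaving \(\overline{\text{IQ}} = N\), only if \(0.20(E-N) = 0.05(N-A)\), i.e. only if the African mean lies four times as far below the Native American mean as the European mean lies above it. With a European surplus of roughly 14 points, that would require an African mean some 56 points below the Native American mean, which not even Lynn proposes.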

[36] The currently-preferred term Inuit is not sufficiently inclusive, because it applies only to those Eskimos indigenous to the North American continent, not the related but culturally distinct populations inhabiting Siberia or the Aleutian Islands. I continue to use the term Eskimos, because it is more accurate, not obviously pejorative, probably more widely understood, and also because I deplore the euphemism treadmill. Elsewhere, I have generally deferred to Lynn’s own usage, for example mostly using ‘Aborigine’, rather than the now preferred ‘Aboriginal’, a particularly preposterous example of the euphemism treadmill since the terms are so similar, comparable to how, today, it is acceptable to say ‘people of colour’, but not ‘coloured people’.

[37] For example, Hans Eysenck made various references in his writings to the fact that Eskimo children performed as well as European children in IQ tests as evidence for his claim that economic deprivation did not necessarily reduce IQ scores (e.g. The Structure and Measurement of Intelligence: p23). See also discussion in: Jason Malloy, A World of Difference: Richard Lynn Maps World Intelligence (Malloy 2016).

[38] Certain specific subpopulations also score higher (e.g. Ashkenazim and Māoris, though the latter only barely). However, these are subpopulations within the major ten races that Lynn identifies, not races in and of themselves.

[39] Actually, by the time Columbus landed in the Americas, many Native Americans had already partly transitioned to agriculture. However, not least because of a lack of domesticated animals that they could use as a meat source, most supplemented this with hunting and sometimes gathering too.

[40] Lynn reports that the Japanese also score high on tests of visual memory (p143). However, excepting perhaps the Ainu, the Japanese do not have a recent history of subsisting as foragers. This suggests that foraging is not the only possible cause of high visual memory in a population.

[41] Presumably the comparison group Lynn has in mind are Europeans, since, as we have seen, it is European living standards that he takes as his baseline for the purposes of estimating a group’s “genotypic IQ” (p69), and, in a sense, all the IQ scores that he reports are measured against a European standard in so far as they are calculated by reference to an arbitrarily assigned average of 100 for European populations.

[42] Thus, it is at least theoretically possible that a relatively darker-skinned African-American child might be treated differently than a lighter-skinned child, especially one whose race is relatively indeterminate, by others (e.g. teachers) in a way that could conceivably affect their cognitive development and IQ. In addition, a darker skinned African-American child might, as a consequence of their darker complexion, come to identify as an African American to a greater extent than a lighter skinned child, which might affect who they socialize with, which celebrities they identify with and the extent to which they identify with broader black culture, all of which could conceivably have an effect on IQ. I do not contend that these effects are likely or even plausible, but they are at least theoretically possible. Using blood group to assess ancestry, especially if one actually introduces controls for skin tone (since this may be associated with blood-group, since both are presumed to be markers of degree of African ancestry), obviously eliminates this possibility. Today, this can also be done by looking at subjects’ actual DNA, which obviously has the potential to provide a more accurate measure of ancestry than either skin-tone or blood-group (e.g. Lasker et al 2019).

[43] More recently, a better study has been published regarding the association between European admixture and intelligence among African-Americans, which used genetic data to assess ancestry, and actually sought to control for the possible confounding effect of skin-colour and appearance (Lasker et al 2019). Unlike the blood-group studies, this largely supports the hereditarian hypothesis. However, this was not available at the time Lynn authored his book. Also, it ought to be noted that it was published in a controversial pay-to-publish academic journal, and therefore the quality of peer review to which the paper was subjected may be open to question. No doubt in the future, with the reduced costs of genetic testing, more studies using a similar methodology will be conducted, finally resolving the question of the relative contributions of heredity and environment to the black-white test score gap in America, and perhaps disparities between other ethnic groups too.

[44] It is a fallacy, however, to assume that what is true for those foraging peoples that have managed to survive as foragers in modern times and hence come to be studied by anthropologists was necessarily also true of all foraging groups before the transition to agriculture. On the contrary, those foraging groups that have survived into modern times, tend to have done so only in the ecologically most marginal and barren environments (e.g. the Kalahari Desert occupied by the San), since these areas are of least use to agriculturalists, and therefore represent the only regions where more technologically and socially advanced agriculturalists have yet to displace them (see Ember 1978). However, this would seem to suggest that African hunter-gatherers, prior to the expansion of Bantu agriculturalists, would have occupied more fertile areas, and therefore might have had even less need to rely on hunting than do contemporary hunter-gatherers such as the San, who are today largely restricted to the Kalahari Desert.

[45] Here, interestingly, Lynn departs from the theory of fellow race realist, and fellow exponent of ‘Cold Winters Theory’, Philippe Rushton. The latter, in his book, Race, Evolution and Behavior (which I have reviewed here), argues that: 

“Hunting in the open grasslands of northern Europe was more difficult than hunting in the woodlands of the tropics and subtropics where there is plenty of cover for hunters to hide in” (Race, Evolution and Behavior: p228). 

In contrast, Lynn argues that “open grasslands”, albeit on the African savannah rather than in Northern Europe, actually made things harder, not for predators, but rather for prey – or at least for arboreal primate prey. Thus, Lynn writes: 

“The other principle problem of the hominids living in open grasslands would have been to protect themselves against lions, cheetahs and leopards. Apes and monkeys escape from the big cats by climbing into trees and swinging or jumping from one tree to another. For the Australopithecines and the later hominids in open grasslands this was no longer possible” (p203). 

[46] To clarify, this is not to say that either the San Bushmen or the Australian Aborigines evolved primarily in these desert environments. On the contrary, many of them formerly occupied more fertile areas, before being displaced by more advanced neighbours – Bantu agriculturalists in the case of the Khoisan, and European (more specifically British) colonizers in the case of the Aborigines. However, that they are nevertheless capable of surviving in these demanding desert environments suggests either:

(1) That they are more intelligent than Lynn concludes; or
(2) That surviving in challenging environments does not require the level of intelligence that Lynn’s ‘Cold Winters Theory’ supposes.

[47] Besides Eskimos, another potential test case for ‘Cold Winters Theory’ is the Sámi (or Lapps) of Northern Scandinavia. Like Eskimos, they have inhabited an extremely cold, northern environment for many generations and are genetically quite distinct from other populations. Also, again like Eskimos, they maintained a foraging lifestyle until modern times. According to Armstrong et al (2014), the only study of Sámi cognitive ability of which I am aware, the average IQ of the Sámi is almost identical to that of neighbouring populations of Finns (about 101).

[48] Lynn gives the same explanation for the relatively lower recorded IQs of Mongolians, as compared to other East Asians (p240).

References

Ali et al (2020) Genome-wide analyses disclose the distinctive HLA architecture and the pharmacogenetic landscape of the Somali population. Scientific Reports 10: 5652.

Anderson (2015) Chapter 1: Statistical Portrait of the U.S. Black Immigrant Population. In A Rising Share of the U.S. Black Population Is Foreign Born. Pew Research Center: Social & Demographic Trends, April 9, 2015.

Armstrong et al (2014) Cognitive abilities amongst the Sámi population. Intelligence 46: 35-39.

Aziz et al (2004) Intellectual development of children born of mothers who fasted in Ramadan during pregnancy. International Journal for Vitamin and Nutrition Research 74: 374-380.

Beals et al (1984) Brain Size, Cranial Morphology, Climate, and Time Machines. Current Anthropology 25(3): 301–330.

Buj (1981) Average IQ values in various European countries. Personality and Individual Differences 2(2): 168-9.

Carr (1993) Twenty Years a Growing: A Research Note on Gains in the Intelligence Test Scores of Irish Children over Two Decades. Irish Journal of Psychology 14(4): 576-582.

Chisala (2015a) The IQ Gap Is No Longer a Black and White Issue. Unz Review, 25 June.

Chisala (2015b) Closing the Black-White IQ Gap Debate, Part I. Unz Review, 5 October.

Chisala (2015c) Closing the Black-White IQ Gap Debate, Part 2. Unz Review, 22 October.

Chisala (2019) Why Do Blacks Outperform Whites in UK Schools? Unz Review, November 29.

Coon (1955) Some Problems of Human Variability and Natural Selection in Climate and Culture. American Naturalist 89(848): 257-279.

Drinkwater (1975) Visual memory skills of medium contact aboriginal children. Australian Journal of Psychology 28(1): 37-43.

Dutton (2020) Why Islam Makes You Stupid . . . But Also Means You’ll Conquer The World. Whitefish, MT: Washington Summit.

Ember (1978) Myths about Hunter-Gatherers. Ethnology 17(4): 439-448.

Eyferth (1959) Eine Untersuchung der Neger-Mischlingskinder in Westdeutschland. Vita Humana 2: 102–114.

Fouarge et al (2019) Personality traits, migration intentions, and cultural distance. Papers in Regional Science 98(6): 2425-2454.

Fuerst (2015) The measured proficiency of Somali Americans. HumanVarieties.org.

Gill & Byrt (1973) The Standardization of Raven’s Progressive Matrices and the Mill Hill Vocabulary Scale for Irish School Children Aged 6–12 Years. University College, Cork: MA Thesis.

Hodgson et al (2014) Early Back-to-Africa Migration into the Horn of Africa. PLoS Genetics 10(6): e1004393.

Jokela (2009) Personality predicts migration within and between U.S. states. Journal of Research in Personality 43(1): 79-83.

Kearins (1986) Visual spatial memory in aboriginal and white Australian children. Australian Journal of Psychology 38(3): 203-214.

Kearins (1981) Visual spatial memory in Australian Aboriginal children of desert regions. Cognitive Psychology 13(3): 434-460.

Khan (2011a) The genetic affinities of Ethiopians. Discover Magazine, January 10.

Khan (2011b) A genomic sketch of the Horn of Africa. Discover Magazine, June 10.

Klekamp et al (1987) A quantitative study of Australian aboriginal and Caucasian brains. Journal of Anatomy 150: 191–210.

Knapp & Seagrim (1981) Visual memory of Australian aboriginal children and children of European descent. International Journal of Psychology 16(1-4): 213-231.

Langan & LoSasso (2002) Discussions on Genius and Intelligence: Mega Foundation Interview with Arthur Jensen. Eastport, New York: MegaPress.

Lasker et al (2019) Global ancestry and cognitive ability. Psych 1(1): 431-459.

Loehlin et al (1973) Blood group genes and negro-white ability differences. Behavior Genetics 3(3): 263-270.

Lynn (2002) Skin Color and Intelligence in African-Americans. Population & Environment 23: 201-207.

Lynn (2007) IQ of Mongolians. Mankind Quarterly 47(3).

Lynn (2010) In Italy, north–south differences in IQ predict differences in income, education, infant mortality, stature, and literacy. Intelligence 38: 93-100.

Lynn (2015) Selective Emigration, Roman Catholicism and the Decline of Intelligence in the Republic of Ireland. Mankind Quarterly 55(3): 242-253.

Mackintosh & Mascie-Taylor (1985) The IQ question. In Education for All. Cmnd paper 4453. London: HMSO.

Malloy (2014) HVGIQ: Vietnam. HumanVarieties.org, June 19.

Malloy (2006) A World of Difference: Richard Lynn Maps World Intelligence. Gnxp.com, February 01.

Pearce & Dunbar (2011) Latitudinal variation in light levels drives human visual system size. Biology Letters 8(1): 90–93.

Pereira et al (2005) African female heritage in Iberia: a reassessment of mtDNA lineage distribution in present times. Human Biology 77(2): 213–29.

Richards et al (2003) Extensive Female-Mediated Gene Flow from Sub-Saharan Africa into Near Eastern Arab Populations. American Journal of Human Genetics 72(4): 1058–1064.

Rushton & Ankney (2009) Whole brain size and general mental ability: A review. International Journal of Neuroscience 119: 691-731.

Sailer (1996) Great Black Hopes. National Review, August 12.

Scarr et al (1977) Absence of a relationship between degree of white ancestry and intellectual skills within a black population. Human Genetics 39(1): 69-86.

Templer (2010) The Comparison of Mean IQ in Muslim and Non-Muslim Countries. Mankind Quarterly 50(3): 188-209.

Torrence (1983) Time budgeting and hunter-gatherer technology. In G. Bailey (Ed.), Hunter-Gatherer Economy in Prehistory: A European Perspective. Cambridge: Cambridge University Press.

Woodley (2009) Inbreeding depression and IQ in a study of 72 countries. Intelligence 37(3): 268-276.

John Gray’s ‘Straw Dogs’: In Praise of Pessimism

‘Straw Dogs: Thoughts on Humans and Other Animals’, by John Gray, Granta Books, 2003.

The religious impulse, John Gray argues in a later work elaborating on the themes first set out in ‘Straw Dogs’, is as universal as the sex drive. Like the latter, when repressed, it re-emerges in the form of perversion.[1]

Thus, the Marxist faith in our passage into communism after the revolution represents a perversion of the Christian belief in our passage into heaven after death or Armageddon – the former, communism (i.e. heaven on earth), being quite as unrealistic as the otherworldly, celestial paradise envisaged by Christians, if not more so. 

Marxism is thus, as Edmund Wilson was the first to observe, the opiate of the intellectuals.

What is true of Marxism is, for Gray, equally true of what he regards as the predominant secular religion of the contemporary West – namely, humanism. 

Its secular self-image notwithstanding, humanism is, for Gray, a substitute religion that replaces an irrational faith in an omnipotent god with an even more irrational faith in the omnipotence of Man himself (p38). 

Yet, in doing so, Gray concludes, humanism renounces the one insight that Christianity actually got right – namely the notion that humans are “radically flawed” as captured by the doctrine of original sin.[2]

Progress and Other Delusions

Of course, in its ordinary usage, the term ‘humanism’ is hopelessly broad, pretty much encompassing anyone who is neither, on the one hand, religious nor, on the other, a Nazi. 
 
For his purposes, Gray defines humanism more narrowly, namely as a “belief in progress” (p4). 

More specifically, however, he seems to have in mind a belief in the inevitability of social, economic, moral and political progress. 

Belief in the inevitability of progress is, he contends, a faith universal across the political spectrum – from neoconservatives who think they can transform Islamic tribal theocracies and Soviet Republics into liberal capitalist democracies, to Marxists who think Islamic tribal theocracies and liberal capitalist democracies alike will themselves ultimately give way to communism.

Gray, however, rejects the notion of any grand narrative arc in human history.

“Looking for meaning in history is like looking for patterns in clouds” (p48). 

Scientific Progress and Social Progress 

Although in an early chapter he digresses on the supposed “irrational origins” of western science,[3] Gray does not question the reality of scientific progress. 
 
Instead, what Gray questions is the assumption that social, moral and political progress will inevitably accompany scientific progress. 
 
Progress in science and technology does not invariably lead to social, moral and political progress. On the contrary, new technologies can readily be enlisted in the service of governmental repression and tyranny. Thus, Gray observes: 

“Without the railways, telegraph and poison gas, there could have been no Holocaust” (p14). 

Thus, by Gray’s reckoning, “Death camps are as modern as laser surgery” (p173).
 
Scientific progress is, he observes, unstoppable and self-perpetuating. Thus, if any nation unilaterally renounces modern technology, it will be economically outcompeted, or even militarily conquered, by other nations who harness modern technologies in the service of their economy and military: 

“Any country that renounces technology makes itself prey to those that do not. At best it will fail to achieve the self-sufficiency at which it aims – at worst it will suffer the fate of the Tasmanians” (p178). 

However, the same is not true of political, social and moral progress. On the contrary, a nation excessively preoccupied with moral considerations would surely be defeated in war or indeed in economic competition by an enemy willing to cast aside morality for the sake of success. 
 
Thus, Gray concludes:

“Technology is not something that humankind can control. It is an event that has befallen the world” (p14). 

Thus, Gray anticipates: 

“Even as it enables poverty to be diminished and sickness to be alleviated, science will be used to refine tyranny and perfect the art of war” (p123). 

This leads him to predict: 

“If one thing about the present century is certain, it is that the power conferred on humanity by new technologies will be used to commit atrocious crimes against it” (p14). 

Human Nature

This is because, according to Gray, although technology progresses, human nature itself remains stubbornly intransigent. 

“Though human knowledge will very likely continue to grow and with it human power, the human animal will stay the same: a highly inventive animal that is also one of the most predatory and destructive” (p4). 

As a result, “The uses of knowledge will always be as shifting and crooked as humans are themselves” (p28). 
 
Thus, the fatal flaw in the humanist theory that political progress will inevitably accompany scientific progress is, ironically, its failure to come to grips with one particular sphere of scientific progress – namely progress in the scientific understanding of human nature itself. 
 
Sociobiological theory suggests humans are innately selfish and nepotistic to an extent incompatible with the utopias envisaged by reformers and revolutionaries.
 
Evolutionary psychologists like to emphasize how natural selection has paradoxically led to the evolution of cooperation and altruism. They are also at pains to point out that innate psychological mechanisms are responsive to environmental variables and hence amenable to manipulation. 
 
This has led some thinkers to suggest that, even if utopia is forever beyond our grasp, society can nevertheless be improved by social engineering and well-meaning reform (see Peter Singer’s A Darwinian Left, which I have reviewed here, here and here). 

However, this ignores the fact that the social engineers themselves (e.g. politicians, civil servants) are possessed of the same essentially selfish and nepotistic nature as those whose behaviour they are seeking to guide and manipulate. Therefore, even if they were able to successfully reengineer society, they would do so for their own ends, not those of society or humankind as a whole.

Of course, human nature could itself be altered through genetic engineering or eugenics. However, once again, those charged with doing the work (scientists) and those from whom they take their orders (government, big business) will, at the time their work is undertaken, be possessed of the same nature that it is their intention to improve upon. 
 
Therefore, Gray concludes, if human nature itself is remodelled: 

“It will be done haphazardly, as an upshot of struggles in the murky realm where big business, organized crime and the hidden parts of government vie for control” (p6). 

It will hence reflect the interests, not of humankind as a whole, but rather of those responsible for undertaking the project. 

The Future

In contrast to the optimistic vision of such luminaries as Steven Pinker in The Better Angels of Our Nature and Enlightenment Now and Matt Ridley in his book The Rational Optimist (which I have reviewed here), Gray’s vision of the future is positively dystopian. He foresees a return of resource wars and “wars of scarcity… waged against the world’s modern states by the stateless armies of the militant poor” (p181-2).

This is an inevitable result of a Malthusian trap:

“So long as population grows, progress will consist in labouring to keep up with it. There is only one way that humanity can limit its labours, and that is by limiting its numbers. But limiting human numbers clashes with powerful human needs” (p184).[4]

These “powerful human needs” include, not just the sociobiological imperative to reproduce, but also the interests of various ethnic groups in ensuring their survival and increasing their military and electoral strength (Ibid.). 

“Zero population growth could be enforced only by a global authority with draconian powers and unwavering determination” (p185). 

Unfortunately (or perhaps fortunately, depending on your perspective), he concludes: 

“There has never been such a power and never will be” (Ibid.). 

Thus, Gray compares the rise in human populations to the temporary “spikes that occur in the numbers of rabbits, house mice and plague rats” (p10). He concludes: 

“Humans… like any other plague animal… cannot destroy the earth, but… can easily wreck the environment that sustains them” (p12). 

Thus, Gray darkly prophesies, “We may well look back on the twentieth century as a time of peace” (p182). 

As Gray points out in his follow-up book: 

“War or revolution… may seem apocalyptic possibilities, but they are only history carrying on as it has always done. What is truly apocalyptic is the belief [of Marx and Fukuyama] that history will come to a stop” (Heresies: Against Progress and Other Illusions: p67).[5]

Morality

While Gray doubts the inevitability of social, political and moral progress, he perhaps does not question sufficiently its reality. 

For example, citing improvements in sanitation and healthcare, he concludes that, although “faith in progress is a superstition”, progress itself “is a fact” (p155). 
 
Yet every society, by definition, views its own moral and political values as superior to those of other societies; otherwise, they would not be its values. Every society therefore views the recent changes in moral and political values that produced its own current values as a form of moral progress. 
 
However, what constitutes moral, social and political progress is entirely a subjective assessment.
 
For example, the ancient Romans, transported to our times, would surely accept the superiority of our science and technology and, if they did not, we would outcompete them both economically and militarily and thereby prove it ourselves. 

However, they would view our social, moral and political values as decadent, immoral and misguided and we would have no way of proving them wrong. 
 
In other words, while scientific and technological progress can be proven objectively, what constitutes moral and political progress is a mere matter of opinion. 
 
Gray occasionally hints in this direction (namely, towards moral relativism), declaring in one of his countless quotable aphorisms: 

“Ideas of justice are as timeless as fashions in hats” (p103). 

He even flirts with outright moral nihilism, describing “values” as “only human needs and the needs of other animals turned into abstractions” (p197), and even venturing, “the idea of morality” may be nothing more than “an ugly superstition” (p90). 
 
However, Gray remains somewhat confused on this point. For example, among his arguments against morality is the observation that: 

“Morality has hardly made us better people” (p104). 

However, the very meaning of “better people” is itself dependent on a moral judgement. If we reject morality, then there are no grounds for determining if some people are “better” than others and therefore this can hardly be a ground for rejecting morality. 

Free Will

On the issue of free will, Gray is more consistent. Relying on the controversial work of neuroscientist Benjamin Libet, he contends: 

“In nearly all our life willing decides nothing – we cannot wake up or fall asleep, remember or forget our dream, summon or banish our thoughts, by deciding to do so… We just act and there is no actor standing behind what we do” (p69). 

Thus, he observes, “Our lives are more like fragmentary dreams than the enactments of conscious selves” (p38) and “Our actual experience is not of freely choosing the way we live but of being driven along by our bodily needs – by fear, hunger and, above all, sex” (p43). 
 
Rejection of free will is, moreover, yet a further reason to reject morality. 
 
Whether one behaves morally or not, and what one regards as the moral way to behave, is, Gray contends, entirely a matter of the circumstances of one’s upbringing (p107-8).[6] Thus, according to Gray, “being good is good luck” and not something for which one deserves credit or blame (p104).

Gray therefore concludes: 

“The fact that we are not autonomous subjects deals a death blow to morality – but it is the only possible ground of ethics” (p112). 

Yet, far from being truly free, Gray contends: 

“We spend our lives coping with what comes along” (p70). 

However, in expecting humankind to take charge of its own destiny: 

“We insist that mankind can achieve what we cannot: conscious control of its existence” (p38). 

Self-Awareness

For Gray, then, what separates us from the remainder of the animal kingdom is not free will, or even consciousness, but merely self-awareness.
 
Yet this, for Gray, is a mixed blessing at best. 
 
After all, it has long been known that musicians and sportsmen often perform best, not when consciously thinking about, or even aware of, the movements and reactions of their hands and bodies, but rather when acting ‘on instinct’ and momentarily lost in what positive psychologists call flow or being in the zone (p61). 

This is a theme Gray returns to in The Soul of the Marionette, where he argues that, in some sense, the puppet is freer, and more unrestrained in his actions, than the puppet-master.

The Gaia Cult

Given the many merits of his book, it is regrettable that Gray has an unfortunate tendency to pontificate about all manner of subjects, many of them far outside his own field of expertise. As a result, almost inevitably, he sometimes gets it completely wrong on certain specific subjects. 
 
A case in point is environmentalist James Lovelock’s Gaia theory, which Gray champions throughout his book. 

According to ‘Gaia Theory’, the planet is analogous to a harmonious self-regulating organism – in danger of being disrupted only by environmental damage wrought by man. 

Given his cynical outlook, not to mention his penchant for sociobiology, Gray’s enthusiasm for Gaia is curious.

As Richard Dawkins explains in Unweaving the Rainbow, the adaptation of organisms to their environment, which consists largely of other organisms, may give the superficial appearance of eco-systems as harmonious wholes, as some organisms exploit and hence come to rely on the presence of other organisms in order to survive (Unweaving the Rainbow: p221). 
 
However, a Darwinian perspective suggests that, far from existing in benign harmony, organisms are in a state of continuous competition and conflict. Indeed, it is paradoxically precisely their exploitation of one another that gives the superficial appearance of harmony. 
 
In other words, as Dawkins concludes: 

“Individuals work for Gaia only when it suits them to do so – so why bother to bring Gaia into the discussion” (Unweaving the Rainbow: p225). 

Yet, for many of its adherents, Gaia is not so much a testable, falsifiable scientific theory as it is a kind of substitute religion. Thus, Dawkins describes ‘Gaia theory’ as “a cult, almost a religion” (Ibid: p223).

It is therefore better viewed, within Gray’s own theoretical framework, as yet another secular perversion of humanity’s innate religious impulse. 
 
Perhaps, then, Gray’s own curious enthusiasm for this particular pseudo-scientific cult suggests that Gray is himself no more immune from the religious impulse than those whom he attacks. If so, this, paradoxically, only strengthens his case that the religious impulse is indeed universal and innate.

The Purpose of Philosophy

Gray is himself a philosopher by background. However, he is contemptuous of most of the philosophical tradition that has preceded him. 

Thus, he contends:  

“As commonly practised, philosophy is the attempt to find good reasons for conventional beliefs” (p37). 

In former centuries such conventional beliefs were largely religious dogma. Yet, from the nineteenth century on, they increasingly became political creeds emphasizing human progress, such as Whig historiography, and the theories of Marx and Hegel.

Thus, Gray writes:  

“In the Middle Ages, philosophy gave intellectual scaffolding to the Church; in the nineteenth and twentieth centuries it served a myth of progress” (p82). 

Today, however, despite the continuing faith in progress that Gray so ably dissects, philosophy has ceased to fulfil even this function and hence abandoned even these dubious raisons d’être.

The result, according to Gray, is that:

“Serving neither religion nor a political faith, philosophy is a subject without a subject-matter; scholasticism without the charm of dogma” (p82). 

Yet Gray reserves particular scorn for moral philosophy, which is, according to him, “an exercise in make-believe” (p89) and “very largely a branch of fiction” (p109), albeit one “less realistic in its picture of human life than the average bourgeois novel” (p89), which, he ventures, likely explains why “a philosopher has yet to write a great novel” (p109). 

In other words, compared with outright fiction, moral philosophy is simply less realistic. 

Anthropocentrism

Although, at the time ‘Straw Dogs’ was first published, Gray held the title ‘Professor of European Thought’ at the London School of Economics, he is particularly scathing in his comments regarding Western philosophy. 

Thus, like Schopenhauer, his pessimist precursor, (who is, along with Hume, one of the few Western philosophers whom he mentions without also disparaging), Gray purports to prefer Eastern philosophical traditions. 

These and other non-Western religious and philosophical traditions are, he claims, unpolluted by the influence of Christianity and hence view humans as merely another animal, no different from the rest. 

I do not have sufficient familiarity with Eastern philosophical traditions to assess this claim. However, I suspect that anthropocentrism and the concomitant belief that humans are somehow special, unique and different from all other organisms is a universal and indeed innate human delusion. 

Indeed, paradoxically, it may not even be limited to humans. 
 
Thus, I suspect that, to the extent they were, or are, capable of conceptualizing such a thought, earthworms and rabbits would also conceive of themselves as special and unique over and above all other species in just the same way we do.

Death or Nirvana?

Ultimately, however, Gray rejects Eastern philosophical and religious traditions too – including Buddhism.
 
There is no need, he contends, to spend lifetimes striving to achieve nirvāna and the cessation of suffering as the Buddha proposed. On the contrary, he observes, no such effort is required, since: 

“Death brings to everyone the peace Buddha promised only after lifetimes of striving” (p129). 

All one needs to do, therefore, is to let nature take its course, or, if one is especially impatient, perhaps hurry things along by suicide or an unhealthy lifestyle.

Aphoristic Style

I generally dislike books written in the sort of pretentious aphoristic style that Gray adopts. In my experience, they generally replace the argumentation necessary to support their conclusions with bad poetry.

Indeed, sometimes the poetic style is so obscurantist that it is difficult even to discern what these conclusions are in the first place. 
 
However, in ‘Straw Dogs’, the aphoristic style seems for once appropriate. This is because Gray’s arguments, though controversial, are straightforward and require no additional explication. 
 
Indeed, one suspects the inability of earlier thinkers to reach the same conclusions reflects a failure of ‘The Will’ rather than ‘The Intellect’ – an unwillingness to face up to and come to terms with the reality of the human condition. 

‘A Saviour to Save Us from Saviours’?

Unlike most other works dealing with political themes, ‘Straw Dogs’ does not conclude with a chapter proposing solutions to the problems identified in the preceding chapters. Instead, its conclusion is as bleak as the pages that precede it.

“At its worst, human life is not tragic, but unmeaning… the soul is broken but life lingers on… what remains is only suffering” (p101).

Personally, however, I found it refreshing that, unlike other self-important, self-appointed saviours of humanity, Gray does not attempt to portray himself as some kind of saviour of mankind. On the contrary, his ambitions are altogether more modest.

Moreover, he does not hold our saviours in particularly high esteem but rather seems to regard them as very much part of the problem. 
 
He does, admittedly, briefly consider what he refers to as the Buddhist notion that we actually require "A Saviour to Save Us From Saviours". 

Eventually, however, Gray renounces even this role. 

“Humanity takes its saviours too lightly to need saving from them… When it looks to deliverers it is for distraction, not salvation” (p121). 

Gray thus reduces our self-important, self-appointed saviours – be they philosophers, religious leaders, self-help gurus or political leaders – to no more than glorified competitors in the entertainment industry.

Distraction as Salvation?

Indeed, for Gray, it is not only saviours who function as a form of distraction for the masses. On the contrary, for Gray, ‘distraction’ is now central to life in the affluent West. 
 
Thus, in the West today, standards of living have improved to such an extent that obesity is now a far greater health problem than starvation, even among the so-called ‘poor’ (indeed, one suspects, especially among the so-called ‘poor’!). 
 
Yet clinical depression is now rapidly expanding into the greatest health problem of all. 
 
Thus, Gray concludes: 

“Economic life is no longer geared chiefly to production… [but rather] to distraction” (p162). 

In other words, where once, to acquiesce in their own subjugation, the common people required only bread and circuses, today they seem to demand cake, ice cream, alcohol, soap operas, Playstations, Premiership football and reality TV!

Indeed, Gray views most modern human activity as little more than distraction and escapism. 

“It is not the idle dreamer who escapes from reality. It is practical men and women who turn to a life of action as a refuge from insignificance” (p194). 

Indeed, for Gray, even meditation is reduced to a form of escapism: 

“The meditative states that have long been cultivated in Eastern traditions are often described as techniques for heightening consciousness. In fact they are ways of by-passing self-awareness” (p62). 
 

Yet Gray does not disparage escapism as a superficial diversion from serious and worthy matters. 
 
On the contrary, he views distraction, or even escapism, as the key, if not to happiness, then at least to the closest we can ever come to this elusive and perhaps chimeric state.

Moreover, the great mass of mankind instinctively recognizes as much:

“Since happiness is unavailable, the mass of mankind seeks pleasure” (p142). 

Thus, in a passage which is perhaps the closest Gray comes to self-help advice, he concludes: 

“Fulfilment is found, not in daily life, but in escaping from it” (p141-2). 

Perhaps, then, escapism is not such a bad thing, and there is something to be said for sitting around watching TV all day after all. 
____________ 

 
By his own thesis then, it is perhaps as a form of ‘Distraction’ that Gray’s own book ought ultimately to be judged. 
 
By this standard, I can only say that, with its unrelenting cynicism and pessimism, ‘Straw Dogs’ distracted me immensely – and, according to the precepts of Gray’s own philosophy, there can surely be no higher praise!

Endnotes

[1] John Gray, Heresies: Against Progress and Other Illusions: p7; p41. 

[2] John Gray, Heresies: Against Progress and Other Illusions: p8; p44. 

[3] John Gray, ‘Straw Dogs’: p20-23.

[4] Of course, the assumption that human population will continue to grow contradicts the demographic transition model, whereby it is assumed that a decline in fertility inevitably accompanies economic development. However, while it is true that declining fertility has accompanied increasing prosperity in many parts of the world, it is not at all clear why this has occurred. Indeed, from a sociobiological perspective, increases in wealth should lead to an increased reproductive rate, as organisms channel their greater material resources into increased reproductive success, the ultimate currency of natural selection. It is therefore questionable how much faith we should place in the universality of a process whose causes are so little understood. Moreover, the assumption that improved living standards in the so-called ‘developing world’ will inevitably lead to reductions in fertility presupposes that the so-called ‘developing world’ will indeed ‘develop’ and that living standards will indeed improve, an obviously questionable assumption. Ultimately, the very term ‘developing world’ may turn out to represent a classic case of wishful thinking. 

[5] Thus, of the bizarre pseudoscience of cryonics, whereby individuals pay private companies to freeze their brains or whole bodies after death, in the hope that, with future advances in technology, they can later be resurrected, he notes that the ostensible immortality promised by such a procedure is itself dependent on the immortality of the private companies offering the service, and of the economic and legal system (including contractual obligations) within which such companies operate.

“If the companies that store the waiting cadavers do not go under in stock market crashes, they will be swept away by war or revolutions” (Heresies: Against Progress and Other Illusions: p67).

[6] Actually, heredity surely also plays a role, as traits such as empathy and agreeableness are partly heritable, as are sociopathy and criminality.

Richard Dawkins’ ‘The Selfish Gene’: Selfish Genes, Selfish Memes and Altruistic Phenotypes

‘The Selfish Gene’, by Richard Dawkins, Oxford University Press, 1976.

Selfish Genes ≠ Selfish Phenotypes

Richard Dawkins’s ‘The Selfish Gene’ is among the most celebrated, but also the most misunderstood, works of popular science.

Thus, among people who have never read the book (and, strangely, a few who apparently have), Dawkins is widely credited with arguing that humans are inherently selfish, that this disposition is innate and inevitable, and even, in some versions, that behaving selfishly is somehow justified by our biological programming, the titular ‘Selfish Gene’ being widely misinterpreted as referring to a gene that causes us to behave selfishly.

Actually, Dawkins is not concerned, either directly or primarily, with humans at all.

Indeed, he professes to be “not really very directly interested in man”, whom he dismisses as “a rather aberrant species” and hence peripheral to his own interest, namely how evolution has shaped the bodies and especially the behaviour of organisms in general (Dawkins 1981: p556).

‘The Selfish Gene’ is then, unusually, if not uniquely, for a bestselling work of popular science, a work, not of human biology nor even of non-human zoology, ethology or natural history, but rather of theoretical biology.

Moreover, in referring to genes as ‘selfish’, Dawkins has in mind not a trait that genes encode in the organisms they create, but rather a trait of the genes themselves.

In other words, individual genes are themselves conceived of as ‘selfish’ (in a metaphoric sense), in so far as they have evolved by natural selection to selfishly promote their own survival and replication by creating organisms designed to achieve this end.

Indeed, ironically, as Dawkins is at pains to emphasise, selfishness at the genetic level can actually result in altruism at the level of the organism or phenotype.

This is because, where altruism is directed towards biological kin, such altruism can facilitate the replication of genes shared among relatives by virtue of their common descent. This is referred to as kin selection or inclusive fitness theory and is one of the central themes of Dawkins’ book.
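For readers unfamiliar with the formalism, the logic of kin selection is usually condensed into what textbooks call Hamilton’s rule. The statement below is the standard textbook form, not a quotation from Dawkins or Hamilton:

```latex
% Hamilton's rule: a gene disposing its bearer towards altruism is favoured when
%   r b > c
% where r = coefficient of relatedness between actor and recipient,
%       b = reproductive benefit conferred on the recipient,
%       c = reproductive cost incurred by the actor.
\[ r\,b > c \]
```

Since full siblings share genes by common descent with probability r = ½, an act costing the altruist one offspring can, on this logic, still be favoured so long as it gains a sibling the equivalent of more than two.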

Yet, despite this, Dawkins still seems to see organisms themselves, humans very much included, as fundamentally selfish – albeit a selfishness tempered by a large dose of nepotism.

Thus, in his opening paragraphs no less, he cautions:

“If you wish, as I do, to build a society in which individuals cooperate generously and unselfishly towards a common good, you can expect little help from our biological nature. Let us try to teach generosity and altruism, because we are born selfish” (p3).

The Various Editions

In later editions of his book, namely those published since 1989, Dawkins tempers this rather cynical view of human and animal behaviour by the addition of a new chapter – Chapter 12, titled ‘Nice Guys Finish First’.

This new chapter deals with the subject of reciprocal altruism, a topic he had actually already discussed earlier, together with the related, but distinct, phenomenon of mutualism,[1] in Chapter 10 (entitled, ‘You Scratch My Back, I’ll Ride on Yours’).

In this additional chapter, he essentially summarizes the work of political scientist Robert Axelrod, as discussed in Axelrod’s own book The Evolution of Cooperation. This deals with evolutionary game theory, specifically the iterated prisoner’s dilemma, and the circumstances in which a cooperative strategy can, by cooperating only with those who have a history of reciprocating, survive, prosper, evolve and, in the long term, ultimately outcompete and hence displace those strategies which maximize only short-term self-interest.
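To make the logic of Axelrod’s result concrete, the sketch below simulates a small round-robin tournament of the iterated prisoner’s dilemma. It is purely illustrative: the payoff values (T=5, R=3, P=1, S=0), the round count and the strategy set are conventional textbook choices, not anything drawn from Axelrod’s or Dawkins’ own work.

```python
# A minimal, illustrative sketch of an Axelrod-style iterated prisoner's dilemma
# tournament (not Axelrod's actual code; payoffs and strategies are textbook choices).

PAYOFF = {  # (my move, opponent's move) -> my payoff; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then copy the opponent's previous move."""
    return 'C' if not opponent_history else opponent_history[-1]

def grudger(opponent_history):
    """Cooperate until the opponent defects once, then defect forever."""
    return 'D' if 'D' in opponent_history else 'C'

def always_defect(opponent_history):
    """Maximise short-term self-interest on every round."""
    return 'D'

def always_cooperate(opponent_history):
    """Unconditional altruist, easily exploited by defectors."""
    return 'C'

def play(strategy_a, strategy_b, rounds=200):
    """Play two strategies against each other and return their total payoffs."""
    history_a, history_b = [], []  # each list records the *opponent's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_b)
        history_b.append(move_a)
    return score_a, score_b

if __name__ == '__main__':
    strategies = [tit_for_tat, grudger, always_defect, always_cooperate]
    totals = {s.__name__: 0 for s in strategies}
    # Round-robin tournament: every strategy meets every strategy, itself included.
    for a in strategies:
        for b in strategies:
            score_a, _ = play(a, b)
            totals[a.__name__] += score_a
    for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(name, total)
```

The instructive result is that always_defect ‘wins’ or draws every individual pairing yet finishes bottom of the league table, while the reciprocating strategies, which never beat any single opponent head-to-head, accumulate the highest totals – which is essentially the point Dawkins takes from Axelrod.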

Post-1989 editions also include another new chapter titled ‘The Long Reach of the Gene’ (Chapter 13).

If, in Chapter 12, the first additional chapter, Dawkins essentially summarized the contents of Axelrod’s book, The Evolution of Cooperation, then, in Chapter 13, he summarizes his own book, The Extended Phenotype.

In addition to these two additional whole chapters, Dawkins also added extensive endnotes to these post-1989 editions.

These endnotes clarify various misunderstandings which arose from how he explained himself in the original version, defend Dawkins against some criticisms levelled at certain passages of the book and also explain how the science progressed in the years since the first publication of the book, including identifying things he and other biologists got wrong.

With more recent editions, the content of ‘The Selfish Gene’ has burgeoned still further. Thus, the 30th Anniversary Edition boasts only a new introduction; the 40th Anniversary Edition, published in 2016, adds a new Epilogue too. Meanwhile, the latest, the so-called Extended Selfish Gene, includes, in addition to these, two whole new chapters.

Actually, these two new chapters are not that new, being lifted wholesale from, once again, The Extended Phenotype, a work whose contents Dawkins has already, as we have seen, summarized in Chapter 13 (‘The Long Reach of the Gene’), itself an earlier addition to the book’s seemingly ever expanding contents list.

The decision not to entirely rewrite ‘The Selfish Gene’ was apparently that of Dawkins’ publisher, Oxford University Press.

This was probably the right decision. After all, ‘The Selfish Gene’ is not a mere undergraduate textbook, in need of revision every few years in order to keep up-to-date with the latest published research.

Rather, it was a landmark work of popular science, and indeed of theoretical biology, that introduced a new approach to understanding the evolution of behaviour and physiology to a wider readership, composed of biologists and non-biologists alike, and it deserves to stand in its original form as a milestone in the history of science.

However, while new introductions and epilogues are standard fare when republishing a classic work several years after first publication, the addition of four (or two, depending on the edition) whole new chapters strikes me as less readily defensible.

For one thing, they distort the structure of the book and, though interesting in and of themselves, always read to me rather as if they have been tagged on at the end as an afterthought – as indeed they have.

The book certainly reads best, in a purely literary sense, in its original form (i.e. pre-1989 editions), where Dawkins concludes with an optimistic, if fallacious, literary flourish (see below).

Moreover, these additional chapters reek of a shameless marketing strategy, designed to deceive new readers into paying the full asking price for a new edition, rather than buying a cheaper second-hand copy or just keeping their old one.

This is especially blatant in respect of the book’s latest incarnation, The Extended Selfish Gene, which, according to the information on Oxford University Press’s website, was released only three months after the previous 40th Anniversary Edition yet includes two additional chapters.

One frankly expects better from so celebrated a publisher as Oxford University Press, and indeed so celebrated a biologist and science writer as Richard Dawkins, especially as I suspect neither is particularly short of money.

If I were advising someone who has never read the book on which edition to buy, I would probably recommend a second-hand copy of any post-1989 edition, since these can now be picked up very cheaply and include the additional endnotes, which I personally found very interesting.

On the other hand, if you want to read three additional chapters either from or about The Extended Phenotype, then you are probably better off buying, instead, well… The Extended Phenotype itself – as this is also now a rather old book of which, as with ‘The Selfish Gene’, copies can be picked up very cheaply.

The ‘Gene’s-Eye-View’ of Evolution

The Selfish Gene is a seminal work in the history of biology primarily because Dawkins takes the so-called gene’s-eye-view of evolution to its logical conclusion. To this extent, contrary to popular opinion, Dawkins’ exposition is not merely a popularization, but actually breaks new ground theoretically.

Thus, John Maynard Smith famously talked of kin selection by analogy with ‘group selection’ (Smith 1964). Meanwhile, William Hamilton, who formulated the theory underlying these concepts, always disliked the term ‘kin selection’ and talked instead of the direct, indirect and inclusive fitness of organisms (Hamilton 1964a; 1964b).

However, Dawkins takes this line of thinking to its logical conclusion by looking – not at the fitness or reproductive success of organisms or phenotypes – but rather at the success in self-replication of genes themselves.

Thus, although he certainly stridently rejects group-selection, Dawkins replaces this, not with the familiar individual-level selection of classical Darwinism, but rather with a new focus on selection at the level of the gene itself.

Abstract Animals?

Much of the interest, and no little of the controversy, arising from ‘The Selfish Gene’ concerned, of course, its potential application to human behaviour. However, in the book itself, humans – whom, as mentioned above, Dawkins dismisses as a “rather aberrant species” and in whom he professes to be “not really very directly interested” (Dawkins 1981: p556) – are actually mentioned only occasionally and briefly.

Indeed, most of the discussion is purely theoretical. Even the behaviour of non-human animals is described only for illustrative purposes, and even these illustrative examples often involve simplified hypothetical creatures rather than descriptions of the behaviour of real organisms.

For example, he illustrates his discussion of the relative pros and cons of either fighting or submitting in conflicts over access to resources by reference to ‘hawks’ and ‘doves’ – but is quick to acknowledge that these are hypothetical and metaphoric creatures, with no connection to the actual bird species after whom they are named:

“The names refer to conventional human usage and have no connection with the habits of the birds from whom the names are derived: doves are in fact rather aggressive birds” (p70).

Indeed, even Dawkins’ titular “selfish genes” are rather abstract and theoretical entities. Certainly, the actual chemical composition and structure of DNA is of only peripheral interest to him.

Indeed, often he talks of “replicators” rather than “genes” and is at pains to point out that selection can occur in respect of any entity capable of replication and mutation, not just DNA or RNA. (Hence his introduction of the concept of memes: see below).

Moreover, Dawkins uses the word ‘gene’ in a somewhat different sense to the way the word is employed by most other biologists. Thus, following George C. Williams in Adaptation and Natural Selection, he defines a “gene” as:

“Any portion of chromosomal material that potentially lasts for enough generations to serve as a unit of natural selection” (p28).

This, of course, makes his claim that genes are the principal unit of selection something approaching a tautology or circular argument.

Sexual Selection in Humans?

Where Dawkins does mention humans, it is often to point out the extent to which this “rather aberrant species” apparently conspicuously fails to conform to the predictions of selfish-gene theory.

For example, at the end of his chapter on sexual selection (Chapter 9: “Battle of the Sexes”) he observes that, in contrast to most other species, among humans, at least in the West, it seems to be females who are most active in using physical appearance as a means of attracting mates:

“One feature of our own society that seems decidedly anomalous is the matter of sexual advertisement… It is strongly to be expected on evolutionary grounds that where the sexes differ, it should be the males that advertise and the females that are drab… [Yet] there can be no doubt that in our society the equivalent of the peacock’s tail is exhibited by the female, not the male” (p164).

Thus, among most other species, it is males who have evolved more elaborate plumages and other flashy, sexually selected ornaments. In contrast, females of the same species are often comparatively drab in appearance.

Yet, in modern western societies, Dawkins observes, it is more typically women who “paint their faces and glue on false eyelashes” (p164).

Here, it is notable that Dawkins, being neither an historian nor an anthropologist, is careful to restrict his comments to “our own society” and, elsewhere, to “modern western man”.

One explanation, then, is that it is only our own ‘WEIRD’, western societies that are anomalous.

Thus, Matt Ridley, in The Red Queen, proposes that maybe:

“Modern western societies have been in a two-century aberration from which they are just emerging. In Regency England, Louis XIV’s France, medieval Christendom, ancient Greece, or among the Yanomamö, men followed fashion as avidly as women. Men wore bright colours, flowing robes, jewels, rich materials, gorgeous uniforms, and gleaming, decorated armour. The damsels that knights rescued were no more fashionably accoutred than their paramours. Only in Victorian times did the deadly uniformity of the black frock coat and its dismal modern descendant, the grey suit, infect the male sex, and only in this century have women’s hemlines gone up and down like yo-yos” (The Red Queen: p292).

There is an element of truth here. However, I suspect it partly reflects a misunderstanding of the different purposes for which men and women use clothing, including bright and elaborate clothing.

Thus, it rather reminds me of Margaret Mead’s claim that, among the Tschambuli of Papua New Guinea, sex-roles were reversed because, here, it was men who painted their faces and wore ‘make-up’, not women.

Yet what Mead neglected to mention is that the ‘make-up’ in question, which she found so effeminate, was actually war-paint that a Tschambuli warrior was permitted to wear only after killing his first enemy warrior (see Homicide: Foundations of Human Behavior: p152).

Of course, clothes and makeup are an aspect of behaviour rather than morphology, and thus more directly analogous to, say, the nests (or, more precisely, the bowers) created by male bowerbirds than the tail of the peacock.

However, behaviour is, in principle, no less subject to natural selection (and sexual selection) than is morphology, and therefore the paradox remains.

Moreover, even focusing exclusively on morphology, the sex difference still seems to remain.

Thus, perhaps the closest thing to a ‘peacock’s tail’ in humans (i.e. a morphological trait designed to attract mates) is a female trait, namely breasts.

Thus, as Desmond Morris first observed, in humans, the female breasts seem to have been co-opted for a role in sexual selection, since, unlike among other mammals, women’s breasts are permanent, from puberty on, not present only during lactation, and composed primarily of fatty tissues, not milk (Møller 1995; Manning et al 1997; Havlíček et al 2016).

In contrast, men possess no obvious equivalent of the ‘peacock’s tail’ (i.e. a trait that has evolved in response to female choice) – though Geoffrey Miller makes a fascinating (but ultimately unconvincing) case that the human brain may represent a product of sexual selection (see The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature).[2]

Interestingly, in an endnote to post-1989 editions of ‘The Selfish Gene’, Dawkins himself tentatively speculates that maybe the human penis might represent a sexually-selected ‘fitness indicator’.

Thus, he points out that the human penis is large as compared to that of other primates, yet also lacks a baculum (i.e. penis bone) to facilitate erections. This, he speculates, could mean that the capacity to maintain an erection might represent an honest signal of health in accordance with Zahavi’s handicap principle (p307-8).

However, it is more likely that the large size, or more specifically the large width, of the human penis reflects instead a response to the increased size of the vagina, which itself increased in size to enable human females to give birth to large-brained, and hence large-headed, infants (see Bowman 2008; Sexual Selection and the Origins of Human Mating Systems: pp61-70).[3]

How then can we make sense of this apparent paradox, whereby, contrary to Bateman’s principle, sexual selection appears to have operated more strongly on women than on men?

For his part, Dawkins himself offers no explanation, merely lamenting:

“What has happened in modern western man? Has the male really become the sought-after sex, the one that is in demand, the sex that can afford to be choosy? If so, why?” (p165).

However, in respect of what David Buss calls short-term mating strategies (i.e. casual sex, hook-ups and one night stands), this is certainly not the case.

On the contrary, patterns of everything from prostitution and rape to erotica and pornography consumption confirm that, in respect of short-term ‘commitment’-free casual sex, it remains women who are very much in demand and men who are the ardent pursuers (see The Evolution of Human Sexuality: which I have reviewed here).

Thus, in one study conducted on a university campus, 72% of male students agreed to go to bed with a female stranger who approached them with a request to this effect. In contrast, not a single one of the 96 females approached agreed to the same request from a male questioner (Clark and Hatfield 1989).

(What percentage of the students sued the university for sexual harassment was not revealed.)

However, humans also form long-term pair-bonds to raise children, and, in contrast to males of most other mammalian species, male parents often invest heavily in the offspring of such unions.

Men are therefore expected to be relatively choosier in respect of long-term romantic partners (e.g. wives) than they are for casual sex partners. This may then explain the relatively high levels of reproductive competition engaged in by human females, including high levels of what Dawkins calls ‘sexual advertising’.

Reproductive competition between women may be especially intense in western societies practising what Richard Alexander termed ‘socially-imposed monogamy’.

This refers to societies where there are large differences between males in social status and resource holdings, but where even wealthy males are prohibited by law from marrying multiple women at once.[4]

Here, there may be intense competition between females for exclusive rights to resource-abundant ‘alpha male’ providers (Gaulin and Boster 1990).

Thus, to some extent, the levels of sexual competition engaged in by women in western societies may indeed be higher than in non-western, polygynous societies.

This, then, might explain why females use what Dawkins terms ‘sexual advertising’ to attract long-term mates (i.e. husbands). However, it still fails to explain why males don’t – or, at least, don’t seem to do so to anything like the same degree.

The answer may be that, in contrast to mating patterns in modern western societies, ‘female choice’ may actually have played a surprisingly limited role in human evolutionary history, given that, in most pre-modern societies, arranged marriages were, and are, the norm.

Male mating competition may then have taken the form of ‘male-male contest competition’ (i.e. fighting) rather than displaying to females – i.e. what Darwin called intra-sexual selection’ rather than ‘inter-sexual selection’.

Thus, while men indeed possess no obvious analogue to the peacock’s tail, they do seem to possess traits designed for fighting – namely considerably greater levels of upper-body musculature and violent aggression as compared to women (see Puts 2010).

In other words, human males may not have any obvious ‘peacock’s tail’, but perhaps we do have, if you like, ‘stag’s antlers’.

From Genes to Memes

Dawkins’ eleventh chapter, which was, in the original version of the book (i.e. pre-1989 editions), the final chapter, is also the only chapter to focus exclusively on humans.

Entitled ‘Memes: The New Replicators’, it focuses again on the extent to which humans are indeed an “aberrant species”, being subject to cultural as well as biological evolution to a unique degree.

Interestingly, however, Dawkins argues that the principles of natural selection discussed in the preceding chapters of the book can be applied just as usefully to cultural evolution as to biological evolution.

In doing so, he coins the concept of the ‘meme’ as the cultural unit of selection, equivalent to a gene, passing between minds analogously to a virus.

This term has proven enormously influential in intellectual discourse, and has indeed passed into popular usage.

The analogy of memes to genes makes for an interesting thought-experiment. However, like any analogy, it can be taken too far.

Certainly ideas can be viewed as spreading between people, and as having various levels of fitness depending on the extent to which they catch on.

Thus, to take one example, Dawkins famously described religions as ‘Viruses of the Mind’, which travel between, and infect, human minds in a manner analogous to a virus.

Thus, proponents of Darwinian medicine contend that pathogens such as flu and the common cold produce symptoms such as coughing, sneezing and diarrhea precisely because these behaviours promote the spread and replication of the pathogen to new hosts through the bodily fluids thereby expelled.

Likewise, rabies causes dogs and other animals to become aggressive and bite, which likewise facilitates the spread of the rabies virus to new hosts.[5]

By analogy, successful religions are typically those that promote behaviours that facilitate their own spread.

Thus, a religion that commands its followers to convert non-believers, persecute apostates, ‘be fruitful and multiply’ and indoctrinate their offspring with its beliefs is, for obvious reasons, likely to spread faster and have greater longevity than a religious doctrine that commands its adherents to become celibate hermits and holds proselytism to be a mortal sin.

Thus, Christians are admonished by scripture to save souls and preach the gospel among heathens; while Muslims are, in addition, admonished to wage holy war against infidels and persecute apostates.

These behaviours facilitate the spread of Christianity and Islam just as surely as coughing and sneezing promote the spread of the flu.[6]

Like genes, memes can also be said to mutate, though this occurs not only through random (and not so random) copying errors, but also by deliberate innovation by the human minds they ‘infect’. Memetic mutation, then, is not entirely random.

However, whether this way of looking at cultural evolution is a useful and theoretically or empirically productive way of conceptualizing cultural change remains to be seen.

Certainly, I doubt whether ‘memetics’ will ever be a rigorous science comparable to genetics, as some of the concept’s more enthusiastic champions have sometimes envisaged. Neither, I suspect, did Dawkins ever originally intend or envisage it as such, having seemingly coined the idea as something of an afterthought.

At any rate, one of the main factors governing the ‘infectiousness’ or ‘fitness’ of a given meme is the extent to which the human mind is receptive to it – and the human mind is itself a product of biological evolution.

The basis for understanding human behaviour, even cultural behaviour, is therefore how natural selection has shaped the human mind – in other words, evolutionary psychology, not memetics.

Thus, humans will surely have evolved resistance to memes that are contrary to their own genetic interests (e.g. celibacy), as a way of avoiding exploitation and manipulation by third parties.

For more recent discussion of the status of the meme concept (the ‘meme meme’, if you like) see The Meme Machine; Virus of the Mind; The Selfish Meme; and Darwinizing Culture.

Escaping the Tyranny of Selfish Replicators?

Finally, at least in the original, non-‘extended’ editions of the book, Dawkins concludes ‘The Selfish Gene’ with an optimistic literary flourish, emphasizing once again the alleged uniqueness of the “rather aberrant” human species.[7]

Thus, his final paragraph ends:

“We are built as gene machines and cultured as meme machines, but we have the power to turn against our creators. We, alone on earth, can rebel against the tyranny of the selfish replicators” (p201).

This makes for a dramatic, and optimistic, conclusion. It is also flattering to anthropocentric notions of human uniqueness, and of free will.

Unfortunately, however, it ignores the fact that the “we” who are supposed to be doing the rebelling are ourselves a product of the same process of natural selection and, indeed, of the same selfish replicators against which Dawkins calls on us to rebel. Indeed, even the (alleged) desire to revolt is a product of the same process.[8]

Likewise, in the book’s opening paragraphs, Dawkins proposes:

“Let us try to teach generosity and altruism, because we are born selfish. Let us understand what our selfish genes are up to, because we may then at least have the chance to upset their designs.” (p3)

However, this ignores not only the fact that the “us” who are to do the teaching, and who ostensibly wish to instil altruism in others, are ourselves the product of this same evolutionary process and these same selfish replicators, but also the fact that the subjects whom we are supposed to indoctrinate with altruism are themselves surely programmed by natural selection to resist any indoctrination or manipulation by third parties that would lead them to behave in ways that conflict with their own genetic interests.

In short, the problem with Dawkins’ cop-out Hollywood ending is that, as anthropologist Vincent Sarich is quoted as observing, Dawkins has himself “spent 214 pages telling us why that cannot be true”. (See also Straw Dogs: Thoughts on Humans and Other Animals, which I have reviewed here and here.)[9]

The preceding 214 pages, however, remain an exciting, eye-opening and stimulating intellectual journey, even over thirty years after their original publication.

__________________________

Endnotes

[1] Mutualism is distinguished from reciprocal altruism by the fact that, in the former, both parties receive an immediate benefit from their cooperation, whereas, in the latter, for one party, the reciprocation is delayed. It is therefore reciprocal altruism that presents the greater problem for evolution, and for evolutionists, because here there is the problem of policing the agreement – i.e. how is evolution to ensure that the immediate beneficiary does indeed reciprocate, rather than simply receiving the benefit without later returning the favour (a version of the free rider problem)? The solution, according to Axelrod, is that, where parties interact repeatedly over time, they come to engage in reciprocal altruism only with other parties with a proven track record of reciprocity, or at least without a proven track record of failing to reciprocate.
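Axelrod’s point can be illustrated with a minimal iterated prisoner’s dilemma sketch (my own illustration, using the standard textbook payoff values, not code from Axelrod or from any of the works discussed): a conditional reciprocator such as ‘tit-for-tat’ goes on cooperating with proven reciprocators, but withdraws cooperation as soon as a partner fails to reciprocate, so a free rider profits only once:

```python
# Standard prisoner's dilemma payoffs: temptation 5, reward 3, punishment 1, sucker 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    """Cooperate on the first move; thereafter copy the partner's last move."""
    return "C" if not history else history[-1][1]

def free_rider(history):
    """Always defect: take the benefit, never return the favour."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play repeated rounds; each strategy sees the history from its own side."""
    history, score_a, score_b = [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(history)
        move_b = strategy_b([(b, a) for a, b in history])
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history.append((move_a, move_b))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))  # (30, 30): sustained reciprocity pays both parties
print(play(tit_for_tat, free_rider))   # (9, 14): the defector gains once, then cooperation stops
```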

[2] Certainly, many male traits are attractive to women (e.g. height, muscularity). However, these also have obvious functional utility, not least in increasing fighting ability, and hence probably have more to do with male-male competition than with female choice. In contrast, many sexually-selected traits are positive handicaps to their bearers in all spheres except attracting mates. Indeed, one influential theory of sexual selection (Amotz Zahavi’s ‘handicap principle’) claims that it is precisely because such traits represent a handicap that they serve as an honest indicator of fitness and hence a reliable index of genetic quality.

[3] Thus, Edwin Bowman writes:

“As the diameter of the bony pelvis increased over time to permit passage of an infant with a larger cranium, the size of the vaginal canal also became larger” (Bowman 2008).

Similarly, in their controversial book Human Sperm Competition: Copulation, Masturbation and Infidelity, Robin Baker and Mark Bellis persuasively contend:

“The dimensions and elasticity of the vagina in mammals are dictated to a large extent by the dimensions of the baby at birth. The large head of the neonatal human baby (384g brain weight compared with only 227g for the gorilla…) has led to the human vagina when fully distended being large, both absolutely and relative to the female body… particularly once the vagina and vestibule have been stretched during the process of giving birth, the vagina never really returning to its nulliparous dimensions” (Human Sperm Competition: p171).

In turn, larger vaginas probably select for larger penises in order to fill the vagina (Bowman 2008).

According to Baker and Bellis, this is because the human penis functions as a suction piston, serving to remove the sperm deposited by rival males, as a form of sperm competition, a theory that actually has some experimental support (Gallup et al 2003; Gallup and Burch 2004; Goetz et al 2005; see also Why is the Penis Shaped Like That).

Thus, according to this view:

“In order to distend the vagina sufficiently to act as a suction piston, the penis needs to be a suitable size [and] the relatively large size… and distendibility of the human vagina (especially after giving birth) thus imposes selection, via sperm competition, for a relatively large penis” (Human Sperm Competition: p171).

However, even in the absence of sperm competition, Alan Dixson observes:

“In primates and other mammals the length of the erect penis and vaginal length tend to evolve in tandem. Whether or not sperm competition occurs, it is necessary for males to place ejaculates efficiently, so that sperm have the best opportunity to migrate through the cervix and gain access to the higher reaches of the female tract” (Sexual Selection and the Origins of Human Mating Systems: p68).

[4] In natural conditions, it is assumed that, in egalitarian societies, where males have roughly equal resource holdings, they will each attract an equal number of wives (i.e. given an equal sex ratio, one wife for each man). However, in highly socially-stratified societies, where there are large differences in resource holdings between men, it is expected that wealthier males will be able to support, and provide for, multiple wives, and will use their greater resource-holdings for this end, so as to maximize their reproductive success (see here). This is a version of the polygyny threshold model (see Kanazawa and Still 1999).
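The underlying logic can be put in toy form (an illustrative sketch of my own, under the crude assumption that co-wives split a husband’s resources equally; it is not the formal model tested by Kanazawa and Still): a woman does better as an additional wife of a wealthy man than as the sole wife of a poorer one once her per-wife share of the wealthy man’s resources exceeds the poorer man’s entire holdings:

```python
def expected_share(resources, existing_wives):
    """A prospective wife's share if she joins: the husband's resources divided
    equally among his existing wives plus herself (a crude simplification)."""
    return resources / (existing_wives + 1)

def best_suitor(suitors):
    """Choose the suitor offering the largest per-wife share of resources."""
    return max(suitors, key=lambda s: expected_share(s["resources"], s["wives"]))

# Roughly egalitarian resource holdings: monogamy wins (a share of 10 beats 6).
print(best_suitor([{"name": "poorer_bachelor", "resources": 10, "wives": 0},
                   {"name": "richer_married_man", "resources": 12, "wives": 1}])["name"])

# Steep inequality: becoming a third wife pays (a share of ~16.7 beats 10).
print(best_suitor([{"name": "poorer_bachelor", "resources": 10, "wives": 0},
                   {"name": "wealthy_polygynist", "resources": 50, "wives": 2}])["name"])
```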

[5] There are also pathogens that affect the behaviour of their hosts in more dramatic ways. For example, one parasite, Toxoplasma gondii, when it infects a mouse, reduces the mouse’s aversion to cat urine, which is theorized to increase the risk of the mouse being eaten by a cat, facilitating the reproductive life-cycle of the pathogen at the expense of that of its host. Similarly, the fungus Ophiocordyceps unilateralis turns ants into so-called ‘zombie ants’, which leave the safety of their nests and climb and lock themselves onto a leaf, again facilitating the life cycle of the parasite at the expense of the ants’ own. Another parasite, Dicrocoelium dendriticum (aka the lancet liver fluke), also affects the behaviour of the ants it infects, causing them to climb to the tip of a blade of grass during daylight hours, increasing the chance they will be eaten by cattle or other grazing animals and so facilitating the next stage of the parasite’s life-history.

[6] In contrast, biologist Richard Alexander, in Darwinism and Human Affairs, cites the Shakers as an example of the opposite type of religion, namely one that, because of its teachings (in particular, strict celibacy), largely died out.

In fact, however, the Shakers did not entirely disappear. Rather, a small rump community of Shakers, the Sabbathday Lake Shaker Village, survives to this day, albeit greatly reduced in number and influence. This is presumably because, although the Shakers did not, at least in theory, have children, they did proselytise.

In contrast, any religion which renounced both reproduction and proselytism would presumably never spread beyond its initial founder or founders, and hence never come to the attention of historians, theorists of religion, or anyone else in the first place.

[7] As noted above, this is among the reasons that ‘The Selfish Gene’ works best, in a purely literary sense, in its original incarnation. Later editions have at least two further chapters tacked on at the end, after this dramatic and optimistic literary flourish.

[8] Dawkins is here guilty of a crude dualism. Marxist neuroscientist Steven Rose, in an essay in Alas Poor Darwin (which I have reviewed here and here), has also accused Dawkins of dualism on the basis of this same passage, writing:

“Such a claim to a Cartesian separation of these authors’ [Dawkins and Steven Pinker] minds from their biological constitution and inheritance seems surprising and incompatible with their claimed materialism” (Alas Poor Darwin: Arguments Against Evolutionary Psychology: p262).

Here, Rose may be right, but he is also a self-contradictory hypocrite, since his own views represent an even cruder form of dualism. Thus, in an earlier book, Not in Our Genes: Biology, Ideology, and Human Nature, co-authored with fellow-Marxists Leon Kamin and Richard Lewontin, Rose and his colleagues wrote, in a critique of sociobiological conceptions of a universal human nature:

“Of course there are human universals that are in no sense trivial: humans are bipedal; they have hands that seem to be unique among animals in their capacity for sensitive manipulation and construction of objects; they are capable of speech. The fact that human adults are almost all greater than one meter and less than two meters in height has a profound effect on how they perceive and interact with their environment” (passage extracted in The Study of Human Nature: p314).

Here, it is notable that all the examples of “human universals that are in no sense trivial” given by Rose, Lewontin and Kamin are physiological, not psychological or behavioural. The implication is clear: yes, our bodies have evolved through a process of natural selection, but our brains and behaviour have somehow been exempt from this process. This, of course, is an even cruder form of dualism than that of Dawkins.

As John Tooby and Leda Cosmides observe:

“This division of labor is, therefore, popular: Natural scientists deal with the nonhuman world and the “physical” side of human life, while social scientists are the custodians of human minds, human behavior, and, indeed, the entire human mental, moral, political, social, and cultural world. Thus, both social scientists and natural scientists have been enlisted in what has become a common enterprise: the resurrection of a barely disguised and archaic physical/mental, matter/spirit, nature/human dualism, in place of an integrated scientific monism” (The Adapted Mind: Evolutionary Psychology and the Generation of Culture: p49).

A more consistent and thoroughgoing critique of Dawkins’ dualism is to be found in John Gray’s excellent Straw Dogs (which I have reviewed here and here).

[9] This quotation comes from p176 of Marek Kohn’s The Race Gallery: The Return of Racial Science (London: Vintage, 1996). Unfortunately, Kohn does not give a source for this quotation.

__________________________

References

Bowman EA (2008) Why the human penis is larger than in the great apes. Archives of Sexual Behavior 37(3): 361.

Clark & Hatfield (1989) Gender differences in receptivity to sexual offers. Journal of Psychology & Human Sexuality 2: 39-53.

Dawkins (1981) In defence of selfish genes. Philosophy 56(218): 556-573.

Gallup et al (2003) The human penis as a semen displacement device. Evolution and Human Behavior 24: 277-289.

Gallup & Burch (2004) Semen displacement as a sperm competition strategy in humans. Evolutionary Psychology 2: 12-23.

Gaulin & Boser (1990) Dowry as female competition. American Anthropologist 92(4): 994-1005.

Goetz et al (2005) Mate retention, semen displacement, and human sperm competition: a preliminary investigation of tactics to prevent and correct female infidelity. Personality and Individual Differences 38: 749-763.

Hamilton (1964) The genetical evolution of social behaviour I and II. Journal of Theoretical Biology 7: 1-16, 17-52.

Havlíček et al (2016) Men’s preferences for women’s breast size and shape in four cultures. Evolution and Human Behavior 38(2): 217-226.

Kanazawa & Still (1999) Why monogamy? Social Forces 78(1): 25-50.

Manning et al (1997) Breast asymmetry and phenotypic quality in women. Ethology and Sociobiology 18(4): 223-236.

Møller et al (1995) Breast asymmetry, sexual selection, and human reproductive success. Ethology and Sociobiology 16(3): 207-219.

Puts (2010) Beauty and the beast: mechanisms of sexual selection in humans. Evolution and Human Behavior 31: 157-175.

Smith (1964) Group selection and kin selection. Nature 201(4924): 1145-1147.