Edward O Wilson’s ‘Sociobiology: The New Synthesis’: A Book Much Read About, But Rarely Actually Read

Edward O Wilson, Sociobiology: The New Synthesis (Cambridge: Belknap Press of Harvard University Press, 1975).

Sociobiology – The Field That Dare Not Speak its Name? 

From its first publication in 1975, the reception accorded Edward O Wilson’s ‘Sociobiology: The New Synthesis’ has been divided. 

On the one hand, among biologists, especially those specializing in the fields of ethology, zoology and animal behaviour, the reception was almost universally laudatory. Indeed, my 25th Anniversary Edition even proudly proclaims on the cover that it was voted by officers and fellows of the Animal Behavior Society the most important book ever written on animal behaviour, supplanting even Darwin’s own seminal The Expression of the Emotions in Man and Animals.

However, on the other side of the university campus, in social science departments, the reaction was very different. 

Indeed, the hostility that the book provoked was such that ‘sociobiology’ became almost a dirty word in the social sciences, and ultimately throughout the academy, to such an extent that the term eventually fell into disuse (save as a term of abuse) and was replaced by largely synonymous euphemisms like behavioral ecology and evolutionary psychology.[1]

Sociobiology thus became, in academia, ‘the field that dare not speak its name’. 

Similarly, within the social sciences, even those researchers whose work carried on the sociobiological approach in all but name almost always played down the extent of their debt to Wilson himself. 

Thus, books on evolutionary psychology typically begin with disclaimers acknowledging that the sociobiology of Wilson was, of course, crude and simplistic, and that their own approach is, of course, infinitely more sophisticated. 

Indeed, reading some recent works on evolutionary psychology, one could be forgiven for thinking that evolutionary approaches to understanding human behaviour began around 1989 with the work of Tooby and Cosmides.

Defining the Field 

What then does the word ‘sociobiology’ mean? 

Today, as I have mentioned, the term has largely fallen into disuse, save among certain social scientists who seem to employ it as a rather indiscriminate term of abuse for any theory of human behaviour that they perceive as placing too great a weight on hereditary or biological factors, including many areas of research only tangentially connected with sociobiology as Wilson originally conceived of it (e.g. behavioral genetics).[2]

The term ‘sociobiology’ was not Wilson’s own coinage. It had occasionally been used by biologists before, albeit rarely. However, Wilson was responsible for popularizing it – and perhaps, in the long term, for unpopularizing it too, since, as we have seen, the term has largely fallen into disuse.[3] 

Wilson himself defined ‘sociobiology’ as: 

“The systematic study of the biological basis of all social behavior” (p4; p595). 

However, as the term was understood by other biologists, and indeed applied by Wilson himself, sociobiology came to be construed more narrowly. Thus, it was associated in particular with the question of why behaviours evolved and the evolutionary function they serve in promoting the reproductive success of the organism (i.e. just one of Tinbergen’s Four Questions). 

The hormonal, neuroscientific, or genetic causes of behaviours are just as much a part of “the biological basis of behavior” as are the ultimate evolutionary functions of behaviour. However, these lie outside the scope of sociobiology as the term was usually understood. 

Indeed, Wilson himself admitted as much, writing in ‘Sociobiology: The New Synthesis’ itself of how: 

“Behavioral biology… is now emerging as two distinct disciplines centered on neurophysiology and… sociobiology” (p6). 

Yet, in another sense, Wilson’s definition of the field was also too narrow. 

Thus, behavioural ecologists have come to study all forms of behaviour, not just social behaviour.  

For example, optimal foraging theory is a major subfield within behavioural ecology (the successor field to sociobiology), but concerns feeding behaviour, which may be an entirely solitary, non-social activity. 

Indeed, even some aspects of an organism’s physiology (as distinct from behaviour) have come to be seen as within the purview of sociobiology (e.g. the evolution of the peacock’s tail). 

A Book Much Read About, But Rarely Actually Read 

‘Sociobiology: The New Synthesis’ was a massive tome, numbering almost 700 pages. 

As Wilson proudly proclaims in his glossary, it was: 

“Written with the broadest possible audience in mind and most of it can be read with full understanding by any intelligent person whether or not he or she has had any formal training in science” (p577). 

Unfortunately, however, the sheer size of the work alone was probably enough to deter most such readers long before they reached p577 where these words appear. 

Indeed, I suspect the very size of the book was a factor in explaining the almost universally hostile reception that it received among social scientists. 

In short, the book was so large that the vast majority of social scientists had neither the time nor the inclination to actually read it for themselves, especially since a cursory flick through its pages showed that the vast majority of them seemed to be concerned with the behaviour of species other than humans, and hence, as they saw it, of little relevance to their own work. 

Instead, therefore, their entire knowledge of sociobiology was filtered through to them via critiques of the approach authored by other social scientists, themselves mostly hostile to sociobiology, who presented a straw man caricature of what sociobiology actually represented. 

Indeed, the caricature of sociobiology presented by these authors is so distorted that, reading some of these critiques, one often gets the impression that many of the social scientists who took it upon themselves to write critiques of the book had not, in fact, bothered to read it for themselves either. 

Meanwhile, the fact that the field appeared so obviously misguided (as indeed it often was in the caricatured form presented in the critiques) gave most social scientists yet another reason not to bother wading through its 700 or so pages for themselves. 

As a result, among sociologists, psychologists, anthropologists, public intellectuals, and other such ‘professional damned fools’, as well as the wider semi-educated reading public, ‘Sociobiology: The New Synthesis’ became a book much read about – but rarely actually read (at least in full). 

As a consequence, as with other books falling into this category (e.g. the Bible and The Bell Curve), many myths have emerged regarding its contents, myths that are quickly contradicted when one actually takes the time to read the book for oneself. 

The Many Myths of Sociobiology 

Perhaps the foremost myth is that sociobiology was primarily a theory of human behaviour. In fact, as is revealed by even a cursory flick through the pages of Wilson’s book, sociobiology was, first and foremost, a theoretical approach to understanding animal behaviour. 

Indeed, Wilson’s decision to attempt to apply sociobiological theory to humans as well was, it seems, almost something of an afterthought, necessitated by his desire to provide a comprehensive overview of the behaviour of all social animals, humans included. 
 
This is connected to the second myth – namely, that sociobiology was Wilson’s own theory. In fact, rather than a single theory, sociobiology is better viewed as a particular approach to a field of study, the field in question being animal behaviour. 
 
Moreover, far from being Wilson’s own theory, the major advances in the understanding of animal behaviour that gave rise to what came to be referred to as ‘sociobiology’ were made in the main by biologists other than Wilson himself.  
 
Thus, it was William Hamilton who first formulated inclusive fitness theory (which came to be known as the theory of kin selection); John Maynard Smith who first introduced economic models and game theory into behavioural biology; George C Williams who was responsible for displacing crude group-selectionism in favour of a new focus on the gene itself as the principal unit of selection; while Robert Trivers was responsible for such theories as reciprocal altruism, parent-offspring conflict and differential parental investment theory. 
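For readers unfamiliar with inclusive fitness theory, Hamilton’s central result is usually summarized by a simple inequality (a standard textbook formulation, not a quotation from Wilson’s book): a gene for altruism can spread whenever

rB > C

where r is the coefficient of relatedness between actor and recipient, B is the reproductive benefit conferred on the recipient, and C is the reproductive cost borne by the actor. It is this logic that underlies the explanations of altruism towards kin, and of eusociality in the social insects, that run throughout Wilson’s book.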
 
Instead, Wilson’s key role was to bring the various strands of the emerging field together, give it a name and, in the process, take far more than his fair share of the resulting flak. 
 
Thus, far from being a maverick theory of a single individual, what came to be known as ‘sociobiology’ was, if not based on accepted biological theory at the time of publication, then at least based on biological theory that came to be recognised as mainstream within a few years of its publication. 
 
Controversy attached almost exclusively to the application of these same principles to explain human behaviour. 

Applying Sociobiology to Humans 

In respect of Wilson’s application of sociobiological theory to humans, misconceptions again abound. 

For example, it is often asserted that Wilson only extended his theory to apply to human behaviour in his infamous final chapter, entitled, ‘Man: From Sociobiology to Sociology’. 

Actually, however, Wilson had discussed the possible application of sociobiological theory to humans several times in earlier chapters. 
 
Often, this was at the end of a chapter. For example, his chapter on “Roles and Castes” closes with a discussion of “Roles in Human Societies” (p312-3). Similarly, the final subsection of his chapter on “Aggression” is titled “Human Aggression” (p 254-5). 
 
Other times, however, humans get a mention in mid-chapter, as in Chapter Fifteen, which is titled ‘Sex and Society’, where Wilson discusses the association between adultery, cuckoldry and violent retribution in human societies, and rightly prophesies that “the implications for the study of humans” of Trivers’ theory of differential parental investment “are potentially great” (p327). 
 
Another misconception is that, while he may not have founded the approach that came to be known as sociobiology, it was Wilson who courted controversy, and bore most of the flak, because he was the first biologist brave, foolish, ambitious, farsighted or naïve enough to attempt to apply sociobiological theory to humans. 
 
Actually, however, this is untrue. For example, a large part of Robert Trivers’ seminal paper on reciprocal altruism published in 1971 dealt with reciprocal altruism in humans and with what are presumably specifically human moral emotions, such as guilt, gratitude, friendship and moralistic anger (Trivers 1971). 
 
However, Trivers’ work was published in the Journal of Theoretical Biology and therefore presumably never came to the attention of any of the leftist social scientists largely responsible for the furore over sociobiology, who, being of the opinion that biological theory was wholly irrelevant to human behaviour, and hence to their own field, were unlikely to be regular readers of the journal in question. 

Yet this is perhaps unfortunate since Trivers, unlike Wilson, had impeccable left-wing credentials, which might have deflected some of the overtly politicized criticism (and pitchers of water) that later came Wilson’s way. 

Reductionism vs Holism

Among the most familiar charges levelled against Wilson by his opponents within the social sciences, and by contemporary opponents of sociobiology and evolutionary psychology, alongside the familiar and time-worn charges of ‘biological determinism’ and ‘genetic determinism’, is that sociobiology is inherently reductionist, something which is, they imply, very much a bad thing. 
 
It is therefore something of a surprise to find among the opening pages of ‘Sociobiology: The New Synthesis’, Wilson defending “holism”, as represented, in Wilson’s view, by the field of sociobiology itself, as against what he terms “the triumphant reductionism of molecular biology” (p7). 
 
This passage is particularly surprising for anyone who has read Wilson’s more recent work Consilience: The Unity of Knowledge, where he launches a trenchant, unapologetic and, in my view, wholly convincing defence of “reductionism” as representing, not only “the cutting edge of science… breaking down nature into its constituent components” but moreover “the primary and essential activity of science” and hence at the very heart of the scientific method (Consilience: p59). 

Thus, in a quotable aphorism, Wilson concludes: 

“The love of complexity without reductionism makes art; the love of complexity with reductionism makes science” (Consilience: p59). 

Of course, whether ‘reductionism’ is a good or bad thing, as well as the extent to which sociobiology can be considered ‘reductionist’, ultimately depends on precisely how we define ‘reductionism’. Moreover, ‘reductionism’, however defined, is surely a matter of degree. 

Thus, philosopher Daniel Dennett, in his book Darwin’s Dangerous Idea, distinguishes what he calls “greedy reductionism”, which attempts to oversimplify the world (e.g. Skinnerian behaviourism, which seeks to explain all behaviours in terms of conditioning), from “good reductionism”, which attempts to understand it in all its complexity (i.e. good science).

On the other hand, ‘holistic’ is a word most often employed in defence of wholly unscientific approaches, such as so-called holistic medicine, and, for me, the word itself is almost always something of a red flag. 

Thus, the opponents of sociobiology, in using the term ‘reductionist’ as a criticism, are rejecting the whole notion of a scientific approach to understanding human behaviour. In its place, they offer only a vague, wishy-washy, untestable and frankly anti-scientific obscurantism, whereby any attempt to explain behaviour in terms of causes and effects is dismissed as reductionism and determinism

Yet explaining behaviour, whether the behaviour of organisms, atoms, molecules or chemical substances, in terms of causes and effects is the very essence, if not the very definition, of science. 

In other words, determinism (i.e. the belief that events are determined by causes) is not so much a finding of science as its basic underlying assumption.[4]

Yet Wilson’s own championing of “holism” in ‘Sociobiology: The New Synthesis’ can be made sense of in its historical context. 

In other words, just as Wilson’s defence of reductionism in ‘Consilience’ was a response to the so-called sociobiology debates of the 1970s and 80s, in which the charge of ‘reductionism’ was wielded indiscriminately by the opponents of sociobiology, so Wilson’s defence of holism in ‘Sociobiology: The New Synthesis’ itself must be understood in the context, not of the controversy that this work itself provoked (which Wilson was, at the time, unable to foresee), but rather of a controversy that preceded its publication. 

In particular, certain molecular biologists at Harvard, and perhaps elsewhere, led by the brilliant but abrasive James Watson, had come to the opinion that molecular biology was to be the only biology, and that traditional biology, fieldwork and experiments were positively passé. 

This controversy is rather less familiar to anyone outside of Harvard University’s biology department than the sociobiology debates, which not only enlisted many academics from outside of biology (e.g. psychologists, sociologists, anthropologists and even philosophers), but also spilled over into the popular media and even became politicized. 

However, within the ivory towers of Harvard University’s department of biology, this controversy seems to have been fought just as fiercely.[5]

As is clear from ‘Sociobiology: The New Synthesis’, Wilson’s own envisaged “holism” was far from the wishy-washy obscurantism that one usually associates with those championing a ‘holistic approach’, and was thoroughly scientific. 

Thus, in On Human Nature, Wilson’s follow-up book to ‘Sociobiology: The New Synthesis’, in which he first concerned himself specifically with the application of sociobiological theory to humans, Wilson gives perhaps his most balanced description of the relative importance of reductionism and holism, and indeed of the nature of science, writing: 

“Raw reduction is only half the scientific process… the remainder consist[ing] of the reconstruction of complexity by an expanding synthesis under the control of laws newly demonstrated by analysis… reveal[ing] the existence of novel emergent phenomena” (On Human Nature: p11). 

It is therefore in this sense, and in contrast to the reductionism of molecular biology, that Wilson saw sociobiology as ‘holistic’. 

Group Selection? 

One of the key theoretical breakthroughs that formed the basis for what came to be known as sociobiology was the discrediting of group-selectionism, largely thanks to the work of George C Williams, whose ideas were later popularized by Richard Dawkins in The Selfish Gene (which I have reviewed here).[6] 
 
A focus on the individual, or even the gene, as the primary, or indeed the only, unit of selection came to be viewed as an integral component of the sociobiological worldview. Indeed, it was once seriously debated in the pages of the newsletter of the European Sociobiological Society whether one could truly be both a ‘sociobiologist’ and a ‘group-selectionist’ (Price 1996). 

It is therefore something of a surprise to discover that the author of ‘Sociobiology: The New Synthesis’, responsible for christening the emerging field, was himself something of a group-selectionist. 

Wilson has recently ‘come out’ as a group-selectionist by co-authoring a paper concerning the evolution of eusociality in ants (Nowak et al 2010). However, reading ‘Sociobiology: The New Synthesis’ leads one to suspect that Wilson had been a closet, or indeed a semi-out, group-selectionist all along. 

Certainly, Wilson repeats the familiar arguments against group-selectionism popularised by Richard Dawkins in The Selfish Gene, but first articulated by George C Williams in Adaptation and Natural Selection (see p106-7). 

However, although he offers no rebuttal to these arguments, this does not prevent Wilson from invoking, or at least proposing, group-selectionist explanations for behaviours elsewhere in the remainder of the book (e.g. p275). 

Moreover, Wilson concludes: 

“Group selection and higher levels of organization, however intuitively implausible… are at least theoretically possible under a wide range of conditions” (p30). 

 
Thus, it is clear that, unlike, say, Richard Dawkins, Wilson did not view group-selectionism as a terminally discredited theory. 

Man: From Sociobiology to Sociology… and Perhaps Evolutionary Psychology 

What then of Wilson’s final chapter, entitled ‘Man – From Sociobiology to Sociology’? 

It was, of course, the only one to focus exclusively on humans, and, of course, the chapter that attracted by far the lion’s share of the outrage and controversy that soon ensued. 

Yet, reading it today, over forty years after it was first written, it is, I feel, rather disappointing. 

Let me be clear, I went in very much wanting to like it. 

After all, Wilson’s general approach was basically right. Humans, like all other organisms, have evolved through a process of natural selection. Therefore, their behaviour, no less than their physiology, or the physiology or behaviour of non-human organisms, must be understood in the light of this fact. 

Moreover, not only were almost all of the criticisms levelled at Wilson misguided, wrongheaded and unfair, but they often bordered upon persecution as well.

The most famous example of this leftist witch-hunting came when, during a speech at the annual meeting of the American Association for the Advancement of Science, Wilson was drenched with a pitcher of water by leftist demonstrators. 

However, this was far from an isolated event. For example, an illustration from the book The Moral Animal shows a student placard advising protesters to “bring noisemakers” in order to deliberately disrupt one of Wilson’s speaking engagements (The Moral Animal: illustration p341). 

In short, Wilson seems to have been an early victim of what would today be called ‘deplatforming’ and ‘cancel culture’, phenomena that long predated the coining of these terms. 

Thus, one is tempted to see Wilson in the role of a kind of modern Galileo, being, like Galileo, persecuted for his scientific theories, which, like those of Galileo, turned out to be broadly correct. 

Moreover, Wilson’s views were, in some respects, analogous to those of Galileo. Both disputed prevailing orthodoxies in such a way as to challenge the view that humans were somehow unique or at the centre of things, Galileo by suggesting the earth was not at the centre of the solar system, and Wilson by showing that human behaviour was not all that different from that of other animals.[7]

Unfortunately, however, the actual substance of Wilson’s final chapter is rather dated.

Inevitably, any science book will be dated after forty years. However, while this is also true of the book as a whole, it seems especially true of this last chapter, which bears little resemblance to the contents of a modern textbook on evolutionary psychology. 

This is perhaps inevitable. While the application of sociobiological theory to understanding and explaining the behaviour of other species was already well underway, the application of sociobiological theory to humans was, the pioneering work of Robert Trivers on reciprocal altruism notwithstanding, still very much in its infancy. 

Yet, while the substance of the chapter is dated, the general approach was spot on.

Indeed, even some of the advances claimed by evolutionary psychologists as their own were actually anticipated by Wilson. 

Thus, Wilson recognises:

“One of the key questions [in human sociobiology] is to what extent the biogram represents an adaptation to modern cultural life and to what extent it is a phylogenetic vestige” (p458). 

He thus anticipates the key evolutionary psychological concept of the Environment of Evolutionary Adaptedness or EEA, whereby it is theorized that humans are evolutionarily adapted, not to the modern post-industrial societies in which so many of us today find ourselves, but rather to the ancestral environments in which our behaviours first evolved.

Wilson proposes to examine human behavior from the disinterested perspective of “a zoologist from another planet”, and concludes: 

“In this macroscopic view the humanities and social sciences shrink to specialized branches of biology” (p547). 

Thus, for Wilson: 

“Sociology and the other social sciences, as well as the humanities, are the last branches of biology waiting to be included in the Modern Synthesis” (p4). 

Indeed, the idea that the behaviour of a single species is alone exempt from the principles of general biology, to such an extent that it must be studied in entirely different university faculties by entirely different researchers, the vast majority with little or no knowledge of general biology, nor of the methods and theory of researchers studying the behaviour of all other organisms, reflects an indefensible anthropocentrism. 

However, despite the controversy these pronouncements provoked, Wilson was actually quite measured in his predictions and even urged caution, writing: 

“Whether the social sciences can be truly biologicized in this fashion remains to be seen” (p4). 

The evidence of the ensuing forty years suggests, in my view, that the social sciences can indeed be, and are well on the way to being, as Wilson puts it, ‘biologicized’. The only stumbling block has proven to be social scientists themselves, who have, in some cases, proven resistant. 

‘Vaunting Ambition’? 

Yet, despite these words of caution, the scale of Wilson’s intellectual ambition can hardly be exaggerated. 

First, he sought to synthesize the entire field of animal behavior under the rubric of sociobiology and in the process produce the ‘New Synthesis’ promised in the subtitle, by analogy with the Modern Synthesis of Darwinian evolution and Mendelian genetics that forms the basis for the entire field of modern biology. 

Then, in a final chapter, apparently as almost something of an afterthought, he decided to add human behaviour into his synthesis as well. 

This meant, not just providing a new foundation for a single subfield within biology (i.e. animal behaviour), but for several whole disciplines formerly virtually unconnected to biology – e.g. psychology, cultural anthropology, sociology, economics. 

Oh yeah… and moral philosophy and perhaps epistemology too. I forgot to mention that. 

From Sociobiology to… Philosophy?

Indeed, Wilson’s forays into philosophy proved even more controversial than those into social science. Though limited to a few paragraphs in his first and last chapters, they were among the most widely quoted, and critiqued, passages in the whole book. 

Not only were opponents of sociobiology (and philosophers) predictably indignant, but even those few researchers bravely taking up the sociobiological gauntlet, and even applying it to humans, remained mostly skeptical. 

In proposing to reconstruct moral philosophy on the basis of biology, Wilson was widely accused of committing what philosophers call the naturalistic fallacy or appeal to nature fallacy. 

This refers to the principle that, if a behaviour is natural, this does not necessarily make it right, any more than the naturalness of dying from tuberculosis makes it morally wrong to treat the disease with such ‘unnatural’ interventions as vaccination or antibiotics. 

Evolutionary psychologists have generally been only too happy to reiterate the sacrosanct inviolability of the fact-value chasm, not least because it allowed them to investigate the evolutionary function of such morally dubious, or indeed morally reprehensible, behaviours as sexual infidelity, rape, war and child abuse, while denying that they are thereby providing a justification for the behaviours in question. 

Yet this raises the question: if we cannot derive values from facts, whence can values be arrived at? Can they be derived only from other values? If so, then whence are our ultimate moral values, from which all others are derived, themselves ultimately derived? Must they be simply taken on faith? 

Wilson has recently controversially argued, in his excellent Consilience: The Unity of Knowledge, that, in this context: 

“The posing of the naturalistic fallacy is itself a fallacy” (Consilience: p273). 

Leaving aside this controversial claim, it is clear that his point in ‘Sociobiology’ is narrower. 

In short, Wilson seems to be arguing that, in contemplating the appropriateness of different theories of prescriptive ethics (e.g. utilitarianism, Kantian deontology), moral philosophers consult “the emotional control centers in the hypothalamus and limbic system of the brain” (p3). 

Yet these same moral philosophers take these emotions largely for granted. They treat the brain as a “black box” rather than a biological entity the nature of which is itself the subject of scientific study (p562). 

Yet, despite the criticism Wilson’s suggestion provoked among many philosophers, the philosophical implications of recognising that moral intuitions are themselves a product of the evolutionary process have since become a serious and active area of philosophical enquiry. Indeed, among the leading pioneers in this field has been the philosopher of biology Michael Ruse, not least in collaboration with Wilson himself (Ruse & Wilson 1986). 

Yet if moral philosophy must be rethought in the light of biology and the evolved nature of our psychology, then the same is also surely true of arguably the other main subfield of contemporary philosophy – namely epistemology.  

Yet Wilson’s comments regarding the relevance of sociobiological theory to epistemology are even briefer than the few sentences he devotes in his opening and closing chapters to moral philosophy, being restricted to less than a sentence – a mere five-word parenthesis in a sentence primarily discussing moral philosophy and philosophers (p3). 

However, what humans are capable of knowing is, like morality, ultimately a product of the human brain – a brain which is itself a biological entity that evolved through a process of natural selection. 

The brain, then, is designed not for discovering ‘truth’, in some abstract, philosophical sense, but rather for maximizing the reproductive success of the organism whose behaviour it controls and directs. 

Of course, for most purposes, natural selection would likely favour psychological mechanisms that produce, if not ‘truth’, then at least a reliable model of the world as it actually operates, so that an organism can modify its behaviour in accordance with this model, in order to produce outcomes that maximize its inclusive fitness under these conditions. 

However, it is at least possible that there are certain phenomena that our brains are, through the very nature of their wiring and construction, incapable of fully understanding (e.g. quantum mechanics or the hard problem of consciousness), simply because such understanding was of no utility in helping our ancestors to survive and reproduce in ancestral environments. 

The importance of evolutionary theory to our understanding of epistemology and the limits of human knowledge is, together with the relevance of evolutionary theory to moral philosophy, a theme explored in philosopher Michael Ruse’s book, Taking Darwin Seriously, and is also the principal theme of such recent works as The Case Against Reality: Why Evolution Hid the Truth from Our Eyes by Donald D Hoffman. 

Dated? 

Is ‘Sociobiology: The New Synthesis’ worth reading today? At almost 700 pages, it represents no idle investment of time. 

Wilson is a wonderful writer even in a purely literary sense, and has the unusual honour, for a working scientist, of being a two-time Pulitzer Prize winner. However, apart from a few provocative sections in the opening and closing chapters, ‘Sociobiology: The New Synthesis’ is largely written in the form of a student textbook, and is not a book one is likely to read on account of its literary merits alone. 

As a textbook, Sociobiology is obviously dated. Indeed, the extent to which it has dated is an indication of the success of the research programme it helped inspire. 

Thus, one of the hallmarks of true science is the speed at which cutting-edge work becomes obsolete.  

Religious believers still cite holy books written millennia ago, while adherents of pseudo-sciences like psychoanalysis and Marxism still pore over the words of Freud and Marx. 

However, the scientific method is a cumulative process based on falsificationism and is moreover no respecter of persons.

Scientific works become obsolete almost as fast as they are published. Modern biologists only rarely cite Darwin. 

If you want a textbook summary of the latest research in sociobiology, I would instead recommend the latest edition of Animal Behavior: An Evolutionary Approach or An Introduction to Behavioral Ecology; or, if your primary interest is human behavior, the latest edition of David Buss’s Evolutionary Psychology: The New Science of the Mind. 

The continued value of ‘Sociobiology: The New Synthesis’ lies in the field, not of science, but of the history of science. In this field, it will remain a landmark work in the history of human thought, for both the controversy, and the pioneering research, that followed in its wake. 

Endnotes

[1] Actually, ‘evolutionary psychology’ is not quite a synonym for ‘sociobiology’. Whereas the latter field sought to understand the behaviour of all animals, if not all organisms, the term ‘evolutionary psychology’ is usually employed only in relation to the study of human behaviour. It would be more accurate, then, to say ‘evolutionary psychology’ is a synonym, or euphemism, for ‘human sociobiology’.

[2] Whereas behavioural geneticists focus on heritable differences between individuals within a single population, evolutionary psychologists largely focus on behavioural adaptations that are presumed to be pan-human and universal. Indeed, it is often argued that there is likely to be minimal heritable variation in human psychological adaptations, precisely because such adaptations have been subject to such strong selection pressure as to weed out suboptimal variation, such that only the optimal genotype remains. On this view, substantial heritable variation is found only in respect of traits that have not been subject to intense selection pressure (see Tooby & Cosmides 1990). However, this fails to take into account such phenomena as frequency-dependent selection and other forms of polymorphism, whereby different individuals within a breeding population adopt, for example, quite different reproductive strategies. It is also difficult to reconcile with the finding of behavioural geneticists that there is substantial heritable variation in intelligence as between individuals, despite the fact that the expansion of human brain-size over the course of evolution suggests that intelligence has been subject to strong selection pressures.

[3] For example, in 1997, the journal Ethology and Sociobiology, which had by then become, and remains, the leading scholarly journal in the field of what would then have been termed ‘human sociobiology’, and now usually goes by the name of ‘evolutionary psychology’, changed its name to Evolution and Human Behavior.

[4] An irony is that, while science is built on the assumption of determinism, namely the assumption that observed phenomena have causes that can be discovered by controlled experimentation, one of the findings of science is that, at least at the quantum level, determinism is actually not true. This is among the reasons why quantum theory is paradoxically popular among people who don’t really like science (and who, like virtually everyone else, don’t really understand quantum theory). Thus, Richard Dawkins has memorably parodied quantum mysticism as based on the reasoning that: 

“Quantum mechanics, that brilliantly successful flagship theory of modern science, is deeply mysterious and hard to understand. Eastern mystics have always been deeply mysterious and hard to understand. Therefore, Eastern mystics must have been talking about quantum theory all along.”

[5] Indeed, Wilson and Watson seem to have shared a deep personal animosity for one another, Wilson once describing how he had considered Watson, with whom he later reconciled, “the most unpleasant human being I had ever met” – see Wilson’s autobiography, Naturalist. A student of Watson’s describes how, when Wilson was granted tenure at Harvard before Watson:

“It was a big, big day in our corridor” as “Watson could be heard coming up the stairwell… shouting ‘fuck, fuck, fuck’” (Watson and DNA: p98). 

Wilson’s description of Watson’s personality in his memoir is interesting in the light of the later controversy regarding the latter’s comments on the economic implications of racial differences in intelligence, with Wilson writing: 

“Watson, having risen to historic fame at an early age, became the Caligula of biology. He was given license to say anything that came to his mind and expect to be taken seriously. And unfortunately, he did so, with a casual and brutal offhandedness.” 

In contrast, geneticist David Reich suggests that Watson’s abrasive personality predated his scientific discoveries and may even have been partly responsible for them, writing: 

“His obstreperousness may have been important to his success as a scientist” (Who We Are and How We Got Here: p263).

[6] Group selection has recently, however, enjoyed something of a resurgence in the form of multi-level selection theory. Wilson himself is very much a supporter of this trend.

[7] Of course, it goes without saying that the persecution to which Wilson was subjected was as nothing compared to that to which Galileo was subjected (see my post, A Modern McCarthyism in Our Midst). 

References 

Nowak et al (2010) ‘The evolution of eusociality’, Nature 466:1057–1062. 

Price (1996) ‘In Defence of Group Selection’, European Sociobiological Society Newsletter, No. 42, October 1996. 

Ruse & Wilson (1986) ‘Moral Philosophy as Applied Science’, Philosophy 61(236):173–192. 

Tooby & Cosmides (1990) ‘On the Universality of Human Nature and the Uniqueness of the Individual: The Role of Genetics and Adaptation’, Journal of Personality 58(1):17–67. 

Trivers (1971) ‘The evolution of reciprocal altruism’, Quarterly Review of Biology 46:35–57. 

Donald Symons’ ‘The Evolution of Human Sexuality’: A Founding Work of Modern Evolutionary Psychology

The Evolution of Human Sexuality by Donald Symons (Oxford University Press 1980). 

Research over the last four decades in the field that has come to be known as evolutionary psychology has focused disproportionately on mating behaviour. Geoffrey Miller (1998) has even argued that it is the theory of sexual selection rather than that of natural selection which, in practice, guides most research in this field. 

This does not reflect merely the prurience of researchers. Rather, given that reproductive success is the ultimate currency of natural selection, mating behaviour is, perhaps along with parental investment, the form of behaviour most directly subject to selective pressures.

Almost all of this research traces its ancestry ultimately to Donald Symons’ ‘The Evolution of Human Sexuality’. Indeed, much of it was explicitly designed to test claims and predictions formulated by Symons himself in this very book.

Age Preferences 

For example, in his discussion of the age at which women are perceived as most attractive by males, Symons formulated two alternative hypotheses. 

First, if human evolutionary history were characterized by fleeting one-off sexual encounters (i.e. one-night stands, casual sex and hook-ups), then, he reasoned, men would have evolved to find women most attractive when the latter are at the age of their maximum fertility. 

For women, fertility is said to peak around the mid-twenties since, although women still in their teens have high pregnancy rates, they also experience a greater risk of birth complications. 

However, if human evolutionary history were characterized instead by long-term pair bonds, then men would have evolved to be maximally attracted to somewhat younger women (i.e. those at the beginning of their reproductive careers), so that, by entering a long-term relationship with the woman at this time, a male is potentially able to monopolize her entire lifetime reproductive output (p189). 

More specifically, males would have evolved to prefer females, not of maximal fertility, but rather of maximal reproductive value, a term borrowed from demography and population genetics which refers to a person’s expected future reproductive output given their current age. Unlike fertility, a woman’s reproductive value peaks around her mid- to late-teens.  
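For readers curious about the formal definition, reproductive value has a standard expression in demography (this is the textbook formulation due to Fisher, not something quoted from Symons): in the simplest case of a stationary population, the reproductive value of a female of age a is her expected future offspring production, conditional on her having survived to that age,

v(a) = \sum_{x \ge a} \frac{l_x}{l_a} \, m_x

where l_x is the probability of surviving from birth to age x and m_x is the expected number of offspring produced at age x. Because this sum counts all reproduction still to come, it peaks at the start of the reproductive career, whereas fertility (m_x itself) peaks later – which is precisely the distinction Symons exploits.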

On the basis of largely anecdotal evidence, Symons concludes that human males have evolved to be most attracted to females of maximal reproductive value rather than maximal fertility.  

Subsequent research designed to test between Symons’s rival hypotheses has largely confirmed his speculative hunch that it is younger females in their mid- to late-teens who are perceived by males as most attractive (e.g. Kenrick and Keefe 1992). 

Why Average is Attractive 

Symons is also credited as the first person to recognize that a major criterion of attractiveness is, paradoxically, averageness, or at least the first to recognize the significance of, and possible evolutionary explanation for, this discovery.[1] Thus, Symons argues that: 

“[Although] health and status are unusual in that there is no such thing as being too healthy or too high ranking… with respect to most anatomical traits, natural selection produces the population mean” (p194). 

On this view, deviations from the population mean are interpreted as the result of deleterious mutations or developmental instability, and hence bad genes.[2]

Concealed Ovulation

Support has even emerged for some of Symons’ more speculative hunches. 
 
For example, one of Symons’ two proposed scenarios for the evolution of concealed ovulation, in which he professed “little confidence” (p141), was that this had evolved so as to impede male mate-guarding and enable females to select a biological father for their offspring different from their husbands (p139-141). 
 
Consistent with this theory, studies have found that women’s mate preferences vary throughout their menstrual cycle in a manner compatible with a so-called ‘dual mating strategy’, preferring males evidencing a willingness to invest in offspring at most times, but, when at their most fertile, preferring characteristics indicative of genetic quality (e.g. Penton-Voak et al 1999). 

Meanwhile, a questionnaire distributed via a women’s magazine found that women engaged in extra-marital affairs do indeed report engaging in ‘extra-pair copulations’ (EPCs) at times likely to coincide with ovulation (Bellis and Baker 1990).[3]

The Myth of Female Choice

Interestingly, Symons even anticipated some of the mistakes evolutionary psychologists would be led into. 
 
Thus, he warns that researchers in modern western societies may be prone to overestimate the importance of female choice as a factor in human evolution, because, in their own societies, this is a major factor, if not the major factor, in determining marriage and sexual and romantic relationships (p203).[4]
 
However, in ancestral environments (i.e. what evolutionary psychologists now call the Environment of Evolutionary Adaptedness or EEA) arranged marriages were likely the norm, as they are in most premodern cultures around the world today (p168).[5] 
 
Thus, Symons concludes: 

“There is no evidence that any features of human anatomy were produced by intersexual selection [i.e. female choice]. Human physical sex differences are explained most parsimoniously as the outcome of intrasexual selection (the result of male-male competition)” (p203). 

Thus, human males have no obvious analogue of the peacock’s tail, but they do have substantially greater levels of upper-body strength and violent aggression as compared to females.[6]
 
This was a warning almost entirely ignored by subsequent generations of researchers before being forcefully reiterated by Puts (2010)

Homosexuality as a ‘Test-Case 

An idea of the importance of Symons’s work can be ascertained by comparing it with contemporaneous works addressing the same subject-matter. 
 
Edward O Wilson’s  On Human Nature was first published in 1978, only a year before Symons’s ‘The Evolution of Human Sexuality’. 

However, whereas Symons’s book set out much of the theoretical basis for what would become the modern science of evolutionary psychology, Wilson’s chapter on “Sex” has dated rather less well, and a large portion of the chapter is devoted to introducing a now faintly embarrassing theory of the evolution of homosexuality which has subsequently received no empirical support (see Bobrow & Bailey 2001).[7] 
 
In contrast, Symons’s own treatment of homosexuality is innovative. It is also characteristic of his whole approach and illustrates why ‘The Evolution of Human Sexuality‘ has been described by David Buss as “the first major treatise on evolutionary psychology proper” (Handbook of Evolutionary Psychology: p251). 
 
Rather than viewing all behaviours as necessarily adaptive (as critics of evolutionary psychology, such as Stephen Jay Gould, have often accused sociobiologists of doing),[8] Symons instead focuses on admittedly non-adaptive (or, indeed, even maladaptive) behaviours, not because he believes them to be adaptive, but rather because they provide a unique window on the nature of human sexuality. 
 
Accordingly, Symons does not concern himself with how homosexuality evolved, implicitly viewing it as a rare and maladaptive malfunctioning of normal sexuality. Yet the behaviour of homosexuals is of interest to Symons because it provides a window on the nature of male and female sexuality as it manifests itself when freed from the constraints imposed by the conflicting desires of the opposite sex. 
 
On this view, the rampant promiscuity manifested by many homosexual men (e.g. ‘cruising’ and ‘cottaging’ in bathhouses and public lavatories, or Grindr hookups) reflects the universal male desire for sexual variety when freed from the constraints imposed by the conflicting desires of women. 

This desire for sexual variety is, of course, reproductively unproductive among homosexual men themselves. However, it evolved because it enhanced the reproductive success of heterosexual men by motivating them to attempt to mate with multiple females and thereby father multiple offspring. 
 
In contrast, burdened with pregnancy and lactation, women’s potential reproductive rate is more tightly constrained than that of men. They therefore have little to gain reproductively by mating with multiple males, since they can usually gestate, and nurse, only one offspring at a time. 
 
It is therefore notable that, among lesbians, there is little evidence of the sort of rampant promiscuity common among gay men. Instead, lesbian relationships seem to be characterized by much the same features as heterosexual coupling (i.e. long-term pair-bonds).
 
The similarity of heterosexual coupling to that of lesbians, and the striking contrast with that of male homosexuals, suggests that it is women, not men, who exert decisive influence in dictating the terms of heterosexual coupling.[9] 
 
Thus, Symons reports:  

“There is enormous cross-cultural variation in sexual customs and laws and the extent of male control, yet nowhere in the world do heterosexual relations begin to approximate those typical of homosexual men. This suggests that, in addition to custom and law, heterosexual relations are structured to a substantial degree by the nature and interests of the human female” (p300). 

This conclusion is, of course, diametrically opposite to the feminist contention that it is men who dictate the terms of heterosexual coupling and for whose exclusive benefit such relationships are structured. 
 
It also suggests, again contrary to feminist assumptions of male dominance, that most men are ultimately frustrated in achieving their sexual ambitions to a far greater extent than are most women. 

Thus, Symons concludes: 

“The desire for sexual variety dooms most human males to a lifetime of unfulfilled longing” (p228). 

Here, Symons anticipates Camille Paglia who was later to famously observe: 

“Men know they are sexual exiles. They wander the earth seeking satisfaction, craving and despising, never content. There is nothing in that anguished motion for women to envy” (Sexual Personae: p19). 

Criticisms of Symons’s Use of Homosexuality as a Test-Case

There is, however, a potential problem with Symons’s use of homosexual behaviour as a window onto the nature of male and female sexuality as they manifest themselves when freed from the conflicting desires of the opposite sex. The whole analysis rests on a questionable premise – namely that homosexuals are, their preference for same-sex partners aside, otherwise similar, if not identical, to heterosexuals of their own sex in their psychology and sexuality. 
 
Symons defends this assumption, arguing: 

“There is no reason to suppose that homosexuals differ systematically from heterosexuals in any way other than their sexual object choice” (p292). 

Indeed, in some respects, Symons seems to see even “sexual object choice” as analogous among homosexuals and heterosexuals of the same sex. 
 
For example, he observes that, unlike women, both homosexual and heterosexual men tend to evaluate prospective mates primarily on the basis of their physical appearance and youthfulness (p295). 

Thus, in contrast to the failure of periodicals featuring male nudes to attract a substantial female audience (see below), Symons notes the existence of a market for gay pornography parallel in most respects to heterosexual porn – i.e. featuring young, physically attractive models in various states of undress (p301). 
 
This, of course, contradicts the feminist notion that men are led to ‘objectify’ women only due to the sexualized portrayal of the latter in the media. 
 
Instead, Symons concludes: 

“That homosexual men are at least as likely as heterosexual men to be interested in pornography, cosmetic qualities and youth seems to me to imply that these interests are no more the result of advertising than adultery and alcohol consumption are the result of country and western music” (p304).[10] 

However, this assumption of the fundamental similarity of heterosexual and homosexual male psychology has been challenged by David Buller in his book, Adapting Minds: Evolutionary Psychology and the Persistent Quest for Human Nature
 
Buller cites evidence that male homosexuals are ‘feminized’ in many aspects of their behaviour.

Thus, one of the few consistent early correlates of homosexuality is gender non-conformity in childhood and some evidence (e.g. digit ratios, the fraternal birth order effect) has been interpreted to suggest that the level of prenatal exposure to masculinizing androgens (e.g. testosterone) in utero affects sexual orientation.
 
As Buller notes, although gay men seem, like heterosexual men, to prefer youthful sexual partners, they also appear to prefer sexual partners who are, in other respects, highly masculine.[11]

Thus, Buller observes: 

“The males featured in gay men’s magazines embody very masculine, muscular physiques, not pseudo-feminine physiques” (Adapting Minds: p227).

Indeed, the models in such magazines seem in most respects similar in physical appearance to the male models, pop stars, actors and other ‘sex symbols’ and celebrities fantasized about by heterosexual women and girls.
 
How then are we to resolve this apparent paradox? 
 
One possible explanation is that some aspects of the psychology of male homosexuals are feminized but not others – perhaps because different parts of the brain are formed at different stages of prenatal development, at which stages the levels of masculinizing androgens in the womb may vary. 
 
Indeed, there is even some evidence that homosexual males may be hyper-masculinized in some aspects of their physiology.

For example, it has been found that homosexual males report larger penis-sizes than heterosexual men (Bogaert & Hershberger 1999). 
 
This, researchers Glenn Wilson and Qazi Rahman propose, may be because: 

“If it is supposed that the barriers against androgens with respect to certain brain structures (notably those concerned with homosexuality) lead to increased secretion in an effort to break through, or some sort of accumulation elsewhere… then there may be excess testosterone left in other departments” (Born Gay: The Psychobiology of Sex Orientation: p80). 

Another possibility is that male homosexuals actually lie midway between heterosexual men and women in their degree of masculinization.  

On this view, homosexual men come across as relatively feminine only because we naturally tend to compare them to other men (i.e. heterosexual men). However, as compared to women, they may be relatively masculine, as reflected in the male-typical aspects of their sexuality focused upon by Symons. 
 
Interestingly, this latter interpretation suggests the slightly disturbing possibility that, freed from the restraints imposed by women, heterosexual men would be even more indiscriminately promiscuous than their homosexual counterparts.

Evidence consistent with this interpretation is provided by one study from the 1980s which found that, when approached by a female stranger (also a student), on a University campus, with a request to go to bed with them, fully 72% of male students agreed (Clark and Hatfield 1989). 

In contrast, in the same study, not a single one of the 96 females approached by male strangers with the same request on the same university campus agreed to go to bed with the male stranger.

Yet what percentage of the female students subsequently sued the university for sexual harassment was not reported.

Pornography as a “Natural Experiment

For Symons, fantasy represents another window onto sexual and romantic desires. Like homosexuality, fantasy is, by its very nature, unconstrained by the conflicting desires of the opposite sex (or indeed by anything other than the imagination of the fantasist). 

Symons later collaborated in an investigation into sexual fantasy by means of a questionnaire (Ellis and Symons 1990). 

However, in the present work, he investigates fantasy indirectly by focusing on what he calls “the natural experiment of commercial periodical publishing” – i.e. pornographic magazines (p182). 
 
In many respects, this approach is preferable to a survey because, even in an anonymous questionnaire, individuals may be less than honest when dealing with a sensitive topic such as their sexual fantasies. On the other hand, they are unlikely to regularly spend money on a magazine unless they are genuinely attracted by its contents. 
 
Before the internet age, softcore pornographic magazines, largely featuring female nudes, commanded sizeable circulations. However, their readership (if indeed ‘readership’ is the right word, since there was typically little reading involved) was almost exclusively male. 
 
In contrast, there was little or no female audience for magazines containing pictures of naked males. Instead, magazines marketed towards women (e.g. fashion magazines) contain, mostly, pictures of other women. 
 
Indeed, when, in the 1970s, attempts were made, in the misguided name of feminism and ‘women’s liberation’, to market magazines featuring male nudes to a female readership, the results were telling. One such title, Viva, abandoned publishing male nudes after just a few years due to lack of interest or demand, and went bust a few years after that, while the other, Playgirl, although it did not entirely abandon male nudes, was notorious, as a consequence, for attracting a readership composed in large part of homosexual men. 
 
Symons thus concludes forcefully and persuasively: 

“The notion must be abandoned that women are simply repressed men waiting to be liberated” (p183). 

Indeed, though it has been loudly and enthusiastically co-opted by feminists, this view of women, and of female sexuality – namely women as “repressed men waiting to be liberated” – represents an obviously quintessentially male viewpoint. 

Indeed, taken to extremes, it has even been used as a justification for rape.

Thus, the curious, sub-Freudian notion that female rape victims actually secretly enjoy being raped seems to rest ultimately on the assumption that female sexuality is fundamentally the same as that of men (i.e. indiscriminately enjoying promiscuous sex) and that it is only women’s sexual ‘repression’ that prevents them from admitting as much.

Romance Literature 

Unfortunately, however, there is a notable omission in Symons’s discussion of pornography as a window into male sexuality – namely, he omits to consider whether there exists any parallel artistic genre that offers equivalent insight into the female psyche. 
 
Later writers on the topic have argued that romance novels (e.g. Mills and Boon, Jane Austen), whose audience is as overwhelmingly female as pornography’s is male, represent the female equivalent of pornography, and that analysis of the content of such works provides insights into female mate preferences parallel to those provided into male psychology by pornography (e.g. Kruger et al 2003; Salmon 2004; see also Warrior Lovers: Erotic Fiction, Evolution and Female Sexuality, co-authored by Symons himself). 

Female Orgasm as Non-Adaptive

An entire chapter of ‘The Evolution of Human Sexuality’, namely Chapter Three (entitled, “The Female Orgasm: Adaptation or Artefact”), is devoted to rejecting the claim that the female orgasm represents a biological adaptation. 
 
This is perhaps excessive. However, it does at least conveniently contradict the claim of some critics of evolutionary psychology, and of sociobiology, such as Stephen Jay Gould, that the field is ‘ultra-Darwinian’ or ‘hyper-adaptationist’ and committed to the misguided notion that all traits are necessarily adaptive.[12]
 
In contrast, Symons champions the thesis that the female capacity for orgasm is simply a non-adaptive by-product of the male capacity for orgasm, the latter of which is of course adaptive. 
 
On this view, the female orgasm (and clitoris) is, in effect, the female equivalent of male nipples (only more fun). 
 
Certainly, Symons convincingly critiques the romantic notion, popularized by Desmond Morris among others, that the female orgasm functions as a mechanism designed to enhance ‘pair-bonding’ between couples. 
 
However, subsequent generations of evolutionary psychologists have developed less naïve models of the adaptive function of female orgasm. 
 
For example, Geoffrey Miller argues that the female orgasm functions as an adaptation for mate choice (The Mating Mind: p239-241). 
 
Of course, at first glance, experiencing orgasm during coitus may appear to be a bit late for mate choice, since, by the time coitus has occurred, the choice in question has already been made. However, given that, among humans, most sexual intercourse is non-reproductive (i.e. does not result in conception), the theory is not altogether implausible. 
 
On this view, the very factors which Symons views as suggesting female orgasm is non-adaptive – such as the relative difficulty of stimulating female orgasm during ordinary vaginal sex – are positive evidence for its adaptive function in carefully discriminating between suitors/lovers to determine their desirability as fathers for a woman’s offspring. 
 
Nevertheless, at least according to the stringent criteria set out by George C Williams in his classic Adaptation and Natural Selection, as well as the more general principle of parsimony (also known as Occam’s Razor), the case for female orgasm as an adaptation remains unproven (see also Sherman 1989; The Case of the Female Orgasm: Bias in the Science of Evolution).

Out-of-Date?

Much of Symons’ work is dedicated to challenging the naïve group-selectionism of Sixties ethologists, especially Desmond Morris. Although scientifically now largely obsolete, Morris’s work still retains a certain popular resonance and therefore this aspect of Symons’s work is not entirely devoid of contemporary relevance. 
 
In place of Morris’s rather idyllic notion that humans are a naturally monogamous ‘pair-bonding’ species, Symons advocates instead an approach rooted in the individual-level (or even gene-level) selection championed by Richard Dawkins in The Selfish Gene (reviewed here). 
 
This leads to some decidedly cynical conclusions regarding the true nature of sexual and romantic relations among humans. 
 
For example, Symons argues that it is adaptive for men to be less sexually attracted to their wives than they are to other women – because they are themselves liable to bear the cost of raising offspring born to their wives but not those born to other women with whom they mate (e.g. those mated to other males). 
 
Another cynical conclusion is that the primary emotion underlying the institution of marriage, both cross-culturally and in our own society, is neither love nor even lust, but rather male sexual jealousy and proprietariness (p123). 

Marriage, then, is an institution born not of love, but of male sexual jealousy and the behaviour known to biologists as mate-guarding.
 
Meanwhile, in his excellent chapter on ‘Copulation as a Female Service’ (Chapter Eight), Symons suggests that many aspects of heterosexual romantic relationships may be analogous to prostitution. 
 
As well as its excessive focus on debunking sixties ethologists like Morris, ‘The Evolution of Human Sexuality’ is also out-of-date in a more serious respect. Namely, it fails to incorporate the vast amount of empirical research on human sexuality from a sociobiological perspective that has been conducted since the work was first published. 
 
For a book first published thirty years ago, this is inevitable – not least because much of this empirical research was inspired by Symons’ own ideas and specifically designed to test theories formulated in this very work. 
 
In addition, potentially important new factors in human reproductive behaviour that even Symons did not foresee have been identified – for example, the role of fluctuating asymmetry as a criterion for, or at least a correlate of, physical attractiveness. 
 
For an updated discussion of the evolutionary psychology of human sexual behaviour, complete with the latest empirical data, readers should consult the latest edition of David Buss’s The Evolution Of Desire: Strategies of Human Mating. 
 
In contrast, in support of his theories Symons relies largely on classical literary insight, anecdote and, most importantly, a review of the ethnographic record. 
 
However, this latter focus ensures that, in some respects, the work remains of more than merely historical interest. 
 
After all, one of the more legitimate criticisms levelled against recent research in evolutionary psychology is that it is insufficiently cross-cultural and, with several notable exceptions (e.g. Buss 1989), relies excessively on research conducted among convenience samples of students at western universities. 
 
Given costs and practicalities, this is inevitable. However, for a field that aspires to understand a human nature presumed to be universal, such a method of sampling is highly problematic. 
 
‘The Evolution of Human Sexuality’ therefore retains its importance for two reasons. 

First, it is the founding work of modern evolutionary psychological research into human sexual behaviour, and hence of importance as a landmark and classic text in the field, as well as in the history of science more generally. 

Second, it also remains of value to this day for the cross-cultural and ethnographic evidence it marshals in support of its conclusions. 

Endnotes

[1] Actually, the first person to discover this, albeit inadvertently, was the great Victorian polymath, pioneering statistician and infamous eugenicist Francis Galton, who, attempting to discover abnormal facial features possessed by the criminal class, succeeded in morphing the faces of multiple convicted criminals. The result was, presumably to his surprise, an extremely attractive facial composite, since all the various minor deformities of the many convicted criminals whose faces he morphed actually balanced one another out to produce a face with few if any abnormalities or disproportionate features.

[2] More recent research in this area has focused on the related concept of fluctuating asymmetry.

[3] However, recent meta-analyses have called into question the evidence for cyclical fluctuations in female mate preferences (Wood et al 2014; cf. Gildersleeve et al 2014), and it has been suggested that such findings may represent casualties of the so-called replication crisis in psychology. It has also been questioned whether ovulation in humans is indeed concealed, or is actually detectable by subtle cues (e.g. Miller et al 2007), for example, changes in face shape (Oberzaucher et al 2012), breast symmetry (Scutt & Manning 1996) and body scent (Havlicek et al 2006).

[4] Another factor leading recent researchers to overestimate the importance of female choice in human evolution is their feminist orientation, since female choice gives women an important role in human evolution, even, paradoxically, in the evolution of male traits.

[5] Actually, in most cultures, only a girl’s first marriage is arranged on her behalf by her parents. Second and third marriages are usually negotiated by the woman herself. However, since female fertility peaks early, it is a girl’s first marriage that is usually of the most reproductive, and hence Darwinian, significance.

[6] Indeed, the human anatomical trait that perhaps shows the most evidence of being a product of intersexual selection is a female one, namely the female breasts, since these are, unlike the mammary glands of most other mammals, permanently present from puberty on, not only during lactation, and composed primarily of fatty tissue, not milk (Møller 1995; Manning et al 1997; Havlíček et al 2016).

[7] Wilson terms his theory “the kin selection hypothesis of the origin of homosexuality” (p145). However, a better description might be the ‘helper at the nest theory of homosexuality’, the basic idea being that, like sterile castes in some insects, and like older siblings in some bird species where new nest sites are unavailable, homosexuals, rather than reproducing themselves, direct their energies towards assisting their collateral kin in successfully raising, and provisioning, their own offspring (p143-7). The main problem with this theory is that there is no evidence that homosexuals do indeed devote any greater energies towards assisting their kin in this respect. On the contrary, homosexuals instead seem to devote much of their time and resources towards their own sex life, much as do heterosexuals (Bobrow & Bailey 2001).

[8] As we will see, contrary to the stereotype of evolutionary psychologists as viewing all traits as necessarily adaptive, as they are accused of doing by the likes of Gould, Symons also argued that the female orgasm and menopause are non-adaptive, but rather by-products of other adaptations.

[9] This is not necessarily to say that rampant, indiscriminate promiscuity is a male utopia, or the ideal of any man, be he homosexual or heterosexual. On the contrary, the ideal mating system for any individual male is harem polygyny in which the chastity of his own partners is rigorously policed (see Despotism and Differential Reproduction: which I have reviewed here and here). However, given an equal sex ratio, this would condemn other males to celibacy. Similarly, Symons reports that “Homosexual men, like most people, usually want to have intimate relationships”. However, he observes:

“Such relationships are difficult to maintain, largely owing to the male desire for sexual variety; the unprecedented opportunity to satisfy this desire in a world of men, and the male tendency towards sexual jealousy” (p297).  

It does indeed seem to be true that homosexual relationships, especially those of gay males, are, on average, of shorter duration than are heterosexual relationships. However, Symons’ claim regarding “the male tendency towards sexual jealousy” is questionable. Actually, subsequent research in evolutionary psychology has suggested that men are no more prone to jealousy than women, but rather that it is the sorts of behaviours which most intensely provoke such jealousy that differentiate the sexes (Buss 1992). Moreover, many gay men practice open relationships, which seems to suggest a lack of jealousy – or perhaps this simply reflects a recognition of the difficulty of maintaining relationships given, as Symons puts it, “the male desire for sexual variety [and] the unprecedented opportunity to satisfy this desire in a world of men”. 

[10] Indeed, far from men being led to objectify women due to the portrayal of women in a sexualized manner in the media, Symons suggests:

“There may be no positive feedback at all; on the contrary, constant exposure to pictures of nude and nearly nude female bodies may to some extent habituate men to these stimuli” (p304).

[11] Admittedly, some aspects of body-type typically preferred by gay males (especially the twink) do reflect apparently female traits, especially a relative lack of body-hair. However, lack of body-hair is also obviously indicative of youth. Moreover, a relative lack of body-hair also seems to be a trait favoured in men by heterosexual women. For a discussion of the relative preference on the part of (heterosexual) females for masculine versus feminine traits in male sex partners, see the final section of this review.

[12] Incidentally, Symons also rejects the theory that the female menopause is adaptive, a theory which has subsequently become known as the grandmother hypothesis (p13). Also, although it does not directly address the issue, Symons’ discussion of human rape (p276-85) has also been interpreted as implicitly favouring the theory that rape is a by-product of the greater male desire for commitment-free promiscuous sex, rather than the product of a specific rape adaptation in males (see Palmer 1991; and A Natural History of Rape: reviewed here). 

References 

Bellis & Baker (1990). Do females promote sperm competition?: Data for humans. Animal Behaviour, 40: 997-999 
Bobrow & Bailey (2001). Is male homosexuality maintained via kin selection? Evolution and Human Behavior, 22: 361-368 
Bogaert & Hershberger (1999) The relation between sexual orientation and penile size. Archives of Sexual Behavior 28(3): 213-21. 
Buss (1989). Sex differences in human mate preferences: Evolutionary hypotheses tested in 37 cultures. Behavioral and Brain Sciences 12: 1-49 
Ellis & Symons (1990) Sex differences in sexual fantasy: An evolutionary psychological approach, Journal of Sex Research 27(4): 527-555.
Gildersleeve, Haselton & Fales (2014) Do women’s mate preferences change across the ovulatory cycle? A meta-analytic review. Psychological Bulletin 140(5):1205-59.
Havlíček, Dvořáková, Bartoš & Flegr (2006) Non‐Advertized does not Mean Concealed: Body Odour Changes across the Human Menstrual Cycle. Ethology 112(1): 81-90.
Havlíček et al (2016) Men’s preferences for women’s breast size and shape in four cultures. Evolution and Human Behavior 38(2): 217–226 
Kenrick & Keefe (1992). Age preferences in mates reflect sex differences in human reproductive strategies. Behavioral and Brain Sciences, 15: 75-133. 
Kruger et al (2003) Proper and Dark Heroes as Dads and Cads. Human Nature 14(3): 305-317 
Manning et al (1997) Breast asymmetry and phenotypic quality in women. Ethology and Sociobiology 18(4): 223–236 
Miller (1998). How mate choice shaped human nature: A review of sexual selection and human evolution. In C. Crawford & D. Krebs (Eds.), Handbook of Evolutionary Psychology: Ideas, Issues, and Applications (pp. 87-129). Mahwah, NJ: Lawrence Erlbaum
Miller, Tybur & Jordan (2007). Ovulatory cycle effects on tip earnings by lap dancers: economic evidence for human estrous? Evolution and Human Behavior. 28(6):375–381 
Møller et al (1995) Breast asymmetry, sexual selection, and human reproductive success. Ethology and Sociobiology 16(3): 207-219 
Palmer (1991) Human Rape: Adaptation or By-Product? Journal of Sex Research 28(3): 365-386 
Penton-Voak et al (1999) Menstrual cycle alters face preferences, Nature 399 741-2. 
Puts (2010) Beauty and the Beast: Mechanisms of Sexual Selection in Humans. Evolution and Human Behavior 31 157-175 
Salmon (2004) The Pornography Debate: What Sex Differences in Erotica Can Tell Us About Human Sexuality. In Evolutionary Psychology, Public Policy and Personal Decisions (London: Lawrence Erlbaum Associates, 2004) 
Scutt & Manning (1996) Symmetry and ovulation in women. Human Reproduction 11(11):2477-80
Sherman (1989) The clitoris debate and levels of analysis, Animal Behaviour, 37: 697-8
Wood et al (2014). Meta-analysis of menstrual cycle effects on women’s mate preferences. Emotion Review, 6(3): 229–249.

Judith Harris’s ‘The Nurture Assumption’: By Parents or Peers

Judith Harris, The Nurture Assumption: Why Children Turn Out the Way They Do. Free Press, 1998.

Almost all psychological traits on which individual humans differ, from personality and intelligence to mental illness, are now known to be substantially heritable. In other words, individual differences in these traits are, at least in part, a consequence of genetic differences between individuals. 

This finding is so robust that it has even been termed by Eric Turkheimer the First Law of Behaviour Genetics and, although once anathema to most psychologists, save a marginal fringe of behavioural geneticists, it has now, under the sheer weight of evidence produced by the latter, belatedly become the new orthodoxy. 

On reflection, however, this transformation is not entirely a revelation. 

After all, it was only in the mid-twentieth century that the curious notion that individual differences were entirely the product of environmental differences first arose, and, even then, this delusion was largely restricted to psychologists, sociologists, feminists and other such ‘professional damned fools’, along with those among the semi-educated public who seek to cultivate an air of intellectualism by aping the former’s affectations. 

Before then, poets, peasants and laypeople alike had long recognized that ability, insanity, temperament and personality all tended to run in families, just as physical traits like stature, complexion, hair and eye colour also do.[1]

However, while the discovery of a heritable component to character and ability merely confirms the conventional wisdom of an earlier age, another behavioural genetic finding, far more surprising and counterintuitive, has passed relatively unreported. 

This is the discovery that the so-called shared family environment (i.e. the environment shared by siblings, or non-siblings, raised in the same family home) actually has next to no effect on adult personality and behaviour. 

This we know from such classic study designs in behavioural genetics as twin studies, adoption studies and family studies.  

In short, individuals of a given degree of relatedness, whether identical twins, fraternal twins, siblings, half-siblings or unrelated adoptees, are, by the time they reach adulthood, no more similar to one another in personality or IQ when they are raised in the same household than when they are raised in entirely different households. 
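
For readers who want to see how such designs yield these conclusions, the sketch below applies Falconer’s classic twin-study formulas to a pair of purely hypothetical twin correlations. The numbers are illustrative only, chosen to mirror the rough pattern just described, and the function name is my own.

```python
# A minimal sketch of how twin studies partition variance (the ACE model),
# using Falconer's formulas. The correlations below are hypothetical,
# illustrative values, not results from any actual study.

def falconer_ace(r_mz: float, r_dz: float) -> dict:
    """Estimate the additive-genetic (A), shared-environment (C) and
    non-shared-environment (E) shares of variance from the observed
    correlations of identical (MZ) and fraternal (DZ) twins."""
    a2 = 2 * (r_mz - r_dz)  # MZ twins share roughly twice the segregating genes of DZ twins
    c2 = r_mz - a2          # MZ similarity not explained by genes = shared environment
    e2 = 1 - r_mz           # whatever makes even MZ twins differ = non-shared environment (+ error)
    return {"A (heritability)": a2, "C (shared env.)": c2, "E (non-shared env.)": e2}

# Illustrative correlations matching the pattern described in the text:
# roughly half the variance genetic, shared family environment near zero.
print(falconer_ace(r_mz=0.50, r_dz=0.25))
# {'A (heritability)': 0.5, 'C (shared env.)': 0.0, 'E (non-shared env.)': 0.5}
```

Adoption designs make the same point even more directly: the correlation between genetically unrelated children reared in the same home is itself a direct estimate of the shared-environment share, and for IQ and personality measured in adulthood it is typically close to zero.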

The Myth of Parental Influence 

Yet parental influence has long loomed large in virtually every psychological theory of child development, from the Freudian Oedipus complex and Bowlby’s attachment theory to the whole literary genre of books aimed at instructing anxious parents on how best to raise their children so as to ensure that the latter develop into healthy, functional, successful adults. 

Indeed, not only is the conventional wisdom among psychologists overturned, but so is the conventional wisdom among sociologists – for one aspect of the shared family environment is, of course, household income and social class. 

Thus, if the family that a person is brought up in has next to no impact on their psychological outcomes as an adult, then this means that the socioeconomic status of the family home in which they are raised also has no effect. 

Poverty, or a deprived upbringing, then, has no effect on IQ, personality or the prevalence of mental illness, at least by the time a person has reached adulthood.[2]

Neither is it only leftist sociologists who have proved mistaken. 

Thus, just as leftists use economic deprivation as an indiscriminate, catch-all excuse for all manner of social pathology (e.g. crime, unemployment, educational underperformance), so conservatives are apt to place the blame on divorce, family breakdown, having children out of wedlock and the consequent increase in the prevalence of single-parent households. 

However, all these factors are, once again, part of the shared family environment – and according to the findings of behavioural genetics, they have next to no influence on adult personality or intelligence. 

Of course, chaotic or abusive family environments do indeed tend to produce offspring with negative life outcomes. 

However, none of this proves that it was the chaotic or abusive family environment that caused the negative outcomes. 

Rather, another explanation is at hand – perhaps the offspring simply biologically inherit the personality traits of their parents, the very personality traits that caused their family environment to be so chaotic and abusive in the first place.[3] 

For example, parents who divorce or bear offspring out-of-wedlock likely differ in personality from those who first get married then stick together, perhaps being more impulsive or less self-disciplined and conscientious (e.g. less able to refrain from having children from a relationship that was destined to be fleeting, or less able to persevere and make the relationship last). 

Their offspring may, then, simply biologically inherit these undesirable personality attributes, which then themselves lead to the negative social outcomes associated with being raised in single-parent households or broken homes. The association between family breakdown and negative outcomes for offspring might, then, reflect simply the biological inheritance of personality. 

Similarly, as leftists are fond of reminding us, children from economically-deprived backgrounds do indeed have lower recorded IQs and educational attainment than those from more privileged family backgrounds, as well as other negative outcomes as adults (e.g. lower earnings, higher rates of unemployment). 

However, this does not prove that coming from a deprived family background necessarily itself depresses your IQ, educational attainment or future salary. 

Rather, an equally plausible possibility is that offspring simply biologically inherit the low intelligence of their parents – the very low intelligence which was likely a factor causing the low socioeconomic status of their parents in the first place, since intelligence is known to correlate strongly with educational and occupational advancement.[4]

In short, the problem with the body of research purporting to demonstrate the influence of parents and family background on psychological and behavioural outcomes for offspring is that it fails to control for the heritability of personality and intelligence, an obvious confounding factor. 

The Non-Shared Environment

However, not everything is explained by heredity. As a crude but broadly accurate generalization, only about half the variation for most psychological traits is attributable to genes. This leaves about half of the variation in intelligence, personality and mental illness to be explained by environmental factors.  

What are these environmental factors, if they are not to be sought in the shared family environment? 

The obvious answer is, of course, the non-shared family environment – i.e. the ways in which even children brought up in the same family-home nevertheless experience different micro-environments, both within the home and, perhaps more importantly, outside it. 

Thus, even the fairest and most even-handed parents inevitably treat their different offspring differently in some ways.  

Indeed, among the principal reasons that parents treat their different offspring differently is precisely that the offspring themselves differ in their own behaviour.  

Corporal punishment 

Rather than differences in the behaviour of different children resulting from differences in how their parents treat them, it may be that differences in how parents treat their children may reflect responses to differences in the behaviour of the children themselves. 

In other words, the psychologists have the direction of causation precisely backwards. 

Take, for example, one particularly controversial issue, namely the physical chastisement of children by their parents as a punishment for bad behaviour (e.g. spanking). 

Thus, some psychologists have argued that physical chastisement actually causes misbehaviour. 

As evidence, they cite the fact that children who are spanked more often by their parents or caregivers on average actually behave worse than those whose caregivers only rarely or never spank the children entrusted to their care.  

This, they claim, is because, in employing spanking as a form of discipline, caregivers are inadvertently imparting the message that violence is a good way of solving your problems. 

Actually, however, I suspect children are more than capable of working out for themselves that violence is often an effective means of getting your way, at least if you have superior physical strength to your adversary. Unfortunately, this is something that, unlike reading, arithmetic and long division, does not require explicit instruction by teachers or parents. 

Instead, a more obvious explanation for the correlation between spanking and misbehaviour in children is not that spanking causes misbehaviour, but rather that misbehaviour causes spanking. 

Indeed, once one thinks about it, this is in fact rather obvious: If a child never seriously misbehaves, then a parent likely never has any reason to spank that child, even if the parent is, in principle, a strict disciplinarian; whereas, on the other hand, a highly disobedient child is likely to try the patience of even the most patient caregiver, whatever his or her moral opposition to physical chastisement in principle. 

In other words, causation runs in exactly the opposite direction to that assumed by the naïve psychologists.[5] 

Another factor may also be at play – namely, offspring biologically inherit from their parents the personality traits that cause both the misbehaviour and the punishment. 

In other words, parents with aggressive personalities may be more likely to lose their temper and physically chastise their children, while children who inherit these aggressive personalities are themselves more likely to misbehave, not least by behaving in an aggressive or violent manner. 

However, even if parents treat their different offspring differently owing to the different behaviour of the offspring themselves, this is not the sort of environmental factor capable of explaining the residual non-shared environmental effects on offspring outcomes. 

After all, this merely begs the question as to what caused these differences in offspring behaviour in the first place. 

If the differences in offspring behaviour exist prior to differences in parental responses to this behaviour, then these differences cannot be explained by the differences in parental responses.  

Peer Groups 

This brings us back to the question of the environmental causes of offspring outcomes – namely, if about half the differences among children’s IQs and personalities are attributable to environmental factors, but these environmental factors are not to be found in the shared family environment (i.e. the environment shared by children raised in the same household), then where are these environmental factors to be sought? 

The search for environmental factors affecting personality and intelligence has, thus far, been largely unsuccessful. Indeed, some behavioural geneticists have almost gone as far as conceding scholarly defeat in identifying correlates for the environmental portion of the variance. 

Thus, leading contemporary behavioural geneticist Robert Plomin in his recent book, Blueprint: How DNA Makes Us Who We Are, concludes that those environmental factors that affect cognitive ability, personality, and the development of mental illness are, as he puts it, ‘unsystematic’ in nature. 

In other words, he seems to be saying that they are mere random noise. This is tantamount to accepting that the null hypothesis is true. 

Judith Harris, however, has a quite different take. According to Harris, environmental causes must be sought, not within the family home, but rather outside it – in a person’s interactions with their peer-group and the wider community.[6]

Environment ≠ Nurture 

Thus, Harris argues that the so-called nature-nurture debate is misnamed, since the word ‘nurture’ usually refers to deliberate care and moulding of a child (or of a plant or animal). But many environmental effects are not deliberate. 

Thus, Harris repeatedly references behaviourist John B. Watson’s infamous boast: 

“Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select—doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.” 

Yet what strikes me as particularly preposterous about Watson’s boast is not its radical environmental determinism, nor even its rather convenient unfalsifiability.[7] 

Rather, what strikes me as most preposterous about Watson’s claim is its frankly breath-taking arrogance. 

Thus, Watson not only insisted that it was environment alone that entirely determined adult personality. In this same quotation, he also proclaimed that he already fully understood the nature of these environmental effects to such an extent that, given omnipotent powers to match his evidently already omniscient understanding of human development, he could produce any outcome he wished. 

Yet, in reality, environmental effects are anything but clear-cut. Pushing a child in a certain direction, or into a certain career, may sometimes have the desired effect, but other times have the exact opposite effect to that desired, provoking the child to rebel against parental dictates. 

Thus, even to the extent that environment does determine outcomes, the precise nature of the environmental factors implicated, and their interaction with one another, and with the child’s innate genetic endowment, is surely far more complex than the simple mechanisms proposed by behaviourists like Watson (e.g. reinforcement and punishment). 

Language Acquisition 

The most persuasive evidence for Harris’s theory of the importance of peer groups comes from an interesting and widely documented peculiarity of language acquisition

The children of immigrants, whose parents speak a different language inside the family home, and may even themselves be monolingual, nevertheless typically grow up to speak the language of their host culture rather better than they do the language to which they were first exposed in the family home. 

Indeed, while their parents may never achieve fluency in the language of their host culture, having missed out on the Chomskian critical period for language acquisition, their children often actually lose the ability to speak their parents’ language, often much to the consternation of parents and grandparents. 

Yet, from a sociobiological or evolutionary psychological perspective, such an outcome is obviously adaptive. 

If a child is to succeed in wider society, they must master its language, whereas, if their parents’ first language is not spoken anywhere in their host society except in their family, then it is of limited utility, and, once their parents themselves become proficient in the language of the host culture, it becomes entirely redundant (see The Ethnic Phenomenon (reviewed here, here and here): p258). 

Code-Switching 

Harris suggests that the same applies to personality. Just as the child of immigrants switches between one language and another at home and school, so they also adopt different personalities. 

Thus, many parents are surprised to be told by their children’s teachers at parents’ evenings that their offspring is quiet and well-behaved at school, since, as they themselves report, he or she isn’t at all like that at home. 

Yet, at home, a child has only, at most, a sibling or two with whom to compete for his parents’ attention. In contrast, at school, he or she has a whole class with whom to compete for their teacher’s attention.

It is therefore unsurprising that most children are less outgoing at school than they are at home with their parents. 

For example, an older sibling might be able to push his little brother around at home. But, if he is small for his age, he is unlikely to be able to get away with the same behaviour among his peers at school. 

Children therefore adopt two quite different personalities – one for interactions with family and siblings, and another for interactions with their peers.

This then, for Harris, explains why, perhaps surprisingly, birth-order has generally been found to have little if any effect on personality, at least as personality manifests itself outside the family home. 

An Evolutionary Theory of Socialization? 

Interestingly, even evolutionary psychologists have not been immune from the delusion of parental influence. Thus, in one influential paper, anthropologists Patricia Draper and Henry Harpending argued that offspring calibrate their reproductive strategy by reference to the presence or absence of a father in their household (Draper & Harpending 1982). 

On this view, being raised in a father-absent household is indicative of a social environment where low male parental investment is the norm, and hence offspring adjust their own reproductive strategy accordingly, adopting a promiscuous, low-investment mating strategy characterized by precocious sexual development and an inability to maintain lasting long-term relationships (Draper & Harpending 1982; Belsky et al 1991). 

There is indeed, as these authors amply demonstrate, a consistent correlation between father-absence during development and both earlier sexual development and more frequent partner-switching in later life. 

Yet there is also another, arguably more obvious, explanation readily at hand to explain this association. Perhaps offspring simply inherit biologically the personality traits, including sociosexual orientation, of their parents. 

On this view, offspring raised in single-parent households are more likely to adopt a promiscuous, low-investment mating strategy simply because they biologically inherit the promiscuous sociosexual orientation of their parents, the very promiscuous sociosexual orientation that caused the latter to have children out-of-wedlock or from relationships that were destined to break down and hence caused the father-absent childhood of their offspring. 

Moreover, even on a priori theoretical grounds, Draper, Harpending and Belsky’s reasoning is dubious. 

After all, whether you personally were raised in a one- or two-parent family is obviously a very unreliable indicator of the sorts of relationships prevalent in the wider community into which you are born, since it represents a sample size of just one. 

Instead, therefore, it would be far more reliable to calibrate your reproductive strategy in response to the prevalence of one-parent households in the wider community at large, rather than the particular household type into which you happen to have been born.  

This, of course, directly supports Harris’s own theory of ‘peer group socialization’. 

In short, to the extent that children do adapt to the environment and circumstances of their upbringing (and they surely do), they must integrate into, adopt the norms of, and a reproductive strategy to maximize their fitness within, the wider community into which they are born, rather than the possibly quite idiosyncratic circumstances and attitudes of their own family. 

Absent Fathers, from Upper-Class to Under-Class 

Besides language-acquisition among the children of immigrants, another example cited by Harris in support of her theory of ‘peer group socialization’ is the culture, behaviours and upbringing of British upper-class males.

Here, boys were, and, to some extent, still are, reared primarily, not by their parents, but rather by nannies, governesses and, more recently, in exclusive fee-paying all-male boarding schools. 

Yet, despite having next to no contact with their fathers throughout most of their childhood, these boys nevertheless managed somehow to acquire manners, attitudes and accents similar, if not identical, to those of their upper-class fathers, and not at all those of the middle-class nannies, governesses and masters with whom they spent most of their childhood. 

Yet this phenomenon is by no means restricted to the British upper-classes. On the contrary, rather than citing the example of the British upper-classes in centuries gone by, Harris might just as well have cited that of the contemporary underclass in Britain and elsewhere, since what was once true of the British upper-classes is now equally true of the underclass. 

Just as the British upper-classes were once raised by governesses and nannies and in private schools with next to no contact with their fathers, so contemporary underclass males are similarly raised in single-parent households, often by unwed mothers, and typically have little if any contact with their biological fathers. 

Here, as Warren Farrell observes in his seminal The Myth of Male Power (which I have reviewed here and here), there is now “a new nuclear family: woman, government and child”, what Farrell terms “Government as a Substitute Husband”. 

Yet, once again, these underclass males, raised by single parents with the assistance of the state, typically turn out much like their absent fathers with whom they have had little if any contact, often going on to promiscuously father a succession of offspring themselves, with whom they likewise have next to no contact. 

Abuse 

But what of actual abuse? Surely this has a long-term devastating psychological impact on children. This, at any rate, is the conventional wisdom, and questioning this wisdom is tantamount to contemporary heresy, with attendant persecution

Take, for example, what is perhaps the form of child abuse that provokes the most outrage and disgust – namely, sexual abuse. Here, it is frequently asserted that paedophiles were almost invariably themselves abused as children, which creates a so-called ‘cycle of abuse’. 

However, there are at least three problems with this claim. 

First, it cannot explain how the first person in this cycle became a paedophile. 

Second, we might doubt whether it is really true that paedophiles are disproportionately likely to have themselves been abused as children. After all, abuse is something that almost invariably happens surreptitiously ‘behind closed doors’ and is therefore difficult to verify or disprove. 

Thus, even if most paedophiles claim to have been victims of abuse, it is possible that they are simply lying in order to elicit sympathy or excuse or shift culpability for their own offending. 

Finally, even if paedophiles can be shown to be disproportionately likely to have themselves been victimized as children, this by no means proves that their victimization caused their sexual orientation. 

Rather, since most abuse is perpetrated by parents or other close family members, an alternative possibility is that victims simply biologically inherit the sexual orientation of their abuser. After all, if homosexuality is partially heritable, as is now widely accepted, then why not paedophilia as well? 

However, the finding that the shared family environment accounts for hardly any of the variance in outcomes among adults does not preclude the possibility that severe abuse may indeed have an adverse effect on adult outcomes. 

After all, adoption studies can only tell us what percent of the variance is caused by heredity or by shared or unshared environments within a specific population as a whole. 

Perhaps the shared family environment accounts for so little of the variance precisely because the sort of severe abuse that does indeed have a devastating long-term effect on personality and mental health is, thankfully, so very rare in modern societies. 

Indeed, it may be especially rare within the families used in adoption studies precisely because adoptive families are carefully screened for suitability before being allowed to adopt. 
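
The statistical point about screening can be illustrated with a toy simulation – a minimal sketch with entirely made-up parameters and my own variable names, not a model of any real dataset. When family environments vary widely, the shared environment accounts for a visible slice of outcome variance; restrict the sample to a narrow band of ‘approved’ homes, as adoption agencies in effect do, and that slice shrinks towards zero, even though the causal effect of the home per unit of quality is unchanged.

```python
# Toy simulation (hypothetical parameters) of how screening adoptive homes
# for suitability can shrink the measured share of variance attributable to
# the shared family environment, even though the home's causal effect per
# unit of 'quality' is held constant throughout.
import numpy as np

rng = np.random.default_rng(0)
EFFECT_OF_HOME = 0.5  # assumed causal weight of the family environment

def shared_env_share(home_quality: np.ndarray) -> float:
    """Simulate adult outcomes for the given family environments and return
    the fraction of outcome variance attributable to the shared environment."""
    n = home_quality.size
    genes = rng.normal(size=n)   # genetic influences
    noise = rng.normal(size=n)   # non-shared environment / measurement error
    outcome = genes + EFFECT_OF_HOME * home_quality + noise
    return (EFFECT_OF_HOME ** 2 * home_quality.var()) / outcome.var()

homes = rng.normal(size=200_000)                 # full, unscreened range of homes
screened = homes[(homes > 0.5) & (homes < 1.5)]  # narrow band of 'approved' homes

print(f"unscreened sample: {shared_env_share(homes):.2f}")     # roughly 0.11
print(f"screened sample:   {shared_env_share(screened):.2f}")  # close to zero
```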

Moreover, Harris emphasizes an important caveat: Even if abuse does not have long-term adverse psychological effects, this does not mean that abuse causes no harm, and nor does it in any way excuse such abuse. 

On the contrary, the primary reason we shouldn’t mistreat children (and should severely punish those who do) is not on account of some putative long-term psychological effect on the adults whom the children subsequently become, but rather because of the very real pain and suffering inflicted on a child at the time the abuse takes place. 

Race Differences in IQ 

Finally, Harris even touches upon that most vexed area of the (so-called) nature-nurture debate – race differences in intelligence

Here, the politically-correct claim that differences in intelligence between racial groups, as recorded in IQ tests, are of purely environmental origin runs into a problem: the sorts of environmental effects usually posited by environmental determinists as accounting for the black-white test score gap in America (e.g. differences in rates of poverty and socioeconomic status) have been shown to be inadequate, because, even after controlling for these factors, there remains an unaccounted-for gap in test scores. 

Thus, as Arthur R. Jensen laments: 

“This gives rise to the hypothesizing of still other, more subtle environmental factors that either have not been or cannot be measured—a history of slavery, social oppression, and racial discrimination, white racism, the ‘black experience,’ and minority status consciousness [etc]” (Straight Talk About Mental Tests: p223). 

The problem with these explanations, however, is that none of these factors has yet been demonstrated to have any effect on IQ scores. 

Moreover, some of the factors proposed as explanations are formulated in such a vague form (e.g. “white racism, the ‘black experience’”) that it is difficult to conceive of how they could ever be subjected to controlled testing in the first place.[8] 

Jensen has termed this mysterious factor the ‘X-factor’. 

In coining this term, Jensen was emphasizing its vague, mysterious and unfalsifiable nature. Jensen did not actually believe that this posited ‘X-factor’, whatever it was, really did account for the test-score gap. Rather, he thought heredity explained most, if not all, of the remaining test-score gap. 

However, Harris takes Jensen at his word. Thus, she announces: 

“I believe I know what this X factor is… I can describe it quite clearly. Black kids and white kids identify with different groups that have different norms. The differences are exaggerated by group contrast effects and have consequences that compound themselves over the years. That’s the X factor” (p248-9). 

Interestingly, although she does not develop it, Harris’s claim is actually compatible with, and potentially reconciles, the conflicting findings of two of the most widely-cited studies in this vexed area of research and debate. 

First, in the more recent of these two studies, the Minnesota Transracial Adoption Study, the same differences in IQ were observed among black, white and mixed-race children adopted into upper-middle class white families as are found among the respective black, white and mixed-race populations in society at large (Scarr & Weinberg 1976). 

Moreover, although, when tested during childhood, the children’s adoptive households did seem to have had a positive effect on their IQ scores, by the time they reached the cusp of adulthood, the black teenagers who had been adopted into upper-middle-class white homes actually scored no higher in IQ than did blacks in the wider population not raised in upper-middle class white families (Weinberg, Scarr & Waldman 1992). 

This study is often cited by hereditarians as evidence for innate racial differences (e.g. Levin 1994; Lynn 1994; Whitney 1996). 

However, in the light of the findings of the behavioural genetics studies discussed by Harris in ‘The Nurture Assumption’, the fact that white upper-middle-class adoptive homes had no effect on the adult IQs of the black children adopted into them is, in fact, hardly surprising. 

After all, as we have seen, the shared family environment generally has no effect on IQ, at least by the time the person being tested has reached adulthood. One would therefore not expect adoptive homes, howsoever white and upper-middle-class, to have any effect on adult IQs of the black children adopted into them, or indeed of the white or mixed-race children adopted into them. 

In short, adoptive homes have no effect on adult IQ, whether or not the adoptees, or adoptive families, are black, white, brown, yellow, green or purple! 

But, if race differences in intelligence are indeed entirely environmental in origin, then where are these environmental causes to be found, if not in the family environment? 

Harris has an answer – black culture. 

According to her, the black adoptees, although raised in white adoptive families, nevertheless still come to identify as black, and to identify with the wider black culture and social norms. In addition, they may, on account of their racial identification, come to socialize with other blacks in school and elsewhere. 

As a result of this acculturation to African-American norms and culture, they therefore come to score lower in IQ than their white peers and adoptive siblings. 

But how can we test this theory? Perhaps we could look at the IQ scores of black children raised in white families where there is no wider black culture with which to identify, and few if any black peers with whom to socialize?  

This brings us to the second of the two studies which Harris’s theory potentially reconciles, namely the Eyferth study.  

Here, it was found that the mixed-race children fathered by black American servicemen who had had sexual relationships with German women during the Allied occupation of Germany after World War Two had almost exactly the same average IQ scores as a control group of offspring fathered by white US servicemen during the same time period (Eyferth 1959). 

The crucial difference from the Minnesota study may be that these children, raised in monoracial Germany in the mid-twentieth century, had no wider African-American culture with which to identify or whose norms to adopt, and few if any black or mixed-race peers in their vicinity with whom to socialize. 

This then is perhaps the last lifeline for the radical environmentalist theory of race differences in intelligence – namely the theory that African-American culture somehow depresses intelligence. 

Unfortunately, however, this proposition is likely almost as politically unpalatable to politically-correct liberals as is the notion that race differences in intelligence reflect innate genetic differences.[9] 

Endnotes

[1] Thus, this ancient wisdom is reflected, for example, in many folk sayings, such as the apple does not fall far from the tree, a chip off the old block and like father, like son, many of which long predate both Darwin’s theory of evolution and Mendel’s work on heredity, let alone the modern work of behavioural geneticists.

[2] It is important to emphasize here that this applies only to psychological outcomes, and not, for example, economic outcomes. For example, a child raised by wealthy parents is indeed likely to be wealthier than one raised in poverty, if only because s/he is likely to inherit (some of) the wealth of his parents. It is also possible that s/he may, on average, obtain a better job as a consequence of the opportunities opened by his privileged upbringing. However, his IQ will be no higher than had s/he been raised in relative poverty, and neither will s/he be any more or less likely to suffer from a mental illness. 

[3] Similarly, it is often claimed that children raised in care homes, or in foster care, tend to have negative life-outcomes. However, again, this by no means proves that it is care homes or foster care that causes these negative life-outcomes. On the contrary, since children who end up in foster care are typically either abandoned by their biological parents, or forcibly taken from their parents by social services on account of the inadequate care provided by the latter, or sometimes outright abuse, it is obvious that their parents represent an unrepresentative sample of society as a whole. An obvious alternative explanation, then, is that the children in question simply inherit the dysfunctional personality attributes of their biological parents, namely the very dysfunctional personality attributes that caused the latter to either abandon their children or have them removed by the social services.

[4] Likewise, the heritability of such personality traits as conscientiousness and self-discipline, in addition to intelligence, likely also partly account for the association between parental income and academic attainment among their offspring, since both academic attainment, and occupational success, require the self-discipline to work hard to achieve success. These factors, again in addition to intelligence, likely also contribute to the association between parental income and the income and socioeconomic status ultimately attained by their offspring.

[5] This possibility could, of course, be ruled out by longitudinal studies, which investigate whether the spanking preceded the misbehaviour or vice versa. However, this is easier said than done, since, unless relying on reports by the caregivers or children themselves, which depend on both their memory and their honesty, it would have to involve intensive, long-term and continued observation in order to establish which came first, namely the pattern of misbehaviour or the adoption of physical chastisement as a method of discipline. This would, presumably, require continuous observation from birth onwards, so as to ensure that the very first instance of spanking or excessive misbehaviour was recorded. To my knowledge, no such careful and intensive long-term study has yet been conducted, if indeed it is even possible.

[6] The fact that the relevant environmental variables must be sought outside the family home is one reason why the terms ‘between-family environment’ and ‘within-family environment’, sometimes used as synonyms or alternatives for ‘shared’ and ‘non-shared family environment’ respectively, are potentially misleading. Thus, the ‘within-family environment’ refers to those aspects of the environment that differ for different siblings even within a single family. However, these factors may differ within a single family precisely because they occur outside, not within, the family itself. The terms ‘shared’ and ‘non-shared family environment’ are therefore to be preferred, so as to avoid any potential confusion these alternative terms could cause.

[7] Both practical and ethical considerations, of course, prevent Watson from actually creating his “own specified world” in which to bring up his “dozen healthy infants”. Therefore, no one is able to put his claim to the test. It is therefore unfalsifiable and Watson is therefore free to make such boasts, safe in the knowledge that there is no danger of his actually being made to make good on his claims or being proven wrong.

[8] Actually, at least some of these theories are indeed testable and potentially falsifiable. With regard to the factors quoted by Jensen (namely, “a history of slavery, social oppression, and racial discrimination, white racism… and minority status consciousness”), one way of testing these theories is to look at test scores in those countries where there is no such history. For example, in sub-Saharan Africa, as well as in Haiti and Jamaica, blacks are not in the minority, and are moreover in control of the government. Yet the IQ scores of the indigenous population of Africa are actually even lower than among blacks in the USA (see Richard Lynn’s Race Differences in Intelligence: reviewed here). True, most such countries still have a history of racial oppression and discrimination, albeit in the form of European colonialism rather than racial slavery or segregation in the American sense. However, the lower scores of black Africans hold even in those few sub-Saharan African countries that were not colonized by western powers, or only briefly colonized (e.g. Ethiopia). Moreover, this merely begs the question as to why Africa was so easily colonized by Europeans. Also, other minority groups ostensibly subject to racial discrimination and oppression (e.g. Jews, Overseas Chinese) actually score very high in IQ, and are economically successful. As for “the ‘black experience’”, this again begs the question as to why the ‘black experience’ has been so similar, and resulted in the same low IQs, in so many different parts of the world, something implausible unless the ‘black experience’ itself reflects innate aspects of black African psychology. 

[9] Thus, ironically, the recently deceased James Flynn, though always careful, throughout his career, to remain on the politically-correct radical environmentalist side of the debate with regard to the causes of race differences in intelligence, nevertheless recently found himself taken to task by the leftist, politically-correct British Guardian newspaper for a sentence in his recent book, Does Your Family Make You Smarter, in which he described American blacks as coming “from a cognitively restricted subculture” (Wilby 2016). Thus, whether one attributes lower black IQs to biology or to culture, either answer is certain to offend leftists, and the power of political correctness can, it seems, never be appeased.

References 

Belsky, Steinberg & Draper (1991) Childhood Experience, Interpersonal Development, and Reproductive Strategy: An Evolutionary Theory of Socialization Child Development 62(4): 647-670 

Draper & Harpending (1982) Father Absence and Reproductive Strategy: An Evolutionary Perspective Journal of Anthropological Research 38:3: 255-273 

Eyferth (1959) Eine Untersuchung der Neger-Mischlingskinder in Westdeutschland. Vita Humana, 2: 102–114 

Levin (1994) Comment on Minnesota Transracial Adoption Study. Intelligence. 19: 13–20 

Lynn, R (1994) Some reinterpretations of the Minnesota Transracial Adoption Study. Intelligence. 19: 21–27 

Scarr & Weinberg (1976) IQ test performance of black children adopted by White families. American Psychologist 31(10): 726–739 

Weinberg, Scarr & Waldman, (1992) The Minnesota Transracial Adoption Study: A follow-up of IQ test performance at adolescence Intelligence 16:117–135 

Whitney (1996) Shockley’s experiment. Mankind Quarterly 37(1): 41-60

Wilby (2016) Beyond the Flynn effect: New myths about race, family and IQ? Guardian, September 27.

A Modern McCarthyism in our Midst

Anthony Browne, The Retreat of Reason: Political Correctness and the Corruption of Public Debate in Modern Britain (London: Civitas, 2006) 

Western civilization has progressed. Today, unlike in earlier centuries, we no longer burn heretics at the stake

Instead, according to sociologist Steven Goldberg, himself no stranger to contemporary heresy, these days: 

“All one has to lose by unpopular arguments is contact with people one would not be terribly attracted to anyway” (Fads and Fallacies in the Social Sciences: p222). 

Unfortunately, however, Goldberg underplays, not only the psychological impact of ostracism, but also the more ominous consequences that sometimes attach to contemporary heresy. 
 
Thus, bomb and death threats were repeatedly made against women such as Erin Pizzey and Suzanne Steinmetz for pointing out that women were just as likely, or indeed somewhat more likely, to perpetrate acts of domestic violence against their husbands and boyfriends as the latter were to perpetrate such acts against them – a finding now replicated in literally hundreds of studies (see also Domestic Violence: The 12 Things You Aren’t Supposed to Know). 
 
Similarly, in the seventies, Arthur Jensen, a psychology professor at the University of California, had to be issued with an armed guard on campus after suggesting, in a sober and carefully argued scientific paper, that it was a “not unreasonable” hypothesis that the IQ difference between blacks and whites in America was partly genetic in origin. 
 
Political correctness has also cost people their jobs. 

Academics like Chris Brand, Helmuth Nyborg, Lawrence Summers, Frank Ellis, Noah Carl and, most recently, Bo Winegard have been forced to resign or have lost their academic positions as a consequence of researching, or, in some cases, merely mentioning, politically incorrect theories such as the possible social consequences of, or innate basis for, sex and race differences in intelligence. 

Indeed, even the impeccable scientific credentials of James Watson, a figure jointly responsible for one of the most important scientific discoveries of the twentieth century, did not spare him this fate when he was reported in a newspaper as making some controversial but eminently defensible comments regarding population differences in cognitive ability and their likely impact on prospects for economic development.  

At the time of (re-)writing this piece, the most recent victim of this process of purging in academia is the celebrated historian, and long-term controversialist, David Starkey, excommunicated for some eminently sensible, if crudely expressed, remarks about slavery. 

Meanwhile, as proof of the one-sided nature of the witch-hunt, during the very same month as that in which Starkey was excommunicated from public life, a non-white leftist female academic, Priyamvada Gopal, posted the borderline genocidal tweet: 

“White lives don’t matter. As white lives.”[1]

Yet the only repercussion the latter faced from her employer, Cambridge University, was to be almost immediately promoted to a full professorship.

Cambridge University also, in response, issued a defence of its employees’ right to academic freedom, tweeting that: 

“[Cambridge] University defends the right of its academics to express their own lawful opinions which others might find controversial”

This is indeed an admirable and principled stance – if applied consistently. 

Unfortunately, however, although this tweet was phrased in general terms, and actually included no mention of Gopal by name, it was evidently not of general application. 

For Cambridge University is not only among the institutions from which Starkey was forced to tender his resignation this very same year, but also the very same institution that, only a year before, had denied a visiting fellowship to Jordan Peterson, the eminent public intellectual, for his controversial stances and statements on a range of topics, and which, only two years before, had dismissed researcher Noah Carl from an academic fellowship, after a letter calling for his dismissal was signed by, among others, none other than the loathsome Priyamvada Gopal herself. 

The inescapable conclusion is that the freedom of “academics to express lawful opinions which others might find controversial” at Cambridge University applies, despite the general wording of the tweet from which these words are taken, only to those controversial opinions of which the leftist academic and cultural establishment currently approves. 

Losing Your Livelihood 

If I might be accused here of focusing excessively on freedom of speech in an academic context, this is only because academia is among the arenas where freedom of expression is most essential, as it is only if all ideas, however offensive to certain protected groups, are able to freely circulate, and compete, in the marketplace of ideas that knowledge is able to progress through a selective process of testing and falsification.[2]

However, although the university environment is, today, especially intolerant, nevertheless similar fates have also befallen non-academics, many of whom have been deprived of their livelihoods on account of their politics. 

For example, in The Retreat of Reason, first published in 2006, Anthony Browne points to the case of a British headmaster sacked for saying Asian pupils should be obliged to learn English, a position that was then, only a few years later, actually adopted as official government policy (p50). 

In the years since the publication of ‘The Retreat of Reason’, such examples have only multiplied. 

Indeed, today it is almost taken for granted that anyone caught saying something controversial and politically incorrect on the internet in his own name, or even under a pseudonym if subsequently ‘doxed’, is liable to lose his job.

Likewise, Browne noted that police and prison officers in the UK were then barred from membership of the BNP, a legal and constitutional political party, but not from membership of Sinn Fein, which until quite recently had supported domestic terror against the British state, including the murder of soldiers, civilians and the police themselves, nor of various Marxist groups that advocate the violent overthrow of the whole capitalist system (p51-2). 

Today, meanwhile, even believing that a person cannot change their biological sex is said to be a bar on admission into the British police.

Moreover, employees sacked on account of their political views cannot always even turn to their unions for support. 
 
Instead, trade unions have themselves expelled members for their political beliefs (p52) – then successfully defended this action in the European Court of Human Rights by citing the right to freedom of association (see ASLEF v UK [2007] ECHR 184). 

Yet, ironically, freedom of association is not only the precise freedom denied to employers by anti-discrimination laws, but also the very same freedom that surely guarantees a person’s right to be a member of a constitutional, legal political party, or express controversial political views outside of their work, without being at risk of losing their job. 

Browne concludes:

“One must be very disillusioned with democracy not to find it at least slightly unsettling that in Europe in the twenty-first century government employees are being banned from joining certain legal political parties but not others, legal democratic party leaders are being arrested in dawn raids for what they have said and political parties leading the polls are being banned by judges” (p57). 

Of course, racists and members of parties like the BNP hardly represent a fashionable cause célèbre for civil libertarians. But, then, neither did other persecuted groups at the time of their persecution. This is, of course, precisely what rendered them so vulnerable in the first place. 
 
Political correctness is often dismissed as a trivial issue, which only bigots and busybodies bother complaining about, when there is so much more serious suffering in the world. 

Yet free speech is never trivial. When people lose their jobs and livelihoods because of currently unfashionable opinions, what we are witnessing is a form of modern McCarthyism. 
 
Indeed, as American conservative David Horowitz observes: 

“The era of the progressive witch-hunt has been far worse in its consequences to individuals and freedom of expression than was the McCarthy era… [not least because] unlike the McCarthy era witch-hunt, which lasted only a few years, the one enforced by left-wing ‘progressives’ is now entering its third decade and shows no signs of abating” (Left Illusions: An Intellectual Odyssey).[3] 

Yet, while columnists, academics, and filmmakers delight in condemning, without fear of reprisals, a form of McCarthyism that ran out of steam over half a century ago (i.e. anti-communism during the Second Red Scare), few dare to incur the wrath of the contemporary inquisition by exposing a modern McCarthyism right here in our midst.  

Recent Developments 

Browne’s ‘The Retreat of Reason’ was first published in 2006. Unfortunately, however, in the intervening decade and a half, despite Browne’s wise counsel, the situation has only worsened. 

Thus, in 2006, Browne rightly championed the new media made possible by the internet, such as blogs, for disseminating controversial, politically incorrect ideas and opinions, and thereby breaking the mainstream media monopoly on the dissemination of information and ideas (p85). 

Here, Browne was surely right. Indeed, new media, such as blogs, have not only been responsible for disseminating ideas that are largely taboo in the mainstream media, but even for breaking news stories that had been suppressed by mainstream media, such as the racial identity of those responsible for the 2015-2016 New Year’s Eve sexual assaults in Germany.

However, in the decade and a half since ‘The Retreat of Reason’ was published, censorship has become increasingly restrictive even in the virtual sphere. 

Thus, internet platforms like YouTube, Patreon, Facebook and Twitter increasingly deplatform content providers with politically incorrect viewpoints, and, in a particularly disturbing move, even some websites have been, at least temporarily, forced offline, or banished to the dark web, by their web hosting providers.

Doctrinaire libertarians respond that this is not a free speech issue, because these are private businesses with the right to deny service to anyone with whom they choose not to contract.

In reality, however, platforms like Facebook and Twitter are far more than private businesses. As virtual monopolies in their respective markets, they are part of the infrastructure of everyday life in the twenty-first century.

To be banned from communicating on Facebook is tantamount to being barred from communication in a public place.

Moreover, the problem is only exacerbated by the fact that the few competitors seeking to provide an alternative to these Big Tech monopolies with a greater commitment to free speech are themselves deplatformed by their hosting providers as a direct consequence of that very commitment.

Likewise, the denial of financial services, such as banking or payment processing, to groups or individuals on the basis of their politics is particularly troubling, making it all but impossible for those afflicted to remain financially viable. The result is tantamount to being made an ‘unperson’.

Moreover, far from remaining a hub of free expression, social media has increasingly provided a rallying and recruiting ground for moral outrage and repression, not least in the form of so-called twittermobs, intent on publicly shaming, harassing and denying employment opportunities to anyone of whose views they disapprove.

In short, if the internet has facilitated free speech, it has also facilitated political persecution, since today, it seems, one can enjoy all the excitement and exhilaration of joining a witch-hunt without ever straying from the comfort of one’s computer screen.

Explaining Political Correctness 

For Browne, PC represents “the dictatorship of virtue” (p7) and replaces “reason with emotion” and subverts “objective truth to subjective virtue” (xiii). 

“Political correctness is an assault on both reason and… democracy. It is an assault on reason, because the measuring stick of the acceptability of a belief is no longer its objective, empirically established truth, but how well it fits in with the received wisdom of political correctness. It is an assault on… democracy because [its] pervasiveness… is closing down freedom of speech” (p5). 

Yet political correctness is not wholly unprecedented. 
 
On the contrary, every age has its taboos. Thus, in previous centuries, it was compatibility with religious dogma rather than leftist orthodoxy that represented the primary “measuring stick of the acceptability of a belief” – as Galileo, among others, was to discover for his pains. 
 
Although, as a conservative, Browne might be expected to be favourably disposed to traditional religion, he nevertheless acknowledges the analogy between political correctness and the religious dogmas of an earlier age: 

“Christianity… has shown many of the characteristics of modern political correctness and often went far further in enforcing its intolerance with violence” (p29). 

Indeed, this intolerance is not restricted to Christianity. Thus, whereas Christianity, in an earlier age, persecuted heresy with even greater intolerance than the contemporary left, in many parts of the world Islam still does.  

As well as providing an analogous justification for the persecution of heretics, political correctness may also, Browne suggests, serve a similar psychological function to religion, in representing: 

“A belief system that echoes religion in providing ready, emotionally-satisfying answers for a world too complex to understand fully and providing a gratifying sense of righteousness absent in our otherwise secular society” (p6).

Defining Political Correctness 

What, then, do we mean by ‘political correctness’? 

Political correctness evaluates a claim, not on its truth, but on its offensiveness to certain protected groups. Some views are held to be not only false, indeed sometimes not even false, but rather unacceptable, unsayable and beyond the bounds of acceptable opinion. 

Indeed, to the enforcers of the politically correct orthodoxy, the truth or falsehood of a statement is ultimately of little interest. 

Browne provides a useful definition of political correctness as: 

“An ideology which classifies certain groups of people as victims in need of protection from criticism and which makes believers feel that no dissent should be tolerated” (p4). 

Refining this, I would say that, for an opinion to be politically incorrect, two criteria must be met:

1) The existence of a group to whom the opinion in question is regarded as ‘offensive’
2) The group in question must be perceived as ‘oppressed’

Thus, it is perfectly acceptable to disparage and offend supposedly ‘privileged’ groups (e.g. males, white people, Americans or the English), but groups with ‘victim-status’ are deemed sacrosanct and beyond reproach, at least as a group. 
 
Victim-status itself, however, is rather arbitrarily bestowed. 
 
Certainly, actual poverty or deprivation has little to do with it. 

Thus, it is perfectly acceptable to denigrate the white working class. Pejorative epithets aimed at this group, such as ‘redneck’, ‘chav’ and ‘white trash’, are widely employed and considered socially acceptable in polite conversation (see Jim Goad’s The Redneck Manifesto: How Hillbillies, Hicks, and White Trash Became America’s Scapegoats).

Yet the use of comparably derogatory terms in respect of, say, black people, is considered wholly beyond the pale, and sufficient to end media careers in Britain and America.

However, multi-millionaires who happen to be black, female or homosexual are permitted to perversely pose as ‘oppressed’, and wallow in their own ostensible victimhood. 
 
Thus, in the contemporary West, the Left has largely abandoned its traditional constituency, namely the working class, in favour of ethnic minorities, homosexuals and feminists.

In the process, the ‘ordinary working man’, once the quintessential proletarian, has found himself recast in leftist demonology as a racist, homophobic, wife-beating bigot.

Likewise, men are widely denigrated in popular culture. Yet, contrary to the feminist dogma which maintains that men have disproportionate power and are privileged, it is in fact men who are overwhelmingly disadvantaged by almost every sociological measure.

Thus, Browne writes: 

“Men were overwhelmingly underachieving compared with women at all levels of the education system, and were twice as likely to be unemployed, three times as likely to commit suicide, three times as likely to be a victim of violent crime, four times as likely to be a drug addict, three times as likely to be alcoholic and nine times as likely to be homeless” (p49). 

Indeed, overt discrimination against men, such as the different ages at which men and women were then eligible for state pensions in the UK (p25; p60; p75) and the higher insurance premiums demanded of men (p73), is widely tolerated.[4]

“The demand for equal treatment only goes as far as it advantages the [ostensibly] less privileged sex” (p77). 

The arbitrary way in which recognition as an ‘oppressed group’ is accorded, together with the massive benefits accruing to demographics that have secured such recognition, has created a perverse process that Browne aptly terms “competitive victimhood” (p44). 

“Few things are more powerful in public debate than… victim status, and the rewards… are so great that there is a large incentive for people to try to portray themselves as victims” (p13-4). 

Thus, groups currently campaigning for ‘victim status’ include, he reports, “the obese, Christians, smokers and foxhunters” (p14). 

The result is what economists call perverse incentives: 

“By encouraging people to strive for the bottom rather than the top, political correctness undermines one of the main driving forces in society, the individual pursuit of self-improvement” (p45). 

This outcome can perhaps even be viewed as the ultimate culmination of what Nietzsche called the transvaluation of values. 

Euroscepticism & Brexit

Unfortunately, despite this useful definition, Browne goes on to use the term ‘political correctness’ in a broader fashion that, in my opinion, extends the concept beyond its sphere of usefulness. 

For example, he classifies Euroscepticism – i.e. opposition to the further integration of the European Union – as a politically incorrect viewpoint (p60-62). 

Here, however, there is no obvious ‘oppressed group’ in need of protection. 
 
Moreover, although widely derided as ignorant and jingoistic, Eurosceptical opinions have never been actually deemed ‘offensive’ or beyond the bounds of acceptable opinion.

On the contrary, they are regularly aired in mainstream media outlets, and even on the BBC, and recently scored a final victory in Britain with the Brexit campaign of 2016.  

Browne’s extension of the concept of political correctness in this way is typical of many critics of political correctness, who succumb to the temptation to define as ‘political correctness’ any view with which they themselves happen to disagree. 
 
This enables them to tar any views with which they disagree with the pejorative label of ‘political correctness’. 
 
It also, perhaps more importantly, allows ostensible opponents of political correctness to condemn the phenomenon without ever actually violating its central taboos by discussing any genuinely politically incorrect issues. 

They can therefore pose as heroic opponents of the inquisition while never actually themselves incurring its wrath. 

The term ‘political correctness’ therefore serves a similar function for conservatives as the term ‘fascist’ does for leftists – namely a useful catchall label to be applied to any views with which they themselves happen to disagree.[5]

Jews, Muslims and the Middle East 

Another example of Browne’s extension of the concept of political correctness beyond its sphere of usefulness is his characterization of any defence of the policies of Israel as ‘politically incorrect’. 
 
Yet, here, the ad hominem and guilt-by-association methods of debate (or rather of shutting down debate), which Browne rightly describes as characteristic of political correctness (p21-2), are more often used by defenders of Israel than by her critics – though, here, the charge of ‘anti-Semitism’ is substituted for the usual refrain of ‘racism’.[6]
 
Thus, in the US, any suggestion that the US’s small but disproportionately wealthy and influential Jewish community influences US foreign policy in the Middle East in favour of Israel is widely dismissed as anti-Semitic and roughly tantamount to proposing the existence of a world Jewish conspiracy led by the elders of Zion. 
 
Admittedly, Browne acknowledges: 

“The dual role of Jews as oppressors and oppressed causes complications for PC calculus” (p12).  

In other words, the role of the Jews as victims of persecution in National Socialist Germany conflicts with, and weighs against, their current role as perceived oppressors of the Palestinians in the Middle East. 

However, having acknowledged this complication, Browne immediately dismisses its importance, all too hastily going on to conclude in the very same sentence that: 

“PC has now firmly transferred its allegiance from the Jews to Muslims” (p12). 

However, in many respects, the Jews retain their ‘victim-status’ despite their hugely disproportionate wealth and political power.

Indeed, perhaps the best evidence of this is the taboo on referring to this disproportionate wealth and power. 
 
Thus, while the political Left never tires of endlessly recycling statistics demonstrating the supposed overrepresentation of ‘white males’ in positions of power and privilege, to cite similar statistics demonstrating the even greater per capita overrepresentation of Jews in these exact same positions of power and privilege is somehow deemed beyond the pale, and evidence, not of leftist sympathies, but rather of being ‘far right’. 
 
This is despite the fact that the average earnings of American Jews, and their level of overrepresentation in influential positions in government, politics, media and business relative to population size, surely far outstrip those of any other demographic – white males, and indeed White Anglo-Saxon Protestants, very much included.

The Myth of the Gender Pay Gap 

One area where Browne claims that the “politically correct truth” conflicts with the “factually correct truth” is the causes of the gender pay-gap (p8; p59-60). 
 
This is also included by philosopher David Conway as one of six issues, raised by Browne in the main body of the text, for which Conway provides supportive evidence in an afterword entitled ‘Commentary: Evidence supporting Anthony Browne’s Table of Truths Suppressed by PC’, included as a sort of appendix in later editions of Browne’s book. 
 
Although this was still standard practice in mainstream journalism at the time his book was written, it is regrettable that Browne himself offers no sources to back up the statistics he cites in the text.

This commentary section therefore provides the only real effort to provide sources or citations for many of Browne’s claims. Unfortunately, however, it covers only a few of the many issues addressed by Browne in preceding pages. 
 
In support of Browne’s contention that “different work/life choices” and “career breaks” underlie the gender pay gap (p8), Conway cites the work of sociologist Catherine Hakim (p101-103). 
 
Actually, more comprehensive expositions of the factors underlying the gender pay gap are provided by Warren Farrell in Why Men Earn More (which I have reviewed here, here and here) and Kingsley Browne in Biology at Work: Rethinking Sexual Equality (which I have reviewed here and here). 
 
Moreover, while it is indeed true that the pay-gap can largely be explained by what economists call ‘compensating differentials’ – e.g. the fact that men work longer hours, in more unpleasant and dangerous working conditions, and for a greater proportion of their adult lives – Browne fails to factor in the final and decisive feminist fallacy regarding the gender pay gap, namely the assumption that, because men earn more money than women, they necessarily have more money than women and are wealthier.

In fact, however, although men earn more money than women, much of this money is then redistributed to women via such mechanisms as marriage, alimony, maintenance, divorce settlements and the culture of dating.

Indeed, as I have previously written elsewhere:

The entire process of conventional courtship is predicated on prostitution, from the social expectation that the man will pay for dinner on the first date, to the legal obligation that he continue to provide for his ex-wife through alimony and maintenance for anything up to ten or twenty years after he has belatedly rid himself of her.

Therefore, much of the money earnt by men is actually spent by, or on, their wives, ex-wives and girlfriends (not to mention daughters), such that, although women earn less than men, women have long been known in the marketing industry to account for about 80% of consumer spending.
 
Browne does usefully address another area in which the demand for equal pay has resulted in injustice – namely the demand for equal prizes for male and female athletes at the Wimbledon Tennis Championships (a demand since cravenly capitulated to). Yet, as Browne observes: 

“Logically, if the prize doesn’t discriminate between men and women, then the competition that leads to those prizes shouldn’t either… Those who insist on equal prizes, because anything else is discrimination, should explain why it is not discrimination for men to be denied an equal right to compete for the women’s prize.” (p77) 

Thus, Browne perceptively observes: 

“It would currently be unthinkable to make the same case for a ‘white’s only’ world athletics championship… [Yet] it is currently just as pointless being a white 100 metres sprinter in colour-blind sporting competitions as it would be being a woman 100 metres sprinter in gender-blind sporting competitions” (p77). 

International Aid 

Another topic addressed by both Browne (p8) and Conway (p113-115) is the reasons for African poverty. 

The politically correct explanation, according to Browne, is that African poverty results from inadequate international aid (p8). However, Browne observes: 

“No country has risen out of poverty by means of international aid and cancelling debts” (p20).[7]

Moreover, Browne points out that fashionable policies such as “writing off Third World debt” produce perverse incentives by “encourag[ing] excessive and irresponsible borrowing by governments” (p48), while international aid encourages economic dependence, bureaucracies and corruption (p114).

Actually, in my experience, the usual explanation given for African underdevelopment is not, as Conway suggests, inadequate international aid as such. After all, this explanation only begs the question as to how Western countries such as those in Europe achieved First World status back when there were no other wealthy First World countries around to provide them with international aid to assist with their development.

Instead, in my experience, most leftists blame African poverty and underdevelopment on the supposed legacy of European colonialism. Thus, it is argued that European nations, and indeed white people in general, are themselves to blame for the poverty of Africa. International aid is then reimagined as a form of recompense for past wrongs. 

Unfortunately, however, this explanation for African poverty fares little better. 
 
For one thing, it merely begs the question of why it was Africa that was colonized by Europeans rather than vice versa.

The answer, of course, is that much of sub-Saharan Africa was ‘underdeveloped’ (i.e. socially and technologically backward) even before colonization. This was indeed precisely what allowed Africa to be so easily and rapidly conquered and colonized during the late-nineteenth and early-twentieth centuries. 
 
Moreover, if European colonization is really to blame for the poverty of so much of sub-Saharan Africa, then why is it that those few African countries largely spared European colonization, such as Liberia and Ethiopia, are among the most dysfunctional and worst-off in the whole sad and sorry continent? 

The likely answer is that they are worse off than their African neighbours precisely because they lack the infrastructure (e.g. roads, railroads) that the much-maligned European colonial overlords were responsible for bequeathing other African states.

In other words, far from holding Africa back, European colonizers often built what little infrastructure and successful industry sub-Saharan Africa still has, and African countries are poor despite colonialism rather than because of it.

This is also surely why, prior to the transition to black-majority rule, South Africa and Rhodesia (now Zimbabwe) enjoyed some of the highest living-standards in Africa, with South Africa long regarded as the only ‘developed economy’ in the entire continent during the apartheid-era.

Further falsifying the assumption that the experience of European colonialism invariably impeded the economic development of those regions formerly subject to European colonial rule is the experience of former European colonies in parts of the world other than Africa.

Here, there have been many notable success stories, including Malaysia, Singapore, Hong Kong, even India, not to mention Canada, Australia and New Zealand, all of which were former European colonies, and many of which gained their independence at around the same time as African polities.

An experience with European colonization is, it seems, no bar to economic development outside of Africa. Why then has the experience in Africa itself been so different?

Browne and Conway place the blame firmly on Africans themselves – but on African rulers rather than the mass of African people. The real reason for African poverty is simply “bad governance” on the part of Africa’s post-colonial rulers (p8).

“Poverty in Africa has been caused by misrule rather than insufficient aid” (p113).

Unfortunately, however, this is hardly a complete explanation, since it merely begs the question as to why Africa has been so prone to “misrule” and “bad governance” in the first place.

It also begs the question as to why regions outside of Africa, but nevertheless populated by people of predominantly sub-Saharan African ancestry, such as Haiti and Jamaica (or even Baltimore and Detroit), are seemingly beset by just the same problems (e.g. chronic violent crime, poverty).

This latter observation, of course, suggests that the answer lies, not in African soil or geography, but rather in differences between races in personality, intelligence and behaviour.[8]

However, this is, one suspects, a conclusion too politically incorrect even for Browne himself to consider.

Is Browne a Victim of Political Correctness Himself? 

The foregoing discussion converges in suggesting a single overarching problem with Browne’s otherwise admirable dissection of the nature and effects of political correctness – namely that Browne, although ostensibly an opponent of political correctness, is, in reality, neither immune to the infection nor ever able to effect a full recovery. 
 
Browne himself observes: 

“Political correctness succeeds, like the British Empire, through divide and rule… The politically incorrect often end up appeasing political correctness by condemning fellow travellers” (p37). 

This is indeed a characteristic feature of witch-hunts, from Salem to McCarthy, whereby victims were able to partially absolve themselves by ‘outing’ fellow-travellers to be persecuted in their place. 
 
However, Browne himself provides a neat illustration of this very phenomenon when, having deplored the treatment of BNP supporters deprived of employment on account of their political views, he nevertheless issues the almost obligatory disclaimer, condemning the party as “odious” (p52).

In doing so, he ironically provides a perfect illustration of the very appeasement of political correctness that he himself identifies as central to its power. 
 
Similarly, it is notable that, in his discussion of the suppression of politically incorrect facts and theories, Browne nevertheless fails to address any of the most incendiary such facts and theories, such as those that resulted in death threats to the likes of Jensen, Pizzey and Steinmetz
 
After all, to discuss the really taboo topics would not only bring upon him even greater opprobrium than that which he already faced, but also likely deny him a mainstream platform in which to express his views altogether. 
 
Browne therefore provides his ultimate proof of the power of political correctness, not through the topics he addresses, but rather through those he conspicuously avoids. 
 
In failing to address these issues, whether out of fear of the consequences or genuine ignorance of the facts due to the media blackout on their discussion, Browne provides the definitive proof of his own fundamental thesis, namely that political correctness corrupts public debate and subverts free speech.

Endnotes

[1] After the resulting outcry, Gopal insisted she stood by her tweets, which, she claimed, “were very clearly speaking to a structure and ideology, not about people” – something not at all clear from how she expressed herself, and arguably inconsistent with it, given that it is only people who have, and lose, “lives”, not institutions or ideologies, and indeed only people, not institutions or ideologies, who can properly be described as “white”.

At best, her tweet was incendiary and grossly irresponsible in a time of increasing anti-white animosity, violence and rioting. At worst, it could be interpreted as a coded exhortation to genocide. Similarly, as far-right philosopher Greg Johnson points out: 

“When the Soviets spoke of ‘eliminating the kulaks as a class’, that was simply a euphemism for mass murder” (The White Nationalist Manifesto: p21). 

Similarly, the Nazis typically referred to the genocide of European Jewry only by such coded euphemisms as resettlement in the East and the Final Solution to the Jewish Question. In this light, it is notable that those leftists like Noel Ignatiev who talk of “abolishing the white race” but insist they are only talking of deconstructing the concept of ‘whiteness’, which is, they argue, a social construct, strangely never talk about ‘abolishing the black race’, or indeed any other race than whites, even though, according to their own ideology, all racial categories are social constructs invented to justify oppression and hence similarly artificial and malignant.

[2] Thus, according to the sort of evolutionary epistemology championed by, among others, Karl Popper, it is only if different theories are tested and subjected to falsification that we are able to assess their merits and thereby choose between them, and scientific knowledge is able to progress. If some theories are simply deemed beyond the pale a priori, then clearly this process of testing and falsification cannot properly occur.

[3] The book in which Horowitz wrote these words was published in 2003. Yet, today, some seventeen years later, “the era of the progressive witch-hunt”, far from abating, seems to be going into overdrive. By Horowitz’s reckoning, then, “the era of the progressive witch-hunt” is now approaching its fourth decade.

[4] Discrimination against men in the provision of insurance policies remains legal in most jurisdictions (e.g. the USA). However, sex discrimination in the provision of insurance policies was belatedly outlawed throughout the European Union at the end of 2012, due to a ruling of the European Court of Justice. This was many years after other forms of sex discrimination had been outlawed in most member-states. For example, in the UK, most other forms of gender discrimination were outlawed almost forty years previously under the 1975 Sex Discrimination Act. However, section 45 of this Act explicitly exempted insurance companies from liability for sex discrimination if they could show that the discriminatory practice they employed was based on actuarial data and was “reasonable”. Yet actuarial data could also be employed to justify other forms of discrimination, such as employers deciding not to employ women of childbearing age. However, this remained unlawful. This exemption was preserved by Section 22 of Part 5 of Schedule 3 of the new Equality Act 2010. As a result, as recently as 2010 insurance providers routinely charged young male drivers double the premiums demanded of young female drivers. Yet, curiously, the only circumstances in which insurance policy providers were barred from discriminating on the grounds of sex was where the differences result from the costs associated with pregnancy or to a woman’s having given birth under section 22(3)(d) of Schedule 3 – in other words, the only readily apparent circumstance where insurance providers might be expected to discriminate against women rather than men. Interestingly, even after the ECJ ruling, there is evidence that indirect discrimination against males continues, simply by using occupation as a marker for gender.

[5] Actually, the term ‘fascist’ is sometimes employed in this way by conservatives as well, as when they refer to certain forms of Islamic fundamentalism as Islamofascism or indeed when they refer to the stifling of debate, and of freedom of expression, by leftists as a form of ‘fascism’. 

[6] This use of the phrase ‘anti-Semitism’ in the context of criticism of Israel’s policies towards the Palestinians is ironic, at least from a pedantic etymological perspective, since the Palestinian people actually have a rather stronger claim to being a ‘Semitic people’, in both a racial and a linguistic sense, than do either Ashkenazi or Sephardi (if not Mizrahi) Jews.

[7] Actually, international aid may sometimes be partially successful. For example, the Marshall Plan for post-WWII Europe is sometimes credited as a success story, though some economists disagree. The success, or otherwise, of foreign aid seems, then, to depend, at least in part, on the identity of the recipients.

[8] For more on this plausible but incendiary theory, see IQ and the Wealth of Nations by Richard Lynn and Tatu Vanhanen and Understanding Human History by Michael Hart.

Richard Lynn’s ‘Race Differences in Intelligence’: Useful as a Reference Work, But Biased as a Book

[Warning: Vastly overlong book review. Casual reader beware.]
Race Differences in Intelligence: An Evolutionary Analysis, by Richard Lynn (Augusta, GA: Washington Summit, 2006) 

Richard Lynn’s ‘Race Differences in Intelligence’ is structured around his massive database of IQ studies conducted among different populations. This collection seems to be largely recycled from his earlier IQ and the Wealth of Nations, and subsequently expanded, revised and reused again in IQ and Global Inequality, The Global Bell Curve, and The Intelligence of Nations (as well as a newer edition of Race Differences in Intelligence, published in 2015). 

Thus, despite its subtitle, “An Evolutionary Analysis”, the focus is very much on documenting the existence of race differences in intelligence, not explaining how or why they evolved. The “Evolutionary Analysis” promised in the subtitle is actually almost entirely confined to the last three chapters. 

The choice of this as a subtitle is therefore misleading and presumably represents an attempt to cash in on the recent rise in, and popularity of, evolutionary psychology and other sociobiological explanations for human behaviours. 

However, whatever the inadequacies of Lynn’s theory of how and why race differences in intelligence evolved (discussed below), his documentation of the existence of these differences is indeed persuasive. The sheer number of studies and the relative consistency over time and place suggests that the differences are indeed real and there is therefore something to be explained in the first place. 

In this respect, it aims to do something similar to what was achieved by Audrey Shuey’s The Testing of Negro Intelligence, first published in 1958, which brought together a huge number of studies, and a huge amount of data, regarding the black-white test score gap in the US. 

However, whereas Shuey focused almost exclusively on the black-white test score gap in North America, Lynn’s ambition is much broader – namely, to review the data relating to the intelligence of all racial groups everywhere across the earth. 

Thus, Lynn declares that: 

“The objective of this book [is] to broaden the debate from the local problem of the genetic and environmental contributions to the difference between whites and blacks in the United States to the much larger problem of the determinants of the global differences between the ten races whose IQs are summarised” (p182). 

Therefore, his book purports to be: 

“The first fully comprehensive review… of the evidence on race differences in intelligence worldwide” (p2). 

Racial Taxonomy

Consistent with this, Lynn includes in his analysis data for many racial groups that rarely receive much if any coverage in previous works on the topic of race differences in intelligence. 

Relying on both morphological criteria and genetic data gathered by Cavalli-Sforza et al in The History and Geography of Human Genes, Lynn identifies ten separate human races. These are: 

1) “Europeans”; 
2) “Africans”; 
3) “Bushmen and Pygmies”; 
4) “South Asians and North Africans”; 
5) “Southeast Asians”; 
6) “Australian Aborigines”; 
7) “Pacific Islanders”; 
8) “East Asians”; 
9) “Arctic Peoples”; and 
10) “Native Americans”.

Each of these racial groups receives a chapter of their own, and, in each of the respective chapters, Lynn reviews published (and occasionally unpublished) studies that provide data on each group’s: 

  1. IQs
  2. Reaction times when performing elementary cognitive tasks; and
  3. Brain size

Average IQs 

The average IQs reported by Lynn are, he informs us, corrected for the Flynn Effect – i.e. the rise in IQs over the last century (p5-6).  

However, the Flynn Effect has occurred at different rates in different regions of the world. Likewise, the various environmental factors that have been proposed as possible explanations for the phenomenon (e.g. improved nutrition and health as well as increases in test familiarity, and exposure to visual media) have varied in the extent to which they are present in different places. Correcting for the Flynn Effect is therefore easier said than done. 
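To make concrete what such a correction involves, the following minimal sketch illustrates the standard logic: scores obtained on outdated norms are adjusted downward in proportion to the time elapsed since the test was normed. The three-points-per-decade rate and the function itself are my own illustrative assumptions, not Lynn’s actual procedure, which varies by test and country.

```python
# Minimal sketch of a naive Flynn-effect correction (illustrative only).
# Assumes a uniform gain of ~3 IQ points per decade between the year a test
# was normed and the year it was administered; actual corrections are more
# involved, and the rate of gain varies by region and by test.

FLYNN_GAIN_PER_DECADE = 3.0  # assumed average gain (illustrative)

def flynn_corrected_iq(raw_iq: float, year_normed: int, year_tested: int) -> float:
    """Deduct the norm inflation estimated to have accumulated since norming."""
    decades_elapsed = (year_tested - year_normed) / 10.0
    return raw_iq - FLYNN_GAIN_PER_DECADE * decades_elapsed

# Example: a score of 95 on a test normed in 1950 but administered in 1990
# is adjusted down to 83 under this (over-)simple assumption.
print(flynn_corrected_iq(95, 1950, 1990))  # -> 83.0
```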

IQs of “Hybrid Populations”

Lynn also discusses the average IQs of racially-mixed populations, which are, he reports, consistently intermediate between the average IQs of the two (or more) parent populations. 
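Under the simple additive model implicit in this expectation, the predicted average for a mixed population is just the admixture-weighted mean of the parent averages. As a purely illustrative sketch (the proportions and parent means below are hypothetical, not Lynn’s own figures):

$$\overline{\text{IQ}}_{\text{mixed}} \approx p\,\overline{\text{IQ}}_{A} + (1-p)\,\overline{\text{IQ}}_{B}$$

so that, for example, a population deriving half its ancestry from a parent population averaging 100 and half from one averaging 90 would be expected to average about 95, absent any non-additive effects.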

However, both hybrid vigour (heterosis), on the one hand, and hybrid incompatibility or outbreeding depression, on the other, could potentially complicate the assumption that racial hybrids should have average IQs intermediate between those of the two (or more) parent populations. 

Yet Lynn alludes to the possible effect of hybrid vigour only in relation to biracial people in Hawaii, not in relation to the other hybrid populations whose IQs he discusses, and never discusses the possible effect of hybrid incompatibility or outbreeding depression at all. 

Genotypic IQs 

Finally, Lynn also purports to estimate what he calls the “genotypic IQ” of at least some of the races discussed. This is a measure of genetic potential, distinguished from their actual realized phenotypic IQ. 

He defines the “genotypic IQ” of a population as the average score of a population if they were raised in environments identical to those of the group with whom they are being compared. 

Thus, he writes: 

“The genotypic African IQ… is the IQ that Africans would have if they were raised in the same environment as Europeans” (p69). 

The fact that lower-IQ groups generally provide their offspring with inferior environmental conditions is therefore irrelevant for determining their “genotypic IQ”. However, as Lynn himself later points out: 

“It is problematical whether the poor nutrition and health that impair the intelligence of many third world peoples should be regarded as a purely environmental effect or as to some degree a genetic effect arising from the low intelligence of the populations that makes them unable to provide good nutrition and health for their children” (p193). 

Also, Lynn does not explain why he uses Europeans as his comparison group – i.e. why the African genotypic IQ is “the IQ that Africans would have if they were raised in the same environment as Europeans”, as opposed to, say, if they were raised in the same environments as East Asians or Middle Eastern populations, or indeed in their own environments. 

Presumably this reflects historical reasons – namely, Europeans were the first racial group to have their IQs systematically measured – the same reason that European IQs are arbitrarily assigned an average score of 100. 

Reaction Times 

Reaction times refer to the time taken to perform so-called elementary cognitive tasks. These are tests where everyone can easily work out the right answer, but where the speed with which different people get there correlates with IQ. 

Arthur Jensen has championed reaction time as a (relatively more) direct measure of one key cognitive process underlying IQ, namely speed of mental processing. 

Yet individuals with quicker reaction times would presumably have an advantage in sports, since reacting to, say, the speed and trajectory of a ball in order to strike or catch it is analogous to an elementary cognitive task. 

However, despite lower IQs, African-Americans, and blacks resident in other western economies, are vastly overrepresented among elite athletes. 
 
To explain this paradox, Lynn distinguishes “reaction time proper” – i.e. when one begins to move one’s hand towards the correct button to press – from “movement time” – how long one’s hand takes to get there. 

Whereas whites generally react faster, Lynn reports that blacks have faster movement times (p58-9).[1] Thus, Lynn concludes: 

“The faster movement times of Africans may be a factor in the fast sprinting speed of Africans shown in Olympic records” (p58). 

However, psychologist Richard Nisbett reports that: 

“Across a host of studies, movement times are just as highly correlated with IQ as reaction times” (Intelligence and How to Get It: p222). 

Brain Size

Lynn also reviews data regarding the brain-size of different groups. 

The correlation between brain-size and IQ as between individuals is well-established (Rushton and Ankney 2009). 
 
As between species, brain-size is also thought to correlate with intelligence, at least after controlling for body-size. 

Indeed, since brain tissue is highly metabolically expensive, increases in brain-size would surely never have evolved without conferring some countervailing selective advantage. 

Thus, in the late-1960s, biologist HJ Jerison developed an equation to estimate an animal’s intelligence from its brain- and body-size alone. This is called the animal’s encephalization quotient. 
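In its commonly cited form for mammals, Jerison’s measure divides an animal’s observed brain mass by the brain mass expected for its body mass, roughly:

$$EQ = \frac{E}{0.12\,P^{2/3}}$$

where $E$ is brain mass and $P$ is body mass (both in grams), so that an EQ of 1 denotes a typical mammal of that size and values above 1 a relatively large-brained one. (The constant and exponent here follow the version usually attributed to Jerison; published variants differ slightly.)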
 
However, comparing the intelligence of different species poses great difficulties.[2]

In short, if you think a ‘culture fair’ IQ test is an impossibility, then try designing a ‘species fair’ test! 
 
Moreover, dwarves have smaller absolute brain-sizes but larger brains relative to body-size, yet usually have normal IQs. 

Sex differences in IQ, meanwhile, are smaller than those between races even though differences in brain-size are greater, at least before one introduces controls for body-size. 
 
Also, Neanderthals had larger brains than modern humans, despite a shorter, albeit more robust, stature.

One theory has it that population differences in brain-size reflect a climatic adaptation that evolved in order to regulate temperature, in accordance with Bergmann’s Rule. This seems to be the dominant view among contemporary biological anthropologists, at least those who deign (or dare) to even discuss this politically charged topic.[3] 

Thus, in one recent undergraduate textbook in biological anthropology, authors Mielke, Konigsberg and Relethford contend: 

“Larger and relatively broader skulls lose less heat and are adaptive in cold climates; small and relatively narrower skulls lose more heat and are adaptive in hot climates” (Human Biological Variation: p285). 

On this view, head size and shape represent a means of regulating the relative ratio of surface-area-to-volume, since this determines the proportion of the body that is directly exposed to the elements.
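The underlying geometry is elementary: surface area grows more slowly than volume, so larger and rounder forms expose relatively less surface to the elements. For a sphere, for instance,

$$\frac{SA}{V} = \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} = \frac{3}{r},$$

so doubling the radius halves the surface-area-to-volume ratio, which is why bulkier, rounder heads and bodies retain heat relatively better in cold climates, while smaller or more elongated ones shed it more readily in hot ones.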

The Bergmann and Allen rules likely also explain at least some of the variation in body-size and stature as between racial groups. 

For example, Eskimos tend to be short and stocky, with short arms and legs and flat faces. This minimizes the ratio of surface-area-to-volume, ensures only a minimal proportion of the body is directly exposed to the elements, and also minimizes the extent of extremities (e.g. arms, legs, noses), which are especially vulnerable to the cold. 

In contrast, populations from tropical climates, such as African blacks and Australian Aboriginals, tend to have relatively long arms and legs as compared to trunk size, a factor which likely contributes towards their success in some athletic events. 

However, with regard to the size and shape of skulls (and of brains), it is surely implausible that an increase in brain tissue, which is metabolically highly expensive, would have evolved solely for the purpose of regulating temperature, when the same result could surely have been achieved by modifying only the external shape of the skull. 
 
Conversely, even if race differences in brain-size did evolve purely for temperature regulation, differences in intelligence could still have emerged as a by-product of such selection.

In other words, if larger brains did evolve among populations inhabiting colder latitudes solely for the purposes of temperature regulation, the extra brain tissue that resulted may still have resulted in greater levels of cognitive ability among these populations, even if there was no direct selection for increased cognitive ability itself.

Europeans

The first racial group discussed by Lynn are those he terms “Europeans” (i.e. white Caucasians). He reviews data on IQ both in Europe and among diaspora populations elsewhere in the world (e.g. North America, Australia). 

The results are consistent, almost always giving an average IQ of about 100 – though this figure is, of course, arbitrary and reflects the fact that IQ tests were first normed by reference to European populations. This is what James Thompson refers to as the ‘Greenwich mean IQ’ and the IQs of all other populations in Lynn’s book are calculated by reference to this figure. 
 
Southeast Europeans, however, score slightly lower. This, Lynn argues, is because: 

“Balkan peoples are a hybrid population or cline, comprising a genetic mix between the Europeans and South Asians in Turkey” (p18). 

Therefore, as a hybrid population, their IQs are intermediate between those of the two parent populations, and, according to Lynn, South Asians score somewhat lower in IQ than do white European populations (see below).[4]

In the newer 2015 edition, Lynn argues that IQs are somewhat lower elsewhere in southern Europe, namely southern Spain and Italy, for much the same reason, namely because: 

“The populations of these regions are a genetic mix of European people with those from the Near East and North Africa, with the result that their IQs are intermediate between the parent populations” (Preface, 2015 Edition).[5]

An alternative explanation is that these regions (e.g. Balkan countries, Southern Italy) have lower living-standards. 

However, instead of viewing differences in living standards as causing differences in recorded IQs as between populations, Lynn argues that differences in innate ability themselves cause differences in living standards, because, according to Lynn, more intelligent populations are better able to achieve high levels of economic development (see IQ and the Wealth of Nations).[6]

Moreover, Lynn observes that in Eastern Europe, living standards are substantially below elsewhere in Europe as a consequence of the legacy of communism. However, populations from Eastern Europe score only slightly below those from elsewhere in Europe, suggesting that even substantial differences in living-standards may have only a minor impact on IQ (p20). 

Portuguese 

The Portuguese also, Lynn claims, score lower than elsewhere in Europe. 

However, he cites just two studies. These give average IQs of 101 and 88 respectively, which Lynn averages to 94.5 (p19). 

Yet these two results are actually highly divergent, the former being slightly higher than the average for north-west Europe. This seems an inadequate basis on which to posit a genetic difference in ability. 

However, Lynn provocatively concludes: 

“Intelligence in Portugal has been depressed by the admixture of sub-Saharan Africans. Portugal was the only European country to import black slaves from the fifteenth century onwards” (p19). 

This echoes Arthur de Gobineau’s infamous theory that empires decline because, through their empires, they conquer large numbers of inferior peoples, who then inevitably interbreed with their conquerors, which, according to de Gobineau, results in the dilution of the very qualities that permitted their imperial glories in the first place. 

In support of Lynn’s theory, mitochondrial DNA studies have indeed found a higher frequency of the sub-Saharan African haplogroup L in Portugal than elsewhere in Europe (e.g. Pereira et al 2005). 

Ireland and ‘Selective Migration’ 

IQs are also, Lynn reports, somewhat lower than elsewhere in Europe in Ireland. 

Lynn cites four studies of Irish IQs which give average scores of 87, 97, 93 and 91 respectively. Again, these are rather divergent but nevertheless consistently below the European average, all but one substantially so. 
 
Of course, in England, in less politically correct times, the supposed stupidity of the Irish was once a staple of popular humour, Irish jokes being the English equivalent of Polish jokes in America.[7]
 
This seems anomalous given the higher average IQs recorded elsewhere in North-West Europe, especially the UK, Ireland’s next-door neighbour, whose populations are closely related to those in Ireland. 
 
Of course, historically Ireland was, until relatively recently, quite poor by European standards. 

It is also sparsely populated, a relatively high proportion of the population lives in rural areas, and there is some evidence that people from rural areas have lower average IQs than those from urban areas.

However, economic deprivation cannot explain the disparity. Today, despite the 2008 economic crash, and inevitable British bailout, Ireland enjoys, according to the UN, a higher Human Development Index than does the UK, and has done for some time. Indeed, by this measure, Ireland enjoys one of the highest standards of living in the world.

Moreover, although formerly Ireland was much poorer, the studies cited by Lynn were published from 1973 to 1993, yet show no obvious increase over time.[8] 
 
Lynn himself attributes the depressed Irish IQ to what he calls ‘selective migration’, claiming: 

“There has been some tendency for the more intelligent to migrate, leaving less intelligent behind” (p19). 

Of course, this would suggest, not only that the remaining Irish would have lower average IQs, but also that the descendants of Irish émigrés in Britain, Australia, America and other diaspora communities would have relatively higher IQs than other white people. 

In support of this, Americans reporting Irish ancestry do indeed enjoy higher relative incomes as compared to other white American ethnicities. 

Interestingly, Lynn also invokes “selective migration” to explain the divergences in East Asian IQs. Here, however, it was supposedly the less intelligent who chose to migrate (p136; p138; p169).[9]

Meanwhile, other hereditarians have sought to explain away the impressive academic performance of recent African immigrants to the West, and their offspring, by reference to selective immigration of high IQ Africans, an explanation which is wholly inadequate on mathematical grounds alone (see Chisala 2015b; 2019).

It certainly seems plausible that migrants differ in personality from those who choose to remain at home. It is likely that they are braver, have greater determination, drive and willpower than those who choose to stay behind. They may also perhaps be less ethnocentric, and more tolerant of foreign cultures.[10]

However, I see no obvious reason they would differ in intelligence.

As Chanda Chisala writes:

“Realizing that life is better in a very rich country than in your poor country is never exactly the most g-loaded epiphany among Africans” (Chisala 2015b).

Likewise, it likely didn’t take much brain-power for Irish people to realize during the Irish Potato Famine that they were less likely to starve to death if they emigrated abroad.

Of course, wealth is correlated with intelligence and may affect the decision to migrate.

The rich usually have little economic incentive to migrate, while the poor may be unable to afford the often-substantial costs of migration (e.g. transportation).

However, without actual historical data showing certain socioeconomic classes or intellectual ability groups were more likely to migrate than others, Lynn’s claims regarding ‘selective migration’ represent little more than a post-hoc rationalization for IQ differences that are otherwise anomalous and not easily explicable in terms of heredity. 

Ireland, Catholicism and Celibacy

Interestingly, in the 2015 edition of ‘Race Differences in Intelligence’, Lynn proposes an additional explanation for the low IQs supposedly found in Ireland, namely the clerical celibacy demanded under Catholicism. Thus, Lynn argues:

“There is a dysgenic effect of Roman Catholicism, in which clerical celibacy has reduced the fertility of some of the most intelligent, who have become priests and nuns” (2015 Edition; see also Lynn 2015). 

Of course, this theory presupposes that it was indeed the most intelligent among the Irish people who became priests. However, this is a questionable assumption, especially given the well-established inverse correlation between intelligence and religiosity (Zuckerman et al 2013).

However, it is perhaps arguable that, in an earlier age, when religious dogmas were relentlessly enforced, religious scholarship may have been the only form of intellectual endeavour that it was safe for intellectually-minded people to engage in.

Anyone investigating more substantial matters, such as whether the earth revolved around the sun or vice versa, was liable to be burnt at the stake if he reached the wrong (i.e. the right) conclusion.

However, such an effect would surely apply equally in other historically Catholic countries.

Yet there is little if any evidence of depressed IQs in, say, France or Austria, although the populations of both these countries were, until recently, like that of Ireland, predominantly Catholic.[11]

Africans 

The next chapter is titled “Africans”. However, Lynn uses this term to refer specifically to black Africans – i.e. those formerly termed ‘Negroes’. He therefore excludes from this chapter, not only the predominantly ‘Caucasoid’ populations of North Africa, but also African Pygmies and the Khoisan of southern Africa, who are considered separately in a chapter of their own. 

Lynn’s previous estimate of the average sub-Saharan African IQ as just 70 provoked widespread incredulity and much criticism. However, undeterred, Lynn now goes even further, estimating the average African IQ even lower, at just 67.[12]

Curiously, according to Lynn’s data, populations from the Horn of Africa (e.g. Ethiopia and Somalia) have IQs no higher than populations elsewhere in sub-Saharan Africa.[13]

Yet populations from the Horn of Africa are known to be partly, if not predominantly, Caucasoid in ancestry, having substantial genetic affinities with populations from the Middle East.[14]

Therefore, just as populations from Southern Europe have lower average IQs than other Europeans because, according to Lynn, they are genetically intermediate between Europeans and Middle Eastern populations, so populations from the Horn of Africa should score higher than those from elsewhere in sub-Saharan Africa because of intermixture with Middle Eastern populations.

However, Lynn’s data gives average IQs for Ethiopia and Somalia of just 68 and 69 respectively – no higher than elsewhere in sub-Saharan Africa (The Intelligence of Nations: p87; p141-2).

On the other hand, blacks resident in western economies score rather higher, with average IQs around 85. 

The only exception, strangely, is the Beta Israel, who also hail from the Horn of Africa, but are now mostly resident in Israel, yet who score no higher than those blacks still resident in Africa. From this, Lynn concludes:

“These results suggest that education in western schools does not benefit the African IQ” (p53). 

However, why then do blacks resident in other western economies score higher? Are blacks in Ethiopia somehow treated differently than those resident in the UK, USA or France? 

For his part, Lynn attributes the higher scores of blacks resident in these other Western economies to both superior economic conditions and, more controversially, to racial admixture. 

Thus, African-Americans in particular are known to be a racially-mixed population, with substantial European ancestry (usually estimated at around 20%) in addition to their African ancestry.[15]

Therefore, Lynn argues that the higher IQs of African-Americans reflect, in part, the effect of the European portion of their ancestry. 

However, this explanation is difficult to square with the observation that recent African immigrants to the US, themselves presumably largely of unmixed African descent, actually consistently outperform African-Americans (and sometimes whites as well) both academically and economically (Chisala 2015a; 2015c; Anderson 2015).[16]

“Musical Ability” 

Lynn also reviews the evidence pertaining to one class of specific mental ability not covered in most previous reviews on the subject – namely, race differences in musical ability. 

The accomplishments of African-Americans in twentieth century jazz and popular music are, of course, much celebrated. To Lynn, however, this represents a paradox, since musical abilities are known to correlate with general intelligence and African-Americans generally have low IQs. 
 
In addressing this perceived paradox, Lynn reviews the results of various psychometric measures of musical ability. These tests include: 

  • Recognizing a change in pitch; 
  • Remembering a tune; 
  • Identifying the constituent notes in a chord; and 
  • Recognizing whether different songs have similar rhythm (p55). 

In relation to these sorts of tests, Lynn reports that African-Americans actually score somewhat lower than whites on most elements of musical intelligence, and that their musical ability is indeed generally commensurate with their lower general IQs. 

The only exception is for rhythmical ability. 

This is, of course, congruent with the familiar observation that black musical styles place great emphasis on rhythm. 

However, even with respect to rhythmical ability, blacks score no higher than whites. Their scores on measures of rhythm are exceptional only in that this is the one form of musical ability on which blacks score equal to whites (p56). 

For Lynn, the low scores of African-Americans in psychometric tests of musical ability are, on further reflection, little surprise. 

“The low musical abilities of Africans… are consistent with their generally poor achievements in classical music. There are no African composers, conductors, or instrumentalists of the first rank and it is rare to see African players in the leading symphony orchestras” (p57). 

However, who qualifies as a composer, conductor or instrumentalist “of the first rank” is, ultimately, unlike the results of psychometric testing, a subjective assessment, as are all artistic judgements. 

Moreover, why is achievement in classical music, an obviously distinctly western genre of music, to be taken as the sole measure of musical accomplishment? 

Even if we concede that the ability required to compose and perform classical music is greater than that required for other genres (e.g. jazz and popular music), musical intelligence surely facilitates composition and performance in other genres too – and, given the financial rewards offered by popular music often dwarf those enjoyed by players and composers of classical music, the more musically-gifted race would have every incentive to dominate this field too. 

Perhaps, then, these psychometric measures fail to capture some key element of musical ability relevant to musical accomplishment, especially in genres other than classical. 

In this context, it is notable that no lesser champion of standardized testing than Arthur Jensen has himself acknowledged that intelligence tests are incapable of measuring creativity (Langan & LoSasso 2002: p24-5). 

In particular, one feature common to many African-American musical styles, from rap freestyling to jazz, is improvisation.  

Thus, Dinesh D’Souza speculates tentatively that: 

“Blacks have certain inherited abilities, such as improvisational decision making, that could explain why they predominate in… jazz, rap and basketball” (The End of Racism: p440-1). 

Steve Sailer rather less tentatively expands upon this theme, positing an African advantage in: 

“Creative improvisation and on-the-fly interpersonal decision-making” (Sailer 1996). 

On this basis, Sailer concludes that: 

“Beyond basketball, these black cerebral superiorities in ‘real time’ responsiveness also contribute to black dominance in jazz, running with the football, rap, dance, trash talking, preaching, and oratory” (Sailer 1996). 

“Bushmen and Pygmies” 

Grouped together as the subjects of the next chapter are black Africans’ sub-Saharan African neighbours, namely San Bushmen and Pygmies.

Quite why these two populations are grouped together by Lynn in a single chapter is unclear. 

He cites Cavalli-Sforza et al in The History and Geography of Human Genes as providing evidence that: 

“These two peoples have distinctive but closely related genetic characteristics and form two related clusters” (p73). 

However, although both groups are obviously indigenous to sub-Saharan Africa and quite morphologically distinct from the other black African populations who today represent the great majority of the population of sub-Saharan Africa, they share no especial morphological similarity to one another.[17]

Moreover, since Lynn acknowledges that they have “distinctive… genetic characteristics and form two… clusters”, they presumably should each have merited chapters of their own.[18]

One therefore suspects that they are lumped together more for convenience than on legitimate taxonomic grounds. 

In short, both are marginal groups of hunter-gatherers, now few in number, few if any of whom have been exposed to the sort of standardized testing necessary to provide a useful estimate of their average IQs. Therefore, since his data on neither group alone is really sufficient to justify its own chapter, he groups them together in a single chapter.  

However, the lack of IQ data for either group means that even this combined chapter remains one of the shorter chapters in Lynn’s book, and, as we will see, the paucity of reliable data on the cognitive ability of either group leads one to suspect that he might have been better off omitting both groups from his survey of race differences in cognitive ability altogether. 

San Bushmen 

It may be some meagre consolation to African blacks that, at least in Lynn’s telling, they no longer qualify as the lowest scoring racial group when it comes to IQ. Instead, this dubious honour is now accorded their sub-Saharan African neighbours, San Bushmen
 
In Race: The Reality of Human Differences (which I have reviewed here and here), authors Vincent Sarich and Frank Miele quote anthropologist and geneticist Henry Harpending as observing: 

“All of us have the impression that Bushmen are really quick and clever and are quite different from their [Bantu] neighbors… Bushmen don’t look like their black African neighbors either. I expect that there will soon be real data from the Namibian school system about the relative performance of Bushmen… and Bantu kids – or more likely, they will suppress it” (Race: The Reality of Human Differences: p227). 

Today, however, some fifteen or so years after Sarich and Miele published this quotation, the only such data I am aware of is that reported by Lynn in this book, which suggests, at least according to Lynn, a level of intelligence even lower than that of other sub-Saharan Africans. 

Unfortunately, however, the data in question is very limited and, in my view, inadequate to support Lynn’s controversial conclusions regarding Bushman ability.  

Indeed, it consists of just three studies, none of which remotely resembles a full IQ test (p74-5). 

Yet, from this meagre dataset, Lynn does not hesitate to attribute to Bushmen an average IQ of just 54. 

If Lynn’s estimate of the average sub-Saharan African IQ at around 70 provoked widespread incredulity, then his much lower estimate for Bushmen is unlikely to fare better. 

Lynn anticipates such a reaction, and responds by pointing out:  

“An IQ of 54 represents the mental age of the average European 8-year-old, and the average European 8-year-old can read, write, and do arithmetic and would have no difficulty in learning and performing the activities of gathering foods and hunting carried out by the San Bushmen. An average 8-year-old can easily be taught to pick berries put them in a container and carry them home, collect ostrich eggs and use the shells for storing water and learn how to use a bow and arrow” (p76). 

Indeed, Lynn continues, other non-human animals survive in difficult, challenging environments with even lower levels of intelligence:  

“Apes with mental abilities about the same as those of human 4-year olds survive quite well as gatherers and occasional hunters and so also did early hominids with IQs around 40 and brain sizes much smaller than those of modern Bushmen. For these reasons there is nothing puzzling about contemporary Bushmen with average IQs of about 54” (p77). 

Here, Lynn makes an important point. Many non-human animals survive and prosper in ecologically challenging environments with levels of intelligence much lower than that of any hominid, let alone any extant human race. 

On the other hand, however, I suspect Lynn would not last long in the Kalahari Desert – the home environment of most contemporary Bushmen.

Pygmies 

Lynn’s data on the IQs of Pygmies is even more inadequate than his data for Bushmen. Indeed, it amounts to just one study, which again fell far short of a full IQ test. 

Moreover, the author of the study, Lynn reports, did not quantify his results, reporting only that Pygmies scored “much worse” than other populations tested using the same test (p78). 

However, while the other populations tested using the same test and outperforming Pygmies included “Eskimos, Native American and Filipinos”, Lynn conspicuously does not mention whether they included other black Africans, or indeed other very low-IQ groups such as Australian Aboriginals (p78). 

Thus, Lynn’s assumption that Pygmies are lower in cognitive ability than other black Africans is not supported even by the single study that he cites. 

Lynn also infers a low level of intelligence for Pygmies from their lifestyle and mode of sustenance: 

“Most of them still retain a primitive hunter-gatherer existence while many of the Negroid Africans became farmers over the last few hundred years” (p78). 

Thus, Lynn assumes that whether a population has successfully transitioned to agriculture is largely a product of their intelligence (p191). 

In contrast, most historians and anthropologists would emphasize the importance of environmental factors in explaining whether a group transitions to agriculture.[19]

Finally, Lynn also infers a low IQ from the widespread enslavement of Pygmies by neighbouring Bantus: 

“The enslavement of Pygmies by Negroid Africans is consistent with the general principle that the more intelligent races generally defeat and enslave the less intelligent, just as Europeans and South Asians have frequently enslaved Africans but not vice versa” (p78). 

However, while it may be a “general principle that the more intelligent races typically defeat and enslave the less intelligent” (p78), this is hardly a rigid rule. 

After all, Arabs often enslaved Europeans.[20] Yet, according to Lynn, the Arabs belong to a rather less intelligent race than do the Europeans whom they so often enslaved. 

Interestingly, Pygmies are the only racial group included in Lynn’s survey for whom he does not provide an actual figure as an estimate of their average IQ, which presumably reflects a tacit admission of the inadequacy of the available data.[21] 

Curiously, unlike for all the other racial groups discussed, Lynn also fails to provide any data on Pygmy brain-size. 

Presumably, Pygmies have small brains as compared to other races, if only on account of their smaller body-size – but what about their brain-size relative to body-size? Is there simply no data available?

Australian Aborigines 

Another group who are barely mentioned at all in most previous discussions of the topic of race differences in intelligence are Australian Aborigines. Here, however, unlike for Bushmen and Pygmies, data from Australian schools are actually surprisingly abundant. 

These give, Lynn reports, an average Aboriginal IQ of just 62 (p104). 

Unlike his estimates for Bushmen and Pygmies, this figure seems to be reliable, given the number of studies cited and the consistency of their results. One might say, then, that Australian Aboriginals have the lowest recorded IQs of any human race for whom reliable data is available. 

Interestingly, in addition to his data on IQ, Lynn also reports the results of Piagetian measures of development conducted among Aboriginals. He reports, rather remarkably, that a large minority of Aboriginal adults fail to reach what Piaget called the concrete operational stage of development – or, more specifically, fail to recognize that a substance transferred to a new container necessarily remains the same in quantity (p105-7). 

Perhaps even more remarkable, however, are reports of Aborigine spatial memory (p107-8). This refers to the ability to remember the location of objects, and their locations relative to one another. 

Thus, he reports, one study found that, despite their low general cognitive ability, Aborigines nevertheless score much higher than Europeans in tests of spatial memory (Kearins 1981).  

Another study found no difference in the performance of whites and Aborigines (Drinkwater 1975). However, since Aborigines have much lower IQs overall, even equal performance on spatial memory is still out of line with the relative performance of whites and Aborigines on other types of intelligence test (p108). 

Lynn speculates that Aboriginal spatial memory may represent an adaptation to facilitate navigation in a desert environment with few available landmarks.[22]

The difference, Lynn argues, seems to be innate, since it was found even among Aborigines who had been living in an urban environment (i.e. not a desert) for several generations (p108; but see Kearins 1986). 

Two other studies reported lower spatial memory scores for Aborigines than for Europeans. However, one was an unpublished dissertation and hence must be treated with caution, while the other (Knapp & Seagrim 1981) “did not present his data in such a way that the magnitude of the white advantage can be calculated” (p108). 

Intriguingly, Lynn reports that this ability even appears to be reflected in neuroanatomy. Thus, despite smaller brains overall, Aborigines’ right visual cortex, implicated in spatial ability, is relatively larger than in Europeans (Klekamp et al 1987; p108-9).

New Guineans and Jared Diamond 

In his celebrated Guns, Germs and Steel, Jared Diamond famously claimed: 

“In mental ability New Guineans are probably genetically superior to Westerners, and they surely are superior in escaping the devastating developmental disadvantages under which most children in industrialized societies grow up” (Guns, Germs and Steel: p21). 

Diamond bases this claim on the fact that, in the West, survival, throughout most of our recent history, depended on who was struck down by disease, which was largely random. 

In contrast, in New Guinea, he argues, people had to survive on their wits, with survival depending on one’s ability to procure food and avoid homicide, activities in which intelligence was likely to be at a premium (Guns, Germs and Steel: p20-21). 

He also argues that the intelligence of western children is likely reduced because they spend too much time watching television and movies (Guns, Germs and Steel: p21). 

However, there is no evidence that television has a negative impact on children’s cognitive development. Indeed, given that the rise in IQs over the twentieth century has been concomitant with increases in television viewing, it has even been speculated that increasingly stimulating visual media may have contributed to rising IQs. 

On the basis of two IQ studies, plus three studies of Piagetian development, Lynn concludes that the average IQ of indigenous New Guineans is just 62 (p112-3). 

This is, of course, exactly the same as his estimate for the average IQ of Australian Aboriginals.  

It is therefore consistent with Lynn’s racial taxonomy, since, citing Cavalli-Sforza et al, he classes New Guineans in the same genetic cluster, and hence as part of the same race, as Australian Aboriginals (p101). 

Pacific Islanders 

Other Pacific Islanders, however, including Polynesians, Micronesians, Melanesians and Hawaiians, are grouped separately and hence receive a chapter of their own. 

They also, Lynn reports, score rather higher in IQ, with most such populations having average IQs of about 85 (p117). However, the Māoris of New Zealand score rather higher, with an average IQ of about 90 (p116). 

Hawaiians and Hybrid Vigor 

For the descendants of the inhabitants of one particular group of Pacific islands, namely Hawaii, Lynn also reports data regarding the IQs of racially-mixed individuals, both those of part-Native-Hawaiian and part-East Asian ancestry, and those of part-Native-Hawaiian and part-European ancestry. 

These racial hybrids, as expected, score on average between the average scores for the two parent populations. However, Lynn reports: 

“The IQs of the two hybrid groups are slightly higher than the average of the two parent races. The average IQ of the Europeans and Hawaiians is 90.5, while the IQ of the children is 93. Similarly, the average IQ of the Chinese and Hawaiians is 90, while the IQ of the children is 91. The slightly higher than expected IQs of the children of the mixed race parents may be a hybrid vigor or heterosis effect” (p118). 

Actually, the difference between the “expected IQs” and the IQs actually recorded for the hybrid groups is so small (only one point for the Chinese-Hawaiians) that it could easily be dismissed as mere noise, and I doubt it would reach statistical significance. 
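
To put the one-point gap in perspective, a back-of-the-envelope check can be run with a simple two-sample t-test (a minimal sketch using purely hypothetical sample sizes and the conventional IQ standard deviation of 15; these are not Lynn’s actual data):

    # Hypothetical illustration: how large must the samples be before a 1-point
    # IQ gap (SD = 15) reaches conventional statistical significance?
    from scipy.stats import ttest_ind_from_stats

    SD = 15.0  # conventional IQ standard deviation
    for n in (50, 100, 500, 2000):  # hypothetical per-group sample sizes
        t, p = ttest_ind_from_stats(mean1=91.0, std1=SD, nobs1=n,
                                    mean2=90.0, std2=SD, nobs2=n)
        print(f"n per group = {n:4d}: t = {t:.2f}, p = {p:.3f}")

On these assumptions, only with something approaching two thousand test-takers in each group does a one-point difference reach the conventional five per cent significance threshold; with more modest samples, it is indistinguishable from sampling noise.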

Nevertheless, Lynn’s discussion raises the question of why hybrid vigor has not similarly elevated the IQs of the other racially-mixed populations discussed elsewhere in the book, and why Lynn does not address this issue when reporting their average IQs. 

Of course, while hybrid vigor is a real phenomenon, so are outbreeding depression and hybrid incompatibilities. 

Presumably, then, which of these countervailing effects outweighs the other for different types of hybrid depends on the degree of genetic distance between the two parent populations. This, of course, varies for different races. 

It is therefore possible that some racial mixes may tend to elevate intelligence, whereas others, especially between more distantly-related populations, may tend, on average, to depress intelligence. 

For what it’s worth, Pacific Islanders, including Hawaiians, are thought to be genetically closer to East Asians than to Europeans. 

South Asians and North Africans

Another group rarely treated separately in earlier works are those whom Lynn terms “South Asians and North Africans”, though this group also includes populations from the Middle East. 

Physical anthropologists often lumped these peoples together with Europeans as collectively “Caucasian” or “Caucasoid”. However, while acknowledging that they are “closely related to the Europeans”, Lynn cites Cavalli-Sforza et al as showing they form “a distinctive genetic cluster” (p79). 

He also reports that they score substantially lower in IQ than do Europeans. Their average IQ in their native homelands is just 84 (p80), while South Asians resident in the UK score only slightly higher with an average IQ of just 89 (p82-4). 

This conclusion is surely surprising and should, in my opinion, be treated with caution. 

For one thing, all of the earliest known human civilizations – namely, Mesopotamia, Egypt and the Indus Valley civilization – surely emerged among these peoples, or at least in regions today inhabited primarily by people of this race.[23]

Moreover, people of Indian ancestry in particular are today regarded as a model minority in both Britain and America, whose overrepresentation in the professions, especially medicine, is widely commented upon.[24]

Indeed, according to some measures, British-Indians are now the highest earning ethnicity in Britain, or the second-highest earning after the Chinese, and Indians are also the highest earners in the USA.[25]

Interestingly, in this light, one study cited by Lynn showed a massive gain of 14 points for children from India who had been resident in the UK for more than four years, as compared to those resident for less than four years, with the former scoring almost as high in IQ as the indigenous British, with an average IQ of 97 (p83-4; Mackintosh & Mascie-Taylor 1985).[26]

In the light of this study, it would be interesting to measure the IQs of a sample composed exclusively of people who traced their ancestry to India but who had been resident in the UK for the entirety of their lives (or even whose ancestors had been resident in the UK for several generations), since all of the other studies cited by Lynn of the IQs of Indian children in the UK presumably include both recent arrivals and long-term residents grouped together. 

Interestingly, the high achievement of immigrants, and their descendants, from India is not matched by those from neighbouring countries such as Bangladesh or Pakistan. Indeed, the same data suggesting that Indians are the highest earning ethnicity in Britain also show that British-Pakistanis and Bangladeshis are among the lowest earners.

The primary divide between these three countries is, of course, not racial but rather religious. This suggests religion as a causal factor in the difference.[27]

Thus, one study found that Muslim countries tend to have lower average IQs than do non-Muslim countries (Templer 2010; see also Dutton 2020). 

Perhaps, then, cultural practices in Muslim countries are responsible for reducing IQs. 

For example, the prevalence of consanguineous marriage (i.e. marriage between relatives), especially cross-cousin marriage, may have an effect on intelligence due to inbreeding depression (Woodley 2009). 

Another cultural practice that could affect intelligence in Muslim countries is the practice of even pregnant women fasting during daylight hours during Ramadan (cf. Aziz et al 2004). 

However, Lynn’s own data show little difference between IQs in India and those in Pakistan and Bangladesh, nor indeed between IQs in India and those in Muslim countries in the Middle East or North Africa. Nor, according to Lynn’s data, do people of Indian ancestry resident in the UK score noticeably higher in IQ than do people who trace their ancestry to Bangladesh, Pakistan or the Middle East. 

An alternative suggestion is that Middle-Eastern and North African IQs have been depressed as a result of interbreeding with sub-Saharan Africans, perhaps as a result of the Islamic slave trade.[28]

This is possible because, although male slaves in the Islamic world were routinely castrated and hence incapable of procreation, female slaves outnumbered males and were often employed as concubines, a practice which, unlike in puritanical North America, was regarded as perfectly socially acceptable on the part of slave owners. 

This would be consistent with the finding that Arab populations from the Middle East show some evidence of sub-Saharan African ancestry in their mitochondrial DNA, which is passed down the female line, but not in their Y-chromosome ancestry, passed down the male line (Richards et al 2003). 

In contrast, in the United States, the use of female slaves for sexual purposes, although it certainly occurred, was, at least in theory, very much frowned upon. 

In addition, in North America, due to the one-drop rule, all mixed-race descendants of slaves with any detectable degree of black African ancestry were classed as black. Therefore, at least in theory, the white bloodline would have remained ‘pure’, though some mixed-race individuals may have been able to pass

Therefore, sub-Saharan African genes may have entered the Middle Eastern and North African gene-pools in a way that they were largely unable to do among whites in North America. 

This might explain how genotypic intelligence among North African and Middle Eastern populations could have declined in the period since the great civilizations of Mesopotamia and ancient Egypt, and even since the Golden Age of Islam, when the intellectual achievements of Middle Eastern and North African peoples seemed so much more impressive.

Jews

Besides Indians, another economically and intellectually overachieving model minority who derive, at least in part, from the race whom Lynn classes as “South Asians and North Africans” are the Jews. 

Lynn has recently written a whole book on the topic of Jewish intelligence and achievement, titled The Chosen People: A Study of Jewish Intelligence and Achievement (review forthcoming). 

However, in ‘Race Differences in Intelligence’, Jews do not even warrant a chapter of their own. Instead, they are discussed only at the end of the chapter on “South Asians and North Africans”, although Ashkenazi Jews also have substantial European ancestry. 

The decision not to devote an entire chapter to the Jewish people is surely correct, because, although even widely disparate groups (e.g. Ashkenazim, Sephardim and Mizrahim, even the Lemba) do indeed share genetic affinities, Jews are not racially distinct (i.e. reliably physically distinguishable on phenotypic criteria) from other peoples. 

However, the decision to include them in the chapter on “South Asians and North Africans” is potentially controversial, since, as Lynn readily acknowledges, the Ashkenazim in particular, who today constitute the majority of world Jewry, have substantial European as well as Middle Eastern ancestry. 

Lynn claims British and US Jews have average IQs of around 108 (p68). His data for Israel are not broken down by ethnicity, but give an average IQ for Israel as a whole of 95, from which Lynn, rather conjecturally, infers scores of 103 for Ashkenazi Jews, 91 for Mizrahi Jews and 86 for Palestinian-Arabs (p94). 
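
For what it is worth, Lynn’s conjectured subgroup figures are at least arithmetically consistent with the national average of 95, if one assumes rough, illustrative population shares of 40 per cent Ashkenazim, 40 per cent Mizrahim and 20 per cent Arabs (these proportions are my own assumption for the purpose of the arithmetic, not figures given by Lynn):

0.4 × 103 + 0.4 × 91 + 0.2 × 86 ≈ 94.8 ≈ 95

Arithmetic consistency is, however, no substitute for ethnically disaggregated test data.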

Lynn’s explanations for Ashkenazi intelligence, however, are wholly unpersuasive. 

First, he observes that, despite Biblical and Talmudic admonitions against miscegenation with Gentiles, Jews inevitably interbred to some extent with the host populations alongside whom they lived. From this, Lynn infers that: 

“Ashkenazim Jews in Europe will have absorbed a significant proportion of the genes for higher intelligence possessed by… Europeans” (p95). 

It is true that if, as Lynn claims, Europeans are a more intelligent race than populations from the Middle East, then interbreeding with Europeans may indeed explain how Ashkenazim came to score higher in IQ than do other populations tracing their ancestry to the Middle East. 

However, interbreeding with Europeans can hardly explain how Ashkenazi Jews came to outscore, and outperform academically and economically, even the very Europeans with whom they are said to have interbred! 

Interbreeding with Europeans therefore cannot explain why Ashkenazim have higher IQs than Europeans themselves. 

Lynn’s second explanation for high Ashkenazi Jewish IQs is equally unpersuasive. He suggests that: 

“The second factor that has probably operated to increase the intelligence of Ashkenazim Jews in Europe and the United States as compared with Oriental Jews is that the Ashkenazim Jews have been more subject to persecution… Oriental Jews experienced some persecution sufficient to raise their IQ of 91, as compared with 84 among other South Asians and North Africans, but not so much as that experienced by Ashkenazim Jews in Europe.” (p95).[29]

On purely theoretical grounds, the idea that persecution selects for intelligence may seem reasonably plausible, if hardly compelling.[30] 

However, there is no evidence that persecution does indeed increase a population’s level of intelligence. On the contrary, other groups who have been subject to persecution throughout much of their histories – e.g. the Roma (i.e. Gypsies) and African-Americans – are generally found to have relatively low IQs. 

East and South-East Asians 

Excepting Jews, the highest average IQs are found among East Asians, who have, according to Lynn’s data, an average IQ of 105, somewhat higher than that of Europeans (p121-48). 

However, whereas Jews score relatively higher in verbal intelligence than spatio-visual ability, East Asians show the opposite pattern, with relatively higher scores for spatio-visual ability.[31]

However, it is important to emphasize that this relatively high figure applies only to East Asians – i.e. Chinese, Japanese, Koreans, Taiwanese, etc. 

It does not apply to the related populations of Southeast Asia (i.e. Thais, Filipinos, Vietnamese, Malaysians, Cambodians, Indonesians etc.), who actually score much lower in IQ, with average scores of only around 87 in their indigenous homelands, but rising to 93 among those resident in the US. 

Thus, Lynn distinguishes East Asians from Southeast Asians as a separate race, on the grounds that the latter, despite “some genetic affinity with East Asians”, form a distinct genetic cluster in data gathered and analyzed by Cavalli-Sforza et al, and also have distinct morphological features, with “the flattened nose and epicanthic eye-fold… [being] less prominent” than among East Asians (p97). 

This is an important point, since many previous writers on the topic have implied that the higher average IQs of East Asians applied to all ‘Asians’ or ‘Mongoloids’, which would presumably include South-East Asians.[32]

Yet, in Lynn’s opinion, it is just as misleading to group all these groups together as ‘Mongoloid’ or ‘Asian’ as it was to group “Europeans” and “South Asians and North Africans” together as ‘Caucasian’ or ‘Caucasoid’. 

However, it is unclear whether the low scores throughout South-East Asia are entirely genetic in origin. Thus, Vietnamese resident in the West have sometimes, but not always, scored considerably higher, and Jason Malloy suggests that Lynn exaggerates the overrepresentation of ethnic Chinese among Vietnamese immigrants to the West so as to attribute such results to East Asians rather than South-East Asians (Malloy 2014).[33]

Moreover, in relation to Lynn’s ‘Cold Winters Theory’ (discussed below), whereby it is claimed that populations exposed to colder temperatures during their evolution evolved higher levels of intelligence in order to cope with the adaptive challenges that surviving cold winters posed, it is notable that climate varies greatly across China, reflecting the geographic size of the country, with Southern China having a subtropical climate with mild winters.

However, perhaps East Asians, like the Han Chinese, are to be regarded as only relatively recent arrivals in what is now Southern China. This would be consistent with the claim of some physical anthropologists that some aspects of East Asian morphology reflect adaptation to the extreme cold of Siberia and the Steppe, and also with the historical expansion of the Han Chinese.

More problematic for ‘Cold Winters Theory’ is the fact that, although Lynn classifies them as East Asian (p121), the higher average IQ scores of East Asians (as compared to whites) do not even extend to the people after whom the Mongoloid race was named – namely the Mongols themselves.

According to Lynn, Mongolians score only around the same as whites, with an average IQ of only 101 (Lynn 2007).

This report is based on just two studies. Moreover, it had not been published at the time the first edition of ‘Race Differences in Intelligence’ came off the presses.

However, Lynn infers a lower IQ for Mongolians from their lower level of cultural, technological and economic development (p240).

Yet, inhabiting the Mongolian-Manchurian grassland Steppe and Gobi Desert, Mongolians were subjected to an environment even colder and more austere than that of other East Asians.

Lynn’s explanation for this anomaly is that the low population-size of the Mongols, and their isolation from other populations, meant that the necessary mutations for higher IQ never arose (p240).[34]

This is the same explanation that Lynn provides for the related anomaly of why Eskimos (“Arctic Peoples”), with whom Mongolians share some genetic affinity, also score low in IQ, an explanation that is discussed in the final part of this review.

Native Americans

Another group sometimes subsumed with Asian populations as “Mongoloids” are the indigenous populations of the American continent, namely “Native Americans”. 

However, on the basis of both genetic data from Cavalli-Sforza et al and morphological differences (“darker and sometimes reddish skin, hooked or straight nose, and lack of the complete East Asian epicanthic fold”), Lynn classifies them as a separate race and hence accords them a chapter of their own. 

His data suggest average IQs of about 86, for both Native Americans resident in Latin America, and also for those resident in North America, despite the substantially higher living standards of the latter (p158; 162-3; p166). 

Mestizo populations, however, have somewhat higher scores, with average IQs intermediate between those of the parent populations (p160).[35]

Like the Asian populations with whom they share their ancestry, Native Americans score rather higher on spatio-visual intelligence than on verbal intelligence (p156). 

In particular, they also have especially high visual memory (p159-60). 

As he did for African-Americans, Lynn also discusses the musical abilities of Native Americans. Interestingly, psychometric testing shows that their musical ability is rather higher than their general cognitive ability, giving an MQ (Musical Quotient) of approximately 92 (p160). 

They also show the same pattern of musical abilities as do African-Americans, with higher scores for rhythmical ability than for other forms of musical ability (p160). 

However, whereas blacks, as we have seen, only score as high as Europeans for rhythmical ability, but no higher, Native Americans, because of higher IQs (and MQs) overall, actually outscore both Europeans and African-Americans when it comes to rhythmical ability. 

These results are curious. Unlike African-Americans, Native Americans are not, to my knowledge, known for their contribution to any genres of western music, and neither are their indigenous musical traditions especially celebrated. 

“Arctic Peoples” (i.e. Eskimos) 

Distinguished from other Native Americans are the inhabitants of the far north of the American landmass. These, together with other indigenous populations from the area around the Bering Strait, namely those from Greenland, the Aleutian Islands, and the far north-east of Siberia, form the racial group whom Lynn refers to as “Arctic Peoples”, though the more familiar, if less politically correct, term would be ‘Eskimos’.[36]

As well as forming a distinctive genetic cluster per Cavalli-Sforza et al, they are also morphologically distinct, not least in their extreme adaptation to the cold, with, Lynn reports: 

“Shorter legs and arms and a thick trunk to conserve heat, a more pronounced epicanthic eye-fold, and a nose well flattened into the face to reduce the risk of frostbite” (p149). 

As we will see, Lynn is a champion of what is sometimes called Cold Winters Theory – namely the theory that the greater environmental challenges, and hence cognitive demands, associated with living in colder climates selected for increased intelligence among those races inhabiting higher latitudes. 

Therefore, on the basis of this theory, one might imagine that Eskimos, who surely evolved in one of the most difficult, and certainly the coldest, environments of any human group, would also have the highest IQs. 

This conclusion would also be supported by the observation that, according to the data cited by Lynn himself, Eskimos also have the largest average brain-size of any race (p153). 

Interestingly, some early reports did indeed suggest that Eskimos had high levels of cognitive ability as compared to whites.[37] However, Lynn now reports that Eskimos actually have rather lower IQ scores than do whites and East Asians, with results from 15 different studies giving an average IQ of around 90. 

Actually, however, viewed in global perspective, this average IQ of 90 for Eskimos is not that low. Indeed, of the ten major races surveyed by Lynn, only Europeans and East Asians score higher.[38]

It is an especially high score for a population who, until recently, lived exclusively as hunter-gatherers. Other foraging groups, or descendants of peoples who, until recently, subsisted as foragers, tend, according to Lynn’s data, to have low IQs (e.g. Australian Aboriginals, San Bushmen, Pygmies). 

One obvious explanation for the relatively low IQs of Eskimos as compared to Europeans and East Asians would be their deprived living conditions.

However, Lynn is skeptical of the claim that environmental factors are entirely to blame for the difference in IQ between Eskimos and whites, since he observes: 

“The IQ of the Arctic Peoples has not shown any increase relative to that of Europeans since the early 1930s, although their environment has improved in so far as in the second half of the twentieth century they received improved welfare payments and education. If the intelligence of the Arctic Peoples had been impaired by adverse environmental conditions in the 1930s it should have increased by the early 1980s” (p153-4). 

He also notes that all the children tested in the studies he cites were enrolled in schools (since this was where the testing took place), and hence were presumably reasonably familiar with the procedure of test-taking (p154).

Lynn’s explanation for the relatively low scores of Eskimos is discussed below in the final part of this review.

Visual Memory, Spatial Memory and Hunter-Gathering 

Eskimos also score especially high on tests of visual memory, something not usually measured in standard IQ tests (p152-3). 

This is a proficiency they share with Native Americans (p159-60), to whom they are obviously closely related. 

However, as we have seen, Australian Aboriginals, who are not closely related to either group, also seem to possess a similar ability, though Lynn refers to this as “spatial memory” rather than “visual memory” (p107-8). 

These are, strictly speaking, somewhat different abilities, although they may not be entirely separate, and may be difficult to distinguish in testing. 

If Aboriginals score high on spatial memory, they may then also score high on visual memory, and vice versa for Eskimos and Native Americans. However, since Lynn does not provide comparative data on visual memory among Aboriginals, or on spatial memory among Eskimos or Native Americans, this is not certain. 

Interestingly, one thing all these three groups share in common is a recent history of subsisting, at least in part, as hunter-gatherers.[39]

One is tempted, then, to attribute this ability to the demands of a hunter-gatherer lifestyle, perhaps reflecting the need to remember the location of plant foods which appear only seasonally, or to find one’s way home after a long hunting expedition.[40] 

It would then be interesting to test the visual and spatial memories of other groups who either continue to subsist as hunter-gatherers or only recently transitioned to agriculture or urban life, such as Pygmies and San Bushmen. However, since tests of spatial and visual memory are not included in most IQ tests, the data is probably not yet available.  

For his part, Lynn attributes Eskimo visual memory to the need to “find their way home after going out on long hunting expeditions” (p152-3). 

Thus, just as the desert environment of Australian Aboriginals provides few landmarks, so: 

“The landscape of the frozen tundra [of the Eskimos] provides few distinctive cues, so hunters would need to note and remember such few features as do exist” (p153). 

Proximate Causes: Heredity or Environment?

Chapter fourteen discusses the proximate causes of race differences in intelligence and the extent to which the differences observed can be attributed to either heredity or environmental factors, and, if partly the latter, which environmental factors are most important.  

Lynn declares at the beginning of the chapter that the objective of his book is “to broaden the debate” from an exclusive focus on the black-white test score gap in the US, to instead looking at IQ differences among all ten racial groups across the world for whom data on IQ or intelligence is presented in Lynn’s book (p182). 

Actually, however, in this chapter alone, Lynn does indeed focus primarily on black-white differences, if only because it is in relation to this difference that most research has been conducted, and hence to this difference that most available evidence relates. 

Downplaying the effect of schooling, Lynn identifies malnutrition as the major environmental influence on IQ (p182-7). 

However, he rejects malnutrition as an explanation for the low scores of American blacks, noting that there is no evidence of short stature among black Americans, nor have surveys found a greater prevalence of malnutrition (p185). 

As to global differences, he concludes that: 

“The effect of malnourishment on Africans in sub-Saharan Africa and the Caribbean probably explains about half of the low IQs, leaving the remaining half to genetic factors” (p185). 

However, it is unclear what is meant by “half of the low IQs”, as Lynn identifies no comparison group.[41] 

He also argues that the study of racially mixed individuals further suggests a genetic component to observed IQ differences. Thus, he claims: 

“There is a statistically significant association between light skin and intelligence” (p190). 

As evidence he cites his own study (Lynn 2002) to claim: 

“When the amount of European ancestry in American blacks is assessed by skin color, dark-skinned blacks have an IQ of 85 and light-skinned blacks have an IQ of 92” (p190). 

However, he fails to explain how he managed to divide American blacks into two discrete groups by reference to a trait that obviously varies continuously. 

More importantly, he neglects to mention altogether two other studies that also investigated the relationship between IQ and degree of racial admixture among African-Americans, but used blood-groups rather than skin tone to assess ancestry (Loehlin et al 1973; Scarr et al 1977). 

This is surely a more reliable measure of ancestry than is skin tone, since the latter is affected by environmental factors (e.g. exposure to the sun darkens the skin), and could conceivably have an indirect psychological effect.[42]

However, both these studies found no association between ancestry and IQ (Loehlin et al 1973; Scarr et al 1977).[43] 

Meanwhile, Lynn mentions the Eyferth study (1961) of the IQs of German children fathered by black and white US servicemen in the period after World War II, only to report, “the IQ of African-Europeans [i.e. those fathered by the black US servicemen] was 94 in relation to 100 for European women” (p63). 

However, he fails to mention that the IQ of those German children fathered by black US servicemen (i.e. those of mixed race) was actually almost identical to that of those fathered by white US servicemen (who, with German mothers, were wholly white). This finding is, of course, evidence against the hereditarian hypothesis with respect to race differences. 

Yet Lynn can hardly claim to be unaware of this finding, or its implications with respect to race differences, since this is actually among the studies most frequently cited by opponents of the hereditarian hypothesis with respect to the black-white test score gap for precisely this reason. 

Lynn’s presentation of the evidence regarding the relative contributions of heredity and environment to race differences in IQ is therefore highly selective and biased. 

An Evolutionary Analysis 

Only in the last three chapters does Lynn provide the belated “Evolutionary Analysis” promised in his subtitle. 

Lynn’s analysis is evolutionary in two senses. 

First, he presents a functionalist explanation of why race differences in intelligence (supposedly) evolved (Chapter 16). This is the sort of ultimate evolutionary explanation with which evolutionary psychologists are usually concerned. 

However, in addition, Lynn also traces the evolution of intelligence over evolutionary history, both in humans of different races (Chapter 17) and among our non-human and pre-human ancestors (Chapter 15). 

In other words, he addresses the questions of both adaptation and phylogeny, two of Niko Tinbergen’s famous Four Questions

In discussing the former of these two questions (namely, why race differences in intelligence evolved: Chapter 16), Lynn identifies climate as the ultimate environmental factor responsible for the evolution of race differences in intelligence. 

Thus, he claims that, as humans spread out of Africa into regions further from the equator, the colder climates these pioneers encountered, especially during winter, posed greater challenges in terms of keeping themselves fed and sheltered, and that different human races therefore evolved different levels of intelligence in response to these adaptive challenges. 

Hunting vs. Gathering 

The greater problems supposedly posed by colder climates included not just difficulties of keeping warm (i.e. the need for clothing, fires, insulated homes), but also the difficulties of keeping fed. 

Thus, Lynn emphasizes the dietary differences between foragers inhabiting different regions of the world: 

“Among contemporary hunter-gatherers the proportions of foods obtained by hunting and by gathering varies according to latitude. Peoples in tropical and subtropical latitudes are largely gatherers, while peoples in temperate environments rely more on hunting, and peoples in arctic and sub-arctic environments rely almost exclusively on hunting and fishing and have to do so because plant foods are unavailable except for berries and nuts in the summer and autumn” (p227). 

I must confess that I was previously unaware of this dietary difference. However, in my defence, this is perhaps because many anthropologists seem all too ready to overgeneralize from the lifestyles of the most intensively studied tropical groups (e.g. the San of Southern Africa) to imply that what is true of these groups is true of all foragers, and was moreover necessarily also true of all our hunter-gatherer ancestors before they transitioned to agriculture. 

Thus, for example, feminist anthropologists seemingly never tire of claiming that it is female gatherers, not male hunters, who provide most of the caloric demands of foraging peoples. 

Actually, however, this is true only for tropical groups, where plant foods are easily obtainable all year round, not of hunter-gatherers in general (Ember 1978). 

It is certainly not true, for example, of Eskimos, among whom females are almost entirely reliant on male hunters to provision them for most of the year, since plant foods are hardly available at all except for during a few summer months. 

Similarly, radical-leftist anthropologist Marshall Sahlins famously characterized hunter-gatherer peoples as “The Original Affluent Society”, because, according to his data, they do not want for food and actually have more available leisure-time than do most agriculturalists, and even most modern westerners. 

Unfortunately, however, he relied primarily on data from tropical peoples such as the !Kung San to arrive at his estimates, and these findings do not necessarily generalize to other groups such as the Inuit or other Eskimos

The idea that it was our ancestors’ transition to a primarily carnivorous diet that led to increases in hominid brain-size and intelligence was once a popular theory in paleoanthropology. 

However, it has now fallen into disfavour, if only because it accorded male hunters the starring role in hominid evolution, with female gatherers relegated to a supporting role, and hence offended the sensibilities of feminists, who have become increasingly influential in academia, even in science. 

Nevertheless, it seems to be true that, across taxa, carnivores tend to have larger brains than herbivores. 

Of course, non-human carnivores did not evolve the exceptional intelligence of humans.  

However, Desmond Morris in The Naked Ape argued that, because our hominid ancestors only adopted a primarily carnivorous diet relatively late in their evolution, they were unable to compete with such specialized hunters as lions and tigers in terms of fangs and claws. They therefore had to adopt a different approach, using intelligence instead of claws and fangs, hence inventing handheld weapons and cooperative group hunting. 

Lynn’s argument, however, is somewhat different to the traditional version of the Hunting Ape Hypothesis, as championed by popularizers like Desmond Morris and Robert Ardrey. 

Thus, in the traditional version, it is the intelligence of early hominids, the ancestors of all populations of contemporary humans, that increased as a result of the increasing cognitive demands that hunting placed upon them. 

However, Lynn argues that it is only certain races that were subject to such selection, as their dependence on hunting increased as they populated colder regions of the globe. 

Indeed, Lynn’s arguments actually cast some doubt on the traditional version of the Hunting Ape Theory

After all, anatomically modern humans are thought to have first evolved in Africa. Yet if African foragers actually subsisted primarily on a diet of wild plant foods, and only occasionally hunted or scavenged meat to supplement this primarily herbivorous diet, then the supposed cognitive demands of hunting can hardly be invoked to explain the massive increase in hominid brain-size that occurred during the period before our ancestors left Africa to colonize the remainder of the world.[44]

Indeed, Lynn is seemingly clear that he rejects the ‘Hunting Ape Hypothesis’, writing that the increases in hominid brain-size after our ancestors “entered a new niche of the open savannah in which survival was more cognitively demanding” occurred, not because of the cognitive demands of hunting, but rather that: 

“The cognitive demands of the new niche would have consisted principally of finding a variety of different kinds of foods and protecting themselves from predators” (p202).[45]

‘Cold Winters Theory’ 

There are several problems with so-called ‘Cold Winters Theory’ as an explanation for the race differences in IQ reported by Lynn. 

For one thing, other species have adapted to colder climates without evolving a level of intelligence as high as that of any human population, let alone that of Europeans and East Asians. 

Indeed, I am not aware of any studies even suggesting a relationship, among non-human species, between brain-size or intelligence and the temperature or latitude of a species’ range. However, one might expect to find an association between temperature and brain-size, if only because of Bergmann’s rule. 

Similarly, Neanderthals were ultimately displaced and driven to extinction throughout Eurasia by anatomically-modern humans, who, at least according to the conventional account, outcompeted Neanderthals due to their superior intelligence and tool-making ability. 

Yet, whereas anatomically modern humans are thought to have evolved in tropical Africa before spreading outwards to Eurasia, the Neanderthals were a cold-adapted species of hominid who had evolved and thrived in Eurasia during the last Ice age

At any rate, even if the conditions were indeed less demanding in tropical Africa than in temperate or arctic latitudes, then, according to basic Darwinian (and Malthusian) theory, in the absence of some other factor limiting population growth (e.g. warfare, predation, homicide, disease), this would presumably mean that humans would respond to greater resource abundance in the tropics by reproducing until they reached the carrying capacity of the environment.   

By the time the carrying capacity of the environment was reached, however, the environment would no longer be so resource-abundant given the greater number of humans competing for its resources. 

This leads me to believe that the key factors selecting for increases in the intelligence of hominids were not ecological but rather social – i.e. not access to food and shelter etc., but rather competition with other humans. 

Also, I remain unconvinced that the environments inhabited by the two races that have, according to Lynn, the lowest average IQs, namely, San Bushmen and Australian Aborigines, are cognitively undemanding. 

These are, of course, the Kalahari Desert and Australian outback (also composed, in large part, of deserts) respectively, two notoriously barren and arid environments.[46]

Meanwhile, the Eskimos occupy what is certainly the coldest, and also undoubtedly one of the most demanding, environments anywhere in the world, and also have, according to Lynn’s own data, the largest brains. 

However, according to Lynn’s data, their average IQ is only about 90, high for a foraging group, but well below that of Europeans and East Asians.[47] 

For his part, Lynn attempts to explain away this anomaly by arguing that Arctic populations were precluded from evolving higher IQs by their small and dispersed populations, themselves a reflection of the harshness of the environment, which meant the necessary mutations either never arose or never spread through the population (p153; p239-40; p221).[48]
 
On the other hand, he explains their large brains as reflecting visual memory rather than general intelligence, as well as a lack of mutations for neural efficiency (p153; p240). 
 
However, these seem like post-hoc rationalizations. 
 
After all, if conditions were harsher in Eurasia than in Africa, then this would presumably also have resulted in smaller and more dispersed populations in Eurasia than in Africa. However, this evidently did not prevent mutations for higher IQ spreading among Eurasians. 

Why then, when the environment becomes even harsher, and the population even more dispersed, would this pattern suddenly reverse itself? 
 
Likewise, if whole-brain-size is related to general intelligence, it is inconsistent to invoke specific abilities, such as visual memory, to explain the large brains of Eskimos. 

Thus, according to Lynn, Australian Aborigines have high spatial memory, which is closely related to visual memory. However, also according to Lynn, only their right visual cortex is enlarged (p108-9), and they have a small overall brain-size (p108-9; p210; p212). 

Endnotes

[1] Curiously, Lynn reports, this black advantage for movement-time does not appear in the simplest form of elementary task (simple reaction time), where the subject simply has to press a button whenever a light comes on, rather than having to select the correct button from among several alternatives depending on which of several lights comes on (p58). These latter forms of elementary cognitive test presumably involve some greater degree of cognitive processing. 

[2] First, there are the practical difficulties. Obviously, non-human animals cannot use written tests, or an interview format. Designing a maze for laboratory mice may be relatively straightforward, but building a comparable maze for elephants is rather more challenging. Second, and more important, different species likely have evolved different specialized abilities for dealing with specific adaptive problems. For example, migratory birds may have evolved specific spatio-visual abilities for navigation. However, this is not necessarily reflective of high general intelligence, and to assess their intelligence solely on the basis of their migratory ability, or even their general spatio-visual ability, would likely overestimate their general level of cognitive ability. In other words, it reflects a modular, domain-specific adaptation.

Admittedly, the same is true to some extent for human races. Thus, some races score relatively higher on certain types of intellectual ability. For example, East Asians tend to score higher on spatio-visual ability than on verbal ability; Ashkenazi Jews show the opposite pattern, scoring higher in verbal intelligence than in spatio-visual ability; while American blacks score relatively higher in tests involving rote memory than in those requiring abstract reasoning ability. Similarly, as discussed by Lynn, some races seem to have certain quite specific abilities not commensurate to their general intelligence (e.g. Aborigine visual memory). However, in general, both between and within races, most variation in human intelligence loads onto the ‘g-factor’ of general intelligence.

[3] American anthropologist Carleton Coon is credited as the first to propose that population differences in skull size reflect a thermoregulatory adaptation to climatic differences (Coon 1955). An alternative theory, less supported, is that it was differing levels of ambient light that resulted in differences in brain-size as between different populations tracing their ancestry to different parts of the globe (Pearce & Dunbar 2011). On this view, the larger brains of populations who trace their descent to areas of greater latitude presumably reflect only the demands of the visual system, rather than any differences in general intelligence. Yet another theory, less politically-correct than these, is so-called ‘Cold Winters Theory’, which posits that colder climates placed a greater premium on intelligence, causing populations inhabiting colder regions of the globe to evolve larger brains and higher levels of intelligence. This is, of course, the theory championed by Lynn himself, and I will discuss the problems with this theory below.

[4] Conversely, Lynn also suggests that Turkish people score slightly higher than other Middle-Eastern populations, because they are somewhat intermixed with Europeans (p80).

[5] Lynn has recently published research regarding differences in IQ across different regions of Italy (Lynn 2010).

[6] Actually, Lynn acknowledges causation in both directions, possibly creating a feedback loop. He also acknowledges that other factors contribute to differences in economic development and prosperity, including the economic system adopted. For example, countries that adopted communism tend to be poorer than comparable countries that have capitalist economies (e.g. Eastern Europe is poorer than Western Europe, and North Korea poorer than South Korea).  

[7] Incidentally, Lynn cites two studies of Polish IQ, whose results are even more divergent than those of Portugal or Ireland, giving average IQs of 106 and 91 respectively. One of these scores is substantially below the European average, while the other is substantially above it. 

[8] Essayist Ron Unz has argued that IQs in Ireland have risen in concert with living standards in Ireland (Unz 2012a; Unz 2012b). However, judging from the dates when the studies cited by Lynn in ‘Race Differences in Intelligence’ were published, there is no obvious increase over time. True, the earliest study, an MA thesis published in 1973, gives the lowest figure, with an average IQ of just 87 (Gill and Byrt 1973). This rises to 97 in a study published in 1981 that provided few details on its methodology (Buj 1981). However, it declines again in the latest study cited by Lynn on Irish IQs, which was published in 1993 and gives average IQs of just 93 and 91 for two separate samples (Carr 1993). In the more recent 2015 edition, Lynn cites a few extra studies, eleven in total. Again, however, there is no obvious increase over time, the latest study cited by Lynn, which was published in 2012, giving an average IQ of just 92 (2015 edition).

[9] While this claim is made in reference to immigrants to America and the West, it is perhaps worth noting that East Asians in South-East Asia, namely the Overseas Chinese, largely dominate the economies of South-East Asia, and are therefore on average much wealthier than the average Chinese person still residing in China (see World on Fire by Amy Chua). Given the association of intelligence with wealth, this would suggest that Chinese immigrants to South-East Asia are not substantially less intelligent than those who remained in China. Did the more intelligent Chinese migrate to South-East Asia, while the less intelligent migrated to America? If so, why would this be?

[10] According to Daniel Nettle in Personality: What Makes You the Way You Are, in the framework of the five-factor model of personality, a liking for travel is associated primarily with extraversion. One study found that an intention to migrate was positively associated with both extraversion and openness to experience, but negatively associated with agreeableness, conscientiousness, and neuroticism (Fouarge et al 2019). A study of migration within the United States found a rather more complex set of relationships between migration and each of the big five personality traits (Jokela 2009).

[11] Other Catholic countries, namely those in Southern Europe, such as Italy and Spain, may indeed have slightly lower IQs, at least in the far south of these countries. However, as we have seen, Lynn explains this in terms of racial admixture from Middle-Eastern and North African populations. Therefore, there is no need to invoke priestly celibacy in order to explain it. The crucial test case, then, is Catholic countries other than Ireland from Northern Europe, such as Austria and France.

[12] In the 2015 edition, he returns to a slightly higher figure of 71.

[13] In the 2006 edition, Lynn cites no studies from the Horn of Africa. However, in the 2015 edition, he cites five studies from Ethiopia, and, in The Intelligence of Nations, he and co-author David Becker also cite a study on Somalian IQs.

[14] Indeed, physical anthropologist John Baker, in his excellent Race (which I have reviewed here, here and here) argues that:

“The ‘Aethiopid’ race of Ethiopia and Somaliland are an essentially Europid subrace with some Negrid admixture” (Race: p225).

This may be an exaggeration. However, recent genetic studies indeed show affinities between populations from the Horn of Africa and those from the Middle East (e.g. Ali et al 2020; Khan 2011a; Khan 2011b; Hodgson 2014).

[15] However, it is not at all clear that the same is true for black African minorities resident in other western polities, whose IQs are, according to Lynn’s data, also considerably above those for indigenous Africans. Here, I suspect black populations are more diverse. For example, in Britain, Afro-Caribbean people, who emigrated to Britain by way of the West Indies, are probably mostly mixed-race, like African-Americans, since both descend from white-owned slave populations. However, Britain also plays host to many immigrants direct from Africa, most of whom are, I suspect, of relatively unmixed sub-Saharan African descent. Yet African immigrants to the UK outperform Afro-Caribbeans in UK schools (Chisala 2015a).

[16] Blogger John ‘Chuck’ Fuerst suggests that the higher scores for Somali immigrants might reflect the fact that the peoples of the Horn of Africa actually, as we have seen, have substantial Caucasoid ancestry, and genetic affinities with North African and Middle Eastern populations (Fuerst 2015). However, the problem with attributing the relatively high scores of Somali refugees and immigrants to Caucasoid admixture is that, as we have seen, according to the data collected by Lynn, IQs are no higher in the Horn of Africa than elsewhere in sub-Saharan Africa.

[17] If anything, “Bushmen” should presumably be grouped, not with Pygmies, but rather with the distinct but related Khoikhoi pastoralists. However, the latter are now all but extinct as an independent people and are not mentioned by Lynn.

[18] For example, Lynn also acknowledges that those whom he terms “South Asians and North Africans” are “closely related to the Europeans” (p79). However, they nevertheless merit a chapter of their own. Likewise, he acknowledges that “South-East Asians” share “some genetic affinity with East Asians with whom they are to some degree interbred” (p97). Nevertheless, he justifies considering these two ostensible races in separate chapters, partly on the basis that “the flattened nose and epicanthic eye-fold are less prominent” among the former (p97). Yet the morphological differences between Pygmies and Khoisan are even greater, and they are nevertheless lumped together in the same chapter.

[19] There is indeed, as Lynn notes, a correlation between a group’s IQ and their lifestyle (i.e. whether they are foragers or agriculturalists). However, the direction of causation is unclear. Does high intelligence allow a group to transition to agriculture, or does an agriculturalist lifestyle somehow increase a group’s average IQ? And, if the latter, is this a genetic or a purely environmental effect?

[20] Indeed, the very word slave is thought to derive from the ethnonym Slav, because of the frequency with which Slavic peoples were enslaved during the Middle Ages.

[21] Indeed, Lynn could hardly have arrived at an actual figure for the average Pygmy IQ, since, as we have seen, he reports the results of only a single actual study of Pygmy intelligence, the author of which did not present his results in a quantitative format.

[22] Thus, he suggests that the lower performance of the Aboriginals tested by Drinkwater (1975), as compared to those tested by Kearins (1981), may reflect the fact that the former were the descendants of coastal populations of Aborigines, for whom the need to navigate in deserts without landmarks would have been less important. 

[23] The fact that the earliest civilizations emerged among Middle Eastern, North African and South Asian populations is attributed by Lynn to the sort of environmental factors that, elsewhere in his book, he largely discounts. Thus, Lynn writes: 

“[Europeans] were not able to develop early civilizations like those built by the South Asians and North Africans because Europe was still cold, was covered with forest, and had heavy soils that were difficult to plough unlike the light soils on which the early civilizations were built, and there were no river flood plains to provide annual highly fertile alluvial deposits from which agricultural surpluses could be obtained to support an urban civilization and an intellectual class” (p237).

[24] An interesting question is whether there exist differences in IQ as between different caste groups within the Indian subcontinent, since, at least in theory, these represented endogamous breeding populations between whom strict separation was maintained. Thus, it would be interesting to know the average IQ of Brahmins or of the high-achieving Parsi people (though the latter are not strictly a caste, since they are not Hindu).

[25] However, all of these comparisons, in both Britain and America, omit to include Jewish people as a separate ethnicity, instead grouping them with other whites. Jews earn more, on average, than members of any other religious group in Britain and America, including Hindus.

[26] I assume that this is the study that Lynn is citing, since this is the only matching study included in his references. However, curiously, Lynn refers to this study here as “Mackintosh et al 1985” (p83-4), despite there being only two authors listed in his references, such that “Mackintosh & Mascie-Taylor 1985” would be the more usual citation. Indeed, Lynn uses this latter form of citation (i.e. “Mackintosh & Mascie-Taylor 1985”) elsewhere when citing what seems to be the same paper in his earlier chapter on Africans (p47; p49).

[27] In order to determine whether religion or national origin is the key determining factor, it would be interesting to have data on the incomes (and IQs) of Pakistani Hindus, Bangladeshi Hindus and Muslim Indians resident in the West.

[28] An alternative possibility is that it was the spread of Arab genes, as a result of the Arab conquests, and resulting spread of Islam, that depressed IQs in the Middle-East and North Africa, since Arabs were, prior to the rise of Islam, a relatively backward group of desert nomads, whose intellectual achievements were minimal compared to those of many of the groups whom they conquered (e.g. Persians, Mesopotamians, Assyrians, and Egyptians). Indeed, even the achievements of Muslim civilization during the Islamic Golden Age were disproportionately those of the Persians, not the Arabs. 

[29] One might, incidentally, question Lynn’s assumption that Oriental Jews were less subject to persecution than were the Ashkenazim in Europe. This is, of course, the politically correct view, which sees Islamic civilization as, prior to recent times, more tolerant than Christendom. On this view, anti-Jewish sentiment only emerged in the Middle East as a consequence of Zionism and the establishment of the Jewish state in what was formerly Palestine. However, for alternative views, see The Myth of the Andalusian Paradise. See also Robert Spencer’s The Truth About Muhammad (which I have reviewed here), in which he argues that Islam is inherently antisemitic (i.e. anti-Jewish). Interestingly, Kevin Macdonald, in A People That Shall Dwell Alone (which I have reviewed here and here) makes almost the opposite argument to that of Lynn. Thus, he argues that it was precisely because Jews were so discriminated against in the Muslim world that their culture, and ultimately their IQs, were to decline, as they were, according to Macdonald, largely excluded from high-status and cognitively-demanding occupations, which were reserved for Muslims (p301-4). Thus, Macdonald concludes: 

“The pattern of lower verbal intelligence, relatively high fertility, and low-investment parenting among Jews in the Muslim world is linked ultimately to anti-Semitism” (A People That Shall Dwell Alone (reviewed here): p304). 

[30] For example, one might speculate that only the relatively smarter Jews were able to anticipate looming pogroms and hence escape. Alternatively, since wealth is correlated with intelligence, perhaps only the relatively richer, and hence generally smarter, Jews could afford the costs of migration, including bribes to officials, in order to escape pogroms. These are, however, obviously speculative, post-hoc ‘just-so stories’ (in the negative Gouldian sense), and I put little stock in them.

[31] This pattern among East Asians of lower scores on the verbal component of IQ tests was initially attributed to a lack of fluency in the language of the test, since the first East Asians to be tested were among diaspora populations resident in the West. However, the same pattern has now been found even among East Asians tested in their first language, in both the West and East Asia.

[32] For example, Sarich and Miele, in Race: The Reality of Human Differences (which I have reviewed here and here) write that “Asians have a slightly higher IQ than do whites” (Race: The Reality of Human Differences: p196). However, in actuality, this applies only to East Asians, not to South-East Asians (nor to South Asians and West Asians, who are “Asian” in at least the geographical, and the British-English, sense.) Similarly, in his own oversimplified tripartite racial taxonomy in Race, Evolution and Behavior (which I have reviewed here), Philippe Rushton seems to imply that the traits he attributes to Mongoloids, including high IQs and large brain-size, apply to all members of this race, including South-East Asians and even Native Americans.

[33] Ethnic Chinese were overrepresented among Vietnamese boat people, though less so among later waves of immigrants. However, perhaps a greater problem is that they were disproportionately middle-class and drawn from the business elite, and hence unrepresentative of the Vietnamese as a whole, and likely of disproportionately high cognitive ability.

[34] In his paper on Mongolian IQs, Lynn also suggests that Mongolians have lower IQs than other East Asians because they are genetically intermediate between East Asians and Eskimos (“Arctic Peoples”), who themselves have lower IQs (Lynn 2007). However, this merely raises the question of why Eskimos themselves have lower IQs than East Asians, another anomaly with respect to ‘Cold Winters Theory’, which is discussed in the final part of this review.

[35] With regard to the population of Colombia, Lynn writes: 

“The population of Colombia is 75 percent Native American and Mestizo, 20 percent European, and 5 percent African. It is reasonable to assume that the higher IQ of the Europeans and the lower IQ of the Africans will approximately balance out and that the IQ of 84 represents the intelligence of the Native Americans” (p58). 

However, this assumption that the African and European genetic contributions will balance out seems dubious since, by Lynn’s own reckoning, the European contribution to the Colombian gene-pool is four times as great as that of Africans.

[36] The currently-preferred term Inuit is not sufficiently inclusive, because it applies only to those Eskimos indigenous to the North American continent, not the related but culturally distinct populations inhabiting Siberia or the Aleutian Islands. I continue to use the term Eskimos, because it is more accurate, not obviously pejorative, probably more widely understood, and also because I deplore the euphemism treadmill. Elsewhere, I have generally deferred to Lynn’s own usage, for example mostly using ‘Aborigine’, rather than the now preferred ‘Aboriginal’, a particularly preposterous example of the euphemism treadmill since the terms are so similar, comparable to how, today, it is acceptable to say ‘people of colour’, but not ‘coloured people’.

[37] For example, Hans Eysenck made various references in his writings to the fact that Eskimo children performed as well as European children in IQ tests as evidence for his claim that economic deprivation did not necessarily reduce IQ scores (e.g. The Structure and Measurement of Intelligence: p23). See also discussion in: Jason Malloy, A World of Difference: Richard Lynn Maps World Intelligence (Malloy 2016).

[38] Certain specific subpopulations also score higher (e.g. Ashkenazim and Māoris, though the latter only barely). However, these are subpopulations within the major ten races that Lynn identifies, not races in and of themselves.

[39] Actually, by the time Columbus landed in the Americas, many Native Americans had already partly transitioned to agriculture. However, not least because of a lack of domesticated animals that they could use as a meat source, most supplemented this with hunting and sometimes gathering too.

[40] Lynn also reports that the Japanese score high on tests of visual memory (p143). However, excepting perhaps the Ainu, the Japanese do not have a recent history of subsisting as foragers. This suggests that foraging is not the only possible cause of high visual memory in a population.

[41] Presumably the comparison group Lynn has in mind is Europeans, since, as we have seen, it is European living standards that he takes as his baseline for the purposes of estimating a group’s “genotypic IQ” (p69), and, in a sense, all the IQ scores that he reports are measured against a European standard, in so far as they are calculated by reference to an arbitrarily assigned average of 100 for European populations.

[42] Thus, it is at least theoretically possible that a relatively darker-skinned African-American child might be treated differently by others (e.g. teachers) than a lighter-skinned child, especially one whose race is relatively indeterminate, in a way that could conceivably affect their cognitive development and IQ. In addition, a darker-skinned African-American child might, as a consequence of their darker complexion, come to identify as an African-American to a greater extent than a lighter-skinned child, which might affect whom they socialize with, which celebrities they identify with, and the extent to which they identify with broader black culture, all of which could conceivably have an effect on IQ. I do not contend that these effects are likely or even plausible, but they are at least theoretically possible. Using blood group to assess ancestry, especially if one also introduces controls for skin tone (which may itself be associated with blood-group, since both are presumed to be markers of degree of African ancestry), obviously eliminates this possibility. Today, this can also be done by looking at subjects’ actual DNA, which obviously has the potential to provide a more accurate measure of ancestry than either skin-tone or blood-group (e.g. Lasker et al 2019).

[43] More recently, a better study has been published regarding the association between European admixture and intelligence among African-Americans, which used genetic data to assess ancestry, and actually sought to control for the possible confounding effect of skin-colour and appearance (Lasker et al 2019). Unlike the blood-group studies, this largely supports the hereditarian hypothesis. However, this was not available at the time Lynn authored his book. Also, it ought to be noted that it was published in a controversial pay-to-publish academic journal, and therefore the quality of peer review to which the paper was subjected may be open to question. No doubt in the future, with the reduced costs of genetic testing, more studies using a similar methodology will be conducted, finally resolving the question of the relative contributions of heredity and environment to the black-white test score gap in America, and perhaps disparities between other ethnic groups too.

[44] It is a fallacy, however, to assume that what is true of those foraging peoples that have managed to survive as foragers into modern times, and hence come to be studied by anthropologists, was necessarily also true of all foraging groups before the transition to agriculture. On the contrary, those foraging groups that have survived into modern times tend to have done so only in the most ecologically marginal and barren environments (e.g. the Kalahari Desert occupied by the San), since these areas are of least use to agriculturalists and therefore represent the only regions where more technologically and socially advanced agriculturalists have yet to displace them (see Ember 1978). This would seem to suggest that African hunter-gatherers, prior to the expansion of Bantu agriculturalists, would have occupied more fertile areas, and might therefore have had even less need to rely on hunting than do contemporary hunter-gatherers such as the San, who are today largely restricted to the Kalahari Desert.

[45] Here, interestingly, Lynn departs from the theory of fellow race realist, and fellow exponent of ‘Cold Winters Theory’, Philippe Rushton. The latter, in his book, Race, Evolution and Behavior (which I have reviewed here), argues that: 

“Hunting in the open grasslands of northern Europe was more difficult than hunting in the woodlands of the tropics and subtropics where there is plenty of cover for hunters to hide in” (Race, Evolution and Behavior: p228). 

In contrast, Lynn argues that “open grasslands”, albeit on the African savannah rather than in Northern Europe, actually made things harder, not for predators, but rather for prey – or at least for arboreal primate prey. Thus, Lynn writes: 

“The other principle problem of the hominids living in open grasslands would have been to protect themselves against lions, cheetahs and leopards. Apes and monkeys escape from the big cats by climbing into trees and swinging or jumping from one tree to another. For the Australopithecines and the later hominids in open grasslands this was no longer possible” (p203). 

[46] To clarify, this is not to say that either San Bushmen or Australian Aborigines evolved primarily in these desert environments. On the contrary, many of them formerly occupied more fertile areas, before being displaced by more advanced neighbours, Bantu agriculturalists in the case of Khoisan, and European (more specifically British) colonizers, in the case of Aborigines. However, that they are nevertheless capable of surviving in these demanding desert environments suggests either:

(1) That they are more intelligent than Lynn concludes; or
(2) That surviving in challenging environments does not require the level of intelligence that Lynn’s ‘Cold Winters Theory’ supposes.

[47] Besides Eskimos, another potential test case for ‘Cold Winters Theory’ is the Sámi (or Lapps) of Northern Scandinavia. Like Eskimos, they have inhabited an extremely cold, northern environment for many generations and are genetically quite distinct from other populations. Also, again like Eskimos, they maintained a foraging lifestyle until modern times. According to Armstrong et al (2014), the only study of Sámi cognitive ability of which I am aware, the average IQ of the Sámi is almost identical to that of neighbouring populations of Finns (about 101).

[48] Lynn gives the same explanation for the relatively lower recorded IQs of Mongolians, as compared to other East Asians (p240).

References

Ali et al (2020) Genome-wide analyses disclose the distinctive HLA architecture and the pharmacogenetic landscape of the Somali population. Scientific Reports 10: 5652.

Anderson (2015) Chapter 1: Statistical Portrait of the U.S. Black Immigrant Population. In A Rising Share of the U.S. Black Population Is Foreign Born. Pew Research Center: Social & Demographic Trends, April 9.

Armstrong et al (2014) Cognitive abilities amongst the Sámi population. Intelligence 46: 35-39.

Aziz et al (2004) Intellectual development of children born of mothers who fasted in Ramadan during pregnancy. International Journal for Vitamin and Nutrition Research 74: 374-380.

Buj (1981) Average IQ values in various European countries. Personality and Individual Differences 2(2): 168-9.

Carr (1993) Twenty Years a Growing: A Research Note on Gains in the Intelligence Test Scores of Irish Children over Two Decades. Irish Journal of Psychology 14(4): 576-582.

Chisala (2015a) The IQ Gap Is No Longer a Black and White Issue. Unz Review, 25 June.

Chisala (2015b) Closing the Black-White IQ Gap Debate, Part I. Unz Review, 5 October.

Chisala (2015c) Closing the Black-White IQ Gap Debate, Part 2. Unz Review, 22 October.

Chisala (2019) Why Do Blacks Outperform Whites in UK Schools? Unz Review, November 29.

Coon (1955) Some Problems of Human Variability and Natural Selection in Climate and Culture. American Naturalist 89(848): 257-279.

Drinkwater (1975) Visual memory skills of medium contact aboriginal children. Australian Journal of Psychology 28(1): 37-43.

Dutton (2020) Why Islam Makes You Stupid . . . But Also Means You’ll Conquer The World. Whitefish, MT: Washington Summit.

Ember (1978) Myths about Hunter-Gatherers. Ethnology 17(4): 439-448.

Eyferth (1959) Eine Untersuchung der Neger-Mischlingskinder in Westdeutschland. Vita Humana 2: 102–114.

Fouarge et al (2019) Personality traits, migration intentions, and cultural distance. Papers in Regional Science 98(6): 2425-2454.

Fuerst (2015) The Measured Proficiency of Somali Americans. HumanVarieties.org.

Gill & Byrt (1973) The Standardization of Raven’s Progressive Matrices and the Mill Hill Vocabulary Scale for Irish School Children Aged 6–12 Years. University College, Cork: MA Thesis.

Hodgson et al (2014) Early Back-to-Africa Migration into the Horn of Africa. PLoS Genetics 10(6): e1004393.

Jokela (2009) Personality predicts migration within and between U.S. states. Journal of Research in Personality 43(1): 79-83.

Kearins (1986) Visual spatial memory in aboriginal and white Australian children. Australian Journal of Psychology 38(3): 203-214.

Kearins (1981) Visual spatial memory in Australian Aboriginal children of desert regions. Cognitive Psychology 13(3): 434-460.

Khan (2011a) The genetic affinities of Ethiopians. Discover Magazine, January 10.

Khan (2011b) A genomic sketch of the Horn of Africa. Discover Magazine, June 10.

Klekamp et al (1987) A quantitative study of Australian aboriginal and Caucasian brains. Journal of Anatomy 150: 191–210.

Knapp & Seagrim (1981) Visual memory in Australian aboriginal children and children of European descent. International Journal of Psychology 16(1-4): 213-231.

Langan & LoSasso (2002) Discussions on Genius and Intelligence: Mega Foundation Interview with Arthur Jensen. Eastport, New York: MegaPress.

Lasker et al (2019) Global ancestry and cognitive ability. Psych 1(1): 431-459.

Loehlin et al (1973) Blood group genes and negro-white ability differences. Behavior Genetics 3(3): 263-270.

Lynn (2002) Skin Color and Intelligence in African-Americans. Population & Environment 23: 201-207.

Lynn (2007) IQ of Mongolians. Mankind Quarterly 47(3).

Lynn (2010) In Italy, north–south differences in IQ predict differences in income, education, infant mortality, stature, and literacy. Intelligence 38: 93-100.

Lynn (2015) Selective Emigration, Roman Catholicism and the Decline of Intelligence in the Republic of Ireland. Mankind Quarterly 55(3): 242-253.

Mackintosh & Mascie-Taylor (1985) The IQ question. In Education for All. Cmnd paper 4453. London: HMSO.

Malloy (2014) HVGIQ: Vietnam. HumanVarieties.org, June 19.

Malloy (2006) A World of Difference: Richard Lynn Maps World Intelligence. Gnxp.com, February 01.

Pearce & Dunbar (2011) Latitudinal variation in light levels drives human visual system size. Biology Letters 8(1): 90–93.

Pereira et al (2005) African female heritage in Iberia: a reassessment of mtDNA lineage distribution in present times. Human Biology 77(2): 213–29.

Richards et al (2003) Extensive Female-Mediated Gene Flow from Sub-Saharan Africa into Near Eastern Arab Populations. American Journal of Human Genetics 72(4): 1058–1064.

Rushton & Ankney (2009) Whole brain size and general mental ability: A review. International Journal of Neuroscience 119: 691-731.

Sailer (1996) Great Black Hopes. National Review, August 12.

Scarr et al (1977) Absence of a relationship between degree of white ancestry and intellectual skills within a black population. Human Genetics 39(1): 69-86.

Templer (2010) The Comparison of Mean IQ in Muslim and Non-Muslim Countries. Mankind Quarterly 50(3): 188-209.

Torrence (1983) Time budgeting and hunter-gatherer technology. In G. Bailey (Ed.), Hunter-Gatherer Economy in Prehistory: A European Perspective. Cambridge: Cambridge University Press.

Woodley (2009) Inbreeding depression and IQ in a study of 72 countries. Intelligence 37(3): 268-276.

John Gray’s ‘Straw Dogs’: In Praise of Pessimism

‘Straw Dogs: Thoughts on Humans and Other Animals’, by John Gray, Granta Books, 2003.

The religious impulse, John Gray argues in a later work elaborating on the themes first set out in ‘Straw Dogs’, is as universal as the sex drive. Like the latter, when repressed, it re-emerges in the form of perversion.[1]

Thus, the Marxist faith in our passage into communism after the revolution represents a perversion of the Christian belief in our passage into heaven after death or Armageddon – the former, communism (i.e. heaven on earth), being quite as unrealistic as the otherworldly, celestial paradise envisaged by Christians, if not more so. 

Marxism is thus, as Edmund Wilson was the first to observe, the opiate of the intellectuals. 

What is true of Marxism is also, for Gray, equally true of what he regards as the predominant secular religion of the contemporary West – namely humanism. 

Its secular self-image notwithstanding, humanism is, for Gray, a substitute religion that replaces an irrational faith in an omnipotent god with an even more irrational faith in the omnipotence of Man himself (p38). 

Yet, in doing so, Gray concludes, humanism renounces the one insight that Christianity actually got right – namely the notion that humans are “radically flawed” as captured by the doctrine of original sin.[2]

Progress and Other Delusions

Of course, in its ordinary usage, the term ‘humanism’ is hopelessly broad, pretty much encompassing anyone who is neither, on the one hand, religious nor, on the other, a Nazi. 
 
For his purposes, Gray defines humanism more narrowly, namely as a “belief in progress” (p4). 

More specifically, however, he seems to have in mind a belief in the inevitability of social, economic, moral and political progress. 

Belief in the inevitability of progress is, he contends, a faith universal across the political spectrum – from neoconservatives who think they can transform Islamic tribal theocracies and Soviet Republics into liberal capitalist democracies, to Marxists who think Islamic tribal theocracies and liberal capitalist democracies alike will themselves ultimately give way to communism

Gray, however, rejects the notion of any grand narrative arc in human history.

“Looking for meaning in history is like looking for patterns in clouds” (p48). 

Scientific Progress and Social Progress 

Although in an early chapter he digresses on the supposed “irrational origins” of western science,[3] Gray does not question the reality of scientific progress. 
 
Instead, what Gray questions is the assumption that social, moral and political progress will inevitably accompany scientific progress. 
 
Progress in science and technology does not invariably lead to social, moral and political progress. On the contrary, new technologies can readily be enlisted in the service of governmental repression and tyranny. Thus, Gray observes: 

“Without the railways, telegraph and poison gas, there could have been no Holocaust” (p14). 

Thus, by Gray’s reckoning, “Death camps are as modern as laser surgery” (p173).
 
Scientific progress is, he observes, unstoppable and self-perpetuating. Thus, if any nation unilaterally renounces modern technology, it will be economically outcompeted, or even militarily conquered, by other nations who harness modern technologies in the service of their economy and military: 

“Any country that renounces technology makes itself prey to those that do not. At best it will fail to achieve the self-sufficiency at which it aims – at worst it will suffer the fate of the Tasmanians” (p178). 

However, the same is not true of political, social and moral progress. On the contrary, a nation excessively preoccupied with moral considerations would surely be defeated in war or indeed in economic competition by an enemy willing to cast aside morality for the sake of success. 
 
Thus, Gray concludes:

“Technology is not something that humankind can control. It is an event that has befallen the world” (p14). 

Thus, Gray anticipates: 

“Even as it enables poverty to be diminished and sickness to be alleviated, science will be used to refine tyranny and perfect the art of war” (p123). 

This leads him to predict: 

“If one thing about the present century is certain, it is that the power conferred on humanity by new technologies will be used to commit atrocious crimes against it” (p14). 

Human Nature

This is because, according to Gray, although technology progresses, human nature itself remains stubbornly intransigent. 

“Though human knowledge will very likely continue to grow and with it human power, the human animal will stay the same: a highly inventive animal that is also one of the most predatory and destructive” (p4). 

As a result, “The uses of knowledge will always be as shifting and crooked as humans are themselves” (p28). 
 
Thus, the fatal flaw in the humanist theory that political progress will inevitably accompany scientific progress is, ironically, its failure to come to grips with one particular sphere of scientific progress – namely progress in the scientific understanding of human nature itself. 
 
Sociobiological theory suggests humans are innately selfish and nepotistic to an extent incompatible with the utopias envisaged by reformers and revolutionaries
 
Evolutionary psychologists like to emphasize how natural selection has paradoxically led to the evolution of cooperation and altruism. They are also at pains to point out that innate psychological mechanisms are responsive to environmental variables and hence amenable to manipulation. 
 
This has led some thinkers to suggest that, even if utopia is forever beyond our grasp, nevertheless society can be improved by social engineering and well-meaning reform (see Peter Singer’s A Darwinian Left, which I have reviewed here, here and here). 

However, this ignores the fact that the social engineers themselves (e.g. politicians, civil servants) are possessed of the same essentially selfish and nepotistic nature as those whose behaviour they are seeking to guide and manipulate. Therefore, even if they were able to successfully reengineer society, they would do so for their own ends, not those of society or humankind as a whole.

Of course, human nature could itself be altered through genetic engineering or eugenics. However, once again, those charged with doing the work (scientists) and those from whom they take their orders (government, big business) will, at the time their work is undertaken, be possessed of the same nature that it is their intention to improve upon. 
 
Therefore, Gray concludes, if human nature itself is remodelled: 

“It will be done haphazardly, as an upshot of struggles in the murky realm where big business, organized crime and the hidden parts of government vie for control” (p6). 

It will hence reflect the interests, not of humankind as a whole, but rather of those responsible for undertaking the project. 

The Future

In contrast to the optimistic vision of such luminaries as Steven Pinker in The Better Angels of Our Nature and Enlightenment Now and Matt Ridley in his book The Rational Optimist (which I have reviewed here), Gray’s vision of the future is positively dystopian. He foresees a return of resource wars and “wars of scarcity… waged against the world’s modern states by the stateless armies of the militant poor” (p181-2).

This is an inevitable result of a Malthusian trap. 

“So long as population grows, progress will consist in labouring to keep up with it. There is only one way that humanity can limit its labours, and that is by limiting its numbers. But limiting human numbers clashes with powerful human needs” (p184).[4]

These “powerful human needs” include, not just the sociobiological imperative to reproduce, but also the interests of various ethnic groups in ensuring their survival and increasing their military and electoral strength (Ibid.). 

“Zero population growth could be enforced only by a global authority with draconian powers and unwavering determination” (p185). 

Unfortunately (or perhaps fortunately, depending on your perspective), he concludes: 

“There has never been such a power and never will be” (Ibid.). 

Thus, Gray compares the rise in human populations to the temporary “spikes that occur in the numbers of rabbits, house mice and plague rats” (p10). He concludes: 

“Humans… like any other plague animal… cannot destroy the earth, but… can easily wreck the environment that sustains them” (p12). 

Thus, Gray darkly prophesies, “We may well look back on the twentieth century as a time of peace” (p182). 

As Gray points out in his follow-up book: 

“War or revolution… may seem apocalyptic possibilities, but they are only history carrying on as it has always done. What is truly apocalyptic is the belief [of Marx and Fukuyama] that history will come to a stop” (Heresies: Against Progress and Other Illusions: p67).[5]

Morality

While Gray doubts the inevitability of social, political and moral progress, he perhaps does not question sufficiently its reality. 

For example, citing improvements in sanitation and healthcare, he concludes that, although “faith in progress is a superstition”, progress itself “is a fact” (p155). 
 
Yet every society, almost by definition, views its own moral and political values as superior to those of other societies; otherwise, these would not be its values in the first place. Every society therefore views the recent changes in moral and political values that produced its own current values as a form of moral progress. 
 
However, what constitutes moral, social and political progress is entirely a subjective assessment. 
 
For example, the ancient Romans, transported to our times, would surely accept the superiority of our science and technology and, if they did not, we would outcompete them both economically and militarily and thereby prove it ourselves. 

However, they would view our social, moral and political values as decadent, immoral and misguided and we would have no way of proving them wrong. 
 
In other words, while scientific and technological progress can be proven objectively, what constitutes moral and political progress is a mere matter of opinion. 
 
Gray occasionally hints in this direction (namely, moral relativism), declaring in one of his countless quotable aphorisms: 

“Ideas of justice are as timeless as fashions in hats” (p103). 

He even flirts with outright moral nihilism, describing “values” as “only human needs and the needs of other animals turned into abstractions” (p197), and venturing that “the idea of morality” may be nothing more than “an ugly superstition” (p90). 
 
However, Gray remains somewhat confused on this point. For example, among his arguments against morality is the observation that: 

“Morality has hardly made us better people” (p104). 

However, the very meaning of “better people” is itself dependent on a moral judgement. If we reject morality, then there are no grounds for determining if some people are “better” than others and therefore this can hardly be a ground for rejecting morality. 

Free Will

On the issue of free will, Gray is more consistent. Relying on the controversial work of neuroscientist Benjamin Libet, he contends: 

“In nearly all our life willing decides nothing – we cannot wake up or fall asleep, remember or forget our dream, summon or banish our thoughts, by deciding to do so… We just act and there is no actor standing behind what we do” (p69). 

Thus, he observes, “Our lives are more like fragmentary dreams than the enactments of conscious selves” (p38) and “Our actual experience is not of freely choosing the way we live but of being driven along by our bodily needs – by fear, hunger and, above all, sex” (p43). 
 
Rejection of free will is, moreover, yet a further reason to reject morality. 
 
Whether one behaves morally or not, and what one regards as the moral way to behave, is, Gray contends, entirely a matter of the circumstances of one’s upbringing (p107-8).[6] Thus, according to Gray, “being good is good luck” and not something for which one deserves credit or blame (p104).

Gray therefore concludes: 

“The fact that we are not autonomous subjects deals a death blow to morality – but it is the only possible ground of ethics” (p112). 

Yet, far from our being truly free, Gray contends: 

“We spend our lives coping with what comes along” (p70). 

However, in expecting humankind to take charge of its own destiny: 

“We insist that mankind can achieve what we cannot: conscious control of its existence” (p38). 

Self-Awareness

For Gray, then, what separates us from the remainder of the animal kingdom is not free will, or even consciousness, but rather merely self-awareness.
 
Yet this, for Gray, is a mixed blessing at best. 
 
After all, it has long been known that musicians and sportsmen often perform best, not when consciously thinking about, or even aware of, the movements and reactions of their hands and bodies, but rather when acting ‘on instinct’ and momentarily lost in what positive psychologists call flow or being in the zone (p61). 

This is a theme Gray returns to in The Soul of the Marionette, where he argues that, in some sense, the puppet is freer, and more unrestrained in his actions, than the puppet-master.

The Gaia Cult

Given the many merits of his book, it is regrettable that Gray has an unfortunate tendency to pontificate about all manner of subjects, many of them far outside his own field of expertise. As a result, almost inevitably, he sometimes gets it completely wrong on certain specific subjects. 
 
A case in point is environmentalist James Lovelock’s Gaia theory, which Gray champions throughout his book. 

According to ‘Gaia Theory’, the planet is analogous to a harmonious self-regulating organism – in danger of being disrupted only by environmental damage wrought by man. 

Given his cynical outlook, not to mention his penchant for sociobiology, Gray’s enthusiasm for Gaia is curious.

As Richard Dawkins explains in Unweaving the Rainbow, the adaptation of organisms to their environment, which consists largely of other organisms, may give the superficial appearance of eco-systems as harmonious wholes, as some organisms exploit and hence come to rely on the presence of other organisms in order to survive (Unweaving the Rainbow: p221). 
 
However, a Darwinian perspective suggests that, far from existing in benign harmony, organisms are in a state of continuous competition and conflict. Indeed, it is paradoxically precisely their exploitation of one another that gives the superficial appearance of harmony. 
 
In other words, as Dawkins concludes: 

“Individuals work for Gaia only when it suits them to do so – so why bother to bring Gaia into the discussion” (Unweaving the Rainbow: p225). 

Yet, for many of its adherents, Gaia is not so much a testable, falsifiable scientific theory as it is a kind of substitute religion. Thus, Dawkins describes ‘Gaia theory’ as “a cult, almost a religion” (Ibid: p223).

It is therefore better viewed, within Gray’s own theoretical framework, as yet another secular perversion of humanity’s innate religious impulse. 
 
Perhaps, then, Gray’s own curious enthusiasm for this particular pseudo-scientific cult suggests that Gray is himself no more immune from the religious impulse than those whom he attacks. If so, this, paradoxically, only strengthens his case that the religious impulse is indeed universal and innate.

The Purpose of Philosophy

Gray is himself a philosopher by background. However, he is contemptuous of most of the philosophical tradition that has preceded him. 

Thus, he contends:  

“As commonly practised, philosophy is the attempt to find good reasons for conventional beliefs” (p37). 

In former centuries such conventional beliefs were largely religious dogma. Yet, from the nineteenth century on, they increasingly became political creeds emphasizing human progress, such as Whig historiography, and the theories of Marx and Hegel.

Thus, Gray writes:  

“In the Middle Ages, philosophy gave intellectual scaffolding to the Church; in the nineteenth and twentieth centuries it served a myth of progress” (p82). 

Today, however, despite the continuing faith in progress that Gray so ably dissects, philosophy has ceased to fulfil even this function and hence abandoned even these dubious raisons d’être.

The result, according to Gray, is that:

“Serving neither religion nor a political faith, philosophy is a subject without a subject-matter; scholasticism without the charm of dogma” (p82). 

Yet Gray reserves particular scorn for moral philosophy, which is, according to him, “an exercise in make-believe” (p89) and “very largely a branch of fiction” (p109), albeit one “less realistic in its picture of human life than the average bourgeois novel” (p89), which, he ventures, likely explains why “a philosopher has yet to write a great novel” (p109). 

In other words, compared with outright fiction, moral philosophy is simply less realistic. 

Anthropocentrism

Although, at the time ‘Straw Dogs’ was first published, Gray held the title ‘Professor of European Thought’ at the London School of Economics, he is particularly scathing in his comments regarding Western philosophy. 

Thus, like Schopenhauer, his pessimist precursor (who is, along with Hume, one of the few Western philosophers whom he mentions without also disparaging), Gray purports to prefer Eastern philosophical traditions. 

These and other non-Western religious and philosophical traditions are, he claims, unpolluted by the influence of Christianity and hence view humans as merely another animal, no different from the rest. 

I do not have sufficient familiarity with Eastern philosophical traditions to assess this claim. However, I suspect that anthropocentrism and the concomitant belief that humans are somehow special, unique and different from all other organisms is a universal and indeed innate human delusion. 

Indeed, paradoxically, it may not even be limited to humans. 
 
Thus, I suspect that, to the extent they were, or are, capable of conceptualizing such a thought, earthworms and rabbits would also conceive of themselves as special and unique over and above all other species in just the same way we do.

Death or Nirvana?

Ultimately, however, Gray rejects Eastern philosophical and religious traditions too – including Buddhism. 
 
There is no need, he contends, to spend lifetimes striving to achieve nirvāna and the cessation of suffering, as the Buddha proposed, since: 

“Death brings to everyone the peace Buddha promised only after lifetimes of striving” (p129). 

All one needs to do, therefore, is to let nature take its course, or, if one is especially impatient, perhaps hurry things along by suicide or an unhealthy lifestyle.

Aphoristic Style

I generally dislike books written in the sort of pretentious aphoristic style that Gray adopts. In my experience, they generally replace the argumentation necessary to support their conclusions with bad poetry.

Indeed, sometimes the poetic style is so obscurantist that it is difficult even to discern what these conclusions are in the first place. 
 
However, in ‘Straw Dogs’, the aphoristic style seems for once appropriate. This is because Gray’s arguments, though controversial, are straightforward and require little in the way of additional explication. 
 
Indeed, one suspects the inability of earlier thinkers to reach the same conclusions reflects a failure of ‘The Will’ rather than ‘The Intellect’ – an unwillingness to face up to and come to terms with the reality of the human condition. 

‘A Saviour to Save us from Saviours’?

Unlike the authors of other works dealing with political themes, Gray does not conclude with a chapter proposing solutions to the problems identified in previous chapters. Instead, his conclusion is as bleak as the pages that precede it.

“At its worst, human life is not tragic, but unmeaning… the soul is broken but life lingers on… what remains is only suffering” (p101).

Personally, however, I found it refreshing that, unlike other self-important, self-appointed saviours of humanity, Gray does not attempt to portray himself as some kind of saviour of mankind. On the contrary, his ambitions are altogether more modest.

Moreover, he does not hold our saviours in particularly high esteem but rather seems to regard them as very much part of the problem. 
 
He does therefore consider briefly what he refers to as the Buddhist notion that we actually require “A Saviour to Save Us From Saviours”. 

Eventually, however, Gray renounces even this role. 

“Humanity takes its saviours too lightly to need saving from them… When it looks to deliverers it is for distraction, not salvation” (p121). 

Gray thus reduces our self-important, self-appointed saviours – be they philosophers, religious leaders, self-help gurus or political leaders – to no more than glorified competitors in the entertainment industry.

Distraction as Salvation?

Indeed, for Gray, it is not only saviours who function as a form of distraction for the masses. On the contrary, for Gray, ‘distraction’ is now central to life in the affluent West. 
 
Thus, in the West today, standards of living have improved to such an extent that obesity is now a far greater health problem than starvation, even among the so-called ‘poor’ (indeed, one suspects, especially among the so-called ‘poor’!). 
 
Yet clinical depression is now rapidly expanding into the greatest health problem of all. 
 
Thus, Gray concludes: 

“Economic life is no longer geared chiefly to production… [but rather] to distraction” (p162). 

In other words, where once, to acquiesce in their own subjugation, the common people required only bread and circuses, today they seem to demand cake, ice cream, alcohol, soap operas, Playstations, Premiership football and reality TV!

Indeed, Gray views most modern human activity as little more than distraction and escapism. 

“It is not the idle dreamer who escapes from reality. It is practical men and women who turn to a life of action as a refuge from insignificance” (p194). 

Indeed, for Gray, even meditation is reduced to a form of escapism: 

“The meditative states that have long been cultivated in Eastern traditions are often described as techniques for heightening consciousness. In fact they are ways of by-passing self-awareness” (p62). 
 

Yet Gray does not disparage escapism as a superficial diversion from serious and worthy matters. 
 
On the contrary, he views distraction, or even escapism, as the key, if not to happiness, then at least to the closest we can ever come to this elusive, chimerical state.

Moreover, the great mass of mankind instinctively recognizes as much:

“Since happiness is unavailable, the mass of mankind seeks pleasure” (p142). 

Thus, in a passage which is perhaps the closest Gray comes to self-help advice, he concludes: 

“Fulfilment is found, not in daily life, but in escaping from it” (p141-2). 

Perhaps, then, escapism is not such a bad thing, and there is something to be said for sitting around watching TV all day after all. 
____________ 

 
By his own thesis then, it is perhaps as a form of ‘Distraction’ that Gray’s own book ought ultimately to be judged. 
 
By this standard, I can only say that, with its unrelenting cynicism and pessimism, ‘Straw Dogs’ distracted me immensely – and, according to the precepts of Gray’s own philosophy, there can surely be no higher praise!

Endnotes

[1] John Gray, Heresies: Against Progress and Other Illusions: p7; p41. 

[2] John Gray, Heresies: Against Progress and Other Illusions: p8; p44. 

[3] John Gray, ‘Straw Dogs’: p20-23.

[4] Of course, the assumption that the human population will continue to grow contradicts the demographic transition model, whereby it is assumed that a decline in fertility inevitably accompanies economic development. However, while it is true that declining fertility has accompanied increasing prosperity in many parts of the world, it is not at all clear why this has occurred. Indeed, from a sociobiological perspective, increases in wealth should lead to an increased reproductive rate, as organisms channel their greater material resources into increased reproductive success, the ultimate currency of natural selection. It is therefore questionable how much faith we should place in the universality of a process the causes of which are so little understood. Moreover, the assumption that improved living-standards in the so-called ‘developing world’ will inevitably lead to reductions in fertility obviously presupposes that the so-called ‘developing world’ will indeed ‘develop’ and that living standards will indeed improve, an obviously questionable assumption. Ultimately, the very term ‘developing world’ may turn out to represent a classic case of wishful thinking. 

[5] Thus, of the bizarre pseudoscience of cryonics, whereby individuals pay private companies for the service of freezing their brains or whole bodies after death, in the hope that, with future advances in technology, they can later be resurrected, he notes that the ostensible immortality promised by such a procedure is itself dependent on the very immortality of the private companies offering the service, and of the very economic and legal system (including contractual obligations) within which such companies operate.

If the companies that store the waiting cadavers do not go under in stock market crashes, they will be swept away by war or revolutions” (Heresies: Against Progress and Other Illusions: p67).

[6] Actually, heredity surely also plays a role, as traits such as empathy and agreeableness are partly heritable, as are sociopathy and criminality.

Richard Dawkins’ ‘The Selfish Gene’: Selfish Genes, Selfish Memes and Altruistic Phenotypes

[In the process of resurrecting this long inactive blog, I have decided to start posting, among other things, full extended versions (i.e. vastly overlong versions) of my Amazon and Goodreads book reviews, since these, being vastly overlong, usually have to be edited in order to comply with the Amazon and Goodreads word-limits. I start, however, with a relatively shorter review (by my standards) of a favourite book, namely Richard Dawkins’ ‘The Selfish Gene’.]
_____________________________
‘The Selfish Gene’, by Richard Dawkins, Oxford University Press, 1976.

Selfish Genes ≠ Selfish Phenotypes

Richard Dawkins’ ‘The Selfish Gene’ is among the most celebrated, but also the most misunderstood, works of popular science.

Thus, among people who have never read the book (and, strangely, a few who apparently have) Dawkins is widely credited with arguing that humans are inherently selfish, that this disposition is innate and inevitable, and even, in some versions, that behaving selfishly is somehow justified by our biological programming, the titular ‘Selfish Gene’ being widely misinterpreted as referring to a gene that causes us to behave selfishly.

Actually, Dawkins is not concerned, either directly or primarily, with humans at all.

Indeed, he professes to be “not really very directly interested in man”, whom he dismisses as “a rather aberrant species” and hence peripheral to his own interest, namely how evolution has shaped the bodies and especially the behaviour of organisms in general (Dawkins 1981: p556).

‘The Selfish Gene’ is then, unusually, if not uniquely, for a bestselling work of popular science, a work, not of human biology nor even of non-human zoology, ethology or natural history, but rather of theoretical biology.

Moreover, in referring to genes as ‘selfish’, Dawkins has in mind not a trait that genes encode in the organisms they create, but rather a trait of the genes themselves.

In other words, individual genes are themselves conceived of as ‘selfish’ (in a metaphoric sense), in so far as they have evolved by natural selection to selfishly promote their own survival and replication by creating organisms designed to achieve this end.

Indeed, ironically, as Dawkins is at pains to emphasise, selfishness at the genetic level can actually result in altruism at the level of the organism or phenotype.

This is because, where altruism is directed towards biological kin, such altruism can facilitate the replication of genes shared among relatives by virtue of their common descent. This is referred to as kin selection or inclusive fitness theory and is one of the central themes of Dawkins’ book.
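
The underlying logic is usually summarized as ‘Hamilton’s rule’, derived in the Hamilton papers cited in the references below: a gene predisposing its bearer to altruism can spread whenever rb > c, where r is the genetic relatedness between actor and recipient, b the reproductive benefit to the recipient, and c the reproductive cost to the actor. The following snippet is purely my own illustration of that inequality, with invented numbers, not anything taken from Dawkins:

```python
def altruism_favoured(r, benefit, cost):
    """Hamilton's rule: a gene for altruism can spread when
    relatedness * benefit to recipient exceeds cost to actor."""
    return r * benefit > cost

# Illustrative numbers only: sacrificing one offspring-equivalent (cost = 1)
# to give a full sibling (r = 0.5) three offspring-equivalents pays off;
# doing the same for a first cousin (r = 0.125) does not.
print(altruism_favoured(r=0.5, benefit=3, cost=1))    # True
print(altruism_favoured(r=0.125, benefit=3, cost=1))  # False
```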

Yet, despite this, Dawkins still seems to see organisms themselves, humans very much included, as fundamentally selfish – albeit a selfishness tempered by a large dose of nepotism.

Thus, in his opening paragraphs no less, he cautions:

If you wish, as I do, to build a society in which individuals cooperate generously and unselfishly towards a common good, you can expect little help from our biological nature. Let us try to teach generosity and altruism, because we are born selfish” (p3).

The Various Editions

In later editions of his book, namely those published since 1989, Dawkins tempers this rather cynical view of human and animal behaviour by the addition of a new chapter – Chapter 12, titled ‘Nice Guys Finish First’.

This new chapter deals with the subject of reciprocal altruism, a topic he had actually already discussed earlier, together with the related, but distinct, phenomenon of mutualism,[1] in Chapter 10 (entitled, ‘You Scratch My Back, I’ll Ride on Yours’).

In this additional chapter, he essentially summarizes the work of political scientist Robert Axelrod, as discussed in Axelrod’s own book The Evolution of Cooperation. This deals with evolutionary game theory, specifically the iterated prisoner’s dilemma, and the circumstances in which a cooperative strategy can, by cooperating only with those who have a history of reciprocating, survive, prosper, evolve, and, in the long term, ultimately outcompete and hence displace those strategies which maximize only short-term self-interest.
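
By way of illustration, here is a minimal sketch of the dynamic Axelrod studied – a reciprocating ‘tit-for-tat’ strategy pitted against an unconditional defector over repeated rounds. The code is my own illustration, using the standard tournament payoff values, and is not taken from Axelrod or Dawkins:

```python
# Standard iterated prisoner's dilemma payoffs (each pair is (row, column)):
# mutual cooperation = 3, mutual defection = 1, sucker = 0, temptation = 5.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then copy the opponent's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []   # each player's record of the opponent's moves
    for _ in range(rounds):
        move_a, move_b = strategy_a(seen_by_a), strategy_b(seen_by_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (600, 600) - reciprocators prosper together
print(play(tit_for_tat, always_defect))   # (199, 204) - exploited only on the first round
```

Two reciprocators do far better against each other than two defectors do against each other, while a reciprocator loses to a defector only by the margin of the opening round, which is essentially why, over the long run of Axelrod’s tournaments, reciprocating strategies came out on top.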

Post-1989 editions also include another new chapter titled ‘The Long Reach of the Gene’ (Chapter 13).

If, in Chapter 12, the first additional chapter, Dawkins essentially summarised the contents of Axelrod’s book, The Evolution of Cooperation, then, in Chapter 13, he summarizes his own book, The Extended Phenotype.

In addition to these two additional whole chapters, Dawkins also added extensive endnotes to these post-1989 editions.

These endnotes clarify various misunderstandings which arose from how he explained himself in the original version, defend Dawkins against some criticisms levelled at certain passages of the book, and explain how the science has progressed in the years since the book’s first publication, including identifying things that he and other biologists got wrong.

With still more recent new editions, the content of ‘The Selfish Gene’ has burgeoned still further. Thus, the 30th Anniversary Edition boasts only a new introduction; the recent 40th Anniversary Edition, published in 2016, boasts a new Epilogue too. Meanwhile, the latest, so-called Extended Selfish Gene boasts, in addition to this, two whole new chapters.

Actually, these two new chapters are not that new, being lifted wholesale from, once again, The Extended Phenotype, a work whose contents Dawkins has already, as we have seen, summarized in Chapter 13 (‘The Long Reach of the Gene’), itself an earlier addition to the book’s seemingly ever expanding contents list.

The decision not to entirely rewrite ‘The Selfish Gene’ was apparently that of Dawkins’ publisher, Oxford University Press.

This was probably the right decision. After all, ‘The Selfish Gene’ is not a mere undergraduate textbook, in need of revision every few years in order to keep up-to-date with the latest published research.

Rather, it was a landmark work of popular science, and indeed of theoretical biology, that introduced a new approach to understanding the evolution of behaviour and physiology to a wider readership, composed of biologist and non-biologist alike, and deserves to stand in its original form as a landmark in the history of science.

However, while new introductions and a new epilogue are standard fare when republishing a classic work several years after first publication, the addition of four (or two, depending on the edition) whole new chapters strikes me as less readily defensible.

For one thing, they distort the structure of the book, and, though interesting in and of themselves, always read for me rather as if they have been tagged on at the end as an afterthought – as indeed they have.

The book certainly reads best, in a purely literary sense, in its original form (i.e. pre-1989 editions), where Dawkins concludes with an optimistic, if fallacious, literary flourish (see below).

Moreover, these additional chapters reek of a shameless marketing strategy, designed to deceive new readers into paying the full asking price for a new edition, rather than buying a cheaper second-hand copy or just keeping their old one.

This is especially blatant in respect of the book’s latest incarnation, The Extended Selfish Gene, which, according to the information on Oxford University Press’s website, was released only three months after the previous 40th Anniversary Edition yet includes two additional chapters.

One frankly expects better from so celebrated a publisher as Oxford University Press, and indeed so celebrated a biologist and science writer as Richard Dawkins, especially as I suspect neither is especially short of money.

If I were advising someone who has never read the book before on which edition to buy, I would probably suggest a second-hand copy of any post-1989 edition, since these can now be picked up very cheap and include the additional endnotes, which I personally found very interesting.

On the other hand, if you want to read three additional chapters either from or about The Extended Phenotype then you are probably best to buy, instead, well… The Extended Phenotype – as this is also now a rather old book of which, as with ‘The Selfish Gene’, old copies can now be picked up very cheap.

The ‘Gene’s-Eye-View’ of Evolution

The Selfish Gene is a seminal work in the history of biology primarily because Dawkins takes the so-called ‘gene’s-eye-view’ of evolution to its logical conclusion. To this extent, contrary to popular opinion, Dawkins’ exposition is not merely a popularization, but actually breaks new ground theoretically.

Thus, John Maynard Smith famously talked of ‘kin selection’ by analogy with ‘group selection’ (Smith 1964). Meanwhile, William Hamilton, who formulated the theory underlying these concepts, always disliked the term ‘kin selection’ and talked instead of the ‘direct’, ‘indirect’ and ‘inclusive fitness’ of organisms (Hamilton 1964a; 1964b).

However, Dawkins takes this line of thinking to its logical conclusion by looking – not at the fitness or reproductive success of organisms or phenotypes – but rather at the success in self-replication of genes themselves.

Thus, although he certainly stridently rejects group-selection, Dawkins replaces this, not with the familiar individual-level selection of classical Darwinism, but rather with a new focus on selection at the level of the gene itself.

Abstract Animals?

Much of the interest, and no little of the controversy, arising from ‘The Selfish Gene’ concerned, of course, its potential application to human behaviour. However, in the book itself, humans, whom, as mentioned above, Dawkins dismisses as a “rather aberrant species” in which he professes to be “not really very directly interested” (Dawkins 1981: p556), are actually mentioned only occasionally and briefly.

Indeed, most of the discussion is purely theoretical. Even the behaviour of non-human animals is described only for illustrative purposes, and even these illustrative examples often involve simplified hypothetical creatures rather than descriptions of the behaviour of real organisms.

For example, he illustrates his discussion of the relative pros and cons of either fighting or submitting in conflicts over access to resources by reference to ‘hawks’ and ‘doves’ – but is quick to acknowledge that these are hypothetical and metaphoric creatures, with no connection to the actual bird species after whom they are named:

The names refer to conventional human usage and have no connection with the habits of the birds from whom the names are derived: doves are in fact rather aggressive birds” (p70).

Indeed, even Dawkins’ titular “selfish genes” are rather abstract and theoretical entities. Certainly, the actual chemical composition and structure of DNA is of only peripheral interest to him.

Indeed, often he talks of “replicators” rather than “genes” and is at pains to point out that selection can occur in respect of any entity capable of replication and mutation, not just DNA or RNA. (Hence his introduction of the concept of memes: see below).

Moreover, Dawkins uses the word ‘gene’ in a somewhat different sense to the way the word is employed by most other biologists. Thus, following George C. Williams in Adaptation and Natural Selection, he defines a “gene” as:

Any portion of chromosomal material that potentially lasts for enough generations to serve as a unit of natural selection” (p28).

This, of course, makes his claim that genes are the principal unit of selection something approaching a tautology or circular argument.

Sexual Selection in Humans?

Where Dawkins does mention humans, it is often to point out the extent to which this “rather aberrant species” apparently conspicuously fails to conform to the predictions of selfish-gene theory.

For example, at the end of his chapter on sexual selection (Chapter 9: “Battle of the Sexes”) he observes that, in contrast to most other species, among humans, at least in the West, it seems to be females who are most active in using physical appearance as a means of attracting mates:

One feature of our own society that seems decidedly anomalous is the matter of sexual advertisement… It is strongly to be expected on evolutionary grounds that where the sexes differ, it should be the males that advertise and the females that are drab… [Yet] there can be no doubt that in our society the equivalent of the peacock’s tail is exhibited by the female, not the male” (p164).

Thus, among most other species, it is males who have evolved more elaborate plumages and other flashy, sexually selected ornaments. In contrast, females of the same species are often comparatively drab in appearance.

Yet, in modern western societies, Dawkins observes, it is more typically women who “paint their faces and glue on false eyelashes” (p164).

Here, it is notable that Dawkins, being neither an historian nor an anthropologist, is careful to restrict his comments to “our own society” and, elsewhere, to “modern western man”.

Thus, one explanation is that it is only our own ‘WEIRD’, western societies that are anomalous.

Thus, Matt Ridley, in The Red Queen, proposes that maybe:

Modern western societies have been in a two-century aberration from which they are just emerging. In Regency England, Louis XIV’s France, medieval Christendom, ancient Greece, or among the Yanomamö, men followed fashion as avidly as women. Men wore bright colours, flowing robes, jewels, rich materials, gorgeous uniforms, and gleaming, decorated armour. The damsels that knights rescued were no more fashionably accoutred than their paramours. Only in Victorian times did the deadly uniformity of the black frock coat and its dismal modern descendant, the grey suit, infect the male sex, and only in this century have women’s hemlines gone up and down like yo-yos” (The Red Queen: p292).

There is an element of truth here. However, I suspect it partly reflects a misunderstanding of the different purposes for which men and women use clothing, including bright and elaborate clothing.

Thus, it rather reminds me of Margaret Mead’s claim that, among the Tschambuli of Papua New Guinea, sex-roles were reversed because, here, it was men who painted their faces and wore ‘make-up’, not women.

Yet what Mead neglected to mention was that the ‘make-up’ in question, which she found so effeminate, was actually war-paint that a Tschambuli warrior was only permitted to wear after killing his first enemy warrior (see Homicide: Foundations of Human Behavior: p152).

Of course, clothes and makeup are an aspect of behaviour rather than morphology, and thus more directly analogous to, say, the nests (or, more precisely, the bowers) created by male bowerbirds than the tail of the peacock.

However, behaviour is, in principle, no less subject to natural selection (and sexual selection) than is morphology, and therefore the paradox remains.

Moreover, even focusing exclusively on morphology, the sex difference still seems to remain.

Thus, perhaps the closest thing to a ‘peacock’s tail’ in humans (i.e. a morphological trait designed to attract mates) is a female trait, namely breasts.

Thus, as Desmond Morris first observed, in humans, the female breasts seem to have been co-opted for a role in sexual selection, since, unlike among other mammals, women’s breasts are permanent, from puberty on, not present only during lactation, and composed primarily of fatty tissue, not milk (Møller et al 1995; Manning et al 1997; Havlíček et al 2016).

In contrast, men possess no obvious equivalent of the ‘peacock’s tail’ (i.e. a trait that has evolved in response to female choice) – though Geoffrey Miller makes a fascinating (but ultimately unconvincing) case that the human brain may represent a product of sexual selection (see The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature).[2]

Interestingly, in an endnote to post-1989 editions of ‘The Selfish Gene’, Dawkins himself tentatively speculates that maybe the human penis might represent a sexually-selected ‘fitness indicator’.

Thus, he points out that the human penis is large as compared to that of other primates, yet also lacks a baculum (i.e. penis bone) to facilitate erections. This, he speculates, could mean that the capacity to maintain an erection might represent an honest signal of health in accordance with Zahavi’s handicap principle (p307-8).

However, it is more likely that the large size, or more specifically the large width, of the human penis reflects instead a response to the increased size of the vagina, which itself increased in size to enable human females to give birth to large-brained, and hence large-headed, infants (see Bowman 2008; Sexual Selection and the Origins of Human Mating Systems: pp61-70).[3]

How then can we make sense of this apparent paradox, whereby, contrary to Bateman’s principle, sexual selection appears to have operated more strongly on women than on men?

For his part, Dawkins himself offers no explanation, merely lamenting:

What has happened in modern western man? Has the male really become the sought-after sex, the one that is in demand, the sex that can afford to be choosy? If so, why?” (p165).

However, in respect of what David Buss calls short-term mating strategies (i.e. casual sex, hook-ups and one night stands), this is certainly not the case.

On the contrary, patterns of everything from prostitution and rape to erotica and pornography consumption confirm that, in respect of short-term ‘commitment’-free casual sex, it remains women who are very much in demand and men who are the ardent pursuers (see The Evolution of Human Sexuality: which I have reviewed here).

Thus, in one study conducted on a University campus, 72% of male students agreed to go to bed with a female stranger who approached them with a request to this effect. In contrast, not a single one of the 96 females approached agreed to the same request from a male questioner (Clark and Hatfield 1989).

(What percentage of the students sued the university for sexual harassment was not revealed.)

However, humans also form long-term pair-bonds to raise children, and, in contrast to males of most other mammalian species, male parents often invest heavily in the offspring of such unions.

Men are therefore expected to be relatively choosier in respect of long-term romantic partners (e.g. wives) than they are for casual sex partners. This may then explain the relatively high levels of reproductive competition engaged in by human females, including high levels of what Dawkins calls ‘sexual advertising’.

Reproductive competition between women may be especially intense in western societies practising what Richard Alexander termed ‘socially-imposed monogamy’.

This refers to societies where there are large differences between males in social status and resource holdings, but where even wealthy males are prohibited by law from marrying multiple women at once.[4]

Here, there may be intense competition between females for exclusive rights to resource-abundant ‘alpha male’ providers (Gaulin and Boster 1990).

Thus, to some extent, the levels of sexual competition engaged in by women in western societies may indeed be higher than in non-western, polygynous societies.

This, then, might explain why females use what Dawkins terms ‘sexual advertising’ to attract long-term mates (i.e. husbands). However, it still fails to explain why males don’t – or, at least, don’t seem to do so to anything like the same degree.

The answer may be that, in contrast to mating patterns in modern western societies, ‘female choice’ may actually have played a surprisingly limited role in human evolutionary history, given that, in most pre-modern societies, arranged marriages were, and are, the norm.

Male mating competition may then have taken the form of ‘male-male contest competition’ (i.e. fighting) rather than displaying to females – i.e. what Darwin called ‘intra-sexual selection’ rather than ‘inter-sexual selection’.

Thus, while men indeed possess no obvious analogue to the peacock’s tail, they do seem to possess traits designed for fighting – namely considerably greater levels of upper-body musculature and violent aggression as compared to women (see Puts 2010).

In other words, human males may not have any obvious ‘peacock’s tail’, but perhaps we do have, if you like, ‘stag’s antlers’.

From Genes to Memes

Dawkins’ eleventh chapter, which was, in the original version of the book (i.e. pre-1989 editions), the final chapter, is also the only chapter to focus exclusively on humans.

Entitled ‘Memes: The New Replicators’, it focuses again on the extent to which humans are indeed an “aberrant species”, being subject to cultural as well as biological evolution to a unique degree.

Interestingly, however, Dawkins argues that the principles of natural selection discussed in the preceding chapters of the book can be applied just as usefully to cultural evolution as to biological evolution.

In doing so, he coins the concept of the ‘meme’ as the cultural unit of selection, equivalent to a gene, passing between minds analogously to a virus.

This term has been enormously influential in intellectual discourse, and has indeed passed into popular usage.

The analogy of memes to genes makes for an interesting thought-experiment. However, like any analogy, it can be taken too far.

Certainly ideas can be viewed as spreading between people, and as having various levels of fitness depending on the extent to which they catch on.

Thus, to take one famous example, Dawkins described religions as ‘Viruses of the Mind’, which travel between, and infect, human minds in a manner analogous to a virus.

Thus, proponents of Darwinian medicine contend that pathogens such as flu and the common cold produce symptoms such as coughing, sneezing and diarrhea precisely because these behaviours promote the spread and replication of the pathogen to new hosts through the bodily fluids thereby expelled.

Likewise, rabies causes dogs and other animals to become aggressive and bite, which likewise facilitates the spread of the rabies virus to new hosts.[5]

By analogy, successful religions are typically those that promote behaviours that facilitate their own spread.

Thus, a religion that commands its followers to convert non-believers, persecute apostates, ‘be fruitful and multiply’ and indoctrinate their offspring with its beliefs is, for obvious reasons, likely to spread faster and have greater longevity than a religious doctrine that commands its adherents to become celibate hermits and holds proselytism to be a mortal sin.

Thus, Christians are admonished by scripture to save souls and preach the gospel among heathens; while Muslims are, in addition, admonished to wage holy war against infidels and persecute apostates.

These behaviours facilitate the spread of Christianity and Islam just as surely as coughing and sneezing promote the spread of the flu.[6]
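
The differential ‘fitness’ of such doctrines can be illustrated with a deliberately crude toy model – my own, with invented numbers; nothing like this appears in Dawkins – namely a simple branching process in which each carrier of a meme exposes a handful of contacts, some fraction of whom adopt it in turn:

```python
def expected_adherents(adoption_rate, contacts_per_carrier=5, generations=10):
    """Toy branching process (expected values only): each carrier of a meme
    exposes a fixed number of contacts, a fraction of whom adopt it in turn."""
    carriers = 1.0   # start with a single founder
    total = 1.0
    for _ in range(generations):
        carriers *= contacts_per_carrier * adoption_rate
        total += carriers
    return round(total)

# A doctrine that urges conversion and the indoctrination of offspring
# (a high adoption rate per contact) snowballs exponentially...
print(expected_adherents(adoption_rate=0.4))    # 2047
# ...while one that forbids proselytism and reproduction barely
# spreads beyond its founder.
print(expected_adherents(adoption_rate=0.05))   # 1
```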

Like genes, memes can also be said to mutate, though this occurs not only through random (and not so random) copying errors, but also by deliberate innovation by the human minds they ‘infect’. Memetic mutation, then, is not entirely random.

However, whether this is a useful and theoretically or empirically productive way of conceptualizing cultural change remains to be seen.

Certainly, I doubt whether ‘memetics’ will ever be a rigorous science comparable to genetics, as some of the concept’s more enthusiastic champions have sometimes envisaged. Neither, I suspect, did Dawkins ever originally intend or envisage it as such, having seemingly coined the idea as something of an afterthought.

At any rate, one of the main factors governing the ‘infectiousness’ or ‘fitness’ of a given meme is the extent to which the human mind is receptive to it – and the human mind is itself a product of biological evolution.

The basis for understanding human behaviour, even cultural behaviour, is therefore how natural selection has shaped the human mind – in other words evolutionary psychology not memetics.

Thus, humans will surely have evolved resistance to memes that are contrary to their own genetic interests (e.g. celibacy) as a way of avoiding exploitation and manipulation by third-parties.

For more recent discussion of the status of the meme concept (the ‘meme meme’, if you like) see The Meme Machine; Virus of the Mind; The Selfish Meme; and Darwinizing Culture.

Escaping the Tyranny of Selfish Replicators?

Finally, at least in the original, non-‘extended’ editions of the book, Dawkins concludes ‘The Selfish Gene’, with an optimistic literary flourish, emphasizing once again the alleged uniqueness of the “rather aberrant” human species.[7]

Thus, his final paragraph ends:

We are built as gene machines and cultured as meme machines, but we have the power to turn against our creators. We, alone on earth, can rebel against the tyranny of the selfish replicators” (p201).

This makes for a dramatic, and optimistic, conclusion. It is also flattering to anthropocentric notions of human uniqueness, and of free will.

Unfortunately, however, it ignores the fact that the “we” who are supposed to be doing the rebelling are ourselves a product of the same process of natural selection and, indeed, of the same selfish replicators against whom Dawkins calls on us to rebel. Indeed, even the (alleged) desire to revolt is a product of the same process.[8]

Likewise, in the book’s opening paragraphs, Dawkins proposes:

Let us try to teach generosity and altruism, because we are born selfish. Let us understand what our selfish genes are up to, because we may then at least have the chance to upset their designs.” (p3)

However, this ignores, not only that the “us” who are to do the teaching and who ostensibly wish to instil altruism in others are ourselves the product of this same evolutionary process and these same selfish replicators, but also that the subjects whom we are supposed to indoctrinate with altruism are themselves surely programmed by natural selection to be resistant to any indoctrination or manipulation by third-parties to behave in ways that conflict with their own genetic interests.

In short, the problem with Dawkins’ cop-out Hollywood Ending is that, as anthropologist Vincent Sarich is quoted as observing, Dawkins has himself “spent 214 pages telling us why that cannot be true”. (See also Straw Dogs: Thoughts on Humans and Other Animals: which I have reviewed here and here).[9]

The preceding 214 pages, however, remain an exciting, eye-opening and stimulating intellectual journey, even over thirty years after their original publication.

__________________________

Endnotes

[1] Mutualism is distinguished from reciprocal altruism by the fact that, in the former, both parties receive an immediate benefit from their cooperation, whereas, in the latter, for one party, the reciprocation is delayed. It is reciprocal altruism that therefore presents the greater problem for evolution, and for evolutionists, because, here, there is the problem of policing the agreement – i.e. how is evolution to ensure that the immediate beneficiary does indeed reciprocate, rather than simply receiving the benefit without later returning the favour (a version of the free rider problem)? The solution, according to Axelrod, is that, where parties interact repeatedly over time, they come to engage in reciprocal altruism only with other parties with a proven track record of reciprocity, or at least without a proven track record of failing to reciprocate. 

[2] Certainly, many male traits are attractive to women (e.g. height, muscularity). However, these also have obvious functional utility, not least in increasing fighting ability, and hence probably have more to do with male-male competition than female choice. In contrast, many sexually-selected traits are positive handicaps to their bearers, in all spheres except attracting mates. Indeed, one influential theory of sexual selection claims that it is precisely because they represent a handicap that they serve as an honest indicator of fitness and hence a reliable index of genetic quality.

[3] Thus, Edwin Bowman writes:

As the diameter of the bony pelvis increased over time to permit passage of an infant with a larger cranium, the size of the vaginal canal also became larger” (Bowman 2008).

Similarly, in their controversial book Human Sperm Competition: Copulation, Masturbation and Infidelity, Robin Baker and Mark Bellis persuasively contend:

The dimensions and elasticity of the vagina in mammals are dictated to a large extent by the dimensions of the baby at birth. The large head of the neonatal human baby (384g brain weight compared with only 227g for the gorilla…) has led to the human vagina when fully distended being large, both absolutely and relative to the female body… particularly once the vagina and vestibule have been stretched during the process of giving birth, the vagina never really returning to its nulliparous dimensions” (Human Sperm Competition: p171).

In turn, larger vaginas probably select for larger penises in order to fill the vagina (Bowman 2008).

According to Baker and Bellis, this is because the human penis functions as a suction piston, serving to remove the sperm deposited by rival males, as a form of sperm competition, a theory that actually has some experimental support (Gallup et al 2003; Gallup and Burch 2004; Goetz et al 2005; see also Why is the Penis Shaped Like That).

Thus, according to this view:

In order to distend the vagina sufficiently to act as a suction piston, the penis needs to be a suitable size [and] the relatively large size… and distendibility of the human vagina (especially after giving birth) thus imposes selection, via sperm competition, for a relatively large penis” (Human Sperm Competition: p171).

However, even in the absence of sperm competition, Alan Dixson observes:

In primates and other mammals the length of the erect penis and vaginal length tend to evolve in tandem. Whether or not sperm competition occurs, it is necessary for males to place ejaculates efficiently, so that sperm have the best opportunity to migrate through the cervix and gain access to the higher reaches of the female tract” (Sexual Selection and the Origins of Human Mating Systems: p68).

[4] In natural conditions, it is assumed that, in egalitarian societies, where males have roughly equal resource holdings, they will each attract an equal number of wives (i.e. given an equal sex ratio, one wife for each man). However, in highly socially-stratified societies, where there are large differences in resource holdings between men, it is expected that wealthier males will be able to support, and provide for, multiple wives, and will use their greater resource-holdings for this end, so as to maximize their reproductive success (see here). This is a version of the polygyny threshold model (see Kanazawa and Still 1999).
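
The underlying logic can be sketched very simply. The code and resource figures below are my own illustration, not the formal model used by Kanazawa and Still: if each woman in turn simply joins whichever man currently offers her the largest per-wife share of his resources, monogamy results when holdings are roughly equal, while wealthy men accumulate multiple wives once inequality is sufficiently steep.

```python
def assign_wives(male_resources, num_women):
    """Each woman in turn joins the man offering the largest per-wife
    resource share -- the core intuition behind the polygyny threshold."""
    wives = [0] * len(male_resources)
    for _ in range(num_women):
        shares = [r / (w + 1) for r, w in zip(male_resources, wives)]
        wives[shares.index(max(shares))] += 1
    return wives

# Roughly equal holdings: one wife each (monogamy).
print(assign_wives([10, 10, 10, 10], num_women=4))   # [1, 1, 1, 1]

# Steep inequality: the richest man attracts three wives and the two
# poorest attract none, even though each woman chooses selfishly.
print(assign_wives([40, 12, 6, 4], num_women=4))     # [3, 1, 0, 0]
```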

[5] There are also pathogens that affect the behaviour of their hosts in more dramatic ways. For example, one parasite, Toxoplasma gondii, when it infects a mouse, reduces the mouse’s aversion to cat urine, which is theorized to increase the risk of its being eaten by a cat, facilitating the reproductive life-cycle of the pathogen at the expense of that of its host. Similarly, the fungus Ophiocordyceps unilateralis turns ants into so-called zombie ants, which willingly leave the safety of their nests and climb and lock themselves onto a leaf, again in order to facilitate the life cycle of their parasite at the expense of their own. Another parasite, Dicrocoelium dendriticum (aka the lancet liver fluke), also affects the behaviour of the ants it infects, causing them to climb to the tip of a blade of grass during daylight hours, increasing the chance that they will be eaten by cattle or other grazing animals and thereby facilitating the next stage of the parasite’s life-history.

[6] In contrast, biologist Richard Alexander, in Darwinism and Human Affairs, cites the Shakers as an example of the opposite type of religion, namely one that, because of its teachings (in this case, strict celibacy), largely died out.

In fact, however, the Shakers did not quite entirely disappear. Rather, a small rump community of Shakers, the Sabbathday Lake Shaker Village, survives to this day, albeit greatly reduced in number and influence. This is presumably because, although the Shakers did not, at least in theory, have children, they did proselytise.

In contrast, any religion which renounced both reproduction and proselytism would presumably never spread beyond its initial founder or founders, and hence never come to the attention of historians, theorists of religion, or anyone else in the first place.

[7]  As noted above, this is among the reasons that ‘The Selfish Gene’ works best, in a purely literary sense, in its original incarnation. Later editions have at least two further chapters tagged on at the end, after this dramatic and optimistic literary flourish.

[8] Dawkins is then here guilty of a crude dualism. Marxist neuroscientist Steven Rose, in an essay in Alas Poor Darwin (which I have reviewed here and here), has also accused Dawkins of dualism for this same passage, writing:

Such a claim to a Cartesian separation of these authors’ [Dawkins and Steven Pinker] minds from their biological constitution and inheritance seems surprising and incompatible with their claimed materialism” (Alas Poor Darwin: Arguments Against Evolutionary Psychology: p262).

Here, Rose may be right, but he is also a self-contradictory hypocrite, since his own views represent an even cruder form of dualism. Thus, in an earlier book, Not in Our Genes: Biology, Ideology, and Human Nature, co-authored with fellow-Marxists Leon Kamin and Richard Lewontin, Rose and his colleagues wrote, in a critique of sociobiological conceptions of a universal human nature:

Of course there are human universals that are in no sense trivial: humans are bipedal; they have hands that seem to be unique among animals in their capacity for sensitive manipulation and construction of objects; they are capable of speech. The fact that human adults are almost all greater than one meter and less than two meters in height has a profound effect on how they perceive and interact with their environment” (passage extracted in The Study of Human Nature: p314).

Here, it is notable that all the examples of “human universals that are in no sense trivial” given by Rose, Lewontin and Kamin are physiological, not psychological or behavioural. The implication is clear: yes, our bodies have evolved through a process of natural selection, but our brains and behaviour have somehow been exempt from this process. This, of course, is an even cruder form of dualism than that of Dawkins.

As John Tooby and Leda Cosmides observe:

This division of labor is, therefore, popular: Natural scientists deal with the nonhuman world and the “physical” side of human life, while social scientists are the custodians of human minds, human behavior, and, indeed, the entire human mental, moral, political, social, and cultural world. Thus, both social scientists and natural scientists have been enlisted in what has become a common enterprise: the resurrection of a barely disguised and archaic physical/mental, matter/spirit, nature/human dualism, in place of an integrated scientific monism” (The Adapted Mind: Evolutionary Psychology and the Generation of Culture: p49).

A more consistent and thoroughgoing critique of Dawkins’ dualism is to be found in John Gray’s excellent Straw Dogs (which I have reviewed here and here).

[9] This quotation comes from p176 of Marek Kohn’s The Race Gallery: The Return of Racial Science (London: Vintage, 1996). Unfortunately, Kohn does not give a source for this quotation.

__________________________

References

Bowman EA (2008) Why the human penis is larger than in the great apes Archives of Sexual Behavior 37(3): 361.

Clark & Hatfield (1989) Gender differences in receptivity to sexual offers, Journal of Psychology & Human Sexuality, 2:39-53.

Dawkins (1981) In defence of selfish genes, Philosophy 56(218):556-573.

Gallup et al (2003). The human penis as a semen displacement device. Evolution and Human Behavior, 24, 277-289.

Gallup & Burch (2004). Semen displacement as a sperm competition strategy in humans. Evolutionary Psychology, 2, 12-23.

Gaulin & Boster (1990) Dowry as Female Competition, American Anthropologist 92(4):994-1005.

Goetz et al (2005) Mate retention, semen displacement, and human sperm competition: a preliminary investigation of tactics to prevent and correct female infidelity. Personality and Individual Differences, 38: 749-763

Hamilton (1964) The genetical evolution of social behaviour I and II, Journal of Theoretical Biology 7:1-16,17-52.

Havlíček et al (2016) Men’s preferences for women’s breast size and shape in four cultures, Evolution and Human Behavior 38(2): 217–226.

Kanazawa & Still (1999) Why Monogamy? Social Forces 78(1):25-50.

Manning et al (1997) Breast asymmetry and phenotypic quality in women, Ethology and Sociobiology 18(4): 223–236.

Møller et al (1995) Breast asymmetry, sexual selection, and human reproductive success, Ethology and Sociobiology 16(3): 207-219.

Puts (2010) Beauty and the beast: mechanisms of sexual selection in humans, Evolution and Human Behavior 31:157-175.

Smith (1964). Group Selection and Kin Selection, Nature 201(4924):1145-1147.

Pornographic Progress, Sexbots and the Salvation of Man

Women are like elephants – nice to look at but I wouldn’t want to own one.
WC Fields

In my previous post (“The Sex Cartel: Puritanism and Prudery as Price-fixing among Prostitutes”), I discussed why prostitutes and other promiscuous women have invariably been condemned as immoral by other women on account of their promiscuity, despite the fact that they provide pleasure to, in some cases, literally thousands of men and, therefore, according to the tenets of the theory of ethics known as utilitarianism, are literally giving ‘the greatest happiness to the greatest number’ as Bentham advocated and ought therefore to be lauded as the highest paradigm of moral virtue right up alongside Mother Theresa, who, although she dedicated her life to healing, feeding and caring for the sick, poor and destitute, never went as far as actually sucking their cocks.

The answer, I concluded, lay in the concept of a price-fixing cartel that I christened ‘The Sex Cartel’, which functions to artificially inflate the price of sex, to the advantage of women as a whole, by stigmatizing, and where possible criminalizing, those women (e.g. prostitutes) who provide sexual services at below the going rate (e.g. outside of marriage). Puritanism and prudery are thus, I concluded, nothing more than price-fixing among prostitutes.

 In the current essay/post, I expand on this theory, extending the analysis to pornography. In doing so, I explain the gradual liberalization of attitudes towards sexual morality over the course of the twentieth century as a rational and inevitable response to what I term ‘Pornographic Progress’.

Finally, turning my gaze from the past to the future, I prophesy that the future of fucking, and the eventual emancipation of man from the sexual subjugation of The Sex Cartel, will come, not by political progress, reform, revolution or insurrection, but rather from Virtual Reality Pornography and so-called ‘Sexbots’.

Thus, the so-called ‘Sexual Revolution’ of the Swinging Sixties was but barely a beginning. The Real Sexual Revolution may be yet to come.

In Praise of Pornography

Across a variety of jurisdictions and throughout much of history, pornography in general, or particular genres of pornography, has been outlawed. Moreover, even where pornography is legalized, it is almost invariably heavily restricted and regulated by the state (e.g. age-restrictions).

Indeed, traditionally, not only pornography, but even masturbation itself was regarded as immoral and also a health risk. In the Victorian era, various strategies, devices and mechanisms were invented or adopted to prevent masturbation, from circumcision to Kellogg’s cornflakes.

Therefore, if men had really listened to their self-appointed moral guardians, their doctors, their medical experts, church leaders and other assorted professional damned fools who sought to dictate to them how they should and shouldn’t behave in public and in private and what they should and shouldn’t insert their penis inside of, they would have been completely reliant on women for their sexual relief and women’s sexual subjugation of men would have consequently been complete.

Today, the opposition to porn is dominated by an Unholy Alliance of Radical Feminists and Religious Fundamentalists, who, despite professing to be enemies, appear to be in complete agreement with one another on every particular of the issue.

This is no surprise. Despite their ‘left-liberal’ pretensions, feminists have always been, at heart, puritans, prudes and prohibitionists – from Prohibition itself, largely enacted at the behest of women’s groups such as the Woman’s Christian Temperance Union, to the current feminist crusades against pornography, prostitution and other such fun and healthy recreational activities.

Why then is porn so universally condemned, criminalized and state-regulated throughout history and across the world?

The production and consumption of pornography is, of course, a victimless crime. The vast majority of women who appear in pornography do so voluntarily, and they have every economic incentive for doing so, earning, as they do, substantial salaries, many times greater than the salaries commanded by the more talented male performers alongside whom they perform, who do much more difficult jobs.

Indeed, far from being inherently harmful, pornography provides joy and happiness to many men, not least many lonely and disadvantaged men, and a lucrative livelihood for many men and women both. There is even evidence it may reduce levels of sex crimes, by providing an alternative outlet for sexually-frustrated men.[1]

Why then is pornography criminalized and regulated?

The usual explanation is that pornography is demeaning towards women.

Yet what is demeaning about, say, a Playboy centerfold? Far from demeaning women, soft porn images seem to involve putting women on a pedestal, as representing something inherently beautiful and desirable, to be gazed at longingly and admiringly by men who pay money to buy pictures of them.

Meanwhile, even most so-called ‘hardcore’ pornography is hardly demeaning. Most simply involves images of consensual, and mutually pleasurable, sexual congress, a natural act. Certainly, it is no more demeaning towards women than towards men, who also appear in pornography but typically earn far less.

True, there is a minor subgenre of so-called ‘male domination’ within the BDSM subgenre. But this is mirrored, and indeed dwarfed, by the parallel genre of ‘female domination’, which seems to be the more popular fetish and involves images at least as demeaning to men as those depicted in ‘male domination’ are to women.[2]

True, if pornography does not portray women in a negative light, it does perhaps portray them unrealistically – i.e. as readily receptive to men’s advances and as desirous of commitment-free promiscuous sex as are men. However, as psychologist Catherine Salmon observes:

“[Whereas] pornography imposes a male-like sexuality on females, a fantasy of sexual utopia for men… consider the other side, the romance novel, or ‘porn’ for women. It imposes a female-like sexuality on men that is in many ways perhaps no more realistic than [pornography]. But no one is out there lobbying to ban romance novels because of the harm they do to women’s attitudes towards men.”[3]

As Jack Kammer explains in If Men Have All The Power How Come Women Make The Rules, while pornography represents a male fantasy, BDSM apart, it involves a fantasy, not of male domination, but rather of sexual equality – namely a world where women enjoy sex as much as men do, “participate enthusiastically in sex… love male sexuality, and… don’t hold out for money, dinner or furs”, and thereby lose their sexual power over men.[4]

On this view, Kammer concludes, “pornography does not glorify our sexual domination of women” but rather “expresses our fantasies of overcoming women’s sexual domination of us”.[5]

Pornography and The Sex Cartel

Yet this does not mean that the opposition to pornography is wholly misguided or irrational. On the contrary, I shall argue that, for women, opposition to pornography is wholly rational. However, it reflects, not the higher concerns of morality in which terms such opposition is typically couched, but rather base economic self-interest.

To understand why, we must revisit once again “Sex Cartel Theory”, introduced in my previous post. Whereas the prevalent prejudice against prostitutes reflects price-fixing among prostitutes, opposition to pornography reflects rent-seeking, or protectionism, among prostitutes.

Like price-fixing, rent-seeking (or protectionism) is a perfectly rational economic strategy. However, again like price-fixing, it is wholly self-interested and anti-competitive: while benefiting women, it imposes a concomitant cost on the rest of society (i.e. men).

An example is where practitioners in a certain industry (e.g. doctors, physiotherapists, lawyers) seek to prevent others (often those lacking a requisite qualification) from providing the same or a similar service, or to have them criminalized for doing so, rather than allowing the consumer free choice.

It is my contention that when women seek to restrict or criminalize pornography or other forms of sexual gratification for men, they are engaging in analogous behaviour in order to reduce competition for their own services.

Catherine Hakim explains:

“Look at social exchange between men and women in terms of women gaining control over men and gaining resources by regulating men’s access to sexual gratification. If pornography is an alternative source of such gratification for men, it… reduces women’s bargaining power in such a sexual/economic arena.”[6]

The essence of my argument is explained by psychologists Baumeister and Twenge in their article in the journal Review of General Psychology in 2002, which I quoted in my previous post. Here, Baumeister and Twenge observe:

Just as any monopoly tends to oppose the appearance of low-priced substitutes that could undermine its market control, women will oppose various alternative outlets for male sexual gratification, even if these outlets do not touch the women’s own lives directly.[7]

As I explained in my previous post, these ‘alternative outlets for male sexual gratification’ include, among other things, homosexuality, sex with animals, corpses, inflatable dolls, household appliances and all other such healthy and natural sexual outlets which are universally condemned by moralists despite the lack, in most cases, of any discernible victims.

However, although homosexuality, sex with animals, corpses, inflatable dolls and household appliances all represent, in one way or another, ‘alternative outlets for male sexual gratification’ per Baumeister and Twenge, undoubtedly pornography is first among equals.

After all, whereas most other outlets for sexual gratification (e.g. homosexuality, bestiality, necrophilia and inflatable dolls) will appeal to only a perverted and fortunate few, and will wholly satisfy even fewer, the same is not true of pornography, whose appeal among males seems to be all but universal.

Women are therefore right to fear and oppose pornography. Pornography already represents a major threat to women’s ability to attract and retain mates. Increasingly, it seems, men are coming to recognize that pornography offers a better deal than conventional courtship.

For example, one study published in the Journal of Experimental Research in Social Psychology found that, after viewing pornographic materials, men rated their commitment to their current relationships as lower than they had prior to being exposed to the pornographic materials.[8]

This should be no surprise. After all, the average wife or girlfriend is simply no match for the models and actresses featured in porn.

Who can seriously doubt that a few dollars for a magazine full of beautiful women expertly fucking and sucking, who, on the page, remain young and beautiful for ever, is better value than marriage to a single solitary real-life woman, who demands half your income, grows older and uglier with each passing year, probably wasn’t exactly a Playboy centerfold even to begin with, and who is legally obligated to fuck you only during the divorce-settlement?

Yet this desirable state of affairs was not always so. On the contrary, it is, in terms of human history, a relatively recent development.

To understand why and how this came to be, and the impact it came to have on the relations between the sexes and, in particular, the relative bargaining positions of the sexes in negotiating the terms of heterosexual coupling, we must first trace the history of what I term ‘pornographic progress’, from porn’s pre-human precursors and Palaeolithic Pleistocene prototypes to the contemporary relative pornographic utopia of Xvideos, Xhamster and Pornhub.

A Brief History of Pornographic Progress

Pornography is, I am convinced, the greatest ever invention of mankind. To my mind, it outranks even the wheel, the internal combustion engine and the splitting of the atom. As for sliced bread, it has always been, in my humble opinion, somewhat overrated.

The wonder of porn is self-evident. You can merrily masturbate to your cock’s content in the comfort and privacy of your own home without the annoyance, inconvenience and boredom of actually having to engage in a conversation with a woman either before or after. These days, one need never even leave the comfort of one’s home.

However, though today we take it for granted, porn was not always with us. On the contrary, it had to be invented. Moreover, its quality has improved vastly over time.

Proto-Porn and Pre-Human Precursors

Our pre-human ancestors had to make do without pornography. However, the demand was clearly there. For example, males of various non-human species respond to an image or representation of an opposite-sex conspecific (e.g. a photograph or model) with courtship displays and mating behaviour. Some even attempt, unsuccessfully, to mount the picture or model.

[Image: An Ophrys flower. By mimicking the appearance of bees to induce the latter into mating with them, Ophrys flowers function as ‘Nature’s prototype for the inflatable sex doll’?]

Ophrys flowers, a genus of orchids, take advantage of this behaviour to facilitate their own reproduction. Orchids of this genus reproduce by mimicking both the appearance and the pheromones of female insects, especially bees.

This causes male wasps and bees to attempt to copulate with them. Naturally, they fail in this endeavour. However, in so failing on successive occasions, they do successfully facilitate the reproduction of the orchids themselves. This is because, during this process of so-called pseudocopulation, pollen from the orchid becomes attached to the hapless male suitor. This pollen is then carried by the male until he (evidently not having learnt his lesson) attempts to mate with yet another flower of the same species, and thereby spreads the pollen enabling Orchids of the genus Ophrys to themselves reproduce.

Ophrys flowers therefore function as nature’s prototype for the inflatable sex doll.

In mimicking the appearance of female insects to sexually arouse hapless males, Ophrys flowers arguably constitute the first form of pornography. Thus, porn, like sonar and winged flight, was invented by nature (or rather by natural selection) long before humans belatedly got around to repeating this feat for themselves.

At any rate, one thing is clear: Though lacking pornography, our pre-human ancestors were pre-primed for porn. In short, the market was there – just waiting to be tapped by some entrepreneur sufficiently enterprising and sleazy to take advantage of this fact.

Prehistoric Palaeolithic Pleistocene Porn

Early man, it appears, developed porn at around the same time as he developed cave-painting and art. Indeed, as I shall argue, the facilitation of masturbation was likely a key motivating factor in the development of art by early humans.

[Image: Venus figurines – ‘Palaeolithic/Pleistocene Proto-Porn?’]

Take the so-called Venus figurines, so beloved of feminist archaeologists and widely recognised as one of the earliest forms, if not the earliest form, of sculpture. Countless theories have been developed regarding the function and purpose of these small sculptures of women with huge breasts and protruding buttocks.

They have been variously described, by feminist archaeologists and other professional damned fools, as, among other things, fertility symbols, idols of an earth goddess or mother goddess cult (the sole evidence for the existence of which are the figurines themselves) or even symbols of the matriarchy supposedly prevailing in hunter-gatherer bands (for which alleged social arrangement the figurines themselves again provide the only evidence).

The far more obvious explanation, namely that the figures represent portable, prehistoric Palaeolithic Pleistocene porn – sort of the stone-age equivalent of a 3-d Playboy – has been all but ignored by scholars.

True, they are, to say the least, a bit fat for modern tastes. However, as morbidly obese women never tire of reminding us, standards of beauty vary over time and place.

After all, if, as popular cliché has it, ‘beauty is in the eye of the beholder’, then sexiness is perhaps located in a different part of the male anatomy (‘sexiness is in the cock of the beholder’?), but is nevertheless equally subjective in nature.

Of course, this may partly reflect wishful thinking on the part of fat, ugly women, since research in evolutionary psychology has demonstrated that some aspects of beauty standards are cross-culturally universal.

Nevertheless, to some extent (albeit only in some respects) the fatties may be right.

After all, in other respects besides their morbid obesity, the images are obviously pornographic.

In particular, it is notable that no detail is shown in the figurines’ faces – no nose, eyes or mouth. Yet, on the other hand, the genitalia and huge breasts are rendered intricately – a view of the important aspects of female physiology unlikely to find favour with feminists.

Surely only a feminist or a eunuch could be so lacking in insight into male psychology as to flick through the pages of Playboy magazine (or, if you prefer, the buried archaeological remains of Playboy magazine a few thousand years hence), observe the focus on unfeasibly large breasts, protruding buttocks and female genitalia, and hence conclude that what he (or, more likely, she) had unearthed or stumbled across was the holy book of an Earth-Mother-Goddess cult!

Art as Porn

I am thoroughly convinced of the thesis that the ultimate function and purpose of all art, and thus indirectly arguably of civilization itself, is the facilitation of fapping

For the next twenty thousand years or so, pornography progressed only gradually. There were, of course, a few technological improvements – e.g. in the quality of paints, canvases etc. However, the primary advancements were in the abilities and aptitudes of the artists themselves, especially with regard to their capacity for naturalism/realism.

Eighteenth-Century Porn by Goya

Thus, by the early nineteenth century, there were classical nudes. Notwithstanding the pretensions of intellectual snobs towards higher forms of appreciation, anyone with a functioning penis can clearly perceive that the primary function and purpose of such works is the facilitation of masturbation.

At this juncture it is perhaps appropriate to declare that I am thoroughly convinced of the thesis that the ultimate function and purpose of all art, and thus indirectly arguably of civilization itself, is the facilitation of fapping.

Crucifixes: An early form of sadomasochistic gay porn?

The Catholic Church, on the other hand, has jealously guarded its own monopoly on pornography catering for more niche tastes. I refer, of course, to the ubiquitous crucifix, which no Catholic Church or pious papist home can ever be complete without.

Yet, on closer inspection, this familiar image is clearly, by any standards, rather suspect, to say the least. It represents, after all, a nearly naked man, wearing nothing more than a loincloth – and usually, I might add, a suspiciously lean and rather muscular man, who invariably sports a six-pack – writhing in pain while being nailed to a cross.

In short, the registered trademark of the One True Faith is, in truth, a blatant and undisguised example of sadomasochistic gay porn.

Indeed, it represents precisely the sort of homoerotic sadomasochistic imagery which, if depicted in any other context, would probably be condemned outright by the Church and banned along with The Origin of Species and Galileo. No wonder the Catholic priesthood and holy orders are, by all accounts, so jam-packed with perverts, sadists and pederasts.

Photography, Printing and a Proletarian Pornography for the People

 

The facilitation of masturbation forms the ultimate function and purpose, not only of all art, but also of all significant technological advance, from photography and the printing press, to the internet, robotics, virtual reality and beyond

However, crucifixes were clearly a niche fetish. Moreover, churches, unlike adult booths, generally neither facilitate nor encourage masturbation.

Meanwhile, classical nudes were necessarily of limited distribution. Worth a great deal of money and painted by the great masters, they were strictly for the rich – to hang in the drawing room and wank off to once the servants had safely retired to bed.

Clearly there was a need for a more widely available pornography, something within the grasp of all, howsoever humble. I refer to a Proletarian Pornography, suited to the age of democracy and socialism. A true Pornography for the People.

The invention of photography, combined with advances in printing, was eventually to provide the vehicle for this development over the course of the nineteenth century. By the dawn of the twentieth century there were magazines, both cheaper and better than the classical nudes that had preceded them. A true People’s Pornography had arrived.

Yet, once this process had begun, there was to be no stopping it. Soon there were moving pictures as well. It is a little-known fact that, in France, the first pornographic movies were filmed within just a few years of the development of moving pictures in the late nineteenth century. (Be warned, though: Pornhub it ain’t, and tissues are probably not required.)

From the invention of photography onwards, the history of pornographic progress is irretrievably bound up with scientific and technological progress itself.

Indeed, I am firmly of the opinion that the facilitation of masturbation forms the ultimate function and purpose, not only of all art, but also of all significant technological advance, from photography and the printing press, to the internet, robotics, virtual reality and beyond.

The Genre That Dare Not Speak Its Name

However, there remained a problem. As we have seen, the Sex Cartel, in order to maintain its jealously guarded monopoly over the provision of male sexual gratification, has sought to limit the distribution of porn. In addition to employing legal sanction to this end, they have also resorted to the court of public opinion – i.e. shaming tactics.

Thus, men who make use of pornography are subject to public censure and shaming, and variously castigated as ‘perverts’, ‘dirty old men’ and ‘losers’ incapable of attracting real-life women or girls for themselves.

The result is that the purchase of pornographic materials has long been subject to stigma and shame. A major component of Pornographic Progress has therefore been, not just improvement in the quality of the masturbationary material itself, but also in the ease, privacy and anonymity with which such material can be acquired and enjoyed.

This is illustrated in pornographic publications themselves. Before the internet age, pornographic publications almost invariably masqueraded as something other than pornography. Pornography thus became ‘the genre that dare not speak its name’.

For example, magazines invariably titled themselves with names like ‘Playboy’ or ‘Mayfair’ or ‘Penthouse’, as if wealthy, indolent and promiscuous millionaires were the only people expected, or permitted, to masturbate. Curiously, they virtually never adopted titles like ‘Horny Pervert’, ‘Dirty Old Man’ or ‘The Wanker’s Weekly – a Collection of Masturbationary Aides for the Discerning Self-Abuser’.

Elsewhere, pornography was disguised as sex scenes in mainstream movies and TV shows, or smuggled into newspapers. While Page Three is well known, even ‘respectable broadsheets’ were not immune, with articles about the evils of pornography often written largely, I suspect, as an excuse to include a few necessary illustrative examples beside the text. All of these evasions were designed to deflect some of the shame involved in buying, or owning, pornography.

A major part of pornographic progress is therefore the migration of pornography from adult booths and adult cinemas to the privacy of bedrooms and bathrooms.

Thus, a major development was home-video. Videos might still have to be bought in a shop (or they could be ordered by mail from an advert in the back of a magazine or newspaper), but masturbation itself could occur in private, rather than in an adult booth or seedy cinema.

Pornography was beginning to assume its modern form.

Then there were DVDs and subscription-only satellite TV stations.

Eventually came the Internet. People were spared even the embarrassment of buying porn in a shop. Now, they could not only watch it in the privacy and comfort of their own home – but download it there too.

Pornography had, by this point, assumed its contemporary form.

Pornographic Progress and the Sexual Revolution

What then has pornographic progress meant for the relations between the sexes in general and the terms of romantic coupling in particular?

It is my contention that the gradual liberalization of the standards of sexual morality over the course of the twentieth century is a direct result of the process of pornographic progress outlined in the previous sections.

Whereas most people view the increased availability of pornography as a mere symptom rather than a cause of the so-called ‘Sexual Revolution’ of the Sixties, my theory accords pornographic progress pride of place as the decisive factor explaining the liberalization of attitudes towards sex over the course of the twentieth century.

In short, as pornography has improved in quality and availability, it has come to represent an ever greater threat to women themselves and, in particular, their ability to entrap men into marriage with the lure of sex.

As sexual gratification was increasingly available without recourse to marriage (i.e. through pornography), men had less and less rational reason to subject themselves to marriage with all the inequitable burdens marriage imposes upon them.

After all, when pornography was restricted to Venus Figurines and cave paintings, virtually every man would prefer a real-life wife, howsoever ugly and personally obnoxious, to these poor pornographic substitutes.

However, when the choice is between an endless stream of pornographic models and actresses catering to every niche fetish imaginable expertly fucking and sucking as compared to marriage to a single real-life woman who grows older and uglier with each passing year and is legally obligated to fuck you only during the divorce settlement, the choice surely becomes more evenly balanced.

And, today, in the internet age, when images of Japanese girls in school-uniforms defecating into one another’s mouths are always just a mere mouse-click away, it comes close to being a no-brainer.

In response, as the quality and availability of pornographic materials increased exponentially, women were forced to lower their prices in order to compete with porn. The result was that promiscuity and sex before marriage, while once scandalous, became ever more common over the course of the twentieth century, as growing numbers of women were compelled, by this competition from pornography, to give up their bodies for a price somewhat less than that entailed in the marriage contract.

The male marriage strike is therefore a reaction, not only to the one-sided terms of the marriage contract, but also to the increasing availability of sexual relief outside of marriage, largely thanks to the proliferation of, and improvements in, pornography.

Whereas in the Victorian era, men had little option but to satisfy their biological need for sexual relief through, if not wives, then at least women (e.g. prostitutes), now increasingly pornography provides a real and compelling alternative to women themselves.

The average woman, being fat, ugly and old, is simply no match for the combined power of xvideos, xhamster and pornhub.

The Present

This then is the current state of play (or of playing with oneself) with regard to pornographic progress. The new face of porn is thus the internet.

Nudie magazines are now officially dead. Playboy magazine is now said to lose about $3 million annually, and the company seems to stay afloat largely by selling pencil-cases to teenage girls.

However, there is no reason to believe that pornographic progress will suddenly stop at the moment this article is published. To believe this would be to be as naïve as the publishers of nudie mags were when they failed to see the writing on the wall and make the move into the virtual sphere.

The current age of internet porn will come to an end, just as peep shows, adult cinemas, nudie mags and Venus figurines did before it. Just as these obsolete media of masturbationary aid were replaced by something altogether superior, so internet pornography will in turn be replaced by something better still.

Wanking will only get better. This much is certain. The only uncertainty is the form this improvement will take.

The Future of Fucking

Predicting the future is a notoriously difficult endeavour. Indeed, perhaps the one prediction about the future that we can hold with confidence is that the vast majority of predictions about the future will turn out to be mistaken.

Whereas in all previous porn, it was women themselves who swallowed – along with the cum – the majority of the profits, with virtual reality porn and sexbots, actresses will be digitally generated and women themselves wholly bypassed to cut costs.

Nevertheless, I am sufficiently confident about the future of pornographic progress to venture a few guesses as to the form it will take.

One possibility is what I term Virtual Reality Porn, namely an improvement in gaming technologies able to provide a more realistic simulation of real life. The result may be something akin to the ‘holodeck’ in Star Trek, the pornographic potential of which is only occasionally alluded to in the series.

However, this is not, on reflection, the direction in which I expect pornography to progress.

There are two problems. First, for the moment at least, even the most state-of-the-art gaming technologies represent a crude simulation of real life, as anyone who has ever played them for more than a few minutes soon realizes.

Second, although the characters with whom one interacts may come to look increasingly beautiful and lifelike, there is still the problem that one will not be able to touch them. In lieu of touching the virtual porn stars with whom one interacts, one will be obliged (as in most contemporary pornography) to touch oneself instead, which is, as always, a poor substitute.

I therefore confidently predict that, in the short-term, pornographic progress will come in another sphere instead, namely robotics.

Sex Dolls

Already the best Japanese sex dolls are better looking than the average woman. In addition, they do not nag, spend your money or grow fatter or uglier with each passing year. It is true that they remain utterly inert, immobile and unresponsive. However, on the plus side, this also means they already have personalities more pleasant than the average woman’s.

Whereas all but the most rudimentary ‘Virtual Reality Porn’ remains the stuff of science fiction, the development of, if not true ‘Sexbots’, then at least of their immediate pornographic precursors, is surprisingly well advanced. I refer here to the development of sex dolls.

Although they are not, as yet, in any sense truly robotic, sex dolls have already progressed far beyond the inflatable dolls of bawdy popular humour. In Japan – a nation always at the cutting-edge of both technological progress and sexual perversion – sex dolls made of silicone are already available which not only look, but feel to the touch, exactly like a real woman.

As yet, these sex dolls remain relatively expensive. Costing several thousand pounds, they are no idle investment. They are probably, on balance, still cheaper than a girlfriend, let alone a divorce settlement, though not yet comparable in price to a trip to, say, Thailand.

In some respects, however, sex dolls are already better than a real woman – or, at least, better than the sort of real woman their customers, or, indeed, the average man, is likely to be able to attract.

Already a Japanese Candy Girl (or even its American equivalent Real Doll) is better looking than the average woman. In addition, it does not nag, spend your money, get upset when you have sex with her best friend or grow fatter or uglier with each passing year.

And, of course, they are not yet, in any sense, truly robotic.

In terms of appearance, they are distinguishable from a real-life woman only by their beauty and lack of imperfections. However, they remain utterly inert, immobile, unresponsive and incapable of even the most rudimentary and inane conversation of the sort in which women specialize.

However, on the plus side, this also means they already have a personality more pleasant than the average woman.

One might say that they are… as lifelike as a corpse.

From Sex Dolls to ‘Sexbots’ – The Future of Pornographic Progress

All this, however, could soon change. Already the American manufacturers of the RealDoll, who market themselves as producing “the world’s finest love doll”, have begun experimenting with making their love dolls robotic. In other words, within the year, the first so-called ‘SexBots’ – robots designed for the purpose of sexual intercourse with humans – may come off the assembly line.

Within a few decades, Sexbots will be exactly like women themselves, save, of course, for a few crucial improvements. They will not nag, cheat on you or get angry when you cheat on them. Moreover, they will be designed to your exact specifications of beauty and breast-size and, unlike real wives, will not grow older and uglier with each passing year or seek to divorce you and steal your money.

In addition, they will have one crucial improvement over every woman who has ever existed in every society throughout human history howsoever beautiful and slutty – namely, an off-switch and handy storage place in the cupboard for when one tires of them or they become annoying and clingy. This is both cheaper than divorce and easier to get away with than murder.

The Campaign Against Sexbots

Perhaps the best evidence of the coming obsolescence of womankind is the reaction of women themselves.

It is notable that, although sexbots remain, for the moment at least, a figment of the male imagination, a thing of science-fiction rather than of science, the political campaign against them has already begun. Indeed, it even has its own website.

Just as feminists, moralists and other professional damned fools have consistently opposed other ‘alternative outlets for male sexual gratification’ such as pornography and prostitutes, so the campaign against Sexbots has begun before the first such robots have even come off the assembly line.

Not content with seeking to outlaw sex robots before they even exist, opponents have even sought to censor free speech and discussion regarding the topic. Thus, an academic conference devoted to the topic had to be cancelled after being banned by the authorities in the host nation.

No prizes, either, for guessing that the campaign is led by a woman, one Dr Kathleen Richardson, a ‘bioethicist’ – or, in layman’s terms, a professional damned fool.

It is no surprise either that the woman herself is, to put it as politely as possible, aesthetically challenged (i.e. as ugly as a cow’s ass) and therefore precisely the sort of woman likely to be among the first casualties of competition from even the most primitive of sexbots.

(Just for the record, this is not an ad hominem or gratuitous personal abuse. Whether she is consciously aware of it or not, the fact that she is hideously and repulsively physically unattractive is directly relevant to why she is motivated to ban sexbots. After all, whereas more physically attractive women may be able to fight off competition from robots and still attract male suitors for somewhat longer, it is ugly women such as herself who are sure to be the first casualties of competition from even the most rudimentary of robots. Indeed, one suspects even an inflatable doll is more visually alluring, and probably has a more appealing personality, than this woman.)

There is a key giveaway to the real motivation underlying this ostensibly moral campaign: namely, these same bioethicist luddites have, strangely, never, to my knowledge, objected on moral grounds to, let alone launched high-profile media campaigns against, vibrators, dildos and other sex toys for women.

Yet vibrators are surely far more widely used by women than sex dolls are by men and also far less stigmatized. As for actual sexbots, these have yet even to be invented.

So why campaign only against the latter? This is surely a classic example of what feminists are apt to refer to as ‘sexual double-standards’.

Are Women Obsolete?

Within perhaps just a couple of decades, women will be obsolete – just another once-useful technology wholly supplanted by superior technologies, like typewriters, video recorders, the Commodore 64 and long drop toilets.

There has, in recent years, been something of a fashion within the publishing industry, and among feminists, for books with outlandish titles like Are Men Necessary? and The End of Men, which triumphantly (and gendercidally) hail the (supposed) coming obsolescence of men. Such hysterical ravings are not only published by mainstream publishers, but even taken seriously in the mainstream media.

This is, of course, like most feminist claims, wholly preposterous.

The self-same women who loudly proclaim that men are obsolete live in homes built by men, rely on clean water and sewage systems built and maintained by men, on electricity generated by men working in coal mines and on oil rigs and, in the vast majority of cases, live in whole or in part off the earnings of a man, whether that man be a husband, an ex-husband or the taxpayer.

In short, as Fred Reed has observed, ‘Without men, civilization would last until the oil needs changing.’

However, while talk of the End of Men is obviously not so much premature as positively preposterous, the same may not be true of the End of Women. As Steve Moxon suggests, were Freud not a complete charlatan, it would be tempting to explain the bizarre notion that men are about to become obsolete by reference to the Freudian concept of projection.[9] For the painful truth is that it is women who are on the verge of obsolescence, not men.

Already the best Japanese sex dolls are better looking than the average woman and lose their looks less rapidly. Already, they are cheaper than the average divorce settlement. And, being unable to speak or interact with their owners in any way, they already have personalities more pleasant and agreeable than the average woman’s.

Soon with developments in robotics, they will be vastly superior in every way.

Sexbots and the End of Woman

It is time to face facts, howsoever harsh or unwelcome they may be in some quarters.

Sexbots will have one crucial improvement over every woman who has ever existed howsoever beautiful and downright slutty – namely, an off-switch and handy storage place in the cupboard. This is both cheaper than divorce and easier to get away with than murder.

Within just a couple of decades, women will be obsolete – just another once-useful technology wholly supplanted by superior technologies, like typewriters, video recorders, the Commodore 64 and the long drop toilet.

Like all cutting-edge scientific advancements and technological developments, sexbots will be invented, designed, built, maintained and repaired almost exclusively by men. Women will thus be cut out of the process altogether.

This is a crucial development. In all pre-existing forms of porn since the development of photography, the primary financial beneficiaries have always been women themselves, or at least a small subsection of women (namely, those willing to undercut their sex-industry competitors by agreeing to appear in pornography).

While it was men’s technological expertise that created photography, moving pictures and the internet, and men’s entrepreneurial vision that created the great commercial porn empires, real-life women still had to be employed as models or actresses, and typically demanded exorbitant salaries, many times those of the male performers alongside whom they performed (and whose jobs were much more difficult), for jobs that often involved nothing more than posing naked or engaging in sexual acts in front of a camera.

In short, although it was men’s technological and entrepreneurial brilliance that produced porn, it was women themselves who swallowed – along with the cum – the majority of the profits.

However, with Virtual Reality Porn and Sexbots, there will be no need of ‘actresses’ or ‘models’. Already magazine pictures are digitally-enhanced to remove imperfections. In the future, porn stars will be digitally-generated. Women themselves will be wholly bypassed in order to cut costs.

Increasingly, women will find themselves rendered superfluous to requirements.

From blacksmiths and tailors to cobblers, weavers and thatchers – technological advance and innovation have rendered countless professions obsolete. Soon perhaps the Oldest Profession itself will go the same way. It’s called progress. The Real Sexual Revolution has but barely begun…

After all, who the hell would want a real wife or girlfriend or even a whore when you can download something just the same or better from a hard disk or purchase it as a self-assembly robot for a fraction of the price – minus the incessant nagging, endless inane chattering and obnoxious personality? Plus, this one can be designed according to your precise specifications and doesn’t mind when you screw her best friend or forget her anniversary.

Soon women will be put out to pasture just like any other outdated machinery. Or maybe displayed in museums for educational purposes, to show how people used to live long ago.

If it is deemed desirable to maintain the human species, then, so long as a womb is necessary to incubate a baby, a few women may be retained for reproductive purposes – perhaps housed in battery cages for greater reproductive efficiency.

This is why women so despise pornography, with a passion and venom unmatched by other forms of Puritanism. That’s why they create entire ideologies – from Radical Feminism to Religious Fundamentalism – dedicated to its destruction. Because it represents a threat to their own very existence, livelihood and survival!

But the good news is – Women Cannot Win. The ferocity of the feminist onslaught only confirms what women must already intuitively grasp – namely, that the writing is already on the wall.

Technological progress is, for better or worse, unstoppable.

Like the mythical Ned Ludd and his followers who, in response to being rendered unemployable by the mechanization of labour, smashed workplace machinery across the north of England in the Nineteenth Century in the vain hope of stopping progress and their own inevitable obsolescence – the prudes, puritans, luddites and feminists are destined to fail.

Like it or not, Virtual Reality Porn and Sexbots are on the way. The ultimate salvation of man from the tyranny of the Sex Cartel will lie, not in men’s rights activism, campaigning, political action, reform, rape, nor even in revolution – but rather in sexbots and hardcore virtual-reality porn.

After all – from blacksmiths and tailors to cobblers, weavers and thatchers – technological advance and innovation have rendered countless professions obsolete. Soon perhaps the Oldest Profession itself will go the same way.

It’s called progress.

The Real Sexual Revolution has but barely begun!

_________________

Footnotes/Endnotes

[1] E.g. Diamond, M. (1999) ‘The Effects of Pornography: an international perspective’ in Porn 101: Eroticism, Pornography, and the First Amendment

[2] For example, as an admittedly rather pseudo-scientific measure of the popularity of the two genres, it is notable that TubeGalore.com – in my own extensive experience the most comprehensive of the various porn search engines – returned over twenty-five times as many results for the search “Femdom”, as for “Maledom” (284377 vs. 11134).

[3] Salmon C ‘the Pornography Debate: what sex differences in erotica can tell about human sexuality’ in Evolutionary Psychology, Public Policy and Personal Decisions (New Jersey: Lawrence Erlbaum 2004) by Crawford C & Salmon C (eds.) pp217-230 at p227

[4] If Men Have All The Power How Come Women Make The Rules (2002): p56.

[5] If Men Have All The Power How Come Women Make The Rules (2002): p57.

[6] Salmon, C ,‘The Pornography Debate: what sex differences in erotica can tell about human sexuality’ in Evolutionary Psychology, Public Policy and Personal Decisions (New Jersey: Lawrence Erlbaum 2004) by Crawford C & Salmon C (eds.) pp217-230 at p227

[7] Baumeister, RF, & Twenge, JM (2002). ‘Cultural Suppression of Female Sexuality’, Review of General Psychology 6(2) pp166–203 at p172

[8] Kenrick, DT, Gutierres, SE & Goldberg, LL, ‘Influence of popular erotica on judgments of strangers and mates’ Journal of Experimental Social Psychology (1989) 25(2): 159–167.

[9] Moxon S The Woman Racket: p133.

The Sex Cartel: Puritanism and Prudery as Price-fixing Among Prostitutes

“There would seem to be, indeed, but small respect among women for virginity per se. They are against the woman who has got rid of hers outside marriage, not because they think she has lost anything intrinsically valuable, but because she has made a bad bargain… and hence one against the general advantage and well-being of the sex. In other words, it is a guild resentment that they feel, not a moral resentment.”

HL Mencken, In Defence of Women 1922

“Why is the woman of the streets who spends her sex earnings upon her lover scorned universally?… These women are selling below the market, or scabbing on the job.”

RB Tobias & Mary Marcy, Women as Sex Vendors 1918

In my previous post, I discussed the curious paradox whereby prostitutes and other promiscuous women are invariably condemned by moralists as sinful and immoral, despite the fact that they provide pleasure to, in some cases, literally thousands of men. According to the tenets of utilitarianism, they are thus literally giving ‘the greatest happiness to the greatest number’, as Bentham advocated, and ought therefore to be lauded as the highest paradigm of moral virtue, right up alongside Mother Teresa – who, although she dedicated her life to healing, feeding and caring for the sick, poor and destitute, never went so far as actually sucking their cocks.

Why then are prostitutes invariably condemned and castigated as immoral?

Broadening the scope of our discussion, we might also ask why so many other sexual behaviours – from homosexuality and masturbation to pornography and sex with household appliances – have been similarly condemned as immoral despite the lack of a discernible victim.

In this post, I attempt to provide an explanation. The answer, I propose, is to be sought, not so much in the arcane theorizing of moral philosophers, nor in the endless hypocritical moralizing of moralists and other assorted ‘professional damned fools’, but rather in the dismal science of economics.

Thus, far from being rooted in morality or ethics, the phenomenon is rooted, like so much else in life, in base economic self-interest – or, more particularly, the base economic self-interest of women.

___________

The entire process of conventional courtship is predicated on prostitution – from the social expectation that the man pay for dinner on the first date, to the legal obligation that he continue to support his ex-wife, through alimony and maintenance, for anything up to ten or twenty years after he has belatedly rid himself of her. The Oxford English Dictionary defines a prostitute as ‘a person who engages in sexual intercourse for payment’. That’s not the definition of a prostitute. That’s the definition of a woman! The distinguishing feature of prostitutes isn’t that they have sex for money – it’s that they provide such excellent value for money.

To understand this phenomenon, one must first register a second curious paradox – namely, that the self-same women who liberally and routinely denounce other women as ‘whores’ and ‘sluts’ on account of the latter’s perceived promiscuity themselves qualify as ‘prostitutes’ by the ordinary dictionary definition of this word.

In The Manipulated Man, her masterpiece of unmitigated misogyny (which I have reviewed here), prominent anti-feminist polemicist Esther Vilar puts it like this:

“By the age of twelve at the latest, most women have decided to become prostitutes. Or, to put it another way, they have planned a future for themselves which consists of choosing a man and letting him do all the work. In return for his support, they are prepared to let him make use of their vagina at certain given intervals.”

The Oxford English Dictionary defines a prostitute as ‘a person who engages in sexual intercourse for payment’.

That’s not the definition of a prostitute. That’s the definition of a woman!

The distinguishing feature of prostitutes isn’t that they have sex for money – it’s that they provide such excellent value for money.

After all, who can seriously doubt that thirty quid for a bargain basement blowjob in an alleyway or Soho flat provides better value than conventional courtship? Marriage is simply a bad bargain.

If you want sex, pay a hooker. If you want companionship, buy a dog. Marriage is not so much ‘disguised prostitution’ as flagrant extortion. Frankly, in the long-run, one is likely to get better value for money in a Soho clip-joint.

___________

Yet, whereas marriage is a raw deal for men, it is, for precisely the same reason, a very good deal for women. The more that men are obliged to pay out in exorbitant divorce settlements and maintenance demands, the more women receive in these same divorce packages. In short, courtship is a zero-sum game – and women are always the winners.

It is therefore no surprise that, as the feminists incessantly remind us, men earn more money than women. After all, why would any woman take the trouble to earn money when she has the far easier option of stealing it in the divorce courts instead? Moreover, there is no fear of punishment. Far from the courts punishing the wrongdoers, the family courts are actually accessories and enablers, who actively aid and abet the theft.

Marrying money is both quicker and easier than earning it for yourself. Thus, just as slaveholders had a vested interest in defending the institution of slavery, women in general, and wives in particular, have a vested interest in defending the institution of marriage.

However, in doing so, they are faced with a difficulty, namely that no rational man would ever voluntarily choose to get married any more than he would choose to voluntarily enslave himself. It is, as we have seen, simply a bad bargain. Some combination of prostitutes, promiscuity, pornography and perversion is always preferable.

Since women have a vested interest in defending and promoting the institution of marriage, women also therefore have a vested interest in discouraging those alternative outlets for male sexual desire that threaten the institution of marriage by offering, on the whole, a better deal for men. This then is where sexual morality comes in.

___________

“Just as any monopoly tends to oppose the appearance of low-priced substitutes that could undermine its market control, women will oppose various alternative outlets for male sexual gratification”

The key factor uniting pornography, promiscuity, prostitution, perversion, masturbation, homosexuality, sex with corpses, with animals, with inflatable dolls, with household appliances and all other such fun and healthy activities that are universally condemned by moralists, feminists, politicians, assorted do-gooders and other professional damned fools despite the lack of any discernible victim is that each represents a threat to the monopoly over the provision of men’s sexual pleasure jealously guarded by ‘respectable’ women.

These respectable women, to maintain their monopoly, therefore seek to stigmatize, or even, where possible, criminalize these normal, healthy and natural alternative outlets for male sexual gratification.

Take, for example, pornography. Not only are the performers, producers and consumers of pornography widely stigmatized (as ‘whores’ and ‘perverts’ respectively), but, in virtually all times and places, pornography is heavily regulated and restricted, if not wholly illegal, while an unholy alliance of religious fundamentalists and radical feminists endlessly campaigns for still further restrictions.

Thus, in Britain, so-called ‘hardcore’ pornography (i.e. featuring real sex between actors) was only legalized in 2000, when the pressures of European integration and the internet had made this change unavoidable. In recent retrograde measures, governments have even tightened restrictions on porn, criminalizing the mere possession of certain varieties of so-called extreme pornography.

Why is this? Simply because pornography represents a threat to women’s marriage prospects by offering men an alternative outlet for sexual gratification that provides better value for money than marriage.

Baumeister and Twenge explain the basic economic logic in their article ‘Cultural Suppression of Female Sexuality’ published in the journal Review of General Psychology in 2002:

“Just as any monopoly tends to oppose the appearance of low-priced substitutes that could undermine its market control, women will oppose various alternative outlets for male sexual gratification.”[1]

This is because “pornography and other forms of sexual entertainment… offer men sexual stimulation” and, in doing so, “could undermine women’s negotiating position” in their relations with men.[2]

In short, women oppose pornography because they recognise that porn offers manifestly better value for money than does marriage and conventional courtship.

After all, a magazine full of beautiful women expertly sucking and fucking and who remain, on the pages of the magazine, young and beautiful forever is surely better value for money than just a single real-life wife or girlfriend, who grows older and uglier with each passing year and is legally obligated to fuck you only during the divorce proceedings.

In short, a picture of a naked woman in a magazine is usually better value than the real thing. As WC Fields observed, women are like elephants: ‘nice to look at – but I wouldn’t want to own one’.

___________

“A rational economic strategy that many monopolies or cartels have pursued is to try to increase the price of their assets by artificially restricting the supply. With sex, this would entail having the women put pressure on each other to exercise sexual restraint and hold out for a high price (such as a commitment to marriage) before engaging in sex.”

But if, as we have seen, all women are in some sense prostitutes, then why are prostitutes themselves subject to stigma and moral opprobrium? A pornographic magazine, dvd or inflatable doll can indeed be viewed (per Baumeister and Twenge above) as a ‘low priced substitute’ for a real woman. However, the same cannot be said of prostitutes themselves, since most of the latter (rent-boys and transsexuals apart) are themselves women.

Key to understanding the stigma and moral opprobrium attaching to prostitutes and other promiscuous women is the concept of a price-fixing cartel.

By offering sex to men for a cheaper price than that demanded by respectable women, prostitutes and other promiscuous women threaten to undercut the prices other women are able to demand.

In short, if the town whore gives blow-jobs for twenty quid while Miss Prim and Proper in the house next door demands an engagement ring, a wedding ring, a marriage certificate and the promise of a cushy divorce settlement a few years’ hence, then obviously anybody with half a brain knows where to go when they want a blowjob and Miss Prim and Proper is likely to be left curiously bereft of suitors.

The basic economic logic is explained thus by Baumeister and Vohs in their paper ‘Sexual Economics: Sex as Female Resource for Social Exchange in Heterosexual Interactions’, published in 2004 in Personality and Social Psychology Review:

“A rational economic strategy that many monopolies or cartels have pursued is to try to increase the price of their assets by artificially restricting the supply. With sex, this would entail having the women put pressure on each other to exercise sexual restraint and hold out for a high price (such as a commitment to marriage) before engaging in sex.”[3]

However, as every first-year economics student knows, a price-fixing cartel is inherently unstable. There is an ever-present threat that some party to the cartel (or an outsider to the agreement) will renege by undercutting her competitors and reaping the resultant windfall as customers flock to the lower-priced goods or services. This can only be prevented by the existence of some coercive apparatus designed to deter defection.
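
For any reader who prefers to see the logic of cartel defection spelled out, the toy model below sketches it in a few lines of Python. The market size, profit margins and ‘stigma cost’ are purely illustrative numbers of my own invention, not figures drawn from Baumeister and Vohs or any other source.

```python
# A toy sketch of cartel instability, using purely illustrative (made-up) numbers.
#
# Two sellers each choose a price: HIGH (the agreed cartel price) or LOW
# (undercutting). Buyers flock to the cheaper seller, so undercutting captures
# the whole market at a lower margin. Without any penalty for defection, LOW is
# the more profitable reply to either choice by the rival, so the agreement
# unravels; a sufficiently heavy 'stigma cost' on undercutting stabilizes it.

HIGH, LOW = "HIGH", "LOW"
MARKET = 10                       # total number of customers (hypothetical)
MARGIN = {HIGH: 3.0, LOW: 2.0}    # profit per customer at each price (hypothetical)

def payoff(my_price, rival_price, stigma_cost=0.0):
    """My profit, given both prices and an optional penalty for undercutting."""
    if my_price == rival_price:
        share = MARKET / 2                        # equal prices: split the market
    else:
        share = MARKET if my_price == LOW else 0  # cheaper seller takes everything
    penalty = stigma_cost if my_price == LOW else 0.0
    return share * MARGIN[my_price] - penalty

def best_reply(rival_price, stigma_cost=0.0):
    """Which of my two prices earns more against the rival's price?"""
    return max((HIGH, LOW), key=lambda p: payoff(p, rival_price, stigma_cost))

for stigma in (0.0, 6.0):         # no enforcement vs. a costly stigma on undercutting
    print(f"stigma cost {stigma}: best reply to HIGH = {best_reply(HIGH, stigma)}, "
          f"best reply to LOW = {best_reply(LOW, stigma)}")

# With stigma 0.0, undercutting (LOW) is the best reply to everything, so the
# agreed high price cannot hold; with stigma 6.0, HIGH becomes the best reply
# to HIGH, and the cartel price is self-sustaining.
```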

This is where sexual morality comes in.

In short, women have therefore sought to discourage other women from undercutting them through a quasi-moral censure, and sometimes criminalization, of those women generous enough, enterprising enough and brave enough to risk such censure by offering sexual services at a more reasonable price.

On this view, sexual morality essentially functions, in economic terms, as a form of collusion or price-fixing. As Baumeister and Vohs explain in their article on ‘Sexual Economics’:

“The so-called “cheap” woman (the common use of this economic term does not strike us as accidental), who dispenses sexual favors more freely than the going rate, undermines the bargaining position of all other women in the community, and they become faced with the dilemma of either lowering their own expectations of what men will give them in exchange for sex or running the risk that their male suitors will abandon them in favor of other women who offer a better deal.”[4]

This is what I refer to as ‘The Sex Cartel’, or ‘Price-Fixing among Prostitutes’.

___________

On this view, women’s prejudice against prostitutes is analogous to the animosity felt by trade unionists towards strikebreakers during industrial actions.

On the face of it, one would not expect a strikebreaker or scab to be morally condemned. After all, a so-called ‘scab’ or strikebreaker is simply a person willing to perform the same work for less remuneration, or in worse working conditions, than other workers who are currently striking for better pay or conditions. This willingness to do the same work while receiving less in return would, in any other circumstances, be considered a mark of generosity and hence a source of praise rather than condemnation.

Yet, in working-class communities, the strikebreaker is universally scorned and despised. Indeed, his violent victimization, and even murder, is not only commonplace, but even perversely celebrated in at least one well-known English folk song that remains widely performed to this day.

Why then is the scab universally hated and despised? Simply because, in his otherwise commendable willingness to work in return for a little less than his fellow workers, the scab threatens to drive down the wages which the latter are capable of commanding.

And despite its hallowed place in socialist mythos, a trade union (or ‘labor union’ in American English) is, in essence, an anti-competitive, monopolistic workers’ cartel, seeking to fix the price of labour to the advantage of its own members. Like all cartels, it is inherently unstable and vulnerable to being undercut by workers willing to work for less. This is why trade unions invariably resort to intimidation (e.g. picket lines) to deter the latter.

The same rational self-interest, therefore, explains women’s hatred of whores. As leading early twentieth century American socialist Mary Marcy observed of prostitutes in the passage quoted at the beginning of this post: “These women are selling below the market, or scabbing on the job”.

This is why TheAntiFeminist has characterised feminism as “The Sexual Trade Union”, representing the selfish sexual and reproductive interests of ageing and/or unattractive women.
___________

However, whereas the striking miner or manual labourer sometimes wins our sympathy simply because he occupies, as socialists have rightly observed, a relatively disadvantaged position in society as a whole, the same cannot be said of wives and women.

Although, as feminists never tire of pointing out, men earn more money than women (not least because they work longer hours, in more dangerous and unpleasant working conditions and for a greater proportion of their adult lives), women are known to be wealthier than men and dominate almost every area of consumer spending. According to researchers in the marketing industry, women control around 80% of household spending.[5]

A more appropriate analogy is therefore perhaps that provided by Baumeister and Vohs themselves. These authors view women’s attempt at artificial price-fixing as analogous to “other rational economic strategies, such as OPEC’s efforts to drive up the world price of oil by inducing member nations to restrict their production.”[6]

The appropriateness of this analogy is underscored by the fact that the very same analogy was adopted by Warren Farrell, the father of the modern Men’s Rights Movement, a decade or so previously in his seminal The Myth of Male Power (which I have reviewed here). There, Farrell observed:

“In the Middle East, female sex and beauty are to Middle Eastern men what oil and gas are to Americans: the shorter the supply the higher the price. The more women ‘gave’ away sex for free, or for a small price, the more the value of every woman’s prize would be undermined.”[7]

___________

This then explains the prevalence of prejudice against prostitutes and promiscuity, and why this prejudice is especially prevalent among women. Only by slut-shaming whores and other promiscuous women can The Sex Cartel’s monopoly ever be maintained.

In contrast, men’s interests are diametrically opposed to the Sex Cartel. Consistent with this theory, men are found to be more tolerant, liberal and permissive in respect of virtually all aspects of sexual morality.

Thus, one study from the late-Eighties found that the vast majority of women, but only a minority of men, were wholly opposed to prostitution in all circumstances, whereas, in contrast, three times as many men as women saw nothing wrong with the sex trade.[8] Likewise, more women than men report that they are opposed to pornography.[9]

Of course, feminists typically explain so-called ‘sexual double-standards’ as some sort of male patriarchal plot to oppress women. In fact, however, women seem to be more censorious of promiscuity on the part of other women than are men. Thus, ‘sexual double-standards’, to the extent they exist at all, are largely promoted, and enforced, by women themselves. Indeed, one recent meta-analysis found significantly greater support for ‘sexual double-standards’ among women than among men.[10]

Men, in contrast, have little incentive for slut-shaming. On the contrary, men actually generally rather enjoy the company of promiscuous women – for obvious reasons.[11]

There is, as far as I am aware, only one exception to the general principle that men are more tolerant and permissive on issues of sexual morality than are women. This is in respect of attitudes towards homosexuality. Here, strangely, women seem to be more permissive than men.[12]

However, opposition to homosexuality can still be explained in a manner compatible with Sex Cartel Theory. As Warren Farrell suggests in The Myth of Male Power (which I have reviewed here):

“Homophobia reflected an unconscious societal fear that homosexuality was a better deal than heterosexuality for the individual. Homophobia was like OPEC calling nations wimps if they bought oil from a more reasonably priced source. It was the society’s way of giving men no option but to pay full price for sex.”[13]

___________

The Sex Cartel’s efforts to de-legitimize the sex trade involve the stigmatization, not only of prostitutes, but also of their clients. Indeed, these days the patrons of prostitutes seem to get an even worse press than do prostitutes themselves. On the one hand, they are castigated for exploiting women. On the other, they are also derided for being exploited by women and having to pay for what (it is implied) ‘real’ men should have no business having to pay for.[14]

In addition to moral sanction, the force of the criminal law is sometimes co-opted. Thus, around the world, prostitution is frequently wholly prohibited, and, if not, is almost always heavily regulated and restricted, such that both prostitutes and their patrons find themselves subject to the full force of the criminal law for partaking in a victimless and mutually-consensual commercial transaction.

Again, the current trend in law-enforcement is to target the customers rather than the prostitutes themselves (i.e. men rather than women) – a policy that manages to be both inefficient and unjust and is roughly comparable to prosecuting occasional pot smokers while letting drug-dealers off scot-free.

___________

Every woman, from the Whore to the Housewife, the Prostitute to the Prude, the Puritan to the Princess, is each, in her own way, forever a Whore at Heart. So, ironically, for all their fanatical feminist flag-waving and sanctimonious puritanical moral posturing, the real reason women hate prostitutes is precisely because women are prostitutes. Like any other class of commercial trader, they just don’t like the competition.

In reality, however, prostitution per se is never wholly criminalized or prohibited. If it were, then virtually every woman in the country would be behind bars – and so would virtually every man.

After all, as perceptive observers (and even a few feminists) have long recognised, one way or another, all women are prostitutes, according to the ordinary dictionary definition of this word.

Indeed, the entire process of conventional courtship in Western society is predicated on prostitution – from the social expectation that the man pay for dinner on the first date, to the legal obligation that he continue to support his ex-wife, through alimony and maintenance, for anything up to ten or twenty years after he has belatedly rid himself of her.

All the world is a red-light district. And all the men and women merely tricks, suckers, johns, punters, hookers and whores – plus perhaps an occasional pimp. Every woman, from the Whore to the Housewife, the Prostitute to the Prude, the Puritan to the Princess, is each, in her own way, forever a Whore at Heart.

So, ironically, for all their fanatical feminist flag-waving and sanctimonious puritanical moral posturing about saving women from sexual slavery and exploitation, the real reason women hate prostitutes is precisely because women are prostitutes. Like any other class of commercial trader, they just don’t like the competition.
_____________

[1] Baumeister RF & Twenge JM (2002) ‘Cultural Suppression of Female Sexuality’, Review of General Psychology 6(2): 166-203 at p172.

[2] Ibid.

[3] Baumeister RF & Vohs KD (2004) ‘Sexual Economics: Sex as Female Resource for Social Exchange in Heterosexual Interactions’, Personality and Social Psychology Review 8(4) pp339-363 at p344.

[4] Ibid, at p358

[5] See Kanner, B., Pocketbook Power: How to Reach the Hearts and Minds of Today’s Most Coveted Consumer – Women: p5; Barletta, M., Marketing to Women: How to understand reach and increase your share of the world’s largest market segment: p6.

[6] Baumeister RF & Vohs KD (2004) ‘Sexual Economics: Sex as Female Resource for Social Exchange in Heterosexual Interactions’, Personality and Social Psychology Review 8(4) pp339-363 at p357

[7] Farrell, W, The Myth of Male Power (reviewed here) (New York Berkley 1994) at p67.

[8] Klassen, AD, Williams, CJ, & Levitt, EE (1989). Sex and morality in the U.S.: An empirical enquiry under the auspices of the Kinsey Institute. Middletown: Wesleyan University Press: cited in Baumeister RF & Twenge JM (2002) ‘Cultural Suppression of Female Sexuality’ at p190. More precisely, 69% of women were wholly opposed to pornography in all circumstances, as compared to only 45% of men, whereas 17% of men versus only 6% of women saw nothing wrong with prostitution.

[9] For example, Lottes, I, Weinberg, M & Weller, I (1993) ‘Reactions to pornography on a college campus: For or against?’ Sex Roles 29(1-2): 69-89.

[10] Oliver MB and Hyde JS (1993) ‘Gender Differences in Sexuality: A Meta-Analysis’ Psychological Bulletin 114(1): 29-51.

[11] Though it is true that men may not wish to marry a promiscuous woman. Here, concerns of paternity certainty are paramount.

[12] Herek G (1988) ‘Heterosexuals’ attitudes toward lesbians and gay men: Correlates and gender differences’ Journal of Sex Research 25(4); LaMar, L & Kite, M (1998) ‘Sex Differences in Attitudes toward Gay Men and Lesbians: A Multidimensional Perspective’ The Journal of Sex Research 35(2): 189-196; Kite, M & Whitley, B (1996) ‘Sex Differences in Attitudes Toward Homosexual Persons, Behaviors, and Civil Rights: A Meta-Analysis’ Personality and Social Psychology Bulletin 22(4): 336-353; Lim VK (2002) ‘Gender differences and attitudes towards homosexuality’ Journal of Homosexuality 43(1): 85-97.

[13] Farrell, W, The Myth of Male Power (reviewed here) (New York Berkley 1994) at p87.

[14] These two claims are, of course, wildly contradictory. Moreover, it is notable that, while men who pay for prostitutes are routinely ridiculed for ‘having to pay for it’, the same stigma does not attach to the man who takes his girlfriend out to dates at expensive restaurants, buys her jewellery or, worse still, pays the ultimate price by subjecting himself to marriage – yet the latter surely incurs a steeper financial penalty in the long-run.

In Praise of Prostitutes and Promiscuity – A Utilitarian Perspective

Puritanism is the haunting fear that someone somewhere may be happy

HL Mencken, Aphorisms

Sex is one of the most wholesome, spiritual and natural things money can buy. And like all games, it becomes more interesting when played for money.

Sebastian Horsley, The Brothel-Creeper

Prostitutes are like public toilets. On the one hand, they provide a useful service to the public. On the other, they are dirty and unhygienic, and one always somehow feels in danger of catching a disease when inside one.

VEL – The Contemporary Heretic

I have never been able to understand why whores and prostitutes have invariably been maligned as sinful and immoral.

Given that the Oxford English Dictionary defines the word ‘nice’ as meaning ‘giving pleasure or satisfaction’, surely the nice girl is not the girl who, as the current usage of this phrase typically connotes, refuses to perform oral copulation on the first date, but rather the girl who willingly does so with multiple partners on every night of the week. After all, it is the latter girl who surely gives considerably more ‘pleasure and satisfaction’ than the former.

According to the precepts of utilitarianism, the theory of normative ethics championed by such eminent luminaries as Bentham, Mill and, most recently, Peter Singer, the moral worth of an action is to be determined by the extent to which it contributes to the overall happiness of mankind. On this view, the ultimate determinant of the morality of a given behaviour is the extent to which it promotes (to adopt Bentham’s memorable formulation) ‘the Greatest Happiness to the Greatest Number’.

Well, surely, this is precisely what whores and other indiscriminately promiscuous women do. Prostitutes, for example, over the course of their careers, can give pleasure and happiness to literally thousands of men.

Some crack-whores suck a couple of dozen dicks a day minimum. That’s what I call giving the greatest happiness to the greatest number. If that’s not maximising utility, I don’t know what the hell is!

(Much is made of the scourge of drugs such as heroin and crack cocaine on lives, families, communities as well as society as a whole. But, on the plus side, they do help drive down the cost of a blowjob – Good news for the consumer!)

In utilitarian terms, therefore, rather than being condemned as immoral for their behaviour, whores ought to be lauded as the highest paradigm of moral virtue – right up alongside Mother Teresa.

After all, Mother Teresa might have selflessly dedicated her entire life to helping, healing, feeding and caring for the poor and destitute – but she never sucked their cocks, did she?