Judith Harris’s ‘The Nurture Assumption’: By Parents or Peers

Judith Harris, The Nurture Assumption: Why Children Turn Out the Way They Do. Free Press, 1998.

Almost all psychological traits on which individual humans differ, from personality and intelligence to mental illness, are now known to be substantially heritable. In other words, individual differences in these traits are, at least in part, a consequence of genetic differences between individuals.

This finding is so robust that it has even been termed by Eric Turkheimer the First Law of Behaviour Genetics and, although once anathema to most psychologists save a marginal fringe of behavioural geneticists, it has now, under the sheer weight of evidence produced by the latter, belatedly become the new orthodoxy.

On reflection, however, this new orthodoxy is not entirely a revelation.

After all, it was only in the mid-twentieth century that the curious notion that individual differences were entirely the product of environmental differences first arose, and, even then, this delusion was largely restricted to psychologists, sociologists, feminists and other such ‘professional damned fools’, along with those among the semi-educated public who seek to cultivate an air of intellectualism by aping the former’s affectations.

Before then, poets, peasants and laypeople alike had long recognized that ability, insanity, temperament and personality all tended to run in families, just as physical traits like stature, complexion, hair and eye colour also do.[1]

However, while the discovery of a heritable component to character and ability merely confirms the conventional wisdom of an earlier age, another behavioural genetic finding, far more surprising and counterintuitive, has passed relatively unreported. 

This is the discovery that the so-called shared family environment (i.e. the environment shared by siblings, or non-siblings, raised in the same family home) actually has next to no effect on adult personality and behaviour. 

This we know from such classic study designs in behavioural genetics as twin studies, adoption studies and family studies.

In short, individuals of a given degree of relatedness, whether identical twins, fraternal twins, siblings, half-siblings or unrelated adoptees, are, by the time they reach adulthood, no more similar to one another in personality or IQ when they are raised in the same household than when they are raised in entirely different households. 
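To make the logic of these designs concrete, the sketch below illustrates the classic Falconer approach, which estimates the genetic and shared-environment contributions to a trait from the observed similarity (correlation) of identical and fraternal twins. The twin correlations used here are purely illustrative placeholders of my own choosing, not figures reported by Harris or by any particular study.

```python
# A minimal, illustrative sketch of the classic Falconer decomposition used in
# twin studies. The correlations below are hypothetical placeholders.

def falconer_estimates(r_mz: float, r_dz: float) -> dict:
    """Estimate variance components from twin correlations.

    r_mz: observed trait correlation between identical (monozygotic) twins
    r_dz: observed trait correlation between fraternal (dizygotic) twins
    """
    h2 = 2 * (r_mz - r_dz)   # heritability (additive genetic share)
    c2 = r_mz - h2           # shared (family) environment share
    e2 = 1 - r_mz            # non-shared environment, plus measurement error
    return {"heritability": h2, "shared_env": c2, "nonshared_env": e2}


if __name__ == "__main__":
    # Hypothetical adult twin correlations, chosen only for illustration.
    print(falconer_estimates(r_mz=0.75, r_dz=0.40))
    # -> heritability ~0.70, shared_env ~0.05, nonshared_env ~0.25:
    # once heredity is accounted for, the shared family environment
    # contributes very little, which is the pattern described above.
```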

The Myth of Parental Influence 

Yet parental influence has long loomed large in virtually every psychological theory of child development, from the Freudian Oedipus complex and Bowlby’s attachment theory to the whole literary genre of books aimed at instructing anxious parents on how best to raise their children so as to ensure that the latter develop into healthy, functional, successful adults. 

Indeed, not only is the conventional wisdom among psychologists overturned, but so is the conventional wisdom among sociologists – for one aspect of the shared family environment is, of course, household income and social class.

Thus, if the family that a person is brought up in has next to no impact on their psychological outcomes as an adult, then this means that the socioeconomic status of the family home in which they are raised also has next to no effect. 

Poverty, or a deprived upbringing, then, has no effect on IQ, personality or the prevalence of mental illness, at least by the time a person has reached adulthood.[2]

Neither is it only leftist sociologists who have proved mistaken. 

Thus, just as leftists use economic deprivation as an indiscriminate, catch-all excuse for all manner of social pathology (e.g. crime, unemployment, educational underperformance), so conservatives are apt to place the blame on divorce, family breakdown, having children out of wedlock and the consequent increase in the prevalence of single-parent households.

However, all these factors are, once again, part of the shared family environment – and according to the findings of behavioural genetics, they have next to no influence on adult personality or intelligence. 

Of course, chaotic or abusive family environments do indeed tend to produce offspring with negative life outcomes. 

However, none of this proves that it was the chaotic or abusive family environment that caused the negative outcomes. 

Rather, another explanation is at hand – perhaps the offspring simply biologically inherit the personality traits of their parents, the very personality traits that caused their family environment to be so chaotic and abusive in the first place.[3] 

For example, parents who divorce or bear offspring out-of-wedlock likely differ in personality from those who first get married then stick together, perhaps being more impulsive or less self-disciplined and conscientious (e.g. less able to refrain from having children from a relationship that was destined to be fleeting, or less able to persevere and make the relationship last). 

Their offspring may, then, simply biologically inherit these undesirable personality attributes, which then themselves lead to the negative social outcomes associated with being raised in single-parent households or broken homes. The association between family breakdown and negative outcomes for offspring might, then, reflect simply the biological inheritance of personality. 

Similarly, as leftists are fond of reminding us, children from economically-deprived backgrounds do indeed have lower recorded IQs and educational attainment than those from more privileged family backgrounds, as well as other negative outcomes as adults (e.g. lower earnings, higher rates of unemployment). 

However, this does not prove that coming from a deprived family background necessarily itself depresses your IQ, educational attainment or future salary. 

Rather, an equally plausible possibility is that offspring simply biologically inherit the low intelligence of their parents – the very low intelligence which was likely a factor causing the low socioeconomic status of their parents, since intelligence is known to correlate strongly with educational and occupational advancement.[4]

In short, the problem with all of this body of research purporting to demonstrate the influence of parents and family background on the psychological and behavioural outcomes of offspring is that it fails to control for the heritability of personality and intelligence, an obvious confounding factor.
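To see how such a confound can generate the observed correlations on its own, here is a toy simulation, built on assumptions of my own choosing rather than anything in Harris’s book: a parental trait is partly transmitted to offspring and also influences family socioeconomic status, while family SES is given no causal effect on the child whatsoever.

```python
# Toy simulation of a purely genetic confound: the parental trait influences
# both family SES and (via inheritance) the child's trait, while family SES
# itself has *zero* causal effect on the child. All parameter values are
# arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

parent_trait = rng.normal(size=n)                       # e.g. parental IQ or conscientiousness
family_ses = 0.6 * parent_trait + rng.normal(size=n)    # SES partly caused by the parent's trait
child_trait = 0.5 * parent_trait + rng.normal(size=n)   # child partly inherits the trait;
                                                        # family_ses does not appear here at all

r = np.corrcoef(family_ses, child_trait)[0, 1]
print(f"Correlation between family SES and child outcome: {r:.2f}")
# Prints a clearly positive correlation (around 0.2) even though, by construction,
# family background has no causal effect on the child: heredity alone produces
# the association that such studies attribute to upbringing.
```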

The Non-Shared Environment

However, not everything is explained by heredity. As a crude but broadly accurate generalization, only about half the variation for most psychological traits is attributable to genes. This leaves about half of the variation in intelligence, personality and mental illness to be explained by environmental factors. 
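In the standard notation of behavioural genetics, this rough picture amounts to a variance decomposition; the proportions below are simply the round figures implied by the text, not precise estimates:

$$
\sigma^2_P = \sigma^2_A + \sigma^2_C + \sigma^2_E,
\qquad
h^2 = \frac{\sigma^2_A}{\sigma^2_P} \approx 0.5,
\quad
c^2 = \frac{\sigma^2_C}{\sigma^2_P} \approx 0,
\quad
e^2 = \frac{\sigma^2_E}{\sigma^2_P} \approx 0.5,
$$

where A denotes additive genetic effects, C the shared family environment, and E the non-shared environment (including measurement error).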

What are these environmental factors if they are not to be sought in the shared family environment?

The obvious answer is, of course, the non-shared family environment – i.e. the ways in which even children brought up in the same family-home nevertheless experience different micro-environments, both within the home and, perhaps more importantly, outside it. 

Thus, even the fairest and most even-handed parents inevitably treat their different offspring differently in some ways.  

Indeed, among the principal reasons why parents treat their different offspring differently is precisely because the different offspring themselves differ in their own behaviour quite independently of any parental treatment.

This is well illustrated by the question of the relationship between corporal punishment and behaviour in children.

Corporal punishment 

Rather than differences in the behaviour of different children resulting from differences in how their parents treat them, it may be that differences in how parents treat their children may reflect responses to differences in the behaviour of the children themselves. 

In other words, the psychologists have the direction of causation precisely backwards. 

Take, for example, one particularly controversial issue, namely the physical chastisement of children by their parents as a punishment for bad behaviour (e.g. spanking). 

Some psychologists have argued that physical chastisement actually causes misbehaviour. 

As evidence, they cite the fact that children who are spanked more often by their parents or caregivers on average actually behave worse than those whose caregivers only rarely or never spank the children entrusted to their care.  

This, they claim, is because, in employing spanking as a form of discipline, caregivers are inadvertently imparting the message that violence is a good way of solving your problems. 

Actually, however, I suspect children are more than capable of working out for themselves that violence is often an effective means of getting your way, at least if you have superior physical strength to your adversary. Unfortunately, this is something that, unlike reading, arithmetic and long division, does not require explicit instruction by teachers or parents. 

Instead, a more obvious explanation for the correlation between spanking and misbehaviour in children is not that spanking causes misbehaviour, but rather that misbehaviour causes spanking. 

Indeed, once you think about it, this is in fact rather obvious: If a child never seriously misbehaves, then a parent likely never has any reason to spank that child, even if the parent is, in principle, a strict disciplinarian; whereas, on the other hand, a highly disobedient child is likely to try the patience of even the most patient caregiver, whatever his or her moral opposition to physical chastisement in principle. 

In other words, causation runs in exactly the opposite direction to that assumed by the naïve psychologists.[5] 

Another factor may also be at play – namely, offspring biologically inherit from their parents the personality traits that cause both the misbehaviour and the punishment. 

In other words, parents with aggressive personalities may be more likely to lose their temper and physically chastise their children, while children who inherit these aggressive personalities are themselves more likely to misbehave, not least by behaving in an aggressive or violent manner. 

However, even if parents treat their different offspring differently owing to the different behaviour of the offspring themselves, this is not the sort of environmental factor capable of explaining the residual non-shared environmental effects on offspring outcomes. 

After all, this merely begs the question as to what caused these differences in offspring behaviour in the first place. 

If the differences in offspring behaviour exist prior to differences in parental responses to this behaviour, then these differences cannot be explained by the differences in parental responses.  

Peer Groups 

This brings us back to the question of the environmental causes of offspring outcomes – namely, if about half the differences among children’s IQs and personalities are attributable to environmental factors, but these environmental factors are not to be found in the shared family environment (i.e. the environment shared by children raised in the same household), then where are these environmental factors to be sought? 

The search for environmental factors affecting personality and intelligence has, thus far, been largely unsuccessful. Indeed, some behavioural geneticists have all but conceded scholarly defeat in identifying correlates for the environmental portion of the variance. 

Thus, leading contemporary behavioural geneticist Robert Plomin in his recent book, Blueprint: How DNA Makes Us Who We Are, concludes that those environmental factors that affect cognitive ability, personality, and the development of mental illness are, as he puts it, ‘unsystematic’ in nature. 

In other words, he seems to be saying that they are mere random noise. This is tantamount to accepting that the null hypothesis is true. 

Judith Harris, however, has a quite different take. According to Harris, environmental causes must be sought, not within the family home, but rather outside it – in a person’s interactions with their peer-group and the wider community.[6]

Environment ≠ Nurture 

Thus, Harris argues that the so-called nature-nurture debate is misnamed, since the word ‘nurture’ usually refers to deliberate care and moulding of a child (or of a plant or animal). But many environmental effects are not deliberate. 

Thus, Harris repeatedly references behaviourist John B. Watson’s infamous boast: 

Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select—doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.

Yet what strikes me as particularly preposterous about Watson’s boast is not its radical environmental determinism, nor even its rather convenient unfalsifiability.[7] 

Rather, what strikes me as most preposterous about Watson’s claim is its frankly breath-taking arrogance. 

Thus, Watson not only insisted that it was environment alone that entirely determined adult personality. In this same quotation, he also proclaimed that he already fully understood the nature of these environmental effects to such an extent that, given omnipotent powers to match his evidently already omniscient understanding of human development, he could produce any outcome he wished. 

Yet, in reality, environmental effects are anything but clear-cut. Pushing a child in a certain direction, or into a certain career, may sometimes have the desired effect, but other times may seemingly have the exact opposite effect to that desired, provoking the child to rebel against parental dictates. 

Thus, even to the extent that environment does determine outcomes, the precise nature of the environmental factors implicated, and their interaction with one another, and with the child’s innate genetic endowment, is surely far more complex than the simple mechanisms proposed by behaviourists like Watson (e.g. reinforcement and punishment). 

Language Acquisition 

The most persuasive evidence for Harris’s theory of the importance of peer groups comes from an interesting and widely documented peculiarity of language acquisition.

The children of immigrants, whose parents speak a different language inside the family home, and may even themselves be monolingual, nevertheless typically grow up to speak the language of their host culture rather better than they do the language to which they were first exposed in the family home. 

Indeed, while their parents may never achieve fluency in the language of their host culture, having missed out on the Chomskian critical period for language acquisition, their children often actually lose the ability to speak their parents’ language, often much to the consternation of parents and grandparents. 

Yet, from a sociobiological or evolutionary psychological perspective, such an outcome is obviously adaptive. 

After all, if a child is to succeed in wider society, they must master its language, whereas, if their parents’ first language is not spoken anywhere in their host society except in their family, then it is of limited utility, and, once their parents themselves become proficient in the language of the host culture, it becomes entirely redundant.

As sociologist-turned-sociobiologist Pierre van den Berghe observes in his excellent The Ethnic Phenomenon (reviewed here):

“Children quickly discover that their home language is a restricted medium that is not useable in most situations outside the family home. When they discover that their parents are bilingual they conclude – rightly for their purposes – that the home language is entirely redundant… Mastery of the new language entails success at school, at work and in ‘the world’… [against which] the smiling approval of a grandmother is but slender counterweight” (The Ethnic Phenomenon: p258). 

Code-Switching 

Harris suggests that the same applies to personality. Just as the child of immigrants switches between one language and another at home and school, so they also adopt different personalities. 

Thus, many parents are surprised to be told by their children’s teachers at parents’ evenings that their offspring is quiet and well-behaved at school, since, they report, he or she isn’t at all like that at home. 

Yet, at home, a child has only, at most, a sibling or two with whom to compete for his parents’ attention. In contrast, at school, he or she has a whole class with whom to compete for their teacher’s attention.

It is therefore unsurprising that most children are less outgoing at school than they are at home with their parents. 

For example, an older sibling might be able to push his little brother around at home. But, if he is small for his age, he is unlikely to be able to get away with the same behaviour among his peers at school. 

Children therefore adopt two quite different personalities – one for interactions with family and siblings, and another for among their peers.

This then, for Harris, explains why, perhaps surprisingly, birth-order has generally been found to have little if any effect on personality, at least as personality manifests itself outside the family home. 

An Evolutionary Theory of Socialization? 

Interestingly, even evolutionary psychologists have not been immune from the delusion of parental influence. Thus, in one influential paper, anthropologists Patricia Draper and Henry Harpending argued that offspring calibrate their reproductive strategy by reference to the presence or absence of a father in their household (Draper & Harpending 1982). 

On this view, being raised in a father-absent household is indicative of a social environment where low male parental investment is the norm, and hence offspring adjust their own reproductive strategy accordingly, adopting a promiscuous, low-investment mating strategy characterized by precocious sexual development and an inability to maintain lasting long-term relationships (Draper & Harpending 1982; Belsky et al 1991). 

There is indeed, as these authors amply demonstrate, a consistent correlation between father-absence during development and both earlier sexual development and more frequent partner-switching in later life. 

Yet there is also another, arguably more obvious, explanation readily at hand to explain this association. Perhaps offspring simply inherit biologically the personality traits, including sociosexual orientation, of their parents. 

On this view, offspring raised in single-parent households are more likely to adopt a promiscuous, low-investment mating strategy simply because they biologically inherit the promiscuous sociosexual orientation of their parents, the very promiscuous sociosexual orientation that caused the latter to have children out-of-wedlock or from relationships that were destined to break down and hence caused the father-absent childhood of their offspring. 

Moreover, even on purely a priori theoretical grounds, Draper, Harpending and Belsky’s reasoning is dubious. 

After all, whether you personally were raised in a one- or two-parent family is obviously a very unreliable indicator of the sorts of relationships prevalent in the wider community into which you are born, since it represents a sample size of just one. 

Instead, therefore, it would be far more reliable to calibrate your reproductive strategy in response to the prevalence of one-parent households in the wider community at large, rather than the particular household type into which you happen to have been born.  

This, of course, directly supports Harris’s own theory of ‘peer group socialization’. 

In short, to the extent that children do adapt to the environment and circumstances of their upbringing (and they surely do), they must integrate into, adopt the norms of, and a reproductive strategy to maximize their fitness within, the wider community into which they are born, rather than the possibly quite idiosyncratic circumstances and attitudes of their own family. 

Absent Fathers, from Upper-Class to Under-Class 

Besides language-acquisition among the children of immigrants, another example cited by Harris in support of her theory of ‘peer group socialization’ is the culture, behaviours and upbringing of British upper-class males.

Here, she reports, boys were, and, to some extent, still are, reared primarily, not by their parents, but rather by nannies, governesses and, more recently, in exclusive fee-paying all-male boarding schools.

Yet, despite having next to no contact with their fathers throughout most of their childhood, these boys nevertheless managed somehow to acquire manners, attitudes and accents similar, if not identical, to those of their upper-class fathers, and not at all those of the middle-class nannies, governesses and masters with whom they spent most of their childhood being raised. 

Yet this phenomenon is by no means restricted to the British upper-classes.

On the contrary, rather than citing the example of the British upper-classes in centuries gone by, Harris might just as well have cited that of the contemporary underclass in Britain and America, since what was once true of the British upper-classes is now equally true of the underclass.

Just as the British upper-classes were once raised by nannies and governesses and in private schools, with next to no contact with their fathers, so contemporary underclass males are similarly raised in single-parent households, often by unwed mothers, and typically have little if any contact with their biological fathers. 

Here, as Warren Farrell observes in his seminal The Myth of Male Power (which I have reviewed here, here and here), there is now “a new nuclear family: woman, government and child”, what Farrell terms “Government as a Substitute Husband”. 

Yet, once again, these underclass males, raised by single parents with the financial assistance of the taxpayer, typically turn out much like their absent fathers with whom they have had little if any contact, often going on to promiscuously father a succession of offspring themselves, with whom they likewise have next to no contact. 

Abuse 

But what of actual abuse? Surely this has a long-term devastating psychological impact on children. This, at any rate, is the conventional wisdom, and questioning this wisdom, at least with respect to sexual abuse, is tantamount to contemporary heresy, with attendant persecution.

Thus, for example, it is claimed that criminals who are abusive towards their children were themselves almost invariably abused, mistreated or neglected as children, which is what has led to their own abusive behaviour.

A particularly eloquent expression of this theory is found in the novel Clockers, by Richard Price, where one of the lead characters, a police officer, explains how, during his first few years on the job, a senior colleague had restrained him from attacking an abusive mother who had left her infant son handcuffed to a radiator, telling him:

“Rocco, that lady you were gonna brain? Twenty years ago when she was a little girl. I arrested her father for beating her baby brother to death. The father was a piece of shit. Now that she’s all grown up? She’s a real piece of shit. That kid you saved today. If he lives that long, if he grows up? He’s gonna be a real piece of shit. It’s the cycle of shit and you can’t do nothing about it” (Clockers: p96).

Take, for example, what is perhaps the form of child abuse that provokes the most outrage and disgust – namely, sexual abuse. Here, it is frequently asserted that paedophiles were almost invariably themselves abused as children, which creates a so-called cycle of abuse.

However, there are at least three problems with this claim. 

First, it cannot explain how the first person in this cycle came to be abusive. 

Second, we might doubt whether it is really true that paedophiles are disproportionately likely to have themselves been abused as children. After all, abuse is something that almost invariably happens surreptitiously ‘behind closed doors’ and is therefore difficult to verify or disprove. 

Therefore, even if most paedophiles claim to have been victims of abuse, it is possible that they are simply lying in order to elicit sympathy or excuse or shift culpability for their own offending. 

Finally, and most importantly for present purposes, even if paedophiles can be shown to be disproportionately likely to have themselves been victimized as children, this by no means proves that their past victimization caused their current sexual orientation. 

Rather, since most abuse is perpetrated by parents or other close family members, an alternative possibility is that victims simply biologically inherit the sexual orientation of their abuser.

After all, if homosexuality is partially heritable, as is now widely accepted, then why not paedophilia as well? 

In short, the ‘cycle of shit’ referred to by Price’s fictional police officer may well be real, but mediated by genetics rather than childhood experience.

However, this conclusion is not quite so clear-cut. On the contrary, Harris is at pains to emphasize that the finding that the shared family environment accounts for hardly any of the variance in outcomes among adults does not preclude the possibility that severe abuse may indeed have an adverse effect on adult outcomes. 

After all, adoption studies can only tell us what percentage of the variance is attributable to heredity, or to the shared or non-shared environment, within a given population as a whole.

Perhaps the shared family environment accounts for so little of the variance precisely because the sort of severe abuse that does indeed have a devastating long-term effect on personality and mental health is, thankfully, so very rare in modern societies. 

Indeed, it may be especially rare within the families sampled in adoption studies precisely because adoptive families are carefully screened for suitability before being allowed to adopt. 

Moreover, Harris emphasizes an important caveat: Even if abuse does not have long-term adverse psychological effects, this does not mean that abuse causes no harm, and nor does it in any way excuse such abuse. 

On the contrary, the primary reason we shouldn’t mistreat children (and should severely punish those who do) is not on account of some putative long-term psychological effect on the adults whom the children subsequently become, but rather because of the very real pain and suffering inflicted on a child at the time the abuse takes place. 

Race Differences in IQ 

Finally, Harris even touches upon that most vexed area of the (so-called) nature-nurture debate – race differences in intelligence.

Here, the politically-correct claim that differences in intelligence between racial groups, as recorded in IQ tests, are of purely environmental origin runs into a problem: the environmental factors usually posited by environmental determinists as accounting for the black-white test score gap in America (e.g. differences in rates of poverty and socioeconomic status) have been shown to be inadequate, since, even after controlling for these factors, a substantial gap in test scores remains unaccounted for.[8]

Thus, as Arthur R. Jensen laments: 

“This gives rise to the hypothesizing of still other, more subtle environmental factors that either have not been or cannot be measured—a history of slavery, social oppression, and racial discrimination, white racism, the ‘black experience,’ and minority status consciousness [etc]” (Straight Talk About Mental Tests: p223). 

The problem with these explanations, however, is that none of these factors has yet been demonstrated to have any effect on IQ scores. 

Moreover, some of the factors proposed as explanations are formulated in such a vague form (e.g. “white racism, the ‘black experience’”) that it is difficult to conceive of how they could ever be subjected to controlled testing in the first place.[9]

Jensen has termed this mysterious factor the X-factor.

In coining this term, Jensen was emphasizing its vague, mysterious and unfalsifiable nature. Jensen did not actually believe that this posited X-factor, whatever it was, really did account for the test-score gap. Rather, he thought heredity explained most, if not all, of the remaining unexplained test-score gap. 

However, Harris takes Jensen at his word and takes the search for the X-factor very seriously. Indeed, she apparently believes she has discovered and identified it. Thus, she announces: 

“I believe I know what this X factor is… I can describe it quite clearly. Black kids and white kids identify with different groups that have different norms. The differences are exaggerated by group contrast effects and have consequences that compound themselves over the years. That’s the X factor” (p248-9). 

Unfortunately, Harris does not really develop this fascinating claim. Indeed, she cites no direct evidence in support of it, and evidently regards the alternative possibility – namely, that race differences in intelligence are at least partly genetic in origin – as so unpalatable that it can safely be ruled out a priori.

In fact, however, although not discussed by Harris, there is at least some evidence in support of her theory. Indeed, her theory potentially reconciles the apparently conflicting findings of two of the most widely-cited studies in this vexed area of research and debate.

First, in the more recent of these two studies, the Minnesota Transracial Adoption Study, the same race differences in IQ were observed among black, white and mixed-race children adopted into upper-middle class white families as are found among black, white and mixed-race populations in the community at large (Scarr & Weinberg 1976). 

Moreover, although, when tested during childhood, the children’s adoptive households did seem to have had a positive effect on their IQ scores, in a follow-up study it was found that by the time they reached the cusp of adulthood, the black teenagers who had been adopted into upper-middle-class white homes actually scored no higher in IQ than did blacks in the wider population not raised in upper-middle class white families (Weinberg, Scarr & Waldman 1992). 

Although Scarr, Weinberg and Waldman took pains to present their findings as compatible with a purely environmentalist theory of race differences, this study has, not unreasonably, been widely cited by hereditarians as evidence for the existence of innate racial differences in intelligence (e.g. Levin 1994; Lynn 1994; Whitney 1996).

However, in the light of the findings of the behavioural genetics studies discussed by Harris in ‘The Nurture Assumption’, the fact that white upper-middle-class adoptive homes had no effect on the adult IQs of the black children adopted into them is, in fact, hardly surprising. 

After all, as we have seen, the shared family environment generally has no effect on IQ, at least by the time the person being tested has reached adulthood.[10]

One would therefore not expect adoptive homes, howsoever white and upper-middle-class, to have any effect on adult IQs of the black children adopted into them, or indeed of the white or mixed-race children adopted into them. 

In short, adoptive homes have no effect on adult IQ, whether or not the adoptees, or adoptive families, are black, white, brown, yellow, green or purple! 

But, if race differences in intelligence are indeed entirely environmental in origin, then where are these environmental causes to be found, if not in the family environment? 

Harris has an answer – black culture.

According to her, the black adoptees, although raised in white adoptive families, nevertheless still come to identify as ‘black’, and to identify with the wider black culture and social norms. In addition, they may, on account of their racial identification, come to socialize with other blacks in school and elsewhere. 

As a result of this acculturation to African-American norms and culture, they therefore, according to Harris, come to score lower in IQ than their white peers and adoptive siblings. 

But how can we ever test this theory? Is it not untestable, and is this not precisely the problem identified by Jensen with previously posited X-factors?

Actually, however, although not discussed by Harris, there is a way of testing this theory – namely, looking at the IQs of black children raised in white families where there is no wider black culture with which to identify, and few if any black peers with whom to socialize.

This, then, brings us to the second of the two studies which Harris’s theory potentially reconciles, namely the famous Eyferth study.  

Here, it was found that the mixed-race children fathered by black American servicemen who had had sexual relationships with German women during the Allied occupation of Germany after World War Two had almost exactly the same average IQ scores as a control group of offspring fathered by white US servicemen during the same time period (Eyferth 1959). 

The crucial difference from the Minnesota study may be that these children, raised in an almost entirely monoracial, white Germany in the mid-twentieth century, had no wider African-American culture with which to identify or whose norms to adopt, and few if any black or mixed-race peers in their vicinity with whom to socialize. 

This, then, is perhaps the last lifeline for a purely environmentalist theory of race differences in intelligence – namely the theory that African-American culture depresses intelligence. 

Unfortunately, however, this proposition – namely, that African-American culture depresses your IQ – is almost as politically unpalatable and politically-incorrect as is the notion that race differences in intelligence reflect innate genetic differences.[11]

Endnotes

[1] Thus, this ancient wisdom is reflected, for example, in many folk sayings, such as the apple does not fall far from the tree, a chip off the old block and like father, like son, many of which long predate both Darwin’s theory of evolution and Mendel’s work on heredity, let alone the modern work of behavioural geneticists.

[2] It is important to emphasize here that this applies only to psychological outcomes, and not, for example, economic outcomes. For example, a child raised by wealthy parents is indeed likely to be wealthier than one raised in poverty, if only because s/he is likely to inherit (some of) the wealth of his or her parents. It is also possible that s/he may, on average, obtain a better job as a consequence of the opportunities opened by this privileged upbringing. However, his or her IQ will be no higher than had s/he been raised in relative poverty, and neither will s/he be any more or less likely to suffer from a mental illness.

[3] Similarly, it is often claimed that children raised in care homes, or in foster care, tend to have negative life-outcomes. However, again, this by no means proves that it is care homes or foster care that causes these negative life-outcomes. On the contrary, since children who end up in foster care are typically either abandoned by their biological parents, or forcibly taken from their parents by social services on account of the inadequate care provided by the latter, or sometimes outright abuse, it is obvious that their parents represent an unrepresentative sample of society as a whole. An obvious alternative explanation, then, is that the children in question simply inherit the dysfunctional personality attributes of their biological parents, namely the very dysfunctional personality attributes that caused the latter to either abandon their children or have them removed by the social services. (In other cases, such children may have been orphaned. However, this is less common today. At any rate, parents who die before their offspring reach maturity are surely also unrepresentative of parents in general. For example, many may live high-risk lifestyles that contribute to their early deaths.)

[4] Likewise, the heritability of such personality traits as conscientiousness and self-discipline, in addition to intelligence, likely also partly account for the association between parental income and academic attainment among their offspring, since both academic attainment, and occupational success, require the self-discipline to work hard to achieve success. These factors, again in addition to intelligence, likely also contribute to the association between parental income and the income and socioeconomic status ultimately attained by their offspring.

[5] This possibility could, at least in theory, be ruled out by longitudinal studies, which could investigate whether the spanking preceded the misbehaviour, or vice versa. However, this is easier said than done, since, unless relying on reports by the caregivers or children themselves, which depend on both their memory and their honesty, it would have to involve intensive, long-term, and continued observation in order to establish which came first, namely the pattern of misbehaviour, or the adoption of physical chastisement as a method of discipline. This would, presumably, require continuous observation from birth onwards, so as to ensure that the very first instance of spanking or excessive misbehaviour was recorded. Such a study would seem all but impossible and certainly, to my knowledge, has yet to be conducted.

[6] The fact that the relevant environmental variables must be sought outside the family home is one reason why the terms ‘between-family environment’ and ‘within-family environment’, sometimes used as synonyms or alternatives for ‘shared’ and ‘non-shared family environment’ respectively, are potentially misleading. Thus, the ‘within-family environment’ refers to those aspects of the environment that differ for different siblings even within a single family. However, these factors may differ within a single family precisely because they occur outside, not within, the family itself. The terms ‘shared’ and ‘non-shared family environment’ are therefore to be preferred, so as to avoid any potential confusion these alternative terms could cause.

[7] Both practical and ethical considerations, of course, prevent Watson from actually creating his “own specified world” in which to bring up his “dozen healthy infants”. Therefore, no one is able to put his claim to the test. It is therefore unfalsifiable and Watson is therefore free to make such boasts, safe in the knowledge that there is no danger of his actually being called to make good on his claims and thereby proven wrong.

[8] Actually, even if race differences in IQ are found to disappear after controlling for socioeconomic status, it would be a fallacy to conclude that this means that the differences in IQ are entirely a result of differences in social class and that there is no innate difference in intelligence between the races. After all, differences in socioeconomic status are in large part a consequence of differences in cognitive ability, as more intelligent people perform better at school, and at work, and hence rise in socioeconomic status. Therefore, in controlling for socioeconomic status, one is, in effect, also controlling for differences in intelligence, since the two are so strongly correlated. The contrary assumption has been termed by Jensen ‘the sociologist’s fallacy’.
This fallacy involves the assumption that it is differences in socioeconomic status that cause differences in IQ, rather than differences in intelligence that cause differences in socioeconomic status. As Arthur Jensen explains it:

“If SES [i.e. socioeconomic status] were the cause of IQ, the correlation between adults’ IQ and their attained SES would not be markedly higher than the correlation between children’s IQ and their parents’ SES. Further, the IQs of adolescents adopted in infancy are not correlated with the SES of their adoptive parents. Adults’ attained SES (and hence their SES as parents) itself has a large genetic component, so there is a genetic correlation between SES and IQ, and this is so within both the white and the black populations. Consequently, if black and white groups are specially selected so as to be matched or statistically equated on SES, they are thereby also equated to some degree on the genetic component of IQ” (The g Factor: p491).

[9] Actually, at least some of these theories are indeed testable and potentially falsifiable. With regard to the factors quoted by Jensen (namely, “a history of slavery, social oppression, and racial discrimination, white racism… and minority status consciousness”), one way of testing these theories is to look at test scores in those countries where there is no such history. For example, in sub-Saharan Africa, as well as in Haiti and Jamaica, blacks are in the majority, and are moreover in control of the government. Yet the IQ scores of the indigenous populations of sub-Saharan Africa are actually even lower than among blacks in the USA (see Richard Lynn’s Race Differences in Intelligence: reviewed here). True, most such countries still have a history of racial oppression and discrimination, albeit in the form of European colonialism rather than racial slavery or segregation in the American sense. However, in those few sub-Saharan African countries that were not colonized by western powers, or only briefly colonized (e.g. Ethiopia, Liberia), scores are not any higher. Also, other minority groups ostensibly or historically subject to racial oppression and discrimination (e.g. Ashkenazi Jews, Overseas Chinese) actually score higher in IQ than the host populations that ostensibly oppress them. As for “the ‘black experience’”, this merely begs the question as to why the ‘black experience’ has been so similar, and has resulted in the same low IQs, in so many different parts of the world, something implausible unless the ‘black experience’ itself reflects innate aspects of black African psychology. 

[10] The fact that the heritability of intelligence is higher in adulthood than during childhood, and the influence of the shared family environment correspondingly decreases, has been interpreted as reflecting the fact that, during childhood, our environments are shaped, to a considerable extent, by our parents. For example, some parents may encourage activities that may conceivably enhance intelligence, such as reading books and visiting museums. In contrast, as we enter adulthood, we begin to have freedom to choose and shape our own environments, in accordance with our interests, which may be partly a reflection of our heredity.
Interestingly, this theory suggests that what is biologically inherited is not necessarily intelligence itself, but rather a tendency to seek out intelligence-enhancing environments, i.e. intellectual curiosity rather than intelligence as such. In fact, it is probably a mixture of both factors. Moreover, intellectual curiosity is surely strongly correlated with intelligence, if only because it requires a certain level of intelligence to appreciate intellectual pursuits, since, if one lacks the ability to learn or understand complex concepts, then intellectual pursuits are necessarily unrewarding.

[11] Thus, ironically, the recently deceased James Flynn, though always careful, throughout his career, to remain on the politically-correct radical environmentalist side of the debate with regard to the causes of race differences in intelligence, nevertheless recently found himself taken to task by the leftist, politically-correct British Guardian newspaper for a sentence in his recent book, Does Your Family Make You Smarter?, where he described American blacks as coming “from a cognitively restricted subculture” (Wilby 2016). Thus, whether one attributes lower black IQs to biology or to culture, either answer is certain to offend leftists, and the power of political correctness can, it seems, never be appeased.

References 

Belsky, Steinberg & Draper (1991) Childhood Experience, Interpersonal Development, and Reproductive Strategy: An Evolutionary Theory of Socialization Child Development 62(4): 647-670 

Draper & Harpending (1982) Father Absence and Reproductive Strategy: An Evolutionary Perspective Journal of Anthropological Research 38(3): 255-273 

Eyferth (1959) Eine Untersuchung der Neger-Mischlingskinder in Westdeutschland. Vita Humana, 2, 102–114

Levin (1994) Comment on Minnesota Transracial Adoption Study. Intelligence. 19: 13–20

Lynn, R (1994) Some reinterpretations of the Minnesota Transracial Adoption Study. Intelligence. 19: 21–27

Scarr & Weinberg (1976) IQ test performance of black children adopted by White families. American Psychologist 31(10):726–739 

Weinberg, Scarr & Waldman, (1992) The Minnesota Transracial Adoption Study: A follow-up of IQ test performance at adolescence Intelligence 16:117–135 

Whitney (1996) Shockley’s experiment. Mankind Quarterly 37(1): 41-60

Wilby (2016) Beyond the Flynn effect: New myths about race, family and IQ? Guardian, September 27.

A Modern McCarthyism in our Midst

Anthony Browne, The Retreat of Reason: Political Correctness and the Corruption of Public Debate in Modern Britain (London: Civitas, 2006) 

Western civilization has progressed. Today, unlike in earlier centuries, we no longer burn heretics at the stake.

Instead, according to sociologist Steven Goldberg, himself no stranger to contemporary heresy, these days: 

“All one has to lose by unpopular arguments is contact with people one would not be terribly attracted to anyway” (Fads and Fallacies in the Social Sciences: p222). 

Unfortunately, however, Goldberg underplays, not only the psychological impact of ostracism, but also the more ominous consequences that sometimes attach to contemporary heresy.

Thus, bomb and death threats were repeatedly made against women such as Erin Pizzey and Suzanne Steinmetz for pointing out that women were just as likely, or indeed somewhat more likely, to perpetrate acts of domestic violence against their husbands and boyfriends as their husbands and boyfriends were to perpetrate against them – a finding now replicated in literally hundreds of studies (see also Domestic Violence: The 12 Things You Aren’t Supposed to Know). 

Similarly, in the seventies, Arthur Jensen, a psychology professor at the University of California, had to be provided with an armed guard on campus after suggesting, in a sober and carefully argued scientific paper, that it was a “not unreasonable” hypothesis that the IQ difference between blacks and whites in America was partly genetic in origin.

Political correctness has also cost people their jobs. 

Academics like Chris Brand, Helmuth Nyborg, Lawrence Summers, Frank Ellis, Noah Carl and, most recently, Bo Winegard have been forced to resign or lost their academic positions as a consequence of researching, or, in some cases, just mentioning, politically incorrect theories such as the possible social consequences of, or innate basis for, sex and race differences in intelligence.

Indeed, even the impeccable scientific credentials of James Watson, a man jointly responsible for one of the most important scientific discoveries of the twentieth century, did not spare him this fate when he was reported in a newspaper as making some controversial but eminently defensible comments regarding population differences in cognitive ability and their likely impact on prospects for economic development.  

At the time of (re-)writing this piece, the most recent victim of this process of purging in academia is the celebrated historian, and long-term controversialist, David Starkey, excommunicated for some eminently sensible, if crudely expressed, remarks about slavery. 

Meanwhile, as proof of the one-sided nature of the witch-hunt, during the very same month as that in which Starkey was excommunicated from public life, a non-white leftist female academic, Priyamvada Gopal, now a professor of Postcolonial Studies, posted the borderline genocidal tweet:

White lives don’t matter. As white lives.[1]

Yet the only repercussion she faced from her employer, Cambridge University, was to be almost immediately promoted to a full professorship.

Cambridge University also issued a defence of its employees’ right to academic freedom, the institution itself tweeting in response to the controversy that: 

“[Cambridge] University defends the right of its academics to express their own lawful opinions which others might find controversial”.

This is indeed an admirable and principled stance – if applied consistently. 

Unfortunately, however, although this tweet was phrased in general terms, and actually included no mention of Gopal by name, it was evidently not of general application. 

For Cambridge University is not only among the institutions from which Starkey was forced to tender his resignation this very same month, but also the very same institution that, only a year before, had denied a visiting fellowship to Jordan Peterson, the eminent public intellectual, for his controversial stances and statements on a range of topics, and that, only two years before, had denied an academic fellowship to sociologist Noah Carl, after a letter calling for his dismissal was signed by, among others, none other than the loathsome Priyamvada Gopal herself. 

The inescapable conclusion is that the freedom of “academics to express lawful opinions which others might find controversial” at Cambridge University applies, despite the general wording of the tweet from which these words are taken, only to those controversial opinions of which the leftist academic and cultural establishment currently approves. 

Losing Your Livelihood 

If I might be accused here of focusing excessively on freedom of speech in an academic context, this is only because academia is among the arenas where freedom of expression is most essential, as it is only if all ideas, howsoever offensive to certain protected groups, are able to freely circulate, and compete, in the marketplace of ideas that knowledge is able to progress through a selective process of testing and falsification.[2]

However, although the university environment is, today, especially intolerant, nevertheless similar fates have also befallen non-academics, many of whom have been deprived of their livelihoods on account of their politics. 

For example, in The Retreat of Reason, the 2006 book of which this post is ostensibly a review, Anthony Browne points to the case of a British headmaster sacked for saying Asian pupils should be obliged to learn English, a policy that was then, only a few years later, actually adopted as official government policy (p50). 

In the years since the publication of ‘The Retreat of Reason’, such examples have only multiplied. 

Indeed, today it is almost taken for granted that anyone caught saying something controversial and politically incorrect on the internet in his own name, or even under a pseudonym if subsequently ‘doxed’, is liable to lose his job.

Likewise, Browne noted that police and prison officers in the UK were then (and are still) barred from membership of the BNP, a legal and constitutional political party, but not from membership of Sinn Fein, which until quite recently had supported domestic terror against the British state, including the murder of soldiers, civilians and the police themselves, nor of various Marxist groups that openly advocate the violent overthrow of the state and indeed the whole capitalist system (p51-2). 

Today, meanwhile, even believing that a person cannot change their biological sex is said to be a bar on admission into the British police force.

Moreover, employees sacked on account of their political views cannot always even turn to their unions for support.

On the contrary, trade unions have themselves expelled members for their political views, and indeed for membership of this same political party (p52). They have also successfully defended themselves in the European Court of Human Rights for doing precisely this, citing the right to freedom of association (see ASLEF v UK [2007] ECHR 184). 

Yet, ironically, freedom of association is not only the precise same freedom denied to employers by anti-discrimination laws, but also the very same freedom that surely guarantees a person’s right to be a member of a constitutional, legal political party, or express controversial political views outside of their work, without being at risk of losing their job or being banished from their union.

Browne concludes:

“One must be very disillusioned with democracy not to find it at least slightly unsettling that in Europe in the twenty-first century government employees are being banned from joining certain legal political parties but not others, legal democratic party leaders are being arrested in dawn raids for what they have said and political parties leading the polls are being banned by judges” (p57). 

Of course, racists and members of parties like the BNP hardly represent a fashionable cause célèbre for civil libertarians. But, then, neither did other groups targeted for political persecution at the time of their political persecution. This is, of course, precisely what rendered them so vulnerable to persecution. 

Political correctness is often dismissed as a trivial issue, which only bigots and busybodies bother complaining about when there are so many supposedly more serious problems in the world today. 

Yet free speech is never trivial. When people lose their jobs and livelihoods because of currently unfashionable opinions, what we are witnessing is a modern form of McCarthyism. 

Indeed, as conservative commentator David Horowitz observes: 

“The era of the progressive witch-hunt has been far worse in its consequences to individuals and freedom of expression than was the McCarthy era… [not least because] unlike the McCarthy era witch-hunt, which lasted only a few years, the one enforced by left-wing ‘progressives’ is now entering its third decade and shows no signs of abating” (Left Illusions: An Intellectual Odyssey).[3] 

Thus, the McCarthyism of the 1950s positively pales into insignificance as compared to the McCarthyism that operates in the west today. The former involved a few communists, suspected communists and communist sympathizers being forced out of their jobs at the height of the Cold War and of Soviet infiltration (which was very real); the latter involves untold numbers of people losing their jobs, being excommunicated from public life and polite society, harassed, demonized and sometimes criminally prosecuted for currently unfashionable and politically-incorrect opinions.

Yet, while columnists, academics, and filmmakers delight in condemning, without fear of reprisals, a form of McCarthyism that ran out of steam over half a century ago (i.e. anti-communism during the Second Red Scare), few dare to incur the wrath of the contemporary inquisition by exposing a modern McCarthyism right here in our midst.

Recent Developments 

Browne’s ‘The Retreat of Reason’ was first published in 2006. Unfortunately, however, in the intervening decade and a half, despite Browne’s wise counsel, the situation has only worsened.

Thus, what was then called ‘political correctness’ has since morphed into what is now called ‘wokeness’ and cancel culture – phenomena which predate the coinage of these terms, but which, though representing a difference of degree rather than of kind, nevertheless reflect a more than merely semantic transformation. 

Thus, in 2006, Browne rightly championed New Media facilitated by the internet age, such as blogs (like this one, hopefully), for disseminating controversial, politically-incorrect ideas and opinion, and thereby breaking the mainstream media monopoly on the dissemination of information and ideas (p85). 

Here, Browne was surely right. Indeed, new media, such as blogs, have not only been responsible for disseminating ideas that are largely taboo in the mainstream media, but even for breaking news stories that had been suppressed by mainstream media, such as the predominant racial background of the men responsible for the 2015-2016 New Year’s Eve sexual assaults in Germany.

However, in the decade and a half since ‘The Retreat of Reason’ was published, censorship has become increasingly restrictive even in the virtual sphere. 

Thus, internet platforms like YouTube, Patreon, Facebook and Twitter increasingly deplatform content-creators with politically incorrect viewpoints, and, in a particularly disturbing development, some websites have even been, at least temporarily, forced offline, or banished to the dark web, by their web hosting providers.

Doctrinaire libertarians respond that this is not a free speech issue, but rather a freedom of association issue, because these are private businesses with the right to deny service to anyone with whom they, for whatever reason, choose not to contract.

In reality, however, platforms like Facebook and Twitter are far more than merely private businesses. As virtual market monopolies, they are part of the infrastructure of everyday life in the twenty-first century.

To be banned from communicating on Facebook is tantamount to being barred from communication in a public place.

Moreover, the problem is only exacerbated by the fact that the few competitors seeking to provide an alternative to these ‘Big Tech’ monopolies are themselves being deplatformed by their hosting providers as a direct consequence of their commitment to free speech and their willingness to host controversial content.

Likewise, the denial of financial services, such as bank accounts, loans and payment processing, to groups or individuals on the basis of their politics is particularly troubling, making it all but impossible for those affected to remain financially viable. The result is tantamount to being made an ‘unperson’.

Moreover, far from remaining a hub of free expression, social media in particular has increasingly provided a rallying and recruiting ground for moral outrage and repression, not least in the form of so-called twittermobs, intent on publicly shaming, harassing and denying employment opportunities to anyone of whose views they disapprove.

In short, if the internet has facilitated free speech, it has also facilitated political persecution, since today, it seems, one can enjoy all the excitement and exhilaration of joining a witch-hunt, pitchfork proudly in hand, without ever straying from the comfort of one’s own computer screen.

Explaining Political Correctness 

For Browne, PC represents “the dictatorship of virtue” (p7) and replaces “reason with emotion” and subverts “objective truth to subjective virtue” (xiii). 

Political correctness is an assault on both reason and… democracy. It is an assault on reason, because the measuring stick of the acceptability of a belief is no longer its objective, empirically established truth, but how well it fits in with the received wisdom of political correctness. It is an assault on… democracy because [its] pervasiveness… is closing down freedom of speech” (p5). 

Yet political correctness is not wholly without precedents. 
 
On the contrary, every age has its taboos. Thus, in previous centuries, it was compatibility with religious dogma rather than leftist orthodoxy that represented the primary “measuring stick of the acceptability of a belief” – as Galileo, among others, was to discover for his pains.

Although, as a conservative, Browne might be expected to be favourably disposed to traditional religion, he nevertheless acknowledges the analogy between political correctness and the religious dogmas of an earlier age: 

Christianity… has shown many of the characteristics of modern political correctness and often went far further in enforcing its intolerance with violence” (p29). 

Indeed, such intolerance is not restricted to Christianity. Whereas Christianity persecuted heresy in an earlier age with even greater intolerance than does the contemporary left, in many parts of the world, including increasingly the West, Islam still does so today.

As well as providing an analogous justification for the persecution of heretics, political correctness may also, Browne suggests, serve a similar psychological function to religion, in representing: 

A belief system that echoes religion in providing ready, emotionally-satisfying answers for a world too complex to understand fully and providing a gratifying sense of righteousness absent in our otherwise secular society” (p6).

Defining Political Correctness

What, then, do we mean by ‘political correctness’? 

Political correctness evaluates a claim, not on its truth, but on its offensiveness to certain protected groups. Some views are held to be not only false, indeed sometimes not even false, but rather unacceptable, unsayable and beyond the bounds of acceptable opinion. 

Indeed, for the enforcers of the politically correct orthodoxy, the truth or falsehood of a statement seems ultimately to be of little interest. 

Browne provides a useful definition of political correctness as: 

An ideology which classifies certain groups of people as victims in need of protection from criticism and which makes believers feel that no dissent should be tolerated” (p4). 

Refining this, I would say that, for an opinion to be ‘politically incorrect’, two criteria must be met:

1) The existence of a group to whom the opinion in question is regarded as ‘offensive’
2) The group in question must be perceived as ‘oppressed’

Thus, it is perfectly acceptable to disparage and offend supposedly ‘privileged’ groups (e.g. males, white people, Americans or the English), but groups with ‘victim-status’ are deemed sacrosanct and beyond reproach, at least as a group. 

Victim Status

Victim-status itself, however, seems to be rather arbitrarily bestowed. 

Certainly, actual poverty or economic deprivation has little to do with it. 

“It is acceptable to denigrate the white working class as ‘chavs’ and ‘rednecks’, but multi-millionaires who happen to be black, female or homosexual can perversely pose as ‘oppressed’. The ‘ordinary working man’, once the quintessential proletarian, has found himself recast in leftist demonology as a racist, homophobic, wife-beating bigot.”

Thus, it is perfectly acceptable to denigrate the white working class. Pejorative epithets aimed at this group, such as ‘redneck’, ‘chav’ and ‘white trash’, are widely employed and considered socially acceptable in polite (and not so polite) conversation (see The Redneck Manifesto).

Yet the use of comparably derogatory terms in respect of, say, black people, is considered wholly beyond the pale, and sufficient to end media careers in Britain and America.

However, multi-millionaires who happen to be black, female or homosexual are permitted to perversely pose as ‘oppressed’, and wallow in their ostensible victimhood.

Thus, in the contemporary West, the Left has largely abandoned its traditional constituency, namely the working class, in favour of ethnic minorities, homosexuals and feminists.

In the process, the ‘ordinary working man’, once the quintessential proletarian, has found himself recast in leftist demonology as a racist, homophobic, wife-beating bigot.

Likewise, men are widely denigrated in popular culture. Yet, contrary to the feminist dogma which maintains that men have disproportionate power and are privileged, it is in fact men who are overwhelmingly disadvantaged by almost every sociological measure.

Thus, Browne writes: 

Men were overwhelmingly underachieving compared with women at all levels of the education system, and were twice as likely to be unemployed, three times as likely to commit suicide, three times as likely to be a victim of violent crime, four times as likely to be a drug addict, three times as likely to be alcoholic and nine times as likely to be homeless” (p49). 

Indeed, overt discrimination against men, such as the different ages at which men and women were then eligible for state pensions in the UK (p25; p60; p75) and the higher insurance premiums demanded of men (p73), is widely tolerated.[4]

The demand for equal treatment only goes as far as it advantages the [ostensibly] less privileged sex” (p77). 

“‘Victim status’ is a relative concept. Thus, feminists may have victim power over men, but, as soon as the men in question decide to don lipstick and dresses and identify as ‘transwomen’, suddenly the feminists find that the high heel stilettos are, both literally and metaphorically, very much on the other foot.”

Victim status is not only seemingly arbitrarily accorded, it is also a relative concept.

Thus, the Scots and Irish may have a degree of victim-status in relation to their historical enemy, the English, such that the vitriolic anti-English and anti-British rhetoric of many Scottish and Irish nationalists tends to receive a free pass – so long as it remains safely directed against the English.

However, as soon as Scottish or Irish nationalism comes to be directed, not against the British or English, but rather at recent nonwhite immigrants to Scotland and Ireland, who, unlike the English, arguably represent the real threat to Scottish and Irish identity and nationhood today, it suddenly becomes anathema and beyond the pale.

Likewise, women may indeed, as we have seen, possess victim power vis-à-vis men. However, as soon as the men in question decide to put on lipstick and dresses and identify as ‘transwomen’, suddenly the feminists find, much to their chagrin, that the high-heel stilettos are, both literally and metaphorically, very much on the other foot.[5]

The arbitrary way in which recognition as an ‘oppressed group’ is accorded, together with the massive benefits accruing to demographics that have secured such recognition, has created a perverse process that Browne aptly terms “competitive victimhood” (p44). 

Few things are more powerful in public debate than… victim status, and the rewards… are so great that there is a large incentive for people to try to portray themselves as victims” (p13-4).

Thus, groups currently campaigning for ‘victim status’ include, he reports, “the obese, Christians, smokers and foxhunters” (p14). 

The result is what economists call ‘perverse incentives’.

By encouraging people to strive for the bottom rather than the top, political correctness undermines one of the main driving forces in society, the individual pursuit of self-improvement” (p45).

This outcome can perhaps even be viewed as the ultimate culmination of what Nietzsche called the transvaluation of values, whereby, under the influence of Christian ethics, disadvantage, weakness and oppression are converted into positive virtues and even, paradoxically, into strength. 

Euroscepticism & Brexit

Unfortunately, despite his useful definition of the phenomenon of political correctness, Browne goes on to use the term ‘political correctness’ in a broader fashion that goes beyond this original definition, and, in my opinion, extends the concept beyond its sphere of usefulness. 

For example, he classifies Euroscepticism – i.e. opposition to the further integration of the European Union – as a politically incorrect viewpoint (p60-62). 

Here, however, there is no obvious ‘oppressed group’ in need of protection. 

“The term ‘political correctness’ serves a similar function for conservatives as the term ‘fascist’ does for leftists – namely a useful catchall label to be applied to any views with which they themselves happen to disagree.”

Moreover, although widely derided as ignorant and jingoistic, Eurosceptical opinions have never actually been deemed ‘offensive’ or beyond the bounds of acceptable opinion.

On the contrary, they are regularly aired in mainstream media outlets, and even on the BBC, and recently scored a final victory in Britain with the Brexit campaign of 2016.  

Browne’s extension of the concept of political correctness in this way is typical of many critics of political correctness, who succumb to the temptation to define as ‘political correctness’ any view with which they themselves happen to disagree.

This enables them to tar any views with which they disagree with the pejorative label of ‘political correctness’.

It also, perhaps more importantly, allows ostensible opponents of political correctness to condemn the phenomenon without ever actually violating its central taboos by discussing any genuinely politically incorrect issues. 

They can therefore pose as heroic opponents of the inquisition while never actually themselves incurring its wrath. 

The term ‘political correctness’ therefore serves a similar function for conservatives as the term fascist does for leftists – namely a useful catchall label to be applied to any views with which they themselves happen to disagree.[6]

Jews, Muslims and the Middle East 

Another example of Browne’s extension of the concept of political correctness beyond its sphere of usefulness is his characterization of any defence of the policies of Israel as ‘politically incorrect’. 

Yet, here, the ad hominem and guilt-by-association methods of debate (or rather of shutting down debate), which Browne rightly describes as characteristic of political correctness (p21-2), are more often used by defenders of Israel than by her critics – though, here, the charge of ‘anti-Semitism’ is substituted for the usual refrain of ‘racism’.[7]

Thus, in the US, any suggestion that the US’s small but disproportionately wealthy and influential Jewish community influences US foreign policy in the Middle East in favour of Israel is widely dismissed as anti-Semitic and roughly tantamount to proposing the existence of a world Jewish conspiracy led by the Learned Elders of Zion.

Admittedly, Browne acknowledges: 

The dual role of Jews as oppressors and oppressed causes complications for PC calculus” (p12).  

In other words, the role of the Jews as victims of persecution in National Socialist Germany conflicts with, and weighs against, their current role as perceived oppressors of the Palestinians in the Middle East. 

However, having acknowledged this complication, Browne immediately dismisses its importance, all too hastily going on to conclude in the very same sentence that: 

PC has now firmly transferred its allegiance from the Jews to Muslims” (p12). 

However, in many respects, the Jews retain their ‘victim-status’ despite their hugely disproportionate wealth and political power.

Indeed, perhaps the best evidence of this is the taboo on referring to this disproportionate wealth and power. 

Thus, while the political Left never tires of endlessly recycling statistics demonstrating the supposed overrepresentation of ‘white males’ in positions of power and privilege, to cite similar statistics demonstrating the even greater per capita overrepresentation of Jews in these exact same positions of power and privilege is somehow deemed beyond the pale, and evidence, not of leftist sympathies, but rather of being ‘far right’.

This is despite the fact that the average earnings of American-Jews and their level of overrepresentation in influential positions in government, politics, media and business relative to population size surely far outstrips that of any other demographic – white males very much included.

The Myth of the Gender Pay Gap 

One area where Browne claims that the “politically correct truth” conflicts with the “factually correct truth” is the causes of the gender pay-gap (p8; p59-60). 

This is also included by philosopher David Conway as one of six issues, raised by Browne in the main body of his text, for which Conway provides supportive evidence in an afterword entitled ‘Commentary: Evidence supporting Anthony Browne’s Table of Truths Suppressed by PC’, included as a sort of appendix in later editions of Browne’s book. 

Although providing no sources was still standard practice in mainstream journalism at the time his book was written, it is regrettable that Browne offers none to back up the statistics he cites in his text.

This commentary section therefore provides the only real effort to provide sources or citations for many of Browne’s claims. Unfortunately, however, it covers only a few of the many issues addressed by Browne in preceding pages. 

In support of Browne’s contention that “different work/life choices” and “career breaks” underlie the gender pay gap (p8), Conway cites the work of sociologist Catherine Hakim (p101-103). 

Actually, more comprehensive expositions of the factors underlying the gender pay gap are provided by Warren Farrell in Why Men Earn More (which I have reviewed here, here and here) and Kingsley Browne in Biology at Work: Rethinking Sexual Equality (which I have reviewed here). 

Moreover, while it is indeed true that the pay gap can largely be explained by what economists call ‘compensating differentials’ – e.g. the fact that men work longer hours, in more unpleasant and dangerous working conditions, and for a greater proportion of their adult lives – Browne fails to address the final and decisive feminist fallacy regarding the gender pay gap, namely the assumption that, because men earn more money than women, they necessarily have more money than women and are therefore wealthier.

In fact, however, although men earn more money than women, much of this money is then redistributed to women via such mechanisms as marriage, alimony, maintenance, divorce settlements and the culture of dating.

Indeed, as I have previously provocatively proposed:

The entire process of conventional courtship is predicated on prostitution, from the social expectation that the man will pay for dinner on the first date, to the legal obligation that he continue to provide for his ex-wife through alimony and maintenance for anything up to ten or twenty years after he has belatedly rid himself of her.

Therefore, much of the money earned by men is actually spent by, or on, their wives, ex-wives and girlfriends (not to mention daughters), such that, although women earn less than men, they have long been known to researchers in the marketing industry to control around 80% of consumer spending.

However, Browne does usefully debunk another area in which the demand for equal pay has resulted in injustice – namely the demand for equal prizes for male and female athletes at the Wimbledon Tennis Championships (a demand since cravenly capitulated to). Yet, as Browne observes: 

Logically, if the prize doesn’t discriminate between men and women, then the competition that leads to those prizes shouldn’t either… Those who insist on equal prizes, because anything else is discrimination, should explain why it is not discrimination for men to be denied an equal right to compete for the women’s prize.” (p77).[8]

Thus, Browne perceptively observes: 

It would currently be unthinkable to make the same case for a ‘white’s only’ world athletics championship… [Yet] it is currently just as pointless being a white 100 metres sprinter in colour-blind sporting competitions as it would be being a women 100 metres sprinter in gender-blind sporting competitions” (p77). 

International Aid 

Another topic addressed by both Browne (p8) and Conway (p113-115) is the reasons for African poverty. 

The politically correct explanation, according to Browne, is that African poverty results from inadequate international aid (p8). However, Browne observes: 

No country has risen out of poverty by means of international aid and cancelling debts” (p20).[9]

Moreover, Browne points out that fashionable policies such as “writing off Third World debt” produce perverse incentives by “encourag[ing] excessive and irresponsible borrowing by governments” (p48), while international aid encourages economic dependence, bureaucracies and corruption (p114).

Actually, in my experience, the usual explanation given for African underdevelopment is not, as Browne and Conway suggest, inadequate international aid as such.

After all, this explanation only raises the question of how the many countries that are now developed, such as those in Europe, managed to achieve First World living standards at a time when there were no other wealthy First World countries around to provide them with international aid to assist their development.

Instead, in my experience, most leftists blame African poverty and underdevelopment on the supposed legacy of European colonialism. Thus, it is argued that European nations, and indeed white people in general, are themselves to blame for the poverty of Africa. International aid is then reimagined as a form of recompense for past wrongs. 

Unfortunately, however, this explanation for African poverty fares barely any better. 

For one thing, it merely raises the question why it was that Africa was colonized by Europeans rather than vice versa?

The answer, of course, is that much of sub-Saharan Africa was ‘underdeveloped’ (i.e. socially and technologically backward) even before colonization. This was what allowed Africa to be so easily and rapidly conquered and colonized during the late-nineteenth and early-twentieth centuries.

Moreover, if European colonization is really to blame for the poverty of so much of sub-Saharan Africa, then why are those few African countries largely spared European colonization, such as Liberia and Ethiopia, if anything, even worse off than their neighbours, in part precisely because they lack the infrastructure (e.g. roads, railroads) that the much-maligned European colonial overlords bequeathed to other African states?

In other words, far from holding Africa back, European colonizers often built what little infrastructure and successful industry sub-Saharan Africa still has, and African countries are poor despite colonialism rather than because of it.[10]

Further falsifying the assumption that the experience of European colonialism invariably impeded the economic development of those regions formerly subject to European colonial rule is the experience of former European colonies in parts of the world other than Africa.

Here, there have been many notable success stories, including Malaysia, Singapore, Hong Kong, even India, not to mention Canada, Australia, New Zealand, all of which were former European colonies, and many of which gained their independence around the same time as African polities.

A history of European colonization is, it seems, no bar to economic development outside of Africa. Why, then, has the experience within Africa itself been so different?

Browne and Conway, for their part, place the blame firmly on Africans themselves – but on African rulers rather than the mass of African people. The real reason for African poverty, they report, is simply “bad governance” on the part of Africa’s post-colonial rulers (p8).

Poverty in Africa has been caused by misrule rather than insufficient aid” (p113).

Unfortunately, however, this is hardly a complete explanation, since it merely raises the question as to why Africa has been so prone to “misrule” and “bad governance” in the first place.

It also raises the question as to why regions outside of Africa, but nevertheless populated by people of predominantly sub-Saharan African ancestry, such as Haiti and Jamaica (or even Baltimore and Detroit), are seemingly beset by many of the same problems (e.g. high levels of violent crime and poverty).

This last observation, of course, suggests that the answer lies, not in African soil or geography, but rather in differences between races in personality, intelligence and behaviour.[11]

However, this is, one suspects, a conclusion too politically incorrect even for Browne himself to consider.

Is Browne a Victim of Political Correctness Himself? 

The foregoing discussion converges in suggesting a single overarching problem with Browne’s otherwise admirable dissection of the nature and effects of political correctness – namely that Browne, although ostensibly an opponent of political correctness, is, in reality, neither immune to the infection nor ever able to effect a full recovery. 

Browne himself observes:

Political correctness succeeds, like the British Empire, through divide and rule… The politically incorrect often end up appeasing political correctness by condemning fellow travellers” (p37). 

This is indeed a characteristic feature of witch-hunts, from Salem to McCarthy, whereby victims were able to partially absolve themselves by ‘outing’ fellow-travellers to be persecuted in their place.

However, although bemoaning this trend, Browne nevertheless provides a prime example of it himself when, having rightly deplored the treatment of BNP supporters deprived of employment on account of their political views, he issues the almost obligatory disclaimer condemning the party as “odious” (p52).

In doing so, he ironically provides a perfect illustration of the very appeasement of political correctness that he himself has identified as central to its power.

Similarly, it is notable that, in his discussion of the suppression of politically incorrect facts and theories, Browne fails to address any of the most incendiary such facts and theories, such as those that resulted in death threats to the likes of Jensen, Pizzey and Steinmetz.

After all, to discuss the really taboo topics would not only bring upon him even greater opprobrium than that which he already faced, but also likely deny him a platform (or at least a mainstream platform) in which to express his views altogether. 

Browne therefore provides his ultimate proof of the power of political correctness, not through the topics he addresses, but rather through those he conspicuously avoids. 

In failing to address these issues, whether out of fear of the consequences or out of genuine ignorance of the facts owing to the media blackout on their discussion, Browne provides the definitive proof of his own fundamental thesis, namely that political correctness indeed corrupts public debate and subverts free speech.

Endnotes

[1] After the resulting outcry, Gopal insisted that she stood by her tweets, which, she maintained, “were very clearly speaking to a structure and ideology, not about people” – something actually not at all clear from her phraseology, and arguably inconsistent with it, given that, save in a metaphorical sense, it is only people who have, and lose, “lives”, not institutions or ideologies, and indeed only people, not institutions or ideologies, who can properly be described as “white”.
At best, her tweet was incendiary and grossly irresponsible in a time of increasing, sometimes overtly genocidal, anti-white animosity, rhetoric, violence and rioting. At worst, it is perhaps not altogether paranoid to compare it to the sort of dehumanizing racist rhetoric that has historically often served as a precursor to genocide.
Thus, it is notable that not even the Nazis talked openly about the mass killing of the Jews, even when this process was already underway. Instead, they employed such coded euphemisms as ‘resettlement in the East’ and ‘the Final Solution to the Jewish Question’.
In this light, it is notable that those leftists, like Noel Ignatiev, who talk of “abolishing the white race”, but insist they are only talking of deconstructing the concept of ‘whiteness’, which is, they argue, a social construct, strangely never talk about ‘abolishing the black race’, or indeed any race other than whites. Yet, according to their own ideology, all racial categories are mere social constructs with no real basis in biology, invented to justify oppression, slavery, colonialism and other such malign and supposedly uniquely Western practices, and hence are presumably just as artificial and malignant.

[2] Thus, according to the sort of evolutionary epistemology championed by, among others, Karl Popper, it is only if different theories are tested and subjected to falsification that we are able to assess their merits and thereby choose between them, and scientific knowledge is able to progress. If some theories are simply deemed beyond the pale a priori, then clearly this process of testing and falsification cannot properly occur.

[3] The book in which Horowitz wrote these words was published in 2003. Yet, today, some seventeen years later, “the era of the progressive witch-hunt”, far from abating, seems only to be accelerating. By Horowitz’s reckoning, then, “the era of the progressive witch-hunt” is now approaching its fourth decade and not only, in Horowitz’s words, “shows no signs of abating”, but recently seems to have gone into overdrive.

[4] Discrimination against men in the provision of insurance policies remains legal in most jurisdictions (e.g. the USA). However, sex discrimination in the provision of insurance policies was belatedly outlawed throughout the European Union at the end of 2012, due to a ruling of the European Court of Justice. This was many years after other forms of sex discrimination had been outlawed in most member-states.
For example, in the UK, most other forms of gender discrimination were outlawed almost forty years previously under the 1975 Sex Discrimination Act. However, section 45 of this Act explicitly exempted insurance companies from liability for sex discrimination if they could show that the discriminatory practice they employed was based on actuarial data and was “reasonable”.
Yet actuarial data could also be employed to justify other forms of discrimination, such as employers deciding not to employ women of childbearing age. However, this remained unlawful.
This exemption was preserved by Section 22 of Part 5 of Schedule 3 of the new Equality Act 2010. As a result, as recently as 2010 insurance providers routinely charged young male drivers double the premiums demanded of young female drivers.
Yet, curiously, the only circumstance in which insurance providers were barred from discriminating on the grounds of sex was where the differences resulted from the costs associated with pregnancy or with a woman’s having given birth, under section 22(3)(d) of Schedule 3 – in other words, the only readily apparent circumstance in which insurance providers might be expected to discriminate against women rather than men!
Interestingly, even after the ECJ ruling, there is evidence that indirect discrimination against males continues, simply by using occupation as a marker for gender.

[5] It is difficult to muster much sympathy for the feminists for at least three reasons. First, the central tenet of transgender ideology, namely the denial of the reality of biological sex, is itself a direct inheritance from feminism.
Thus, feminists have long contended that there are few if any innate biological differences between the sexes in psychology and behaviour and that instead, to use a favourite phrase of feminists, sociologists and other such ‘professional damned fools’, such differences in psychology and behaviour as are observed are entirely ‘socially constructed’ in origin. From this absurd position, it is surely only one step further to claim that a person of one sex can unilaterally declare himself to be of the other sex, will henceforth, and even retroactively, be of whatever sex she/he/they/‘ze’/‘xe’/etc. declares themselves to be, and should henceforth be referred to and treated as such.
Indeed, if biological sex differences are as trivial and next to nonexistent as the feminists have so often and so loudly claimed, then this raises the question as to why a person should not be able to unilaterally declare themselves to be of the opposite sex to that which they were arbitrarily assigned at birth. Transgender ideology is then the logical conclusion, or perhaps the reductio ad absurdum, of feminist sex denial. In short, the feminists have only themselves to blame.
Second, the feminist TERFs who complain so loudly and incessantly about so-called ‘trans women’ invading ‘female-only spaces’ are often the exact same feminists, or at least the heirs of those exact same feminists, who, in a previous generation, loudly and incessantly sought entry for women into what were previously ‘male-only spaces’ (e.g. golf clubs, gentlemen’s clubs). Thus, the opening up of so-called ‘female-only spaces’ to biological males is arguably the logical conclusion of feminist campaigning and rhetoric, and the opposition of many feminists to this development illustrates only their logical inconsistency, double standards and hypocrisy.
The final reason that one should not waste one’s sympathy on feminist TERFs who find themselves ostracized, persecuted and sometimes cancelled by the transgender lobby is that the feminists, including many of those subsequently demonized as ‘TERFs’, have themselves been responsible for exactly the same sort of persecution and intolerance of which they now find themselves the victims, namely the persecution and demonization of anyone who questions the central tenets of feminism, including those who question sex denialism – prominent victims having included Lawrence Summers, James Damore, Suzanne Steinmetz, Erin Pizzey and Neil Lyndon, among countless others.

[6] Actually, the term ‘fascist’ is sometimes employed in this way by conservatives as well, as when they refer to certain forms of Islamic fundamentalism as ‘Islamofascism’, or indeed when they refer to the stifling of debate, and of freedom of expression, by leftists (i.e. political correctness itself) as a form of fascism.

[7] This use of the phrase ‘anti-Semitism’ in the context of criticism of Israel’s policies towards the Palestinians is ironic, at least from a pedantic etymological perspective, since the Palestinian people actually have a rather stronger claim to being a Semitic people, in both a racial and a linguistic sense, than do either Ashkenazi or Sephardi (if not Mizrahi) Jews.

[8] Of course, at the time Browne wrote these words in 2006, his proposal that, if sport is to be truly non-discriminatory with respect to gender, then men should be allowed to enter women’s events was nothing more than a hypothetical thought experiment. It was not a serious proposal, but rather a reductio ad absurdum, intended to illustrate what the feminist rhetoric of non-discrimination in sports would actually look like if taken to its logical conclusion. Now, of course, it has become a hilarious reality, with biologically male transgender athletes entering, and outcompeting women in, women’s sporting events.
Feminists have, of course, been the first to cry foul. However, they really have only themselves to blame. As Browne argues, if sports is really to be non-discriminatory as between men and women, then presumably men should indeed have a right to enter women’s athletic events. Indeed, the entire rhetoric of transgender ideology is based on the feminist claim that sex (or ‘gender’ to use the preferred feminist term) is a social construct with no basis in biology.

[9] Actually, contrary to what Browne says, international aid may sometimes be partially successful in alleviating poverty. For example, the Marshall Plan for post-WWII Europe is sometimes credited as a success story, though some economists disagree. The success, or otherwise, of foreign aid seems, then, to depend, at least in part, on the identity of the recipients.

[10] Relatedly, it is surely no accident that the two sub-Saharan African countries that, until relatively recently, remained under white rule, namely South Africa and Rhodesia (now Zimbabwe), at that time enjoyed some of the highest living-standards in Africa, with Rhodesia famously being described as ‘the breadbasket of Africa’ and South Africa long regarded as the only ‘developed economy’ in the entire continent during the apartheid-era. Since the transition to black majority rule, however, the decline in both countries, especially in the sphere of law and order, has been dramatic and, given the experience elsewhere in post-colonial Africa, wholly predictable.
Interestingly, a few studies have investigated the relationship between a history of European colonization and economic development, most, but not all, of which seem to have found that a history of colonialism by European powers is actually associated with increased economic development. For example, Easterly and Levine (2016) found that a history of European colonization was associated with increased levels of economic development; Grier (1999) similarly found that, among former colonies, the duration of the period of colonial rule was positively associated with greater levels of economic growth; and Feyrer & Sacerdote (2009) found that, among islands, there is a robust positive correlation between years spent as a European colony and present-day GDP.
However, interestingly and directly contrary to what I have claimed here, Bertocchi & Canova (2002), in a study restricted to economies on the African continent, purported to find an inverse correlation between degree of European colonial penetration and economic growth.

[11] For more on this plausible but incendiary theory, see IQ and the Wealth of Nations by Richard Lynn and Tatu Vanhanen and Understanding Human History by Michael Hart.