In Defence of Physiognomy

Edward Dutton, How to Judge People by What they Look Like (Wrocław: Thomas Edward Press, 2018) 

‘Never judge a book by its cover’ – or so a famous proverb advises. 

However, given that Edward Dutton’s ‘How to Judge People by What they Look Like’ represents, from its provocative title onward, a spirited polemic against this received wisdom, one is tempted, in the name of irony, to review his book entirely on the basis of its cover. 

I will resist this temptation. However, it is perhaps worth pointing out that two initial points are apparent, if not from the book’s cover alone, then at least from its external appearance. These are: 

1) It is rather cheaply produced and apparently self-published; and

2) It is very short – a pamphlet rather than a book.[1]

Both these facts are probably excusable by reference to the controversial and politically-incorrect nature of the book’s title, theme and content.

Thus, on the one hand, the notion that we can, with some degree of accuracy, judge people by appearances alone is a very politically-incorrect idea and hence one that many publishers would be reluctant to associate themselves with or put their name to.

On the other hand, the fact that the topic is so controversial may also explain why the book is so short. After all, relatively little research has been conducted on this topic for precisely this reason.

Moreover, even such research as has been conducted is often difficult to track down. 

After all, physiognomy, the field of research which Dutton purports to review, is no longer a recognized science. On the contrary, most people today dismiss it as a discredited pseudoscience.

Therefore, there is no ‘International Journal of Physiognomy’ available at the click of a mouse on ScienceDirect. 

Neither are there any Departments of Physiognomy or Professors of Physiognomy at major universities, nor a recent undergraduate- or graduate-level textbook on physiognomy collating all important research on the subject. Indeed, the closest thing we have to such a textbook is Dutton’s own thin, meagre pamphlet. 

Therefore, not only has relatively little research been conducted in this area, at least in recent years, but such research as has been conducted is spread across different fields, different journals and different researchers, and hence not always easy to track down. 

Moreover, such research rarely actually refers to itself as ‘physiognomy’, in part precisely because physiognomy is widely regarded as a pseudoscience and hence something with which researchers, even those directly researching correlations between morphology and behaviour, are reluctant to associate themselves.[2]

Therefore, conducting a keyword search for the term ‘physiognomy’ in one or more of the many available databases of scientific papers would not assist the reader much, if at all, in tracking down relevant research.[3]

It is therefore not surprising that Dutton’s book is quite short. 

For this same reason, it is perhaps also excusable that Dutton has evidently failed to track down some interesting studies relevant to his theme. 

For example, a couple of interesting studies not cited by Dutton purported to uncover an association between behavioural inhibition and iris pigmentation in young children (Rosenberg & Kagan 1987; Rosenberg & Kagan 1989). 

Another interesting study not mentioned by Dutton presents data apparently showing that subjects are able to distinguish criminals from non-criminals at better than chance levels merely from looking at photographs of their faces (Valla, Ceci & Williams 2011).[4]

Such omissions are inevitable and excusable. More problematically, however, Dutton also seems to have omitted at least one entire area of research relevant to his subject-matter – namely, research on so-called minor physical anomalies or MPAs.

These are certain physiological traits, interpreted as minor abnormalities, probably reflecting developmental instability and mutational load, which have been found in several studies to be associated with various psychiatric and developmental conditions, as well as being a correlate of criminal behaviour (see below).

Defining the Field 

Yet Dutton not only misses out on several studies relevant to the subject-matter of his book, he is also not entirely consistent in identifying just what that precise subject-matter actually is. 

It is true that, at many points in his book, he talks about physiognomy.

This term is usually defined as the science (or, according to many people, the pseudoscience) of using a person’s morphology in order to determine their character, personality and likely behaviour. 

However, the title of Dutton’s book, ‘How to Judge People by What They Look Like’, is potentially much broader. 

After all, what people look like includes not just their morphology, but also, for example, how they dress and what clothes they wear.

For example, we might assess a person’s job from their uniform, or, more generally, their socioeconomic status and income level from the style and quality of their clothing, or the designer labels and brand names adorning it. 

More specifically, we might even determine their gang allegiance from the colour of their bandana, and their sexuality and fetishes from the colour and positioning of their handkerchief.

We also make assessments of character from clothing style. For example, a person who is sloppily dressed and hence perceived not to take care in his or her appearance (e.g. whose shirt is unironed or unclean) might be interpreted as lacking in self-worth and likely to produce similarly sloppy work in whatever job s/he is employed at. On the other hand, a person always kitted out in the latest designer fashions might be thought shallow and materialistic. 

In addition, certain styles of dress are associated with specific youth subcultures, which are often connected, not only to taste in music, but also with lifestyle (e.g. criminality, drug-use, political views).[5]

Dutton does not discuss the significance of clothing choice in assessments of character. However, consistent with this broader interpretation of his book’s title, Dutton does indeed sometimes venture beyond physiognomy in the strict sense. 

For example, he discusses tattoos (p46-8) and beards (p60-1). 

I suppose the decision to get tattooed or grow a beard reflects both genetic predispositions and environmental influence, just as all aspects of phenotype, including morphology, reflect the interaction between genes and environment. 

However, this is also true of clothing choice, which, as I have already mentioned, Dutton does not discuss.  

On the other hand, both tattoos and even beards (given that they take time to grow) are relatively more permanent than whatever clothes we happen to be wearing at any given time. 

However, Dutton also discusses the significance of what he terms a “blank look” or “glassy eyes” (p57-9). But this is a mere facial expression, and hence even more transitory than clothing. 

Yet Dutton omits discussion of other facial expressions which, unlike his wholly anecdotal discussion of “glassy eyes”, have been researched by ethologists at least since Charles Darwin’s seminal The Expression of the Emotions in Man and Animals was published in 1872. 

Thus, Paul Ekman famously demonstrated that the meanings associated with at least some facial expressions are cross-culturally universal (e.g. smiling being associated with happiness). 

Indeed, some human facial expressions even appear to be homologues of behaviour patterns among non-human primates. For example, it has been suggested that the human smile is homologous with an appeasement gesture, namely the baring of clenched teeth (aka a ‘fear grin’), among chimpanzees. 

Of particular relevance to the question posed in Dutton’s book title, namely ‘How to Judge People by What They Look Like’, it is suggested that some facial expressions lie partly outside of conscious control – e.g. blushing when embarrassed, going pale when shocked or fearful.  

Indeed, even a fake smile is said to be distinguishable from a genuine Duchenne smile.

This then explains the importance of reading facial expressions when playing poker or interrogating suspects, as people often inadvertently give away their true feelings through their facial expressions, behaviour and other mannerisms (e.g. so-called microexpressions). 

Somatotypes and Physique 

Dutton begins his book with a remarkable attempt to resurrect William Sheldon’s theory that certain types of physiques (or, as Sheldon called them, somatotypes) are associated with particular types of personality (or as Sheldon called them, constitutions). 

Although the three dimensions by which Sheldon classified physiques – endomorphy, ectomorphy and mesomorphy – have proven useful as dimensions for classifying body-type, Sheldon’s attempt to equate these ideal types with personality is now widely dismissed as pseudoscience. 

Dutton, however, argues that physique is indeed associated with character, and moreover provides what was conspicuously lacking in Sheldon’s own exposition – namely, compelling theoretical reasons for the postulated associations. 

Yet, interestingly, the associations suggested by Dutton do indeed to some extent mirror those first posited by William Sheldon over half a century previously.

Whereas, elsewhere, Dutton draws on previously published research, here, Dutton’s reasoning is, to my knowledge, largely original to himself, though, as I show below, psychometric studies do support the existence of at least some of the associations he postulates. 

This part of Dutton’s book represents, in my view, the most important and convincing original contribution in the book. 

Endomorphy/Obesity, Self-Control and Conscientiousness

First, he discusses what Sheldon called endomorphy – namely, a body-type that can roughly be equated with what we would today call fatness or obesity.

Dutton points out that, at least in contemporary Western societies, where there is a superabundance of food, and starvation is all but unknown even among the relatively less well-off, obesity tends to correlate with personality. 

In short, people who lack self-control and willpower will likely also lack the self-control and willpower to diet effectively. 

Endomorphy (i.e. obesity) is therefore a reliable correlate of the personality factor known to psychometricians as conscientiousness (p31-2).  

Although Dutton himself cites no data or published studies in support of this conclusion, nevertheless several published studies confirm an association between BMI and conscientiousness (Bagenjuk et al 2019; Jokela et al 2012; Sutin et al 2011). 

Obesity is also, Dutton claims, inversely correlated with intelligence.

This is, first, because IQ is, according to Dutton, correlated with time-preference – i.e. a person’s willingness to defer gratification by making sacrifices in the short-term in return for a greater long-term pay-off. 

Therefore, low-IQ people, Dutton claims: 

“Are less able to forego the immediate pleasure of ice cream for the future positive of not being overweight and diabetic” (p31). 

However, far from high intelligence being associated with greater self-control, some evidence, not discussed by Dutton, suggests that intelligence is actually inversely correlated with conscientiousness, such that more intelligent people are, on average, less conscientious (e.g. Rammstedt et al 2016; cf. Murray et al 2014). 

This would suggest that low IQ people might, all else being equal, actually be more successful at dieting than their high IQ counterparts. 

However, according to Dutton, there is a second reason that low-IQ people are more likely to be fat, namely: 

“They are likely to understand less about healthy eating and simply possess less knowledge of what constitutes healthy food or a reasonable portion” (p31). 

This may be true. 

However, while there are some borderline cases (e.g. foods misleadingly marketed by advertisers as healthy), I suspect that virtually everyone knows that, say, eating lots of cake is unhealthy. Yet resisting the temptation to eat another slice is often easier said than done. 

I therefore suspect that conscientiousness is a better predictor of weight than is intelligence.

Interestingly, a few studies have investigated the association between IQ and the prevalence of obesity. However, curiously, most seem to be premised on the notion that, rather than low intelligence causing obesity, obesity somehow contributes to cognitive decline, especially in children (e.g. Martin et al 2015) and the elderly (e.g. Elias et al 2012). 

In fact, however, longitudinal studies confirm that, as contended by Dutton, it is low IQ that causes obesity rather than the other way around (Kanazawa 2014). 

At any rate, people lacking in intelligence and self-control also likely lack the intelligence and self-discipline to excel in school and gain promotions into high-income jobs, since both earnings and socioeconomic status correlate with both intelligence and conscientiousness.[6]

One can also, then, make better-than-chance assessments of a person’s socioeconomic status and income from their physique. 

In other words, whereas in the past (and perhaps still in the developing world) the poor were more likely to starve or suffer from malnutrition and only the rich could afford to be fat, in the affluent west today it is the relatively less well-off who are, if anything, more likely to suffer from obesity and diseases of affluence such as diabetes and heart disease.

This, then, all rather confirms the contemporary stereotype of the fat, lazy slob. 

However, Dutton also provides a let-off clause for offended fatties. Obesity is associated, not only with conscientiousness, but also with the factor of personality known as extraversion. This refers to the tendency to be outgoing, friendly and talkative, traits that are generally viewed positively. 

Several studies, again not cited by Dutton, do indeed suggest an association between extraversion and BMI (Bagenjuk et al 2019; Sutin et al 2011). Dutton, for his part, explains it this way: 

“Extraverts simply enjoy everything positive more, and this includes tasty (and thus unhealthy) food” (p32). 

Dutton therefore provides theoretical support to the familiar stereotype of, not only the fat, lazy slob, but also the jolly and gregarious fat man, and the ‘bubbly’ fat woman.[7]

Mesomorphy/Muscularity and Testosterone

Mesomorphs were another of Sheldon’s supposed body-types. Mesomorphy can roughly be equated with muscularity. 

Here, Dutton concludes that: 

“Sheldon’s theory… actually fits quite well with what we know about testosterone” (p33). 

Thus, mesomorphy is associated with muscularity, and muscularity with testosterone

Yet testosterone, as well as masculinizing the body, also masculinizes brain and behaviour. 

This is why anabolic steroids, not only increase muscularity, but are also said to be associated with roid rage.[8]

Testosterone, at least during development, may also be associated, not only with muscularity, but also with certain aspects of facial morphology, such as a wide and well-defined jawline, prominent brow ridges, deep-set eyes and facial width.  

I therefore wonder if this might go some way towards explaining the finding, not mentioned by Dutton (but clearly relevant to his subject-matter), that observers are apparently able to identify convicted criminals at better than chance levels from a facial photograph alone (Valla, Ceci & Williams 2011).[9]

Testosterone and Autism 

Further exploring the effects of testosterone on both psychology and morphology, Dutton also proposes: 

“We would also expect the more masculine-looking person to have higher levels of autism traits” (p34). 

This idea seems to be based on Simon Baron-Cohen’s extreme male brain theory of autism

However, the relationship between, on the one hand, levels of androgens such as testosterone and, on the other, degree of masculinization in respect of a given sexually-dimorphic trait may be neither one-dimensional nor linear

Thus, interestingly, Kingsley Browne in his excellent Biology at Work: Rethinking Sexual Equality (which I have reviewed here) reports: 

“The relationship between spatial ability and [circulating] testosterone levels is described by an inverted U-shaped curve… Spatial ability is lowest in those with the very lowest and the very highest testosterone levels, with the optimal testosterone level lying in the lower end of the normal male range. Thus, males with testosterone in the low-normal range have the highest spatial ability” (Biology at Work: p115; Gouchie & Kimura 1991). 

Similarly, leading intelligence researcher Arthur Jensen reports, in The g Factor: The Science of Mental Ability, that:

“Within each sex there is a nonlinear (inverted-U) relationship between an individual’s position on the estrogen/testosterone continuum and the individual’s level of spatial ability, with the optimal level of testosterone above the female mean and below the male mean. Generally, females with markedly above-average testosterone levels (for females) and males with below-average levels of testosterone (for males) tend to have higher levels of spatial ability, relative to the average spatial ability for their own sex” (The g Factor: p534).
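The inverted-U relationship that Browne and Jensen describe can be sketched as a toy quadratic model. Everything here is an illustrative assumption, not fitted to any data: testosterone is put on an arbitrary 0–100 continuum and the optimum is simply placed in the low-normal male range.

```python
import numpy as np

# Toy quadratic (inverted-U) model of the testosterone/spatial-ability
# relationship. Units, optimum, and curvature are arbitrary assumptions.
T_OPT = 60.0  # assumed optimum, placed in the low-normal male range

def spatial_ability(t):
    """Ability peaks at T_OPT and falls off symmetrically on either side."""
    return 100.0 - 0.02 * (t - T_OPT) ** 2

levels = np.array([10.0, T_OPT, 100.0])  # very low, optimal, very high
scores = spatial_ability(levels)
# Scores are highest at the optimum and lower at both extremes,
# which is the whole point of the inverted-U claim: more testosterone
# does not mean monotonically more spatial ability.
```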

In contrast, however, Dutton claims: 

“There is evidence that testosterone level in healthy males is positively associated with spatial ability” (p36). 

However, the only study he cites in support of this assertion was, according to its methodology section and indeed its very title, conducted among “older males”, aged between 60 and 75 (Janowsky et al 1994). 

Therefore, since testosterone levels are known to decline with age, this finding is not necessarily inconsistent with the relationship between testosterone and spatial ability described by Browne (see Moffat & Hampson 1996). 

This, of course, accords with the anecdotal observation that math nerds and autistic males are rarely athletic, square-jawed ‘alpha male’-types.[10]

Testosterone and Baldness 

Another trait associated with testosterone levels, according to Dutton, is male pattern baldness. Thus, Dutton contends: 

“Baldness is yet another reflection of high testosterone… [B]aldness in males, known as androgenic alopecia, is positively associated with levels of testosterone” (p55). 

As evidence, he cites both a review (Batrinos 2014) and some indirect anecdotal evidence: 

“It is widely known among doctors – I base this on my own discussions with doctors – that males who come to them in their 60s complaining of impotence tend to have full heads of hair or only very limited hair loss” (p55).[11]

If male pattern baldness is indeed associated with testosterone levels then this is somewhat surprising, because our perceptions regarding men suffering from male pattern baldness seem to be that they are, if anything, less masculine than other males. 

Thus, Nancy Etcoff, in Survival of the Prettiest (which I have reviewed here), reports that one study found that: 

“Both sexes assumed that balding men were weaker and found them less attractive” (Survival of the Prettiest: p121; Cash 1990).[12]

Yet, if the main message of Dutton’s book is that individual differences in morphology and appearance do indeed predict individual differences in behaviour, psychology and personality, then a second implicit theme seems also to be that our intuitions and stereotypes regarding the association between appearance and behaviour are often correct.  

True, it is likely that few people notice, say, digit ratios, or make judgements about people based on them either consciously or unconsciously. However, elsewhere, Dutton cites studies showing that subjects are able to estimate the IQ of male students at better than chance levels simply by viewing a photograph of their faces (Kleisner et al 2014; discussed at p50), and to distinguish homosexual from heterosexual men at better than chance levels from a facial photograph alone (Kosinski & Wang 2017; discussed at p66). 

Yet, according to Etcoff and Cash, perceptions regarding the personalities of balding men are almost the opposite of what would be expected if male pattern balding were indeed a reflection of high testosterone levels, as suggested by Dutton. 

In fact, however, although a certain level of testosterone is indeed a necessary condition for male pattern hair loss (this is why neither women nor castrated eunuchs experience the condition, though their hair does thin with age), this seems to be a threshold effect: among non-castrated males with testosterone levels within the normal range, circulating testosterone does not seem to significantly predict either the occurrence, or the severity, of male pattern baldness.

Thus, Healthline reports: 

“It’s not the amount of testosterone or DHT that causes baldness; it’s the sensitivity of your hair follicles. That sensitivity is determined by genetics. The AR gene makes the receptor on hair follicles that interact with testosterone and DHT. If your receptors are particularly sensitive, they are more easily triggered by even small amounts of DHT, and hair loss occurs more easily as a result.” 

In other words, male pattern baldness is yet another trait that is indeed related to testosterone, but does not evince a simple linear relationship

2D:4D Ratio

Another presumed correlate of prenatal androgens is 2D:4D ratio (aka digit ratio). 

Over the last two decades, a huge body of research has reported correlations between 2D:4D ratio and a variety of psychiatric conditions and behavioural propensities, including autism (Manning et al 2001), ADHD (Martel et al 2008; Buru 2020; Işık 2020), psychopathy (Blanchard & Lyons 2010), aggressive behaviours (Bailey & Hurd 2005; Benderlioglu & Nelson 2005), sports and athletic performance (Manning & Taylor 2001; Hönekopp & Urban 2010; Griffin et al 2012; Keshavarz et al 2017), criminal behaviour (Ellis & Hoskin 2015; Hoskin & Ellis 2014) and homosexuality (Williams et al 2000; Lippa 2003; Kangassalo et al 2011; Li et al 2016; Xu & Zheng 2016). 
 
Unfortunately, and slightly embarrassingly, Dutton apparently misunderstands what 2D:4D ratio actually measures. Thus, he writes: 

“If the profile of someone’s fingers is smoother, more like a shovel, then it implies high testosterone. If, by contrast, the little finger is significantly smaller than the middle finger, which is highly prevalent among women, then it implies lower testosterone exposure” (p69). 

Actually, however, both the little finger and middle finger are irrelevant to 2D:4D ratio.

Indeed, for virtually everyone, “the little finger is significantly smaller than the middle finger”. This is, of course, why the former is called “the little finger”.

Actually, 2D:4D ratio concerns the ratio between the index finger and the ring finger – i.e. the two fingers on either side of the middle finger. 

These fingers are, of course, the second and fourth digit, respectively, if you begin counting from your thumb outwards, hence the name ‘2D:4D ratio’. 
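The computation itself is trivial. Here is a minimal sketch, with hypothetical finger lengths; the group means often reported in this literature (roughly 0.95 for men, closer to 1.00 for women) are population averages, not diagnostic cut-offs.

```python
def digit_ratio(index_mm, ring_mm):
    """2D:4D ratio: length of the index finger (2nd digit)
    divided by that of the ring finger (4th digit)."""
    return index_mm / ring_mm

# Hypothetical measurements in millimetres, for illustration only:
lower_ratio = digit_ratio(72.0, 75.0)   # ring finger relatively longer
higher_ratio = digit_ratio(73.5, 73.5)  # fingers equal in length
```

Note that neither the little finger nor the middle finger enters the calculation at any point, which is precisely Dutton’s error.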

Given that he evidently misnumbers his digits, I can only conclude that Dutton began counting at the correct end, but missed out his thumb. 

At any rate, the evidence for any association between digit ratios and measures of behavior and psychology is, at best, mixed

Skimming the literature on the subject, one finds many conflicting findings – for example, sometimes significant effects are found only for one sex, while other studies find the same correlations limited to the other sex (e.g. Bailey & Hurd 2005; Benderlioglu & Nelson 2005; see also Hilgard et al 2019), and also many failures to replicate earlier reported associations (e.g. Voracek et al 2011; Fossen et al 2022; Kyselicová et al 2021). 

Likewise, meta-analyses of published studies have generally found, at best, only small and inconsistent associations (e.g. Voracek et al 2011; Pratt et al 2016). Thus, 2D:4D ratio has been a major victim of the recent so-called replication crisis in psychology.

Indeed, it is not entirely clear that 2D:4D ratio represents a useful measure of prenatal androgens in the first place (Hollier et al 2015), and even the universality of the sex difference that originally led researchers to posit such a link has been called into question (Apicella 2015; Lolli et al 2017).  

In short, the usefulness of digit ratio as a measure of exposure to prenatal androgens, let alone an important correlate of behaviour, psychology, personality or athletic performance, is questionable. 

Testosterone and Height 

The examples of male pattern baldness and spatial ability demonstrate that the effect of testosterone on some sexually-dimorphic traits is not necessarily always linear. Instead, it can be quite complex. 

Therefore, just because men are, on average, higher than women for a given trait, which is ultimately a consequence of androgens such as testosterone, this does not mean that men with relatively higher levels of testosterone are necessarily higher for that trait than men with relatively lower levels of testosterone. 

Indeed, Dutton himself provides another example of such a trait – namely height

Thus, although men, in general, are taller than women, nevertheless, according to Dutton: 

“Men who are high in testosterone… tend to be of shorter stature than those who are low in it. High levels of testosterone at a relatively early age have been shown to reduce stature” (p34).[13]

In evolutionary terms, Dutton explains this in terms of the controversial Life History Theory of Philippe Rushton, of whom Dutton seems to be, with some reservations, something of a disciple (p22-4). 

If true, this might explain why eunuchs who were castrated before entering puberty are said to grow taller, on average, than other men. 

Further corroboration is provided by the fact that, in the Netherlands, whose population is among the tallest in the world, excessively tall boys are sometimes treated with testosterone in order to prevent them growing any taller (de Waal et al 1995).[14]

This is said to occur because additional testosterone speeds up puberty, and produces a growth spurt, but it also brings this to an end when height stabilizes and we cease to grow any taller. This is discussed in Carole Hooven’s book Testosterone: The Story of the Hormone that Dominates and Divides Us.

‘Short Man Syndrome’?

Interestingly, although Dutton does not explore the idea, the association between testosterone levels and height among males may even explain the supposed phenomenon of short man syndrome (also referred to, by reference to the supposed diminutive stature of the French emperor Napoleon, as a Napoleon complex), whereby short men are said to be especially aggressive and domineering. 

This is something that is usually attributed to a psychological need among shorter men to compensate for their diminutive stature. However, if Dutton is right, then the supposed aggressive predilections of short men might simply reflect differences between shorter and taller men in testosterone levels during adolescence. 

Actually, however, so-called short man syndrome is likely a myth – and yet another way society in general demeans and belittles short men. Certainly, it is very much a folk-psychiatric diagnosis with no real evidential basis beyond the anecdotal.  

Indeed, far from short men being, on average, more aggressive and domineering than taller men, one study commissioned by the BBC actually found that short men were less likely to respond aggressively when provoked

Given that tall men have an advantage in combat, it would actually make sense for relatively shorter men to avoid potentially violent confrontations with other men where possible, since, all else being equal, they would be more likely to come off worse in any such altercation.  

Consistent with this, some studies have found a link between increased stature and anti-social personality disorder, which is associated with aggressive behaviours (e.g. Ishikawa et al 2001; Salas-Wright & Vaughn 2016), while another study found a positive association between height and dominance, especially among males (Malamed 1992).[15]

Height and Intelligence 

Height is also, Dutton reports, correlated with intelligence, with taller people having, on average, slightly higher IQs than shorter people.  

The association between height and IQ is, like most if not all of those discussed by Dutton in this book, modest in magnitude or effect size.[16]

However, unlike many other associations reported by Dutton, many of which are based on just a single published study, or sometimes on purely theoretical arguments, the association between height and intelligence is robust and well-established.[17] Indeed, there is even a Wikipedia page on the topic.

Dutton’s explanation for this phenomenon is that intelligence and height “have been sexually selected for as a kind of bundle” (p46). 

“Females have sexually selected for intelligent men (because intelligence predicts social status and they have been specifically selected for this) but they have also selected for taller men, realising that taller men will be better able to protect them. This predilection for tall but intelligent men has led to the two characteristics being associated with one another” (p46). 

Actually, as I see it, this explanation would only work, or at least work much better, if both men and women had a preference for partners who are both tall and intelligent

This is indeed Arthur Jensen’s explanation for the association between height and IQ

“[It] probably represents a simple genetic correlation resulting from cross-assortative mating for the two traits. Both height and ‘intelligence’ are highly valued in western culture. There is also evidence for cross-assortative mating for height and IQ. There is some trade-off between them in mate selection. When short and tall women are matched on IQ, educational level and social class of origin, for example, it is found that taller women tend to marry men of higher socioeconomic status… than do shorter women” (The G Factor: The Science of Mental Ability: p146). 

An alternative explanation might be that both height and intelligence reflect developmental stability and a lack of deleterious mutations. On this view, both height and intelligence might represent indices of genetic quality and lack of mutational load. 

However, this alternative explanation is inconsistent with the finding that there is no ‘within-family’ correlation between height and intelligence. In other words, when one looks at, say, full-siblings from the same family, there is no tendency for the taller sibling to have a higher IQ (Mackintosh, IQ and Human Intelligence: p6). 

This suggests that the genes that cause greater height are different from those that cause greater intelligence, but that they have come to be found in the same individuals through assortative mating, as suggested by Jensen and Dutton.[18]
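The logic of the within-family test can be illustrated with a toy simulation. All parameters below are made up for illustration: a shared family-level factor (standing in for assortative mating) couples the family means of height and IQ, while the sibling-level deviations for the two traits are drawn independently.

```python
import numpy as np

rng = np.random.default_rng(0)
n_families = 2000

# Family-level factor couples the two traits across families
# (the assortative-mating story); sibling deviations do not.
shared = rng.normal(0.0, 1.0, n_families)
mean_height = shared + rng.normal(0.0, 0.5, n_families)
mean_iq = shared + rng.normal(0.0, 0.5, n_families)

# Two siblings per family; within-family deviations are
# independent across the two traits.
height = mean_height[:, None] + rng.normal(0.0, 1.0, (n_families, 2))
iq = mean_iq[:, None] + rng.normal(0.0, 1.0, (n_families, 2))

# Across all individuals, height and IQ are positively correlated...
pop_r = np.corrcoef(height.ravel(), iq.ravel())[0, 1]

# ...yet within families, the taller sibling is no smarter on average:
within_r = np.corrcoef(height[:, 0] - height[:, 1],
                       iq[:, 0] - iq[:, 1])[0, 1]
```

In this toy world, `pop_r` comes out clearly positive while `within_r` hovers near zero, reproducing the pattern Mackintosh describes: a population-level correlation with no within-family correlation is exactly what cross-assortative mating, rather than a shared biological cause, would predict.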

Height and Earnings 

Although not discussed by Dutton, there is also a correlation between height and earnings. Thus, economist Steven Landsburg reports that: 

“In general, an extra inch of height adds roughly an extra $1,000 a year in wages, after controlling for education and experience. That makes height as important as race or gender as a determinant of wages” (More Sex is Safer Sex: p53). 
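Taken at face value, Landsburg’s rule of thumb yields a simple back-of-envelope calculation. Only the $1,000-per-inch slope comes from the quoted passage; the baseline height (69 inches, roughly the US male average) is my own assumption for illustration.

```python
def expected_wage_premium(height_in, baseline_in=69.0, premium_per_inch=1000.0):
    """Expected annual wage premium (in dollars) relative to a
    baseline height, per Landsburg's ~$1,000/inch rule of thumb."""
    return (height_in - baseline_in) * premium_per_inch

premium_6ft = expected_wage_premium(72.0)   # three inches above baseline
penalty_short = expected_wage_premium(67.0) # two inches below baseline
```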

This correlation could be mediated by the association between height and intelligence, since intelligence is known to be correlated with earnings (Case & Paxson 2009). 

However, one interesting study found that it was actually height during adolescence that accounted for the association, and that, once this was controlled for, adult height had little or no effect on earnings (Persico, Postlewaite & Silverman 2004). 

“Controlling for teen height essentially eliminates the effect of adult height on wages for white males. The teen height premium is not explained by differences in resources or endowments” (Persico, Postlewaite & Silverman 2004). 

Thus, Landsburg reports: 

“Tall men who were short in high school earn like short men, while short men who were tall (for their age) in high school earn like tall men” (More Sex is Safer Sex: p54). 

This suggests that it is height during a key formative period (a ‘critical period’) in adolescence that increases self-confidence, and that this self-confidence persists into adulthood and ultimately contributes to the higher adult earnings of men who were relatively taller as adolescents. 

On the other hand, however, Case and Paxson report that, in addition to being associated with adult height, intelligence is also associated with an earlier growth spurt. This leads them to conclude that adolescent height might be a better marker for cognitive ability than adult height, thereby providing an alternative explanation for Persico et al’s finding (Case & Paxson 2009). 

Head Size and Intelligence 

Dutton also discusses the finding that there is an association between intelligence and head-size. This is indeed true and is a topic I have written about elsewhere. 

However, Dutton’s illustration of this phenomenon seems to me rather unhelpful. Thus, he writes: 

“Intelligent people have big heads in comparison to the size of their bodies. This association is obvious at the extremes. People who suffer from a variety of conditions that reduce their intelligence, including fetal alcohol syndrome or the zika virus, have noticeably very small heads” (p56). 

However, to me, this seems to be the wrong way to think about it. 

While it is indeed true that microcephaly (i.e. a smaller than usual head size) is usually associated with lower than normal intelligence levels, the reverse is not true. Thus, although head-size is indeed correlated with IQ, people suffering from macrocephaly (i.e. abnormally large heads) do not generally have exceptionally high IQs. On the contrary, macrocephaly is often associated with impaired cognitive function, probably because, like microcephaly, it reflects a malfunction in brain development.

Neither do people afflicted with forms of disproportionate dwarfism, such as achondroplasia, have higher than average IQs even though their heads are larger relative to their body-size than are those of ordinary-sized people.  

In short, rather than being, as Dutton puts it “obvious at the extremes”, the association between head-size and intelligence is obvious at only one of the extremes and not at all apparent at the other extreme. 

In general, species, individuals and races with larger brains have higher intelligence, because brain tissue is highly metabolically expensive and is therefore unlikely to evolve without some compensating advantage (i.e. higher intelligence). 

However, conditions such as achondroplasia and macrocephaly did not evolve through positive selection. On the contrary, they are pathological and maladaptive. Therefore, in these cases, the additional brain tissue may indeed be wasted and hence confer no cognitive advantage. 

Mate Choice 

In evolutionary psychology, there is a large literature on human mate-choice and beauty/attractiveness standards. Much of this depends on the assumption that the physical characteristics favoured as mate-choice criteria represent fitness-indicators, or otherwise correlate with traits desirable in a mate. 

For example, a low waist-to-hip ratio (or ‘WHR’) is said to be perceived as attractive among females because it is supposedly a correlate of both health and fertility. Similarly, low levels of fluctuating asymmetry are thought to be perceived as attractive by members of the opposite sex in both humans and other animals, supposedly because it is indicative of developmental stability and hence indirectly of genetic quality. 

Dutton reviews some of this literature. However, an introductory textbook on evolutionary psychology (e.g. David Buss’s Evolutionary Psychology: The New Science of the Mind), or on the evolutionary psychology of mating behaviour in particular (e.g. David Buss’s The Evolution of Desire), would provide a more comprehensive review. 

Also, some of Dutton’s speculations are rather unconvincing. He claims: 

“Hipsters with their Old Testament beards are showcasing their genetic quality… Beards are a clear advertisement of male health and status. They are a breeding ground for parasites” (p61). 

However, if this is so, then it merely raises the question of why beards have come back into fashion only very recently. Indeed, until the last few years, beards had not, to my knowledge, been in fashion for men in the west since the 1970s.[19]

Moreover, it is not at all clear that beards do increase attractiveness (e.g. Dixson & Vasey 2012). Rather, it seems that beards increase perceptions of male age, dominance, social status and aggressiveness, but not their attractiveness.[20]

This suggests that beards are more likely to have evolved through intrasexual selection (i.e. dominance competition or fighting between males) than by intersexual selection (i.e. female choice). 

This is actually consistent with a recently-emerging consensus among evolutionary psychologists that human male physiology (and behaviour) has been shaped more by intrasexual selection than by intersexual selection (Puts 2010; Kordsmeyer et al 2018). 

Consistent with this, Dutton notes: 

“[Beards] have been found to make men look more aggressive, of higher status, and older… in a context in which females tend to be attracted to slightly older men, with age tending to be associated with status in men” (p61). 

However, this raises the question as to why, today, most men prefer to look younger.[21]

Are Feminine Faces More Prone to Infidelity?

Another interesting idea discussed by Dutton is that mate-choice criteria may vary depending on the sort of relationship sought. For example, he suggests: 

“A highly feminine face is attractive, in particular in terms of a short term relationship… [where] a healthy and fertile partner is all that is needed” (p43). 

In contrast, however, he concludes that for a long-term relationship a less feminine face may be desirable, since he contends “being extremely feminine in terms of secondary sexual characteristics is associated with an r-strategy” and hence supposedly with a greater risk of infidelity (p43).[22]

However, Dutton presents no evidence in favour of the claim that less feminine women are less prone to sexual infidelity. 

Actually, on theoretical grounds, I would contend that the precise opposite relationship is more likely to exist. 

After all, less feminine and more masculine females, having been subjected to higher levels of androgens, would presumably also have a more male-typical sexuality, including a high sex drive and a preference for promiscuous sex with multiple partners. 

Indeed, there is data in support of this conclusion from studies of women afflicted with a rare condition, congenital adrenal hyperplasia, which results in their having been exposed, both in the womb and sometimes in later life, to abnormally high levels of masculinizing androgens such as testosterone. As a consequence, such women exhibit a more male-typical psychology and sexuality than other females. 

Thus, Donald Symons in his seminal The Evolution of Human Sexuality (which I have reviewed here) reports:  

“There is evidence that certain aspects of adult male sexuality result from the effects of prenatal and postpubertal androgens: before the discovery of cortisone therapy women with adrenogenital syndrome [AGS] were exposed to abnormally high levels of androgens throughout their lives, and clinical data on late-treated AGS women indicate clear-cut tendencies toward a male pattern of sexuality” (The Evolution of Human Sexuality: p290). 

Thus, citing the work of, among others, the much-demonized John Money, Symons reports that women suffering from adrenogenital syndrome: 

“Tended to exhibit clitoral hypersensitivity and an autonomous, initiatory, appetitive sexuality which investigators have characterized as evidencing a high sex drive or libido” (The Evolution of Human Sexuality: p290). 

This suggests that females with a relatively more masculine appearance, having been subject, on average, to higher levels of masculinizing androgens, will also evidence a more male-typical sexuality, including greater promiscuity and hence presumably a greater proclivity towards infidelity, rather than a lesser tendency as theorized by Dutton. 

Good Looks, Politics and Religion 

Dutton also cites studies showing that conservative politicians, and voters, are more attractive than liberals (Peterson & Palmer 2017; Berggren et al 2017). 

By way of explanation for these findings, Dutton speculates that in ancestral environments: 

“Populations… so low in ethnocentrism as to espouse Multiculturalism and reject religion would simply have died out… Therefore… the espousal of leftist dogmas would partly reflect mutant genes, just as the espousal of atheism does. This elevated mutational load… would be reflected in their bodies as well as their brains” (p76). 

However, this seems unlikely, since atheism and possibly socially liberal political views as well have usually been associated with higher intelligence, which is probably a marker for good genes.[23]

Moreover, although mutations might result in suboptimal levels of both ethnocentrism and religiosity, these suboptimal levels would presumably also manifest in the form of excessive levels of religiosity and ethnocentrism. 

This would suggest that religious fundamentalists and extreme xenophobes and racial supremacists would be just as mutated, and hence just as ugly, as atheists and extreme leftists supposedly are. 

Yet Dutton instead insists that religious fundamentalists, especially Mormons, tend to be highly attractive (Dutton et al 2017). However, he and his co-authors cite little evidence for this claim beyond the merely anecdotal.[24]

The authors of the original paper, Dutton reports, themselves suggested an alternative explanation for the greater attractiveness of conservative politicians, namely: 

“Beautiful people earn more, which makes them less inclined to support redistribution” (p75). 

This, to me, seems both simpler and more plausible. However, in response, Dutton observes: 

“There is far more to being… right-wing… than not supporting redistribution” (p75). 

Here, he is right. The correlation between socioeconomic status/income and political ideology and voting is actually quite modest (see What’s Your Bias). 

However, earnings do still correlate with voting patterns, and this correlation is perhaps enough to explain the modest association between physical attractiveness and political opinions. 

Nevertheless, other factors may also play a role. For example, a couple of studies have found, among men, an association between grip strength and support for policies that benefit oneself economically (Peterson et al 2013; Peterson & Laustsen 2018). 

Grip strength is associated with muscularity, which is generally considered attractive in males. 

Since leading politicians mostly come from middle-class, well-to-do, if not elite, backgrounds, the policies that benefit them economically will tend to be conservative ones (i.e. lower taxes rather than redistribution). This would suggest that conservative male politicians are likely to be, on average, stronger, and hence more attractive, than liberal or leftist politicians.

Indeed, Noah Carl has even purported to observe, and presents evidence suggesting, a general, and widening, masculinity gap between the political left and right, and some studies have found evidence that more physically formidable males have more conservative and less egalitarian political views (Price et al 2017; Kerry & Murray 2018). 

Since masculinity in general (e.g. not just muscularity, but also square jaws etc.) is associated with attractiveness in males (see discussion here), this might explain at least part of the association between political views and physical attractiveness. 

On the other hand, among females, an opposite process may be at work. 

Among women, leftist politics seem to be strongly associated with feminist views. 

Since feminists reject traditional female sex roles, it is likely they would be relatively less ‘feminine’ than other women, perhaps having been, on average, subjected to relatively higher levels of androgens in the womb, masculinizing both their behaviour and appearance. 

Yet it is relatively more feminine women, with feminine, sexually-dimorphic traits such as large breasts, low waist to hip ratios, and neotenous facial features, who are perceived by men as more attractive.

It is therefore unsurprising that feminist women in particular tend to be less attractive than women who embrace traditional sex roles.[25]

Developmental Disorders and MPAs

One study cited by Dutton found that observers are able to estimate a male’s IQ from a facial photograph alone at better than chance level (Kleisner 2014). To explain this, Dutton speculates: 

“Having a small nose is associated with Downs [sic] Syndrome and Foetal Alcohol Syndrome and this would have contributed to our assuming that those with smaller noses were less intelligent” (p51). 

Thus, he explains: 

“[Whereas] Downs [sic] Syndrome and Foetal Alcohol Syndrome are major disruptions of developmental pathways and they lead to very low intelligence and a very small nose… even minor disruptions would lead to slightly reduced intelligence and a slightly smaller nose” (p51-2). 

Indeed, foetal alcohol syndrome itself seems to exist on a continuum and is hence a matter of degree. 
 
Indeed, going further than Dutton, I would agree with publisher/blogger Chip Smith, who observes in his blog: 

“Dutton only mention[s] trisomy 21 (Down syndrome) in passing, but I think that’s a pretty solid place to start if you want to establish the baseline premise that at least some mental traits can be accurately inferred from external appearances.” 

Thus, the specific ‘look’ associated with Down Syndrome is a useful counterexample to cite to anyone who dismisses, a priori, the idea of physiognomy and the existence of any association between looks and ability or behaviour. 

Indeed, other developmental disorders and chromosomal abnormalities, not mentioned by Dutton, are also associated with a specific ‘look’ – for example, Williams Syndrome, the distinctive appearance, and personality, associated with which has even been posited as the basis for the elf figure in folklore.[26]

Less obviously, it has even been suggested that there are also subtle facial features that distinguish autistic children from neurotypical children, and which also distinguish boys with relatively more severe forms of autism from those who are likely to be diagnosed as higher functioning (Aldridge et al 2011; Ozgen et al 2011). 

However, Dutton neglects to mention that there is in fact a sizable literature regarding the association between so-called minor physical anomalies (aka MPAs) and several psychiatric conditions including autism (Ozgen et al 2008), schizophrenia (Weinberg et al 2007; Xu et al 2011) and paedophilia (Dyshniku et al 2015). 

MPAs have also been identified in several studies as a correlate of criminal behaviour (Kandel et al 1989; see also Criminology: A Global Perspective: p70-1). 

Yet these MPAs are often the very same traits – the single transverse palmar crease; sandal toe gap; fissured tongue – that are also used to diagnose Down Syndrome in neonates.

The Morality of Making Judgements

But is it not superficial to judge a book by its cover? And, likewise, by extension, isn’t it morally wrong to judge people by their appearance? 

Indeed, is it not only morally wrong to judge people by their appearance but also, worse still, racist? 

After all, skin colour is obviously a part of our appearance, and did not our Lord and Saviour, Dr Martin Luther King, himself dream of a world in which people would “not be judged by the color of their skin but by the content of their character”? 

Here, Dutton turns from science to morality, and convincingly contends that, at least in certain circumstances, it is indeed morally acceptable to judge people by appearances. 

It is true, he acknowledges, that most of the correlations that he has uncovered or reported are modest in magnitude. However, he is at pains to emphasize, the same is true of almost all correlations that are found throughout psychology and the social sciences. Thus, he exhorts: 

“Let us be consistent. It is very common in psychology to find a correlation between, for example, a certain behaviour and accidents (or health) of 0.15 or 0.2 and thus argue that action should be taken based on the results. These sizes are considered large enough to be meaningful and even for policy to be changed” (p82). 
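To give a concrete sense of what a correlation of this size does and does not buy you, one can compute, under a standard bivariate-normal assumption (my own illustration, not a calculation from Dutton’s book), the probability that someone above the median on one trait is also above the median on the other:

```python
import math


def p_above_median_given_above_median(r):
    """For bivariate-normal traits with correlation r, the chance that a
    person above the median on one trait is also above the median on the
    other. Uses the standard orthant probability
    P(X > 0, Y > 0) = 1/4 + arcsin(r) / (2 * pi)."""
    p_both = 0.25 + math.asin(r) / (2 * math.pi)
    return p_both / 0.5  # divide by P(X > 0) = 1/2


print(p_above_median_given_above_median(0.0))  # no cue at all: a coin flip
print(p_above_median_given_above_median(0.2))  # a 'meaningful' correlation
print(p_above_median_given_above_median(1.0))  # perfect correlation
```

At r = 0.2 the figure is only about 56 per cent: better than chance, but hardly a basis for confident judgements about individuals, a point worth bearing in mind throughout this discussion.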

However, Dutton also includes a few sensible precautions and caveats to be borne in mind by those readers who might be tempted to apply some of his ideas overenthusiastically. 

First, he warns against making inferences regarding “people from a racial group with which you have relatively limited contact”, where the same cues used with respect to one’s own group may be inapplicable, or must be applied relative to the other group’s averages, something we may not be adept at doing (p82-3). 

Thus, to give an obvious example, among Caucasians, epicanthic folds (i.e. so-called ‘slanted’ eyes) may be indicative of a developmental disorder such as Down syndrome. However, among East Asians, Southeast Asians and some other racial groups (notably the Khoisan of Southern Africa), such folds are entirely normal and not indicative of any pathology. 

He also cautions regarding people’s ability to disguise their appearance, both by makeup and by plastic surgery. However, he also notes that the tendency to wear excessive makeup, or to undergo cosmetic surgery, is itself indicative of a certain personality type, and indeed often, Dutton asserts, of psychopathology (p84-5). 

Using physical appearance to make assessments is particularly useful, Dutton observes, “in extreme situations when a quick decision must be made” (p80). 

Thus, to take a deliberately extreme example, if we see someone stabbing another person, and this first person then approaches us in an aggressive manner brandishing the knife, then, if we take evasive action, we are, strictly speaking, judging by appearances. The person appears as if they are going to stab us, so we assume they are and act accordingly. However, no one would judge us morally wrong for so doing. 

However, in circumstances where we have access to greater individualizing information, the importance of appearances becomes correspondingly smaller. Here, a Bayesian approach is useful. 
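The Bayesian point can be made concrete with a toy calculation (all the numbers below are hypothetical illustrations of my own, not estimates from Dutton or from any study): a weakly diagnostic cue from appearance shifts a prior only slightly, and is largely swamped once stronger individualizing evidence arrives.

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a binary hypothesis via Bayes' theorem."""
    num = prior * p_evidence_if_true
    return num / (num + (1 - prior) * p_evidence_if_false)


prior = 0.5  # hypothetical base rate of, say, high conscientiousness

# A weak appearance cue, only slightly more common among the conscientious:
after_cue = update(prior, 0.55, 0.45)

# Strong individualizing evidence, e.g. a CV full of completed projects:
after_cv = update(after_cue, 0.90, 0.20)

print(after_cue)  # the cue moves the needle only a little
print(after_cv)   # the individualizing evidence dominates the final judgement
```

The appearance cue nudges the probability from 0.50 to 0.55; the individualizing evidence then carries it most of the rest of the way, which is exactly why appearances matter least where better information is available.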

In 2013, evolutionary psychologist Geoffrey Miller caused predictable outrage and hysteria when he tweeted: 

“Dear obese PhD applicants: if you didn’t have the willpower to stop eating carbs, you won’t have the willpower to do a dissertation #truth.” 

According to Dutton, as we have seen above, willpower is indeed likely correlated with obesity, because, as Miller argues, people lacking in willpower also likely lack the willpower to diet. 

However, a PhD supervisor surely has access to far more reliable information regarding a person’s personality and intelligence, including their conscientiousness and willpower, in the form of their application and CV, than is obtainable from their physique alone. 

Thus, the outrage that this tweet provoked, though indeed excessive and a reflection of the intolerant climate of so-called ‘cancel culture’ and public shaming in the contemporary west, was not entirely unwarranted. 

Similarly, if geneticist James Watson did indeed say, as he was rather hilariously reported as having said, that “Whenever you interview fat people, you feel bad, because you know you’re not going to hire them”, he was indeed being prejudiced, because, again, an employer has access to more reliable information regarding applicants than their physique, namely, again, their application and CV. 

Obesity may often, perhaps even usually, be indicative of low levels of conscientiousness, willpower and intelligence. But it is not always so. It may instead, as Dutton himself points out, reflect only high extraversion, or indeed an unusual medical condition. 

However, even at job interviews, employers do still, in practice, judge people partly by their appearance. Moreover, we often regard them as well within their rights to do so. 

This is, of course, why we advise applicants to dress smartly for their interviews.

Endnotes

[1] If ‘How to Judge People by What They Look Like’ is indeed a very short book, then, it must be conceded that this is, by comparison, a rather long and detailed book review. While, as will become clear in the remainder of this review, I have many points of disagreement with Dutton (as well as many points of agreement) and there are many areas where I feel he is mistaken, nevertheless the length of this book review is, in itself, testament to the amount of thinking that Dutton’s short pamphlet has inspired in this reader. 

[2] In addition, I suspect few of the researchers whose work Dutton cites ever even regarded themselves as working within, or somehow reviving, the field of physiognomy. On the contrary, despite researching and indeed demonstrating robust associations between morphology and behavior, this idea may never even have occurred to them.
Thus, for example, I was already familiar with some of this literature even before reading Dutton’s book, but it never occurred to me that what I was reading was a burgeoning literature in a revived science of physiognomy. Indeed, despite being familiar with much of this literature, I suspect that, if questioned directly on the matter, I may well have agreed with the general consensus that physiognomy was a discredited pseudoscience.
Thus, one of the chief accomplishments of Dutton’s book is simply to establish that this body of research does indeed represent a revived science of physiognomy, and should be recognized and described as such, even if the researchers themselves rarely if ever use the term.

[3] Instead, it would surely uncover mostly papers in the field of ‘history of science’, documenting the history of physiognomy as a supposedly discredited pseudoscience, along with such other real and supposed pseudosciences as phrenology and eugenics.

[4] The studies mentioned in the two paragraphs that precede this endnote are simply a few that I happen to have stumbled across that are relevant to Dutton’s theme and which I happen to have been able to recall. No doubt, any list of relevant studies that I could compile would be just as inexhaustive as that of Dutton and my own list would be longer than Dutton’s only because I have the advantage of having read Dutton’s book beforehand.

[5] Thus, a young person dressed as a hippy in the 60s and 70s was more likely to subscribe to certain (usually rather silly and half-baked) political beliefs, and also more likely to engage in recreational drug-use and live on a commune, while a young man dressed as a teddy boy in Britain in the 1950s, a skinhead in the 1970s and 80s, a football casual in the 1990s, or indeed a chav today, may be perceived as more likely to be involved in violent crime and thuggery. The goth subculture also seems to be associated with a certain personality type, and also with self-harm and suicide.

[6] The association between IQ and socioeconomic status is reviewed in The Bell Curve: Intelligence and Class Structure in American Life (which I have reviewed here). The association between conscientiousness and socioeconomic status is weaker, probably because personality tests are a less reliable measure of conscientiousness than IQ tests are of IQ, since the former rely on self-report. This is the equivalent of an IQ test that, instead of asking test-takers to solve logical puzzles, simply asked them how good they perceived themselves to be at solving logical puzzles. Nevertheless, conscientiousness, as measured in personality tests, does indeed correlate with earnings and career advancement, albeit less strongly than does IQ (Spurk & Abele 2011; Wiersma & Kappe 2016).

[7] If some fat people are low in conscientiousness and intelligence, and others merely high in extraversion, there may, I suspect, also be a third category of people who do have self-control and self-discipline, but simply do not much care about whether they are fat or thin. However, given both the social stigma and health implications of obesity, this group is, I suspect, small. It is also likely young, since health dangers of obesity increase with age, and male, since both the social stigma of fatness, and especially its negative impact on mate value and attractiveness, seems to be greater for females. 

[8] Actually, whether roid rage is a real thing is a matter of some dispute. Although users of anabolic steroids do indeed have higher rates of violent crime, it has been suggested that this may be at least in part because the type of people who choose to use steroids are precisely those already prone to violence. In other words, there is a problem of self-selection bias.
Moreover, the association between testosterone and aggressive behaviours is more complex than this simple analysis assumes. One leading researcher in the field, Allan Mazur, argues that testosterone is not associated with aggression or violence per se, but only with dominance behaviours, which only sometimes manifest themselves through violent aggression. Thus, for example, a leading politician, business tycoon or chief executive of a large company may have high testosterone and be able to exercise dominance without resort to violence. However, a prisoner, being of low status in the legitimate world, is likely only able to assert dominance through violence (see Mazur & Booth 1998; Mazur 2009).

[9] Here, however, it is important to distinguish between the so-called ‘organizing’ and ‘activating’ effects of testosterone. The latter can be equated with levels of circulating testosterone at any given time. The former, however, involves androgen levels at certain key points during development, especially in utero (i.e. in the womb) and during puberty, which thenceforth have long-term effects on both morphology and behaviour (and a person’s degree of susceptibility to circulating androgens).
Facial bone structure is presumably largely an effect of the ‘organizing’ effects of testosterone during development, though jaw shape is also affected by the size of the jaw muscles, which can be increased, it has been claimed, by regularly chewing gum. Bodily muscularity, on the other hand, is affected both by levels of circulating testosterone (hence the effects of anabolic steroids on muscle growth) and by levels of testosterone during development, not least because high levels of androgens during development increase the number and sensitivity of androgen receptors, which affect the potential for muscular growth.

[10] In this section, I have somewhat conflated spatial ability, mathematical ability and autism traits. However, these are themselves, of course, not the same, though each is probably associated with the others, albeit again not necessarily in a linear relationship.

[11] I have been unable to discover any evidence for this supposed association between lack of balding and impotence in men. On the contrary, googling the terms ‘male pattern baldness’ and ‘impotence’ finds only a few results, mostly from people speculating as to whether there is a positive correlation between balding and impotence in males, if only on the very unpersuasive ground that the two conditions tend to have a similar age of onset (i.e. around middle-age).

[12] In contrast, the shaven-head skinhead-look, or close-cropped military-style induction cut, buzz cut or high and tight is, of course, perceived as a quintessentially masculine, and even thuggish, hairstyle. This is perhaps because, in addition to contrasting with the long hair typically favoured by females, it also, by reducing the size of the upper part of the head, makes the lower part of the face (e.g. the jaw) and the body appear comparatively larger, and large jaws are a masculine trait. Thus, Nancy Etcoff observes:

“The absence of hair on the head serves to exaggerate signals of strength. The smaller the head the bigger the look of the neck and body. Bodybuilders often shave or crop their hair, the size contrast between the head and neck and shoulders emphasizing the massiveness of the chest” (Survival of the Prettiest: p126).

[13] The source that Dutton cites for this claim is (Nieschlag & Behr 2013).

[14] In America, it has been suggested, especially tall boys are not treated with testosterone to prevent their growing any taller. Instead, they are encouraged to attempt to make a successful career in professional basketball. 

[15] On the other hand, one Swedish study investigating the association between height and violent crime found that the shortest men in Sweden had almost double the number of convictions for violent crimes as compared to the tallest men. However, after controlling for potential confounds (e.g. socioeconomic status and intelligence, both of which positively correlate with height), the association was reversed, with taller men having a somewhat higher likelihood of being convicted of a violent crime (Beckley et al 2014). 

[16] According to Dutton, the correlation between height and IQ is only about r = 0.1. This is a modest correlation even by psychology and social science standards.

[17] In other words, although modest in magnitude, the association between height and IQ has been replicated in so many studies with sufficiently large and representative sample sizes that we can be certain that it represents a real association in the population at large, not an artifact of small, unrepresentative or biased sampling in just one or a few studies. 

[18] An alternative explanation for the absence of a within-family correlation between height and intelligence is that some factor that differs between families causes both increased height and increased intelligence. An obvious candidate would be malnutrition. However, in modern western economies, where there is a superabundance of food, starvation is almost unknown and obesity is far more common than undernourishment even among the ostensible poor (indeed, as noted by Dutton, especially among the ostensible poor). It is therefore doubtful that undernourishment is a significant factor in explaining either small stature or low IQs, especially since height is mostly heritable, at least by the time a person reaches adulthood.

[19] The conventional wisdom is that beards went out of fashion during the twentieth century precisely because their role in spreading germs came to be more widely known. Thus, Nancy Etcoff writes:

“Facial hair has been less abundant in this century than in centuries past (except in the 1960s) partly because medical opinion turned against them. As people became increasingly aware of the role of germs in spreading diseases, beards came to be seen as repositories of germs. Previously, they had been advised by doctors as a means to protect the throat and filter air to the lungs” (Survival of the Prettiest: p156-7). 

Of course, this is not at all inconsistent with the notion that beards are perceived as attractive by women precisely because they represent a potential vector of infection and hence advertise the health and robustness of the male whom they adorn, as contended by Dutton. On the contrary, the fact that beards are indeed associated with infection is consistent with, and supportive of, Dutton’s theory. 

[20] It would be interesting to discover whether these findings generalize to other, non-western cultures, especially those where beards are universal or the norm (e.g. among Muslims in the Middle East). It would also be interesting to discover whether women’s perceptions regarding the attractiveness of men with beards have changed as beards have gone in and out of fashion. 

[21] Perhaps this is because, although age is still associated with status, it is no longer as socially acceptable for older men to marry, or enter sexual relationships with, much younger women or girls as it was in the past, and such relationships are now less common. Indeed, in the last few years, this has become especially socially unacceptable. Therefore, given that most men are maximally attracted to females in this age category, they prefer to be thought of as younger so that it is more acceptable for them to seek relationships with younger, more attractive females.
Actually, while older men tend to have higher status on average, I suspect that, after controlling for status, it is younger men who would be perceived as more attractive. Certainly, a young multi-millionaire would surely be considered a more eligible bachelor than an older homeless man. Therefore, age per se is not attractive; only high status is attractive, which happens to correlate with age.

[22] This idea is again based on Philippe Rushton’s Differential K theory, which I have reviewed here and here.

[23] Dutton is apparently aware of this objection. He acknowledges, albeit in a different book, that “Intelligence, in general, is associated with health” (Why Islam Makes You Stupid: p174). However, in this same book, he also claims that: 

“Intelligence has been shown to be only weakly associated with mutational load” (Why Islam Makes You Stupid: p169). 

Interestingly, Dutton also claims in this book: 

“Very high intelligence predicts autism” (Why Islam Makes You Stupid: p175). 

This claim, namely that exceptionally high intelligence is associated with autism, seems anecdotally plausible. Certainly, autism seems to have a complex and interesting relationship with intelligence.
Unfortunately, however, Dutton does not cite a source for the claim that exceptionally high intelligence is associated with autism. Nevertheless, according to data cited here, there is indeed greater variance in the IQs of autistic people, with greater proportions of autistic people at both tail-ends of the bell curve; the author even refers to an inverted bell curve for intelligence among autistic people, though, even according to her own cited data, this appears to be an exaggeration. However, this is not a scholarly source, but rather appears to be the website of a not entirely disinterested advocacy group, and it is not entirely clear from where the data derive, the piece referring only to data from the Netherlands collected by the Dutch Autism Register (NAR). 

[24] Admittedly, Dutton does cite one study showing that subjects can identify Mormons from facial photographs alone, and that the two groups differed in skin quality (Rule et al 2010). However, this might reflect merely the health advantages resulting from the religiously imposed abstention from the consumption of alcohol, tobacco, tea and coffee.
For what it’s worth, my own subjective and entirely anecdotal impression is almost the opposite of Dutton’s, at least here in secular modern Britain, where anyone who identifies as Christian, let alone a fundamentalist, unless perhaps s/he is elderly, tends to be regarded as a bit odd.
An interesting four-part critique of this theory, along very different lines from my own, is provided by Scott A McGreal at the Psychology Today website, see here, here, here, and here. Dutton responds with a two-part rejoinder here and here.

[25] However, when it comes to actual politicians, I suspect this difference may be attenuated, or even nonexistent, since pursuing a career in politics is, by its very nature, a very untraditional, and unfeminine, career choice, most likely because, in Darwinian terms, political power has a greater reproductive payoff for men than for women. Thus, it is hardly surprising that leading female politicians, even those who theoretically champion traditional sex roles, tend themselves to be quite butch and masculine in appearance and often as unattractive as their leftist opponents (e.g. Ann Widdecombe). Indeed, even Ann Coulter, a relatively attractive woman, at least by the standards of female political figures, has been mocked for her supposedly mannish appearance and pronounced Adam’s apple.
Moreover, most leading politicians are at least middle-aged, and female attractiveness peaks very young, in the mid- to late-teens and early twenties.

[26] Another medical condition associated with a specific look, as well as with mental disability, is cretinism, though, due to medical advances, most people with the condition in western societies develop normally and no longer manifest either the distinctive appearance or the mental disability. 

References 

Aldridge et al (2011) Facial phenotypes in subgroups of prepubertal boys with autism spectrum disorders are correlated with clinical phenotypes. Molecular Autism 14;2(1):15. 
Apicella et al (2015) Hadza Hunter-Gatherer Men do not Have More Masculine Digit Ratios (2D:4D) American Journal of Physical Anthropology 159(2):223-32. 
Bagenjuk et al (2019) Personality Traits and Obesity, International Journal of Environmental Research and Public Health 16(15): 2675. 
Bailey & Hurd (2005) Finger length ratio (2D:4D) correlates with physical aggression in men but not in women. Biological Psychology 68(3):215-22. 
Batrinos (2014) The endocrinology of baldness. Hormones 13(2): 197–212. 
Beckley et al (2014) Association of height and violent criminality: results from a Swedish total population study. International Journal of Epidemiology 43(3):835-42 
Benderlioglu & Nelson (2005) Digit length ratios predict reactive aggression in women, but not in men Hormones and Behavior 46(5):558-64. 
Berggren et al (2017) The right look: Conservative politicians look better and voters reward it Journal of Public Economics 146:  79-86. 
Blanchard & Lyons (2010) An investigation into the relationship between digit length ratio (2D: 4D) and psychopathy, British Journal of Forensic Practice 12(2):23-31. 
Buru et al (2017) Evaluation of the hand anthropometric measurement in ADHD children and the possible clinical significance of the 2D:4D ratio, Eastern Journal of Medicine 22(4):137-142. 
Case & Paxson (2008) Stature and status: Height, ability, and labor market outcomes, Journal of Political Economy 116(3): 499–532. 
Cash (1990) Losing Hair, Losing Points?: The Effects of Male Pattern Baldness on Social Impression Formation. Journal of Applied Social Psychology 20(2):154-167. 
De Waal et al (1995) High dose testosterone therapy for reduction of final height in constitutionally tall boys: Does it influence testicular function in adulthood? Clinical Endocrinology 43(1):87-95. 
Dixson & Vasey (2012) Beards augment perceptions of men’s age, social status, and aggressiveness, but not attractiveness, Behavioral Ecology 23(3): 481–490. 
Dutton et al (2017) The Mutant Says in His Heart, “There Is No God”: the Rejection of Collective Religiosity Centred Around the Worship of Moral Gods Is Associated with High Mutational Load Evolutionary Psychological Science 4:233–244. 
Dysniku et al (2015) Minor Physical Anomalies as a Window into the Prenatal Origins of Pedophilia, Archives of Sexual Behavior 44:2151–2159. 
Elias et al (2012) Obesity, Cognitive Functioning and Dementia: Back to the Future, Journal of Alzheimer’s Disease 30(s2): S113-S125. 
Ellis & Hoskin (2015) Criminality and the 2D:4D Ratio: Testing the Prenatal Androgen Hypothesis, International Journal of Offender Therapy and Comparative Criminology 59(3):295-312 
Fossen et al (2022) 2D:4D and Self-Employment: A Preregistered Replication Study in a Large General Population Sample Entrepreneurship Theory and Practice 46(1):21-43. 
Gouchie & Kimura (1991) The relationship between testosterone levels and cognitive ability patterns Psychoneuroendocrinology 16(4): 323-334. 
Griffin et al (2012) Varsity athletes have lower 2D:4D ratios than other university students, Journal of Sports Sciences 30(2):135-8. 
Hilgard et al (2019) Null Effects of Game Violence, Game Difficulty, and 2D:4D Digit Ratio on Aggressive Behavior, Psychological Science 30(1):095679761982968 
Hollier et al (2015) Adult digit ratio (2D:4D) is not related to umbilical cord androgen or estrogen concentrations, their ratios or net bioactivity, Early Human Development 91(2):111-7 
Hönekopp & Urban (2010) A meta-analysis on 2D:4D and athletic prowess: Substantial relationships but neither hand out-predicts the other, Personality and Individual Differences 48(1):4-10. 
Hoskin & Ellis (2014) Fetal testosterone and criminality: Test of evolutionary neuroandrogenic theory, Criminology 53(1):54-73. 
Ishikawa et al (2001) Increased height and bulk in antisocial personality disorder and its subtypes. Psychiatry Research 105(3):211-219. 
Işık et al (2020) The Relationship between Second-to-Fourth Digit Ratios, Attention-Deficit/Hyperactivity Disorder Symptoms, Aggression, and Intelligence Levels in Boys with Attention-Deficit/Hyperactivity Disorder, Psychiatry Investigation 17(6):596–602. 
Janowski et al (1994) Testosterone influences spatial cognition in older men. Behavioral Neuroscience 108(2):325-32. 
Jokela et al (2012) Association of personality with the development and persistence of obesity: a meta-analysis based on individual–participant data, Etiology and Pathophysiology 14(4): 315-323. 
Kanazawa (2014) Intelligence and obesity: Which way does the causal direction go? Current Opinion in Endocrinology, Diabetes and Obesity (5):339-44. 
Kandel et al (1989) Minor physical anomalies and recidivistic adult violent criminal behavior, Acta Psychiatrica Scandinavica 79(1) 103-107. 
Kangassalo et al (2011) Prenatal Influences on Sexual Orientation: Digit Ratio (2D:4D) and Number of Older Siblings, Evolutionary Psychology 9(4):496-508 
Kerry & Murray (2019) Is Formidability Associated with Political Conservatism?  Evolutionary Psychological Science 5(2): 220–230. 
Keshavarz et al (2017) The Second to Fourth Digit Ratio in Elite and Non-Elite Greco-Roman Wrestlers, Journal of Human Kinetics 60: 145–151. 
Kleisner et al (2014) Perceived Intelligence Is Associated with Measured Intelligence in Men but Not Women. PLoS ONE 9(3): e81237. 
Kordsmeyer et al (2018) The relative importance of intra- and intersexual selection on human male sexually dimorphic traits, Evolution and Human Behavior 39(4): 424-436. 
Kosinski & Wang (2018) Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology 114(2):246-257. 
Kyselicová et al (2021) Autism spectrum disorder and new perspectives on the reliability of second to fourth digit ratio, Developmental Psychobiology 63(6). 
Li et al (2016) The relationship between digit ratio and sexual orientation in a Chinese Yunnan Han population, Personality and Individual Differences 101:26-29. 
Lippa (2003) Are 2D:4D finger-length ratios related to sexual orientation? Yes for men, no for women, Journal of Personality & Social Psychology 85(1):179-8 
Lolli et al (2017) A comprehensive allometric analysis of 2nd digit length to 4th digit length in humans, Proceedings of the Royal Society B: Biological Sciences 284(1857):20170356 
Malamed (1992) Personality correlates of physical height. Personality and Individual Differences 13(12):1349-1350. 
Manning & Taylor (2001) Second to fourth digit ratio and male ability in sport: implications for sexual selection in humans, Evolution & Human Behavior 22(1):61-69. 
Manning et al (2001) The 2nd to 4th digit ratio and autism, Developmental Medicine & Child Neurology 43(3):160-164. 
Martel et al (2008) Masculinized Finger-Length Ratios of Boys, but Not Girls, Are Associated With Attention-Deficit/Hyperactivity Disorder, Behavioral Neuroscience 122(2):273-81. 
Martin et al (2015) Associations between obesity and cognition in the pre-school years, Obesity 24(1) 207-214 
Mazur & Booth (1998) Testosterone and dominance in men. Behavioral and Brain Sciences, 21(3), 353–397. 
Mazur (2009) Testosterone and violence among young men. In Walsh & Beaver (eds) Biosocial Criminology: New Directions in theory and Research. New York: Routledge. 
Moffat & Hampson (1996) A curvilinear relationship between testosterone and spatial cognition in humans: Possible influence of hand preference. Psychoneuroendocrinology. 21(3):323-37. 
Murray et al (2014) How are conscientiousness and cognitive ability related to one another? A re-examination of the intelligence compensation hypothesis, Personality and Individual Differences, 70, 17–22. 
Nieschlag & Behr (2013) Testosterone Therapy. In Nieschlag & Behr (eds) Andrology: Male Reproductive Health and Dysfunction. New York: Springer. 
Ozgen et al (2010) Minor physical anomalies in autism: a meta-analysis. Molecular Psychiatry 15(3):300–7. 
Ozgen et al (2011) Morphological features in children with autism spectrum disorders: a matched case-control study. Journal of Autism and Developmental Disorders 41(1):23-31. 
Peterson & Palmer (2017) Effects of physical attractiveness on political beliefs. Politics and the Life Sciences 36(02):3-16 
Persico et al (2004) The Effect of Adolescent Experience on Labor Market Outcomes: The Case of Height, Journal of Political Economy 112(5): 1019-1053. 
Pratt et al (2016) Revisiting the criminological consequences of exposure to fetal testosterone: a meta-analysis of the 2d:4d digit ratio, Criminology 54(4):587-620. 
Price et al (2017). Is sociopolitical egalitarianism related to bodily and facial formidability in men? Evolution and Human Behavior, 38, 626-634. 
Puts (2010) Beauty and the beast: Mechanisms of sexual selection in humans, Evolution and Human Behavior 31(3):157-175. 
Rammstedt et al (2016) The association between personality and cognitive ability: Going beyond simple effects, Journal of Research in Personality 62: 39-44. 
Rosenberg & Kagan (1987) Iris pigmentation and behavioral inhibition Developmental Psychobiology 20(4):377-92. 
Rosenberg & Kagan (1989) Physical and physiological correlates of behavioral inhibition Developmental Psychobiology 22(8):753-70. 
Rule et al (2010) On the perception of religious group membership from faces. PLoS ONE 5(12):e14241. 
Salas-Wright & Vaughn (2016) Size Matters: Are Physically Large People More Likely to be Violent? Journal of Interpersonal Violence 31(7):1274-92. 
Spurk & Abele (2011) Who Earns More and Why? A Multiple Mediation Model from Personality to Salary, Journal of Business and Psychology 26: 87–103. 
Sutin et al (2011) Personality and Obesity across the Adult Lifespan Journal of Personality and Social Psychology 101(3): 579–592. 
Valla et al (2011). The accuracy of inferences about criminality based on facial appearance. Journal of Social, Evolutionary, and Cultural Psychology, 5(1), 66-91. 
Voracek et al (2011) Digit ratio (2D:4D) and sex-role orientation: Further evidence and meta-analysis, Personality and Individual Differences 51(4): 417-422. 
Weinberg et al (2007) Minor physical anomalies in schizophrenia: A meta-analysis, Schizophrenia Research 89: 72–85. 
Wiersma & Kappe (2015) Selecting for extroversion but rewarding for conscientiousness, European Journal of Work and Organizational Psychology 26(2): 314-323. 
Williams et al (2000) Finger-Length Ratios and Sexual Orientation, Nature 404(6777):455-456. 
Xu et al (2011) Minor physical anomalies in patients with schizophrenia, unaffected first-degree relatives, and healthy controls: a meta-analysis, PLoS One 6(9):e24129. 
Xu & Zheng (2016) The Relationship Between Digit Ratio (2D:4D) and Sexual Orientation in Men from China, Archives of Sexual Behavior 45(3):735-41. 

‘The Bell Curve’: A Book Much Read About, But Rarely Actually Read

The Bell Curve: Intelligence and Class Structure in American Life by Richard Herrnstein and Charles Murray (New York: Free Press, 1994). 

‘There’s no such thing as bad publicity’ – or so contends a famous adage of the marketing industry. 

‘The Bell Curve: Intelligence and Class Structure in American Life’ by Richard Herrnstein and Charles Murray is perhaps a case in point. 

This dry, technical, academic social science treatise, full of statistical analyses, graphs, tables, endnotes and appendices, and totalling almost 900 pages, became an unlikely nonfiction bestseller in the mid-1990s on a wave of almost universally bad publicity in which the work was variously denounced as racist, pseudoscientific, fascist, social Darwinist, eugenicist and sometimes even just plain wrong. 

Readers who hurried to the local bookstore eagerly anticipating an incendiary racialist polemic were, however, in for a disappointment. 

Indeed, one suspects that, along with The Bible and Stephen Hawking’s A Brief History of Time, ‘The Bell Curve’ became one of those bestsellers that many people bought, but few managed to finish. 

‘The Bell Curve’ thus became, like another book that I have recently reviewed, a book much read about, but rarely actually read – at least in full. 

As a result, as with that other book, many myths have emerged regarding the content of ‘The Bell Curve’ that are quite contradicted when one actually takes the time and trouble to read it for oneself. 

Subject Matter 

The first myth of ‘The Bell Curve’ is that it was a book about race differences, or, more specifically, about race differences in intelligence. In fact, however, this is not true. 

Thus, ‘The Bell Curve’ is a book so controversial that the controversy begins with the very identification of its subject-matter. 

On the one hand, the book’s critics focused almost exclusively on the subject of race. This led to the common perception that ‘The Bell Curve’ was a book about race and race differences in intelligence.[1]

Ironically, many racialists seem to have taken these leftist critics at their word, enthusiastically citing the work as support for their own views regarding race differences in intelligence.  

On the other hand, however, surviving co-author Charles Murray insisted from the outset that the issue of race, and race differences in intelligence, was always peripheral to his and co-author Richard Herrnstein’s primary interest and focus, which was, he claimed, on the supposed emergence of a ‘Cognitive Elite’ in modern America. 

Actually, however, both these views seem to be incorrect. While the first section of the book does indeed focus on the supposed emergence of a ‘Cognitive Elite’ in modern America, the overall theme of the book seems to be rather broader. 

Thus, the second section of the book focuses on the association between intelligence and various perceived social pathologies, such as unemployment, welfare dependency, illegitimacy, crime and single-parenthood. 

To the extent the book has a single overarching theme, one might say that it is a book about the social and economic correlates of intelligence, as measured by IQ tests, in modern America.  

Its overall conclusion is that intelligence is indeed a strong predictor of social and economic outcomes for modern Americans – high intelligence with socially desirable outcomes and low intelligence with socially undesirable ones. 

On the other hand, however, the topic of race is not quite as peripheral to the book’s themes as sometimes implied by Murray and others. 

Thus, it is sometimes claimed that only a single chapter dealt with race. Actually, however, two chapters focus on race differences, namely chapters 13 and 14, respectively titled ‘Ethnic Differences in Cognitive Ability’ and ‘Ethnic Inequalities in Relation to IQ’. 

In addition, a further two chapters, namely chapters 19 and 20, entitled respectively ‘Affirmative Action in Higher Education’ and ‘Affirmative Action in the Workplace’, deal with the topic of affirmative action, as does the final appendix, entitled ‘The Evolution of Affirmative Action in the Workplace’ – and, although affirmative action has been employed to favour women as well as racial minorities, it is with racial preferences that Herrnstein and Murray are primarily concerned. 

However, these chapters represent only 142 of the book’s nearly 900 pages. 

Moreover, in much of the remainder of the book, the authors actually explicitly restrict their analysis to white Americans exclusively. They do so precisely because the well-documented differences between the races, both in IQ and in many of the social outcomes whose correlation with IQ the book discusses, would otherwise have represented a potential confounding factor for which they would have had to take steps to control. 

Herrnstein and Murray therefore took the decision to extend their analysis to race differences near the end of their book, in order to address the question of the extent to which differences in intelligence, which they had already demonstrated to be an important correlate of social and economic outcomes among whites, are also capable of explaining differences in achievement as between races. 

Without these chapters, the book would have been incomplete, and the authors would have laid themselves open to the charge of political-correctness and of ignoring the elephant in the room.

Race and Intelligence 

If the first controversy of ‘The Bell Curve’ concerns whether it is a book primarily about race and race differences in intelligence, the second controversy is over what exactly the authors concluded with respect to this vexed and contentious issue. 

Thus, the same leftist critics who claimed that ‘The Bell Curve’ was primarily a book about race and race differences in intelligence also accused the authors of concluding that black people are innately less intelligent than whites.

Some racists, as I have already noted, evidently took the leftists at their word, and enthusiastically cite the book as support and authority for this view. 

However, in subsequent interviews, Murray always insisted he and Herrnstein had actually remained “resolutely agnostic” on the extent to which genetic factors underlay the IQ gap. 

In the text itself, Herrnstein and Murray do indeed declare themselves “resolutely agnostic” with regard to the extent of the genetic contribution to the test score gap (p311).

However, just a couple of sentences before they use this very phrase, they also appear to conclude that genes are indeed at least part of the explanation, writing: 

“It seems highly likely to us that both genes and the environment have something to do with racial differences [in IQ]” (p311). 

This paragraph, buried near the end of chapter 13, during an extended discussion of evidence relating to the causes of race differences in intelligence, is the closest the authors come to actually declaring any definitive conclusion regarding the causes of the black-white test score gap.[2]

This conclusion, though phrased in sober and restrained terms, is, of course, itself sufficient to place its authors outside the bounds of acceptable opinion in the early-twenty-first century, or indeed in the late-twentieth century when the book was first published, and is sufficient to explain, and, for some, justify, the opprobrium heaped upon the book’s surviving co-author from that day forth. 

Intelligence and Social Class

It seems likely that races which evolved on separate continents, in sufficient reproductive isolation from one another to have evolved the obvious (and not so obvious) physiological differences between races that we all observe when we look at the faces, or bodily statures, of people of different races (and that we indirectly observe when we look at the results of different athletic events at the Olympic Games), would also have evolved to differ in psychological traits, including intelligence.

Indeed, it is surely unlikely, on a priori grounds alone, that all different human races have evolved, purely by chance, the exact same level of intelligence. 

However, if race differences in intelligence are merely probable, the case for differences in intelligence as between social classes is positively compelling.

Indeed, on a priori grounds alone, it is inevitable that social classes will come to differ in IQ, if one accepts two premises, namely: 

1) Increased intelligence is associated with upward social mobility; and 
2) Intelligence is passed down in families.

In other words, if more intelligent people tend, on average, to get higher-paying jobs than those of lower intelligence, and the intelligence of parents is passed on to their offspring, then it is inevitable that the offspring of people with higher-paying jobs will, on average, themselves be of higher intelligence than are the offspring of people with lower paying jobs.  

This, of course, follows naturally from the infamous syllogism formulated by ‘Bell Curve’ co-author Richard Herrnstein way back in the 1970s (p10; p105). 

Incidentally, this second premise, namely that intelligence is passed down in families, does not depend on the heritability of IQ in the strict biological sense. After all, even if heritability of intelligence were zero, intelligence could still be passed down in families by environmental factors (e.g. the ‘better’ parenting techniques of high IQ parents, or the superior material conditions in wealthy homes). 

The existence of an association between social class and IQ ought, then, to be entirely uncontroversial to anyone who takes any time whatsoever to think about the issue. 

If there remains any room for reasoned disagreement, it is only over the direction of causation – namely the question of whether:  

1) High intelligence causes upward social mobility; or 
2) A privileged upbringing causes higher intelligence.

These two processes are, of course, not mutually exclusive. Indeed, it would seem intuitively probable that both factors would be at work. 

Interestingly, however, evidence demonstrates the occurrence only of the former. 

Thus, even among siblings from the same family, the sibling with the higher childhood IQ will, on average, achieve higher socioeconomic status as an adult. Likewise, the socioeconomic status a person achieves as an adult correlates more strongly with their own IQ score than it does with the socioeconomic status of their parents or of the household they grew up in (see Straight Talk About Mental Tests: p195). 

In contrast, family, twin and adoption studies of the sort conducted by behavioural geneticists have concurred in suggesting that the so-called shared family environment (i.e. those aspects of the family environment shared by siblings from the same household, including social class) has but little effect on adult IQ. 

In other words, children raised in the same home, whether full- or half-siblings or adoptees, are, by the time they reach adulthood, no more similar to one another in IQ than are children of the same degree of biological relatedness brought up in entirely different family homes (see The Nurture Assumption: reviewed here). 

However, while the direction of causation may still be disputed by intelligent (if uninformed) laypeople, the existence of an association between intelligence and social class ought not, one might think, to be in dispute. 

However, in Britain today, in discussions of social mobility, if children from deprived backgrounds are underrepresented, say, at elite universities, then this is almost invariably taken as incontrovertible proof that the system is rigged against them. The fact that children from different socio-economic backgrounds differ in intelligence is almost invariably ignored. 

When mention is made of this incontrovertible fact, leftist hysteria typically ensues. Thus, in 2008, psychiatrist Bruce Charlton rightly observed that, in discussions of social mobility: 

“A simple fact has been missed: higher social classes have a significantly higher average IQ than lower social classes” (Clark 2008). 

For his trouble, Charlton found himself condemned by the National Union of Students and assorted rent-a-quote academics and professional damned fools, while even the ostensibly ‘right-wing’ Daily Mail newspaper saw fit to publish the headline ‘Higher social classes have significantly HIGHER IQs than working class, claims academic’, as if this were in some way a controversial or contentious claim (Clark 2008). 

Meanwhile, when, in the same year, a professor at University College made a similar point with regard to the admission of working-class students to medical schools, even the then government Health Minister, Ben Bradshaw, saw fit to offer his two cents’ worth (which were not worth even that), declaring: 

“It is extraordinary to equate intellectual ability with social class” (Beckford 2008). 

Actually, however, what is truly extraordinary is that any intelligent person, least of all a government minister, would dispute the existence of such a link. 

Cognitive Stratification 

Herrnstein’s syllogism leads to a related paradox – namely that, as environmental conditions are equalized, heritability increases. 

Thus, as large differences in the sorts of environmental factors known to affect IQ (e.g. malnutrition) are eliminated, so differences in income have come to increasingly reflect differences in innate ability. 

Moreover, the more gifted children from deprived backgrounds escape their humble origins, the fewer such children will remain among the working class in subsequent generations, given the substantial heritability of IQ. 

The result is what Herrnstein and Murray call the ‘Cognitive Stratification’ of society and the emergence of what they call a ‘Cognitive Elite’. 

Thus, in feudal society, a man’s social status was determined largely by ‘accident of birth’ (i.e. he inherited the social station of his father). 

Women’s status, meanwhile, was determined, in addition, by what we might call ‘accident of marriage’ – and, to a large extent, it still is.

However, today, a person’s social status, at least according to Herrnstein and Murray, is determined primarily, and increasingly, by their level of intelligence. 

Of course, people are not allocated to a particular social class by IQ testing itself. Indeed, the use of IQ tests by employers and educators has been largely outlawed on account of its disparate impact (or ‘indirect discrimination’, to use the equivalent British phrase) with regard to race (see below). 

However, the skills and abilities increasingly at a premium in western society (and, increasingly, in many non-western societies as well) are such that, through the operation of the education system and labour market, individuals are effectively sorted by IQ, even without anyone ever actually sitting an IQ test. 

In other words, society is becoming increasingly meritocratic – and the form of ostensible ‘merit’ upon which attainment is based is intelligence. 

For Herrnstein and Murray, this is a mixed blessing: 

“That the brightest are identified has its benefits. That they become so isolated and inbred has its costs” (p25). 

However, the correlation between socioeconomic status and intelligence remains imperfect. 

For one thing, there are still a few highly remunerated, and very high-status, occupations that rely on skills that are not especially, if at all, related to intelligence. I think here, in particular, of professional sports and the entertainment industry. Thus, leading actors, pop stars and sports stars are sometimes extremely well-remunerated, and of very high status, but may not be especially intelligent.  

More importantly, while highly intelligent people may be, almost by definition, the only ones capable of performing cognitively demanding, and hence highly remunerated, occupations, this is not to say that all highly intelligent people are necessarily employed in such occupations. 

Thus, whereas all people employed in cognitively-demanding occupations are, almost by definition, of high intelligence, people of all intelligence levels are capable of doing cognitively-undemanding jobs.

Thus, a few people of high intellectual ability remain in low-paid work, whether on account of personality factors (e.g. laziness), mental illness, lack of opportunity or sometimes even by choice (which choice is, of course, itself a reflection of personality factors). 

Therefore, the correlation between IQ and occupation is far from perfect. 

Job Performance

The sorting of people with respect to their intelligence begins in the education system. However, it continues in the workplace. 

Thus, general intelligence, as measured by IQ testing, is, the authors claim, the strongest predictor of occupational performance in virtually every occupation. Moreover, in general, the higher paid and higher status the occupation in question, the stronger the correlation between performance and IQ. 

However, as Herrnstein and Murray are at pains to emphasize, intelligence is a strong predictor of occupational performance even in apparently cognitively undemanding occupations, and is indeed almost always a better predictor of performance than tests of the specific abilities the job involves on a daily basis. 

However, in the USA, employers are barred from using testing to select among candidates for a job or for promotion unless they can show the test has a ‘manifest relationship’ to the work, and the burden of proof is on the employer to show such a relationship. Otherwise, given their ‘disparate impact’ with regard to race (i.e. the fact that some groups perform worse), the tests in question are deemed indirectly discriminatory and hence unlawful. 

Therefore, employers are compelled to test, not general ability, but rather the specific skills required in the job in question, where a ‘manifest relationship’ is easier to demonstrate in court. 

However, since even tests of specific abilities almost invariably still tap into the general factor of intelligence, races inevitably score differently even on these tests. 

Indeed, because of the ubiquity and predictive power of the g factor, it is almost impossible to design any type of standardized test, whether of specific or general ability or knowledge, in which different racial groups do not perform differently. 

However, if some groups outperform others, the American legal system presumes a priori that this reflects test bias rather than differences in ability. 

Therefore, although the words ‘all men are created equal’ are not, contrary to popular opinion, part of the US constitution, the Supreme Court has effectively decided, by legal fiat, to decide cases as if they were. 

However, just as a law passed by Congress cannot repeal the law of gravity, so a legal presumption that groups are equal in ability cannot make it so. 

Thus, the bar on the use of IQ testing by employers has not prevented society in general from being increasingly stratified by intelligence, the precise thing measured by the outlawed tests. 

Nevertheless, Herrnstein and Murray estimate that the effective bar on the use of IQ testing makes this process less efficient, costing the economy somewhere between 13 and 80 billion dollars in 1980 alone (p85). 

Conscientiousness and Career Success

I am skeptical of Herrnstein and Murray’s conclusion that IQ is the best predictor of academic and career success. I suspect hard work, not to mention a willingness to toady, toe the line, and obey orders, is at least as important in even the most cognitively-demanding careers, as well as in schoolwork and academic advancement. 

Perhaps the reason these factors have not (yet) been found to be as highly correlated with earnings as is IQ is that we have not yet developed a way of measuring these aspects of personality as accurately as we can measure a person’s intelligence through an IQ test. 

For example, the closest psychometricians have come to measuring capacity for hard work is the personality factor known as conscientiousness, one of the Big Five factors of personality revealed by psychometric testing. 

Conscientiousness does indeed correlate with success in education and work (e.g. Barrick & Mount 1991). However, the correlation is weaker than that between IQ and success in education and at work. 

However, this may be because personality is less easily measured by current psychometric methods than is intelligence – not least because personality tests generally rely on self-report, rather than measuring actual behaviour.

Thus, to assess conscientiousness, questionnaires ask respondents whether they ‘see themselves as organized’, ‘as able to follow an objective through to completion’, ‘as a reliable worker’, etc. 

This would be the equivalent of an IQ test that, instead of directly testing a person’s ability to recognize patterns or manipulate shapes by having them do just this, simply asked respondents how good they perceived themselves as being at recognizing patterns, or manipulating shapes. 

Obviously, this would be a less accurate measure of intelligence than a normal IQ test. After all, some people lie, some are falsely modest and some are genuinely deluded. 

Indeed, according to the Dunning–Kruger effect, it is those most lacking in ability who most overestimate their abilities – precisely because they lack the ability to accurately assess their ability (Kruger & Dunning 1999). 

In an IQ test, on the other hand, one can sometimes pretend to be dumber than one is, by deliberately getting questions wrong that one knows the answer to.[3]

However, it is not usually possible to pretend to be smarter than one is by getting more questions right, simply because one would not know what the right answers are.

‘Affirmative Action’ and Test Bias 

In chapters nineteen and twenty, respectively entitled ‘Affirmative Action in Higher Education’ and ‘Affirmative Action in the Workplace’, the authors discuss so-called affirmative action, an American euphemism for systematic and overt discrimination against white males. 

It is well-documented that, in the United States, blacks, on average, earn less than white Americans. On the other hand, it is less well-documented that whites, on average, earn less than people of Indian, Chinese and Jewish ancestry. 

With the possible exception of Indian-Americans, these differences, of course, broadly mirror those in average IQ scores. 

Indeed, according to Herrnstein and Murray, the difference in earnings between whites and blacks, not only disappears after controlling for differences in IQ, but is actually partially reversed. Thus, blacks are actually somewhat overrepresented in professional and white-collar occupations as compared to whites of equivalent IQ. 

This remarkable finding Herrnstein and Murray attribute to the effects of affirmative action programmes, whereby black Americans are appointed and promoted, through discrimination in their favour, beyond what their ability alone would merit. 

Interestingly, however, this contradicts what the authors wrote in an earlier chapter, where they addressed the question of test bias (pp280-286). 

There, they concluded that testing was not biased against African-Americans, because, among other reasons, IQ tests were equally predictive of real-world outcomes (e.g. in education and employment) for both blacks and whites, and blacks do not perform any better in the workplace or in education than their IQ scores predict. 

This is, one might argue, not wholly convincing evidence that IQ tests are not biased against blacks. It might simply suggest that society at large, including the education system and the workplace, is just as biased against blacks as are the hated IQ tests. This is, of course, precisely what we are often told by the television media and political commentators who insist that America is a racist society, in which such mysterious forces as ‘systemic racism’ and ‘white privilege’ are pervasive. 

In fact, the authors acknowledge this objection, conceding:  

“The tests may be biased against disadvantaged groups, but the traces of bias are invisible because the bias permeates all areas of the group’s performance. Accordingly, it would be as useless to look for evidence of test bias as it would be for Einstein’s imaginary person traveling near the speed of light to try to determine whether time has slowed. Einstein’s traveler has no clock that exists independent of his space-time context. In assessing test bias, we would have no test or criterion measure that exists independent of this culture and its history. This form of bias would pervade everything” (p285). 

Herrnstein and Murray ultimately reject this conclusion on the grounds that it is simply implausible to assume that: 

“[So] many of the performance yardsticks in the society at large are not only biased, they are all so similar in the degree to which they distort the truth-in every occupation, every type of educational institution, every achievement measure, every performance measure-that no differential distortion is picked up by the data” (p285). 

In fact, however, Nicholas Mackintosh identifies one area where IQ tests do indeed under-predict black performance, namely with regard to so-called adaptive behaviours – i.e. the ability to cope with day-to-day life (e.g. feeding, dressing and cleaning oneself, and interacting with others in a ‘normal’ manner). 

Blacks with low IQs are generally much more functional in these respects than whites or Asians with equivalent low IQs (see IQ and Human Intelligence: p356-7).[4]

Yet Herrnstein and Murray seem to have inadvertently identified yet another sphere where standardized testing does indeed under-predict real-world outcomes for blacks. 

Thus, if indeed, as Herrnstein and Murray claim, blacks are somewhat overrepresented in professional and white-collar occupations relative to their IQs, this suggests that blacks do indeed do better in real-world outcomes than their test results would predict. While Herrnstein and Murray attribute this to the effect of discrimination against whites, it could, by their own earlier logic, surely be interpreted instead as evidence that the tests are biased against blacks. 

Policy Implications? 

What, then, are the policy implications that Herrnstein and Murray draw from the findings that they report? 

In The Blank Slate: The Modern Denial of Human Nature, cognitive scientist, linguist and popular science writer Steven Pinker popularizes the notion that recognizing the existence of innate differences between individuals and groups in traits such as intelligence does not necessarily lead to ‘right-wing’ political implications. 

Thus, a leftist might accept the existence of innate differences in ability, but conclude that, far from justifying inequality, this is all the more reason to compensate the, if you like, ‘cognitively disadvantaged’ for their innate deficiencies, differences which are, being innate, hardly something for which they can legitimately be blamed. 

Herrnstein and Murray reject this conclusion, but acknowledge it is compatible with their data. Thus, in an afterword to later editions, Murray writes: 

“If intelligence plays an important role in determining how well one does in life, and intelligence is conferred on a person through a combination of genetic and environmental factors over which that person has no control, the most obvious political implication is that we need a Rawlsian egalitarian state, compensating the less advantaged for the unfair allocation of intellectual gifts” (p554).[5]

Interestingly, Pinker’s notion of a ‘hereditarian left’, and the related concept of ‘Bell Curve liberals’, is not entirely imaginary. On the contrary, it used to be quite mainstream. 

Thus, it was the radical leftist post-war Labour government that implemented the tripartite system in British schools from 1945 onwards, under which pupils were allocated to different schools on the basis of their performance in what was then called the 11-plus exam, taken by children at age eleven, which tested both ability and acquired knowledge. This was thought by leftists to be a fair system that would enable bright, able youngsters from deprived and disadvantaged working-class backgrounds to achieve their full potential.[6]

Indeed, while contemporary Cultural Marxists emphatically deny the existence of innate differences in ability between individuals and groups, Marx himself laboured under no such delusion. 

On the contrary, in his famous (plagiarized) aphorism, ‘From each according to his ability, to each according to his need’, Marx implicitly recognized that individuals differ in ‘ability’; and, given that, in the unrealistic communist utopia he envisaged, environmental conditions were ostensibly to be equalized, he presumably conceived of these differences as innate in origin. 

However, a distinction must be made here. While it is possible to justify economic redistributive policies on Rawlsian grounds, it is not possible to justify affirmative action. 

Thus, one might well reasonably contend that the ‘cognitively disadvantaged’ should be compensated for their innate deficiencies through economic redistribution. Indeed, to some extent, most Western polities already do this, by providing welfare payments and state-funded, or state-subsidized, care to those whose cognitive impairment is such as to qualify as a disability and hence render them incapable of looking after or providing for themselves. 

However, we are unlikely to believe that such persons should be given entry to medical school such that they are one day liable to be responsible for performing heart surgery on us or diagnosing our medical conditions. 

In short, socialist redistribution is defensible – but affirmative action is definitely not! 

Reception and Readability 

The reception accorded ‘The Bell Curve’ in 1994 echoed that accorded another book that I have also recently reviewed, but that was published some two decades earlier, namely Edward O. Wilson’s Sociobiology: The New Synthesis. 

Both were greeted with similar indignant moralistic outrage by many social scientists, who employed similar pejorative soundbites (‘genetic determinism’, ‘reductionism’, ‘biology as destiny’) in condemning the two books. Moreover, in both cases, the academic uproar spilled over into a mainstream media moral panic, with pieces appearing in the popular press attacking the two books. 

Yet, in both cases, the controversy focused almost exclusively on just a small part of each book – the single chapter in Sociobiology: The New Synthesis focusing on humans and the few chapters in ‘The Bell Curve’ discussing race. 

In truth, however, both books were massive tomes of which these sections represented only a small part. 

Indeed, given their size, one suspects most critics never actually read the books in full for themselves, including, it seems, many of those who nevertheless took it upon themselves to write critiques. This explains the massive disconnect between what most people thought the books said and their actual content. 

However, there is a crucial difference. 

Sociobiology: The New Synthesis was a long book of necessity, given the scale of the project Wilson set himself. 

As I have written in my review of that work, the scale of Wilson’s ambition can hardly be exaggerated. He sought to provide a new foundation for the whole field of animal behaviour, then, almost as an afterthought, sought to extend this ‘New Synthesis’ to human behaviour as well, which meant providing a new foundation, not for a single subfield within biology, but for several whole disciplines (psychology, sociology, economics and cultural anthropology) that were formerly almost unconnected to biology. Then, in a few provocative sentences, he even sought to provide a new foundation for moral philosophy, and perhaps epistemology too. 

Sociobiology: The New Synthesis was, then, inevitably and of necessity, a long book. Indeed, given that his musings regarding the human species were largely (but not wholly) restricted to a single chapter, one could even make a case that it was too short – and it is no accident that Wilson subsequently extended his writings with regard to the human species to book length. 

Yet, while Sociobiology was of necessity a long book, ‘The Bell Curve: Intelligence and Class Structure in America’ is, for me, unnecessarily overlong. 

After all, Herrnstein and Murray’s thesis was actually quite simple – namely that cognitive ability, as captured by IQ testing, is a major correlate of many important social outcomes in modern America. 

Yet they reiterate this point, for one social outcome after another, again and again, chapter after chapter. 

In my view, Herrnstein and Murray’s conclusion would have been more effectively transmitted to the audience they presumably sought to reach had they been more succinct in their writing style and presentation of their data. 

Had that been the case then perhaps rather more of the many people who bought the book, and helped make it into an unlikely nonfiction bestseller in 1994, might actually have managed to read it – and perhaps even been persuaded by its thesis. 

For casual readers interested in this topic, I would recommend instead Intelligence, Race, And Genetics: Conversations With Arthur R. Jensen (which I have reviewed here, here and here). 

Endnotes

[1] For example, Francis Wheen, a professional damned fool and columnist for the Guardian newspaper (which two occupations seem to be largely interchangeable) claimed that: 

“The Bell Curve (1994), runs to more than 800 pages but can be summarised in a few sentences. Black people are more stupid than white people: always have been, always will be. This is why they have less economic and social success. Since the fault lies in their genes, they are doomed to be at the bottom of the heap now and forever” (Wheen 2000). 

In making this claim, Wheen clearly demonstrates that he has read few if any of those 800 pages to which he refers.

[2] Although their discussion of the evidence relating to the causes, genetic or environmental, of the black-white test score gap is extensive, it is not exhaustive. For example, Philippe Rushton, the author of Race, Evolution, and Behavior (reviewed here and here), argues that, despite the controversy their book provoked, Herrnstein and Murray actually didn’t go far enough on race, omitting, for example, any real discussion, save a passing mention in Appendix 5, of race differences in brain size (Rushton 1997). On the other hand, Herrnstein and Murray also did not mention studies that failed to establish any correlation between IQ and blood groups among African-Americans, studies interpreted as supporting an environmentalist interpretation of race differences in intelligence (Loehlin et al 1973; Scarr et al 1977). For readers interested in a more complete discussion of the evidence regarding the relative contributions of environment and heredity to the differences in IQ scores of different races, see my review of Richard Lynn’s Race Differences in Intelligence: An Evolutionary Analysis, available here.

[3] For example, some criminal defendants have been accused of deliberately getting questions wrong on IQ tests in order to qualify as mentally subnormal when before the courts for sentencing, so as to be granted mitigation of sentence on this ground or, more specifically, to evade the death penalty. 

[4] This may be because whites or Asians with such low IQs are more likely to have such impaired cognitive abilities because of underlying conditions (e.g. chromosomal abnormalities, brain damage) that handicap them over and above the deficit reflected in IQ score alone. On the other hand, blacks with similarly low IQs are still within the normal range for their own race. Therefore, rather than suffering from, say, a chromosomal abnormality or brain damage, they are relatively more likely to simply be at the tail-end of the normal range of IQs within their group, and hence normal in other respects.

[5] The term ‘Rawlsian’ is a reference to political theorist John Rawls’s version of social contract theory, whereby he poses the hypothetical question as to what arrangement of political, social and economic affairs humans would favour if placed in what he called the ‘original position’, where they would be unaware, not only of their own race, sex and position in the socio-economic hierarchy, but also, most important for our purposes, of their own level of innate ability. This Rawls referred to as the ‘veil of ignorance’.

[6] The tripartite system did indeed enable many working-class children to achieve a much higher economic status than their parents, although this was partly due to the expansion of the middle-class sector of the economy over the same time-period. It was also later Labour administrations who largely abolished the 11-plus system, not least because, unsurprisingly given the heritability of intelligence and personality, children from middle-class backgrounds tended to do better on it than did children from working-class backgrounds.

References 

Barrick & Mount (1991) The big five personality dimensions and job performance: A meta-analysis. Personnel Psychology 44(1): 1–26. 
Beckford (2008) Working classes ‘lack intelligence to be doctors’, claims academic. Daily Telegraph, 4 June 2008. 
Clark (2008) Higher social classes have significantly higher IQs than working class, claims academic. Daily Mail, 22 May 2008. 
Kruger & Dunning (1999) Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology 77(6): 1121–1134. 
Loehlin et al (1973) Blood group genes and negro-white ability differences. Behavior Genetics 3(3): 263–270. 
Rushton, J.P. (1997) Why The Bell Curve didn’t go far enough on race. In E. White (Ed.), Intelligence, Political Inequality, and Public Policy (pp. 119–140). Westport, CT: Praeger. 
Scarr et al (1977) Absence of a relationship between degree of white ancestry and intellectual skills within a black population. Human Genetics 39(1): 69–86. 
Wheen (2000) The ‘science’ behind racism. Guardian, 10 May 2000. 

Judith Harris’s ‘The Nurture Assumption’: By Parents or Peers

Judith Harris, The Nurture Assumption: Why Children Turn Out the Way They Do. Free Press, 1998.

Almost all psychological traits on which individual humans differ, from personality and intelligence to mental illness, are now known to be substantially heritable. In other words, individual differences in these traits are, at least in part, a consequence of genetic differences between individuals.

This finding is so robust that it has even been termed by Eric Turkheimer the ‘First Law of Behaviour Genetics’, and, although once anathema to most psychologists, save a marginal fringe of behavioural geneticists, it has now, under the sheer weight of evidence produced by the latter, belatedly become the new orthodoxy. 

On reflection, however, this finding is not entirely a revelation. 

After all, it was only in the mid-twentieth century that the curious notion that individual differences were entirely the product of environmental differences first arose, and, even then, this delusion was largely restricted to psychologists, sociologists, feminists and other such ‘professional damned fools’, along with those among the semi-educated public who seek to cultivate an air of intellectualism by aping the former’s affectations. 

Before then, poets, peasants and laypeople alike had long recognized that ability, insanity, temperament and personality all tended to run in families, just as physical traits like stature, complexion, hair and eye colour also do.[1]

However, while the discovery of a heritable component to character and ability merely confirms the conventional wisdom of an earlier age, another behavioural genetic finding, far more surprising and counterintuitive, has passed relatively unreported. 

This is the discovery that the so-called shared family environment (i.e. the environment shared by siblings, or non-siblings, raised in the same family home) actually has next to no effect on adult personality and behaviour. 

This we know from such classic study designs in behavioural genetics as twin studies, adoption studies and family studies.

In short, individuals of a given degree of relatedness, whether identical twins, fraternal twins, siblings, half-siblings or unrelated adoptees, are, by the time they reach adulthood, no more similar to one another in personality or IQ when they are raised in the same household than when they are raised in entirely different households. 

The Myth of Parental Influence 

Yet parental influence has long loomed large in virtually every psychological theory of child development, from the Freudian Oedipus complex and Bowlby’s attachment theory to the whole literary genre of books aimed at instructing anxious parents on how best to raise their children so as to ensure that the latter develop into healthy, functional, successful adults. 

Indeed, not only is the conventional wisdom among psychologists overturned, but so is the conventional wisdom among sociologists – for aspects of the shared family environment include, of course, household income and social class. 

Thus, if the family that a person is brought up in has next to no impact on their psychological outcomes as an adult, then this means that the socioeconomic status of the family home in which they are raised also has no effect. 

Poverty, or a deprived upbringing, then, has no effect on IQ, personality or the prevalence of mental illness, at least by the time a person has reached adulthood.[2]

Neither is it only leftist sociologists who have proved mistaken. 

Thus, just as leftists use economic deprivation as an indiscriminate, catch-all excuse for all manner of social pathology (e.g. crime, unemployment, educational underperformance), so conservatives are apt to place the blame on divorce, family breakdown, having children out of wedlock and the consequential increase in the prevalence of single-parent households. 

However, all these factors are, once again, part of the shared family environment – and according to the findings of behavioural genetics, they have next to no influence on adult personality or intelligence. 

Of course, chaotic or abusive family environments do indeed tend to produce offspring with negative life outcomes. 

However, none of this proves that it was the chaotic or abusive family environment that caused the negative outcomes. 

Rather, another explanation is at hand – perhaps the offspring simply biologically inherit the personality traits of their parents, the very personality traits that caused their family environment to be so chaotic and abusive in the first place.[3] 

For example, parents who divorce or bear offspring out-of-wedlock likely differ in personality from those who first get married then stick together, perhaps being more impulsive or less self-disciplined and conscientious (e.g. less able to refrain from having children from a relationship that was destined to be fleeting, or less able to persevere and make the relationship last). 

Their offspring may, then, simply biologically inherit these undesirable personality attributes, which then themselves lead to the negative social outcomes associated with being raised in single-parent households or broken homes. The association between family breakdown and negative outcomes for offspring might, then, reflect simply the biological inheritance of personality. 

Similarly, as leftists are fond of reminding us, children from economically-deprived backgrounds do indeed have lower recorded IQs and educational attainment than those from more privileged family backgrounds, as well as other negative outcomes as adults (e.g. lower earnings, higher rates of unemployment). 

However, this does not prove that coming from a deprived family background necessarily itself depresses your IQ, educational attainment or future salary. 

Rather, an equally plausible possibility is that offspring simply biologically inherit the low intelligence of their parents – the very low intelligence that was likely a factor causing the parents’ own low socioeconomic status, since intelligence is known to correlate strongly with educational and occupational advancement.[4]

In short, the problem with this body of research purporting to demonstrate the influence of parents and family background on psychological and behavioural outcomes for offspring is that it fails to control for the heritability of personality and intelligence, an obvious confounding factor. 

The Non-Shared Environment

However, not everything is explained by heredity. As a crude but broadly accurate generalization, only about half the variation for most psychological traits is attributable to genes. This leaves about half of the variation in intelligence, personality and mental illness to be explained by environmental factors. 

What are these environmental factors, if they are not to be sought in the shared family environment? 

The obvious answer is, of course, the non-shared family environment – i.e. the ways in which even children brought up in the same family-home nevertheless experience different micro-environments, both within the home and, perhaps more importantly, outside it. 

Thus, even the fairest and most even-handed parents inevitably treat their different offspring differently in some ways.  

Indeed, among the principal reasons why parents treat their different offspring differently is precisely because the different offspring themselves differ in their own behaviour quite independently of any parental treatment.

This is well illustrated by the question of the relationship between corporal punishment and behaviour in children.

Corporal punishment 

Rather than differences in the behaviour of different children resulting from differences in how their parents treat them, it may be that differences in how parents treat their children may reflect responses to differences in the behaviour of the children themselves. 

In other words, the psychologists have the direction of causation precisely backwards. 

Take, for example, one particularly controversial issue, namely the physical chastisement of children by their parents as a punishment for bad behaviour (e.g. spanking). 

Some psychologists have argued that physical chastisement actually causes misbehaviour. 

As evidence, they cite the fact that children who are spanked more often by their parents or caregivers on average actually behave worse than those whose caregivers only rarely or never spank the children entrusted to their care.  

This, they claim, is because, in employing spanking as a form of discipline, caregivers are inadvertently imparting the message that violence is a good way of solving your problems. 

Actually, however, I suspect children are more than capable of working out for themselves that violence is often an effective means of getting your way, at least if you have superior physical strength to your adversary. Unfortunately, this is something that, unlike reading, arithmetic and long division, does not require explicit instruction by teachers or parents. 

Instead, a more obvious explanation for the correlation between spanking and misbehaviour in children is not that spanking causes misbehaviour, but rather that misbehaviour causes spanking. 

Indeed, once you think about it, this is in fact rather obvious: If a child never seriously misbehaves, then a parent likely never has any reason to spank that child, even if the parent is, in principle, a strict disciplinarian; whereas, on the other hand, a highly disobedient child is likely to try the patience of even the most patient caregiver, whatever his or her moral opposition to physical chastisement in principle. 

In other words, causation runs in exactly the opposite direction to that assumed by the naïve psychologists.[5] 

Another factor may also be at play – namely, offspring biologically inherit from their parents the personality traits that cause both the misbehaviour and the punishment. 

In other words, parents with aggressive personalities may be more likely to lose their temper and physically chastise their children, while children who inherit these aggressive personalities are themselves more likely to misbehave, not least by behaving in an aggressive or violent manner. 

However, even if parents treat their different offspring differently owing to the different behaviour of the offspring themselves, this is not the sort of environmental factor capable of explaining the residual non-shared environmental effects on offspring outcomes. 

After all, this merely begs the question: what caused these differences in offspring behaviour in the first place? 

If the differences in offspring behaviour exist prior to differences in parental responses to this behaviour, then these differences cannot be explained by the differences in parental responses.  

Peer Groups 

This brings us back to the question of the environmental causes of offspring outcomes – namely, if about half the differences among children’s IQs and personalities are attributable to environmental factors, but these environmental factors are not to be found in the shared family environment (i.e. the environment shared by children raised in the same household), then where are these environmental factors to be sought? 

The search for environmental factors affecting personality and intelligence has, thus far, been largely unsuccessful. Indeed, some behavioural geneticists have almost gone as far as conceding scholarly defeat in identifying correlates for the environmental portion of the variance. 

Thus, leading contemporary behavioural geneticist Robert Plomin in his recent book, Blueprint: How DNA Makes Us Who We Are, concludes that those environmental factors that affect cognitive ability, personality, and the development of mental illness are, as he puts it, ‘unsystematic’ in nature. 

In other words, he seems to be saying that they are mere random noise. This is tantamount to accepting that the null hypothesis is true. 

Judith Harris, however, has a quite different take. According to Harris, environmental causes must be sought, not within the family home, but rather outside it – in a person’s interactions with their peer-group and the wider community.[6]

Environment ≠ Nurture 

Thus, Harris argues that the so-called nature-nurture debate is misnamed, since the word ‘nurture’ usually refers to deliberate care and moulding of a child (or of a plant or animal). But many environmental effects are not deliberate. 

Thus, Harris repeatedly references behaviourist John B. Watson’s infamous boast: 

Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select—doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.

Yet what strikes me as particularly preposterous about Watson’s boast is not its radical environmental determinism, nor even its rather convenient unfalsifiability.[7] 

Rather, what strikes me as most preposterous about Watson’s claim is its frankly breath-taking arrogance. 

Thus, Watson not only insisted that environment alone entirely determined adult personality; in this same quotation, he also proclaimed that he already understood the nature of these environmental effects so fully that, given omnipotent powers to match his evidently already omniscient understanding of human development, he could produce any outcome he wished. 

Yet, in reality, environmental effects are anything but clear-cut. Pushing a child in a certain direction, or into a certain career, may sometimes have the desired effect, but other times may seemingly have the exact opposite effect to that desired, provoking the child to rebel against parental dictates. 

Thus, even to the extent that environment does determine outcomes, the precise nature of the environmental factors implicated, and their interaction with one another, and with the child’s innate genetic endowment, is surely far more complex than the simple mechanisms proposed by behaviourists like Watson (e.g. reinforcement and punishment). 

Language Acquisition 

The most persuasive evidence for Harris’s theory of the importance of peer groups comes from an interesting and widely documented peculiarity of language acquisition. 

The children of immigrants, whose parents speak a different language inside the family home, and may even themselves be monolingual, nevertheless typically grow up to speak the language of their host culture rather better than they do the language to which they were first exposed in the family home. 

Indeed, while their parents may never achieve fluency in the language of their host culture, having missed out on the Chomskian critical period for language acquisition, their children often actually lose the ability to speak their parents’ language, often much to the consternation of parents and grandparents. 

Yet, from a sociobiological or evolutionary psychological perspective, such an outcome is obviously adaptive. 

After all, if a child is to succeed in wider society, they must master its language, whereas, if their parents’ first language is not spoken anywhere in their host society except in their family, then it is of limited utility, and, once their parents themselves become proficient in the language of the host culture, it becomes entirely redundant.

As sociologist-turned-sociobiologist Pierre van den Berghe observes in his excellent The Ethnic Phenomenon (reviewed here):

Children quickly discover that their home language is a restricted medium that is not useable in most situations outside the family home. When they discover that their parents are bilingual they conclude – rightly for their purposes – that the home language is entirely redundant… Mastery of the new language entails success at school, at work and in ‘the world’… [against which] the smiling approval of a grandmother is but slender counterweight” (The Ethnic Phenomenon: p258). 

Code-Switching 

Harris suggests that the same applies to personality. Just as the children of immigrants switch between one language at home and another at school, so they also adopt different personalities in each setting. 

Thus, many parents are surprised to be told by their children’s teachers at parents’ evenings that their offspring is quiet and well-behaved at school, since, they report, he or she isn’t at all like that at home. 

Yet, at home, a child has only, at most, a sibling or two with whom to compete for his parents’ attention. In contrast, at school, he or she has a whole class with whom to compete for their teacher’s attention.

It is therefore unsurprising that most children are less outgoing at school than they are at home with their parents. 

For example, an older sibling might be able to push his little brother around at home. But, if he is small for his age, he is unlikely to be able to get away with the same behaviour among his peers at school. 

Children therefore adopt two quite different personalities – one for interactions with family and siblings, and another for interactions with their peers.

This then, for Harris, explains why, perhaps surprisingly, birth-order has generally been found to have little if any effect on personality, at least as personality manifests itself outside the family home. 

An Evolutionary Theory of Socialization? 

Interestingly, even evolutionary psychologists have not been immune from the delusion of parental influence. Thus, in one influential paper, anthropologists Patricia Draper and Henry Harpending argued that offspring calibrate their reproductive strategy by reference to the presence or absence of a father in their household (Draper & Harpending 1982). 

On this view, being raised in a father-absent household is indicative of a social environment where low male parental investment is the norm, and hence offspring adjust their own reproductive strategy accordingly, adopting a promiscuous, low-investment mating strategy characterized by precocious sexual development and an inability to maintain lasting long-term relationships (Draper & Harpending 1982; Belsky et al 1991). 

There is indeed, as these authors amply demonstrate, a consistent correlation between father-absence during development and both earlier sexual development and more frequent partner-switching in later life. 

Yet there is also another, arguably more obvious, explanation readily at hand to explain this association. Perhaps offspring simply inherit biologically the personality traits, including sociosexual orientation, of their parents. 

On this view, offspring raised in single-parent households are more likely to adopt a promiscuous, low-investment mating strategy simply because they biologically inherit the promiscuous sociosexual orientation of their parents, the very promiscuous sociosexual orientation that caused the latter to have children out-of-wedlock or from relationships that were destined to break down and hence caused the father-absent childhood of their offspring. 

Moreover, even on purely a priori theoretical grounds, Draper, Harpending and Belsky’s reasoning is dubious. 

After all, whether you personally were raised in a one- or two-parent family is obviously a very unreliable indicator of the sorts of relationships prevalent in the wider community into which you are born, since it represents a sample size of just one. 

Instead, therefore, it would be far more reliable to calibrate your reproductive strategy in response to the prevalence of one-parent households in the wider community at large, rather than the particular household type into which you happen to have been born.  

This, of course, directly supports Harris’s own theory of ‘peer group socialization’. 

In short, to the extent that children do adapt to the environment and circumstances of their upbringing (and they surely do), they must integrate into, adopt the norms of, and a reproductive strategy to maximize their fitness within, the wider community into which they are born, rather than the possibly quite idiosyncratic circumstances and attitudes of their own family. 

Absent Fathers, from Upper-Class to Under-Class 

Besides language-acquisition among the children of immigrants, another example cited by Harris in support of her theory of ‘peer group socialization’ is the culture, behaviours and upbringing of British upper-class males.

Here, she reports, boys were, and, to some extent, still are, reared primarily, not by their parents, but rather by nannies and governesses and, more recently, in exclusive fee-paying all-male boarding schools. 

Yet, despite having next to no contact with their fathers throughout most of their childhood, these boys nevertheless managed somehow to acquire manners, attitudes and accents similar, if not identical, to those of their upper-class fathers, and not at all those of the middle-class nannies, governesses and masters with whom they spent most of their childhood being raised. 

Yet this phenomenon is by no means restricted to the British upper-classes.

On the contrary, rather than citing the example of the British upper-classes in centuries gone by, Harris might just as well have cited that of the contemporary underclass in Britain and America, since what was once true of the British upper-classes is now equally true of the underclass. 

Just as the British upper-classes were once raised by governesses and nannies and in private schools with next to no contact with their fathers, so contemporary underclass males are similarly raised in single-parent households, often to unwed mothers, and typically have little if any contact with their biological fathers. 

Here, as Warren Farrell observes in his seminal The Myth of Male Power (which I have reviewed here, here and here), there is now “a new nuclear family: woman, government and child”, what Farrell terms “Government as a Substitute Husband”. 

Yet, once again, these underclass males, raised by single parents with the financial assistance of the taxpayer, typically turn out much like their absent fathers with whom they have had little if any contact, often going on to promiscuously father a succession of offspring themselves, with whom they likewise have next to no contact. 

Abuse 

But what of actual abuse? Surely this has a long-term devastating psychological impact on children. This, at any rate, is the conventional wisdom, and questioning this wisdom, at least with respect to sexual abuse, is tantamount to contemporary heresy, with attendant persecution. 

Thus, for example, it is claimed that criminals who are abusive towards their children were themselves almost invariably abused, mistreated or neglected as children, and that this is what has led to their own abusive behaviour.

A particularly eloquent expression of this theory is found in the novel Clockers, by Richard Price, where one of the lead characters, a police officer, explains how, during his first few years on the job, a senior colleague had restrained him from attacking an abusive mother who had left her infant son handcuffed to a radiator, telling him:

Rocco, that lady you were gonna brain? Twenty years ago when she was a little girl. I arrested her father for beating her baby brother to death. The father was a piece of shit. Now that she’s all grown up? She’s a real piece of shit. That kid you saved today. If he lives that long, if he grows up? He’s gonna be a real piece of shit. It’s the cycle of shit and you can’t do nothing about it” (Clockers: p96).

Take, for example, what is perhaps the form of child abuse that provokes the most outrage and disgust – namely, sexual abuse. Here, it is frequently asserted that paedophiles were almost invariably themselves abused as children, creating a so-called cycle of abuse. 

However, there are at least three problems with this claim. 

First, it cannot explain how the first person in this cycle came to be abusive. 

Second, we might doubt whether it is really true that paedophiles are disproportionately likely to have themselves been abused as children. After all, abuse is something that almost invariably happens surreptitiously ‘behind closed doors’ and is therefore difficult to verify or disprove. 

Therefore, even if most paedophiles claim to have been victims of abuse, it is possible that they are simply lying in order to elicit sympathy or excuse or shift culpability for their own offending. 

Finally, and most importantly for present purposes, even if paedophiles can be shown to be disproportionately likely to have themselves been victimized as children, this by no means proves that their past victimization caused their current sexual orientation. 

Rather, since most abuse is perpetrated by parents or other close family members, an alternative possibility is that victims simply biologically inherit the sexual orientation of their abuser.

After all, if homosexuality is partially heritable, as is now widely accepted, then why not paedophilia as well? 

In short, the ‘cycle of shit’ referred to by Price’s fictional police officer may well be real, but mediated by genetics rather than childhood experience.

However, this conclusion is not beyond question. On the contrary, Harris is at pains to emphasize that the finding that the shared family environment accounts for hardly any of the variance in outcomes among adults does not preclude the possibility that severe abuse may indeed have an adverse effect on adult outcomes. 

After all, adoption studies can only tell us what percent of the variance is caused by heredity or by shared or unshared environments within a specific population as a whole.

Perhaps the shared family environment accounts for so little of the variance precisely because the sort of severe abuse that does indeed have a devastating long-term effect on personality and mental health is, thankfully, so very rare in modern societies. 

Indeed, it may be especially rare within the families sampled in adoption studies precisely because adoptive families are carefully screened for suitability before being allowed to adopt. 

Moreover, Harris emphasizes an important caveat: Even if abuse does not have long-term adverse psychological effects, this does not mean that abuse causes no harm, and nor does it in any way excuse such abuse. 

On the contrary, the primary reason we shouldn’t mistreat children (and should severely punish those who do) is not on account of some putative long-term psychological effect on the adults whom the children subsequently become, but rather because of the very real pain and suffering inflicted on a child at the time the abuse takes place. 

Race Differences in IQ 

Finally, Harris even touches upon that most vexed area of the (so-called) nature-nurture debate – race differences in intelligence

Here, the politically-correct claim that differences in intelligence between racial groups, as recorded in IQ tests, are of purely environmental origin runs into a problem. The sorts of environmental effects usually posited by environmental determinists as accounting for the black-white test-score gap in America (e.g. differences in rates of poverty and socioeconomic status) have been shown to be inadequate, because, even after controlling for these factors, a gap in test scores remains unaccounted for.[8]

Thus, as Arthur R. Jensen laments: 

This gives rise to the hypothesizing of still other, more subtle environmental factors that either have not been or cannot be measured—a history of slavery, social oppression, and racial discrimination, white racism, the ‘black experience,’ and minority status consciousness [etc]” (Straight Talk About Mental Tests: p223). 

The problem with these explanations, however, is that none of these factors has yet been demonstrated to have any effect on IQ scores. 

Moreover, some of the factors proposed as explanations are formulated in such a vague form (e.g. “white racism, the ‘black experience’”) that it is difficult to conceive of how they could ever be subjected to controlled testing in the first place.[9]

Jensen has termed this mysterious factor the X-factor. 

In coining this term, Jensen was emphasizing its vague, mysterious and unfalsifiable nature. Jensen did not actually believe that this posited X-factor, whatever it was, really did account for the test-score gap. Rather, he thought heredity explained most, if not all, of the remaining unexplained test-score gap. 

However, Harris takes Jensen at his word, treating the search for the X-factor very seriously. Indeed, she apparently believes she has discovered and identified it. Thus, she announces: 

I believe I know what this X factor is… I can describe it quite clearly. Black kids and white kids identify with different groups that have different norms. The differences are exaggerated by group contrast effects and have consequences that compound themselves over the years. That’s the X factor” (p248-9). 

Unfortunately, Harris does not really develop this fascinating claim. Indeed, she cites no direct evidence in support of it, and evidently regards the alternative possibility – namely, that race differences in intelligence are at least partly genetic in origin – as so unpalatable that it can safely be ruled out a priori.

In fact, however, although not discussed by Harris, there is at least some evidence in support of her theory. Indeed, her theory potentially reconciles the apparently conflicting findings of two of the most widely-cited studies in this vexed area of research and debate.

First, in the more recent of these two studies, the Minnesota Transracial Adoption Study, the same race differences in IQ were observed among black, white and mixed-race children adopted into upper-middle-class white families as are found among black, white and mixed-race populations in the community at large (Scarr & Weinberg 1976). 

Moreover, although, when tested during childhood, the children’s adoptive households did seem to have had a positive effect on their IQ scores, in a follow-up study it was found that by the time they reached the cusp of adulthood, the black teenagers who had been adopted into upper-middle-class white homes actually scored no higher in IQ than did blacks in the wider population not raised in upper-middle class white families (Weinberg, Scarr & Waldman 1992). 

Although Scarr, Weinberg and Waldman took pains to present their findings as compatible with a purely environmentalist theory of race differences, this study has, not unreasonably, been widely cited by hereditarians as evidence for the existence of innate racial differences in intelligence (e.g. Levin 1994; Lynn 1994; Whitney 1996).

However, in the light of the findings of the behavioural genetics studies discussed by Harris in ‘The Nurture Assumption’, the fact that white upper-middle-class adoptive homes had no effect on the adult IQs of the black children adopted into them is, in fact, hardly surprising. 

After all, as we have seen, the shared family environment generally has no effect on IQ, at least by the time the person being tested has reached adulthood.[10]

One would therefore not expect adoptive homes, howsoever white and upper-middle-class, to have any effect on adult IQs of the black children adopted into them, or indeed of the white or mixed-race children adopted into them. 

In short, adoptive homes have no effect on adult IQ, whether or not the adoptees, or adoptive families, are black, white, brown, yellow, green or purple! 

But, if race differences in intelligence are indeed entirely environmental in origin, then where are these environmental causes to be found, if not in the family environment? 

Harris has an answer – black culture. 

According to her, the black adoptees, although raised in white adoptive families, nevertheless still come to identify as ‘black’, and to identify with the wider black culture and social norms. In addition, they may, on account of their racial identification, come to socialize with other blacks in school and elsewhere. 

As a result of this acculturation to African-American norms and culture, they therefore, according to Harris, come to score lower in IQ than their white peers and adoptive siblings. 

But how can we ever test this theory? Is it not untestable, and is this not precisely the problem identified by Jensen with previously posited X-factors? 

Actually, however, although not discussed by Harris, there is a way of testing this theory – namely, looking at the IQs of black children raised in white families where there is no wider black culture with which to identify, and few if any black peers with whom to socialize.

This, then, brings us to the second of the two studies which Harris’s theory potentially reconciles, namely the famous Eyferth study.  

Here, it was found that the mixed-race children fathered by black American servicemen who had had sexual relationships with German women during the Allied occupation of Germany after World War Two had almost exactly the same average IQ scores as a control group of offspring fathered by white US servicemen during the same time period (Eyferth 1959). 

The crucial difference from the Minnesota study may be that these children, raised in an almost entirely monoracial, white Germany in the mid-twentieth century, had no wider African-American culture with which to identify or whose norms to adopt, and few if any black or mixed-race peers in their vicinity with whom to socialize. 

This, then, is perhaps the last lifeline for a purely environmentalist theory of race differences in intelligence – namely the theory that African-American culture depresses intelligence. 

Unfortunately, however, this proposition – namely, that African-American culture depresses your IQ – is almost as politically unpalatable and politically-incorrect as is the notion that race differences in intelligence reflect innate genetic differences.[11]

Endnotes

[1] Thus, this ancient wisdom is reflected, for example, in many folk sayings, such as the apple does not fall far from the tree, a chip off the old block and like father, like son, many of which long predate both Darwin’s theory of evolution and Mendel’s work on heredity, let alone the modern work of behavioural geneticists.

[2] It is important to emphasize here that this applies only to psychological outcomes, and not, for example, economic outcomes. Thus, a child raised by wealthy parents is indeed likely to be wealthier than one raised in poverty, if only because s/he is likely to inherit (some of) the wealth of his parents. It is also possible that s/he may, on average, obtain a better job as a consequence of the opportunities opened by his privileged upbringing. However, his IQ will be no higher than had s/he been raised in relative poverty, and neither will s/he be any more or less likely to suffer from a mental illness.

[3] Similarly, it is often claimed that children raised in care homes, or in foster care, tend to have negative life-outcomes. However, again, this by no means proves that it is care homes or foster care that causes these negative life-outcomes. On the contrary, since children who end up in foster care are typically either abandoned by their biological parents, or forcibly taken from their parents by social services on account of the inadequate care provided by the latter, or sometimes outright abuse, it is obvious that their parents represent an unrepresentative sample of society as a whole. An obvious alternative explanation, then, is that the children in question simply inherit the dysfunctional personality attributes of their biological parents, namely the very dysfunctional personality attributes that caused the latter to either abandon their children or have them removed by the social services. (In other cases, such children may have been orphaned. However, this is less common today. At any rate, parents who die before their offspring reach maturity are surely also unrepresentative of parents in general. For example, many may live high-risk lifestyles that contribute to their early deaths.)

[4] Likewise, the heritability of such personality traits as conscientiousness and self-discipline, in addition to intelligence, likely also partly account for the association between parental income and academic attainment among their offspring, since both academic attainment, and occupational success, require the self-discipline to work hard to achieve success. These factors, again in addition to intelligence, likely also contribute to the association between parental income and the income and socioeconomic status ultimately attained by their offspring.

[5] This possibility could, at least in theory, be ruled out by longitudinal studies, which could investigate whether the spanking preceded the misbehaviour, or vice versa. However, this is easier said than done: unless relying on reports by the caregivers or children themselves, which depend on both the memory and honesty of the reporters, such a study would have to involve intensive, long-term, continuous observation in order to establish which came first, the pattern of misbehaviour or the adoption of physical chastisement as a method of discipline. This would presumably require continuous observation from birth onwards, so as to ensure that the very first instance of spanking or excessive misbehaviour was recorded. Such a study would seem all but impossible and certainly, to my knowledge, has yet to be conducted.

[6] The fact that the relevant environmental variables must be sought outside the family home is one reason why the terms ‘between-family environment’ and ‘within-family environment’, sometimes used as synonyms or alternatives for ‘shared’ and ‘non-shared family environment’ respectively, are potentially misleading. Thus, the ‘within-family environment’ refers to those aspects of the environment that differ for different siblings even within a single family. However, these factors may differ within a single family precisely because they occur outside, not within, the family itself. The terms ‘shared’ and ‘non-shared family environment’ are therefore to be preferred, so as to avoid any potential confusion these alternative terms could cause.

[7] Both practical and ethical considerations, of course, prevent Watson from actually creating his “own specified world” in which to bring up his “dozen healthy infants”. Therefore, no one is able to put his claim to the test. It is therefore unfalsifiable and Watson is therefore free to make such boasts, safe in the knowledge that there is no danger of his actually being called to make good on his claims and thereby proven wrong.

[8] Actually, even if race differences in IQ are found to disappear after controlling for socioeconomic status, it would be a fallacy to conclude that this means that the differences in IQ are entirely a result of differences in social class and that there is no innate difference in intelligence between the races. After all, differences in socioeconomic status are in large part a consequence of differences in cognitive ability, as more intelligent people perform better at school, and at work, and hence rise in socioeconomic status. Therefore, in controlling for socioeconomic status, one is, in effect, also controlling for differences in intelligence, since the two are so strongly correlated. The contrary assumption has been termed by Jensen ‘the sociologist’s fallacy’.
This fallacy involves the assumption that it is differences in socioeconomic status that cause differences in IQ, rather than differences in intelligence that cause differences in socioeconomic status. As Arthur Jensen explains it:

If SES [i.e. socioeconomic status] were the cause of IQ, the correlation between adults’ IQ and their attained SES would not be markedly higher than the correlation between children’s IQ and their parents’ SES. Further, the IQs of adolescents adopted in infancy are not correlated with the SES of their adoptive parents. Adults’ attained SES (and hence their SES as parents) itself has a large genetic component, so there is a genetic correlation between SES and IQ, and this is so within both the white and the black populations. Consequently, if black and white groups are specially selected so as to be matched or statistically equated on SES, they are thereby also equated to some degree on the genetic component of IQ” (The g Factor: p491).

[9] Actually, at least some of these theories are indeed testable and potentially falsifiable. With regard to the factors quoted by Jensen (namely, “a history of slavery, social oppression, and racial discrimination, white racism… and minority status consciousness”), one way of testing these theories is to look at test scores in those countries where there is no such history. For example, in sub-Saharan Africa, as well as in Haiti and Jamaica, blacks are in the majority, and are moreover in control of the government. Yet the IQ scores of the indigenous populations of sub-Saharan Africa are actually even lower than among blacks in the USA (see Richard Lynn’s Race Differences in Intelligence: reviewed here). True, most such countries still have a history of racial oppression and discrimination, albeit in the form of European colonialism rather than racial slavery or segregation in the American sense. However, in those few sub-Saharan African countries that were not colonized by western powers, or only briefly colonized (e.g. Ethiopia, Liberia), scores are not any higher. Also, other minority groups ostensibly or historically subject to racial oppression and discrimination (e.g. Ashkenazi Jews, Overseas Chinese) actually score higher in IQ than the host populations that ostensibly oppress them. As for “the ‘black experience’”, this merely begs the question as to why the ‘black experience’ has been so similar, and resulted in the same low IQs, in so many different parts of the world, something implausible unless the ‘black experience’ itself reflects innate aspects of black African psychology. 

[10] The fact that the heritability of intelligence is higher in adulthood than during childhood, and the influence of the shared family environment correspondingly decreases, has been interpreted as reflecting the fact that, during childhood, our environments are shaped, to a considerable extent, by our parents. For example, some parents may encourage activities that may conceivably enhance intelligence, such as reading books and visiting museums. In contrast, as we enter adulthood, we begin to have the freedom to choose and shape our own environments in accordance with our interests, which may be partly a reflection of our heredity.
Interestingly, this theory suggests that what is biologically inherited is not necessarily intelligence itself, but rather a tendency to seek out intelligence-enhancing environments, i.e. intellectual curiosity rather than intelligence as such. In fact, it is probably a mixture of both factors. Moreover, intellectual curiosity is surely strongly correlated with intelligence, if only because it requires a certain level of intelligence to appreciate intellectual pursuits, since, if one lacks the ability to learn or understand complex concepts, then intellectual pursuits are necessarily unrewarding.

[11] Thus, ironically, the recently deceased James Flynn, though always careful, throughout his career, to remain on the politically-correct radical environmentalist side of the debate with regard to the causes of race differences in intelligence, nevertheless found himself taken to task by the leftist, politically-correct British Guardian newspaper for a sentence in his book, Does Your Family Make You Smarter?, where he described American blacks as coming from a “cognitively restricted subculture” (Wilby 2016). Thus, whether one attributes lower black IQs to biology or to culture, either answer is certain to offend leftists, and the demands of political correctness can, it seems, never be appeased.

References 

Belsky, Steinberg & Draper (1991) Childhood Experience, Interpersonal Development, and Reproductive Strategy: An Evolutionary Theory of Socialization. Child Development 62(4): 647–670

Draper & Harpending (1982) Father Absence and Reproductive Strategy: An Evolutionary Perspective. Journal of Anthropological Research 38(3): 255–273

Eyferth (1959) Eine Untersuchung der Neger-Mischlingskinder in Westdeutschland. Vita Humana 2: 102–114

Levin (1994) Comment on Minnesota Transracial Adoption Study. Intelligence 19: 13–20

Lynn (1994) Some reinterpretations of the Minnesota Transracial Adoption Study. Intelligence 19: 21–27

Scarr & Weinberg (1976) IQ test performance of black children adopted by White families. American Psychologist 31(10): 726–739

Weinberg, Scarr & Waldman (1992) The Minnesota Transracial Adoption Study: A follow-up of IQ test performance at adolescence. Intelligence 16: 117–135

Whitney (1996) Shockley’s experiment. Mankind Quarterly 37(1): 41–60

Wilby (2016) Beyond the Flynn effect: New myths about race, family and IQ? Guardian, September 27.