In Chapter 6, Gould provides a brief overview of the science of factor analysis. As a process of “attempting to discover ‘underlying’ structure in large matrices of data,” factor analysis reveals positive, negative, and zero correlations between different measurements (268). These correlations (relationships) are measured by the correlation coefficient r; however, Gould notes that correlation shows only a relationship and does not indicate cause (270, 273). Using this mathematical tool, researchers can take many separate measurements and reduce them to fewer dimensions (the first and second principal components) in order to simplify relationships and arrive at a reasonable explanation of results through inference (275, 280). Gould also notes that the first principal component is a “mathematical abstraction” and “not a ‘thing’ with physical reality” (280).
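To make these ideas concrete, the short sketch below (not from Gould’s text) uses Python with NumPy, a choice of tool made here for illustration, and invented test scores to compute the correlation coefficient r between measurements and to extract the first two principal components of the correlation matrix.

```python
# A minimal sketch of correlation and principal components; all data are
# invented for illustration and none of the numbers come from Gould's book.
import numpy as np

rng = np.random.default_rng(0)

# Simulated scores for 100 people on 4 hypothetical tests, built so that the
# tests are positively correlated (they share a common component).
common = rng.normal(size=100)
tests = np.column_stack([common + rng.normal(size=100) for _ in range(4)])

# Correlation coefficient r between each pair of tests (values in [-1, 1]);
# a nonzero r shows a relationship but says nothing about cause.
R = np.corrcoef(tests, rowvar=False)

# Principal components: eigenvectors of the correlation matrix, ordered by
# the variance (eigenvalue) each accounts for.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

first_pc = eigvecs[:, 0]   # a weighted mix of the tests: an abstraction, not a "thing"
second_pc = eigvecs[:, 1]
print("share of variance on the first principal component:", eigvals[0] / eigvals.sum())
```

The first principal component here is exactly the kind of mathematical abstraction Gould describes: a weighted combination of the original measurements, with no guarantee that any physical entity corresponds to it.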
Gould then introduces Cyril Burt (1883-1971), an educational psychologist and advocate for the statistical analysis of IQ tests. Both Burt and his mentor and predecessor, Charles Spearman, recognized the importance of factor analysis in “provid[ing] a theoretical justification” for their belief in heritable and measurable intelligence (269). It was Spearman, however, who first identified the general factor (g), which he claimed was the “unitary quality underlying all cognitive mental ability” (281). Gould notes that Spearman’s g attempts to serve as a measurement of intelligence; however, since g can be interpreted in “either a purely hereditarian or purely environmental way,” its use as a measure of intelligence is problematic (282). Moreover, since the information captured by g can be redistributed across two rotated axes, the single general factor can be made to disappear altogether (285).
Gould then presents a closer examination of Charles Spearman (1863-1945), a psychologist and statistician who posited a “two-factor” theory of intelligence testing, whereby the variance on any individual test reflects both a factor specific to that test (s) and an underlying general factor of intelligence (g) that “would reduce to a single underlying entity” (287). In his original work, Spearman used a method for analyzing the correlations among any four measures, which he called the “tetrad difference” (288). According to this method, a tetrad value of zero indicates the existence of a single general factor. Seeking evidence for the existence of g, Spearman determined that its best measure would be “the average score for a large collection of individual tests of the most diverse kind” (294). After identifying “a single abstract factor” for g, Spearman attempted to find a physical “form of energy” for g (296). By the publication of his last book, Spearman had stepped back from his efforts to locate g in a physical form.
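Spearman’s tetrad criterion can be illustrated with a short, hedged example (the loadings below are invented, and the use of Python is a choice made here, not Spearman’s). If a single general factor underlies four tests, each correlation is the product of the two tests’ loadings on that factor, and the tetrad difference works out to zero.

```python
# Invented loadings of four tests on one hypothetical general factor g.
import numpy as np

loadings = np.array([0.9, 0.8, 0.7, 0.6])

# Correlations implied by a pure one-factor model: r_ij = a_i * a_j for i != j.
R = np.outer(loadings, loadings)
np.fill_diagonal(R, 1.0)

# Tetrad difference for tests (1, 2, 3, 4): r13 * r24 - r14 * r23.
tetrad = R[0, 2] * R[1, 3] - R[0, 3] * R[1, 2]
print(tetrad)  # 0.0 (up to rounding), consistent with a single general factor
```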
In hereditarian theories, Spearman’s g represents the identification of intelligence as a “thing,” with the inference that it resides in a physical form. While Spearman shared the basic hereditarian views that the strength of a person’s g “reflects heredity alone” and that blacks performed most poorly on tests of “innate general intelligence,” he was not interested in using g to prove differences among groups of people based on gender, class, or race (300, 301).
Gould then reintroduces Cyril Burt and his belief in hereditarianism. In his studies of intelligence, Burt discounted the influence of environmental factors. Instead, he correlated children’s intelligence with parental intelligence and concluded that upper-class boys were “smarter,” while low-achieving students possessed “innate stupidity” (305, 310). However, when his research questions focused on the less controversial topics of juvenile delinquency and left-handedness, Burt was able to identify possible environmental factors for measured differences.
While Burt did not apply heritable intelligence to groups by race or gender, he was a firm believer that inherited intelligence was reflected in a person’s social class. In his discussion of the factors of intelligence, Burt referenced three possibilities: 1) intelligence as a mathematical abstraction, 2) intelligence as physically identifiable properties within the brain, and 3) intelligence as categories of thought possessing a “psychic reality” (322).
In furthering Spearman’s work on factor analysis, Burt expanded Spearman’s g and s to include additional “group factors” (additional specialized abilities) and “accidental factors” (attributes of a trait measured only once) (316, 317). He also added a technique for calculating correlations between individual people rather than between tests (Q-mode factor analysis) (315). Politically, this work contributed to the testing and separation of British students into university-bound (20%) and non-university (80%) tracks.
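The difference between Burt’s Q-mode technique and the ordinary (R-mode) approach can be sketched in a few lines of Python; the data below are invented and serve only to show which direction of the data matrix gets correlated.

```python
# R-mode vs. Q-mode factor analysis on an invented score matrix.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(size=(30, 6))  # 30 hypothetical people x 6 hypothetical tests

# R-mode: correlate the tests with one another across people (6 x 6 matrix).
R_mode = np.corrcoef(scores, rowvar=False)

# Q-mode: correlate the people with one another across tests (30 x 30 matrix).
Q_mode = np.corrcoef(scores, rowvar=True)

print(R_mode.shape, Q_mode.shape)  # (6, 6) (30, 30)
```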
Gould then examines the work of L.L. Thurstone (1887-1955), a professor of psychology at the University of Chicago and the man responsible for problematizing Spearman’s g. Thurstone’s primary disagreement concerned the placement of Spearman and Burt’s factor axes; he argued that tests measure real “faculties of the mind,” and that these faculties should not vary in position regardless of which tests are used to measure them (327). In Spearman’s model, by contrast, g shifts depending on the type of tests included in the sample. In addition, Thurstone disagreed with Burt’s claim that group factors could have negative projections, since “a test could not have a negative projection upon a real vector of mind” (328).
To address these issues, Thurstone proposed calculating the Spearman-Burt principal components and “rotat[ing] them to different positions” until they matched the positions of actual clusters of vectors. The result was Thurstone’s “simple structure,” which redistributes the same information and implies that intelligence tests measure “a small number of independent” primary mental abilities (PMAs), abilities he emphasized were rooted in “inborn biology” (331, 336).
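What rotation does can be shown with a small sketch; the factor loadings below are invented, and the rotation angle is chosen by hand rather than by Thurstone’s simple-structure criterion, so this illustrates the idea rather than his exact procedure. Rotating the axes changes which factors the tests appear to load on while leaving the correlations the factors reproduce unchanged.

```python
# Rotating factor axes redistributes loadings without changing the fit.
import numpy as np

# Invented loadings of six tests on two unrotated principal factors:
# every test loads strongly on the first factor, a Spearman-style g pattern.
L = np.array([
    [0.7,  0.5],
    [0.7,  0.4],
    [0.6,  0.5],
    [0.7, -0.5],
    [0.6, -0.4],
    [0.7, -0.5],
])

theta = np.deg2rad(-45)                      # hand-picked angle for this example
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
L_rot = L @ Q   # after rotation, each cluster of tests loads mainly on one axis

# The reproduced correlations are identical either way: same data, two pictures.
print(np.allclose(L @ L.T, L_rot @ L_rot.T))  # True
```

Both placements of the axes account for the correlations equally well; the dispute is over which picture, if either, deserves to be treated as real.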
Thurstone thus introduced a second school of factor analysis. With Spearman’s g, individuals could be tested and “ranked on a single scale of innate intelligence”; with Thurstone’s PMAs, individuals could be shown to excel in “different and independent qualities of mind” (334). In response, Spearman and Burt claimed that Thurstone had merely produced “an alternative mathematics for the same data” (339). In later years, Thurstone relaxed his original criterion of perpendicular axes and allowed oblique axes, a concession that led to the introduction of a “second-order g” (343).
Gould concludes the chapter with a reference to Arthur Jensen, America’s best-known modern hereditarian. In his writings, Jensen does not treat Thurstone’s work as a criticism of Spearman’s g, in part because of the later addition of a “second-order g” (349). According to Jensen’s reading of Spearman and Thurstone, g remains a means of ranking human intelligence and of interpreting the statistical difference in IQ between blacks and whites as “an innate deficiency of intelligence among blacks” (350).
This controversy between Spearman and Thurstone underscores the difficulty of validating the hereditarian theory of innate and measurable intelligence. Both Spearman and Thurstone used factor analysis as a means of accounting for numerous features of measurable intelligence. At the same time, their theories show once again how the idea of human intelligence mutates and transforms under the microscope.
As Gould writes, factor analysis is “a mathematical technique for reducing a complex system of correlations into fewer dimensions” (275). In his own work on the evolution of fossil reptiles, Gould used this tool to reduce fourteen different bone measurements to a single dimension representing increasing reptile body size. When the same tool is applied to measuring intelligence, however, Gould emphasizes that correlation does not necessarily imply causation, and that the measurements researchers have used to date (mental tests) have not proven to be definitive measures, since “the cause [of intelligence] is certainly complex” (282).
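As a rough sketch of the kind of reduction Gould describes, the Python snippet below simulates fourteen correlated measurements (the numbers are invented, not Gould’s fossil data) and projects them onto a single first principal component that tracks overall size.

```python
# Reducing fourteen simulated measurements to one "size" dimension.
import numpy as np

rng = np.random.default_rng(2)

# 50 hypothetical specimens; each measurement grows with an underlying size
# variable plus noise, so all fourteen measurements are positively correlated.
size = rng.normal(loc=10.0, scale=2.0, size=50)
bones = np.column_stack([size * rng.uniform(0.5, 1.5) + rng.normal(scale=0.5, size=50)
                         for _ in range(14)])

# Standardize, then take the first principal component of the correlation matrix.
Z = (bones - bones.mean(axis=0)) / bones.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
first_pc = eigvecs[:, -1]          # eigh sorts ascending, so the last column is largest
size_score = Z @ first_pc          # one number per specimen: the "size" dimension

# The single component closely tracks the underlying size variable it summarizes.
print(abs(np.corrcoef(size_score, size)[0, 1]))
```

Here the first component has an obvious physical reading, since specimens really do differ in size; Gould’s point is that no comparable physical referent has been demonstrated for a first component extracted from mental tests.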
Spearman’s g attempts to capture a “thing” called hereditary intelligence, and yet, as Gould asks, how can any number be “more real than the superficial measurements themselves” (282)? With L.L. Thurstone’s argument that vectors of the mind must remain in invariant positions, a completely new factor-analytic model of heritable intelligence is introduced. It is a theory just as plausible as Spearman’s, yet it defines heritable intelligence not as a single g but as a set of primary mental abilities (PMAs). Which of these models is correct? As Gould notes:
If the same data can be fit into two such different mathematical schemes, how can we say with assurance that one represents reality and the other a diversionary tinkering? Perhaps both views of reality are wrong, and their mutual failure lies in their common error: a shared belief in the reification of factors (340).
The search for a g or for PMAs is, in and of itself, an academic exercise. It is only when a researcher uses these theories to justify the existence of innate, hereditary intelligence, with all that this implies for beliefs, actions, and policies toward actual groups of people, that Gould’s denunciation of reification comes into play. Arthur Jensen claims that g supports his theory that “the average difference in IQ between whites and blacks records an innate deficiency of intelligence among blacks” (350). As Gould reiterates, the argument for heritable intelligence inevitably justifies a belief about groups of people that has yet to be proven scientifically valid.