About The Book

The controversial book linking intelligence to class and race in modern society, and what public policy can do to mitigate socioeconomic differences in IQ, birth rate, crime, fertility, welfare, and poverty.

Excerpt

Chapter 1

Cognitive Class and Education, 1900-1990

In the course of the twentieth century, America opened the doors of its colleges wider than any previous generation of Americans, or any other society in history, could have imagined possible. This democratization of higher education has raised new barriers between people that may prove to be more divisive and intractable than the old ones.

The growth in the proportion of people getting college degrees is the most obvious result, with a fifteen-fold increase from 1900 to 1990. Even more important, the students going to college were being selected ever more efficiently for their high IQ. The crucial decade was the 1950s, when the percentage of top students who went to college rose by more than it had in the preceding three decades. By the beginning of the 1990s, about 80 percent of all students in the top quartile of ability continued to college after high school. Among the high school graduates in the top few percentiles of cognitive ability the chances of going to college already exceeded 90 percent.

Perhaps the most important of all the changes was the transformation of America's elite colleges. As more bright youngsters went off to college, the colleges themselves began to sort themselves out. Starting in the 1950s, a handful of institutions became magnets for the very brightest of each year's new class. In these schools, the cognitive level of the students rose far above the rest of the college population.

Taken together, these trends have stratified America according to cognitive ability.


A perusal of Harvard's Freshman Register for 1952 shows a class looking very much as Harvard freshman classes had always looked. Under the photographs of the well-scrubbed, mostly East Coast, overwhelmingly white and Christian young men were home addresses from places like Philadelphia's Main Line, the Upper East Side of New York, and Boston's Beacon Hill. A large proportion of the class came from a handful of America's most exclusive boarding schools; Phillips Exeter and Phillips Andover alone contributed almost 10 percent of the freshmen that year.

And yet for all its apparent exclusivity, Harvard was not so hard to get into in the fall of 1952. An applicant's chances of being admitted were about two out of three, and close to 90 percent if his father had gone to Harvard. With this modest level of competition, it is not surprising to learn that the Harvard student body was not uniformly brilliant. In fact, the mean SAT-Verbal score of the incoming freshman class was only 583, well above the national mean but nothing to brag about. Harvard men came from a range of ability that could be duplicated in the top half of many state universities.

Let us advance the scene to 1960. Wilbur J. Bender, Harvard's dean of admissions, was about to leave his post and trying to sum up for the board of overseers what had happened in the eight years of his tenure. "The figures," he wrote, "report the greatest change in Harvard admissions, and thus in the Harvard student body, in a short time -- two college generations -- in our recorded history." Unquestionably, suddenly, but for no obvious reason, Harvard had become a different kind of place. The proportion of the incoming students from New England had dropped by a third. Public school graduates now outnumbered private school graduates. Instead of rejecting a third of its applicants, Harvard was rejecting more than two-thirds -- and the quality of those applicants had increased as well, so that many students who would have been admitted in 1952 were not even bothering to apply in 1960.

The SAT scores at Harvard had skyrocketed. In the fall of 1960, the average verbal score was 678 and the average math score was 695, an increase of almost a hundred points for each test. The average Harvard freshman in 1952 would have placed in the bottom 10 percent of the incoming class by 1960. In eight years, Harvard had been transformed from a school primarily for the northeastern socioeconomic elite into a school populated by the brightest of the bright, drawn from all over the country.

The story of higher education in the United States during the twentieth century is generally taken to be one of the great American success stories, and with good reason. The record was not without blemishes, but the United States led the rest of the world in opening college to a mass population of young people of ability, regardless of race, color, creed, gender, and financial resources.

But this success story also has a paradoxically shadowy side, for education is a powerful divider and classifier. Education affects income, and income divides. Education affects occupation, and occupations divide. Education affects tastes and interests, grammar and accent, all of which divide. When access to higher education is restricted by class, race, or religion, these divisions cut across cognitive levels. But school is in itself, more immediately and directly than any other institution, the place where people of high cognitive ability excel and people of low cognitive ability fail. As America opened access to higher education, it opened up as well a revolution in the way that the American population sorted itself and divided itself. Three successively more efficient sorting processes were at work: the college population grew, it was recruited by cognitive ability more efficiently, and then it was further sorted among the colleges.

THE COLLEGE POPULATION GROWS

A social and economic gap separated high school graduates from college graduates in 1900 as in 1990; that much is not new. But the social and economic gap was not accompanied by much of a cognitive gap, because the vast majority of the brightest people in the United States had not gone to college. We may make that statement despite the lack of IQ scores from 1900 for the same reason that we can make such statements about Elizabethan England: It is true by mathematical necessity. In 1900, only about 2 percent of 23-year-olds got college degrees. Even if all of the 2 percent who went to college had IQs of 115 and above (and they did not), seven out of eight of the brightest 23-year-olds in the America of 1900 would have been without college degrees. This situation barely changed for the first two decades of the new century. Then, at the close of World War I, the role of college for American youths began an expansion that would last until 1974, interrupted only by the Great Depression and World War II.

The three lines in the figure show trends established in 1920-1929, 1935-1940, and 1954-1973, then extrapolated. They are there to highlight the three features of the figure worth noting. First, the long perspective serves as a counterweight to the common belief that the college population exploded suddenly after World War II. It certainly exploded in the sense that the number of college students went from a wartime trough to record highs, but this is because two generations of college students were crowded onto campuses at one time. In terms of trendlines, World War II and its aftermath was a blip, albeit a large blip. When this anomalous turmoil ended in the mid-1950s, the proportion of people getting college degrees was no higher than would have been predicted from the trends established in the 1920s or the last half of the 1930s (which are actually a single trend interrupted by the worst years of the depression).

The second notable feature of the figure is the large upward tilt in the trendline from the mid-1950s until 1974. That it began when it did -- the Eisenhower years -- comes as a surprise. The GI Bill's impact had faded and the postwar baby boom had not yet reached college age. Presumably postwar prosperity had something to do with it, but the explanation cannot be simple. The slope remained steep in periods as different as Eisenhower's late 1950s, LBJ's mid-1960s, and Nixon's early 1970s.

After 1974 came a peculiar plunge in college degrees that lasted until 1981 -- peculiar because it occurred when the generosity of scholarships and loans, from colleges, foundations, and government alike, was at its peak. This period of declining graduates was then followed by a steep increase from 1981 to 1990 -- also peculiar, in that college was becoming harder to afford for middle-class Americans during those years. As of 1990, the proportion of students getting college degrees had more than made up for the losses during the 1970s and had established a new record, with B.A.s and B.S.s being awarded in such profusion that they amounted to 30 percent of the 23-year-old population.

MAKING GOOD ON THE IDEAL OF OPPORTUNITY

At first glance, we are telling a story of increasing democracy and intermingling, not of stratification. Once upon a time, the college degree was the preserve of a tiny minority; now almost a third of each new cohort of youths earns it. Surely, it would seem, this must mean that a broader range of people is going to college -- including people with a broader, not narrower, range of cognitive ability. Not so. At the same time that many more young people were going to college, they were also being selected ever more efficiently by cognitive ability.

A compilation of the studies conducted over the course of the century suggests that the crucial decade was the 1950s. The next figure shows the data for the students in the top quartile (the top 25 percent) in ability and is based on the proportion of students entering college (though not necessarily finishing) in the year following graduation from high school.

Again, the lines highlight trends set in particular periods, here 1925-1950 and 1950-1960. From one period to the next, the proportion of bright students getting to college leaped to new heights. There are two qualifications regarding this figure. First, it is based on high school graduates -- the only data available over this time period -- and therefore drastically understates the magnitude of the real change from the 1920s to the 1960s and thereafter, because so many of the top quartile in ability never made it through high school early in the century (see Chapter 6). It is impossible to be more precise with the available data, but a reasonable estimate is that as of the mid-1920s, only about 15 percent of all of the nation's youth in the top IQ quartile were going on to college. It is further the case that almost all of those moving on to college in the 1920s were going to four-year colleges, and this leads to the second qualification to keep in mind: By the 1970s and 1980s, substantial numbers of those shown as continuing to college were going to junior colleges, which are on average less demanding than four-year colleges. Interpreting all the available data, it appears that the proportion of all American youth in the top IQ quartile who went directly to four-year colleges rose from roughly one youth in seven in 1925 to about two out of seven in 1950 to more than four out of seven in the early 1960s, where it has remained, with perhaps a shallow upward trend, ever since.

But it is not just that the top quartile of talent has been more efficiently tapped for college. At every level of cognitive ability, the links between IQ and the probability of going to college became tighter and more regular. The next figure summarizes three studies that permit us to calculate the probability of going to college throughout the ability range over the last seventy years. Once again we are restricted to high school graduates for the 1925 data, which overstates the probability of going to college during this period. Even for the fortunate few who got a high school degree in 1925, high cognitive ability improved their chances of getting to college -- but not by much. The brightest high school graduates had almost a 60 percent chance of going to college, which means that they had more than a 40 percent chance of not going, despite having graduated from high school and being very bright. The chances of college for someone merely in the 80th percentile in ability were no greater than those of classmates at the 50th percentile, and only slightly greater than those of classmates in the bottom third of the class.

Between the 1920s and the 1960s, the largest change in the probability of going to college was at the top of the cognitive ability distribution. By 1960, a student who was really smart -- at or near the 100th percentile in IQ -- had a chance of going to college of nearly 100 percent. Furthermore, as the figure shows, going to college had gotten more dependent on intelligence at the bottom of the distribution, too. A student at the 30th percentile had only about a 25 percent chance of going to college -- lower than it had been for high school graduates in the 1920s. But a student in the 80th percentile had a 70 percent chance of going to college, well above the proportion in the 1920s.

The line for the early 1980s is based on students who graduated from high school between 1980 and 1982. The data are taken from the National Longitudinal Survey of Youth (NLSY), which will figure prominently in the chapters ahead. Briefly, the NLSY is a very large (originally 12,686 persons), nationally representative sample of American youths who were aged 14 to 22 in 1979, when the study began, and have been followed ever since. (The NLSY is discussed more fully in the introduction to Part II.) The curve is virtually identical to that from the early 1960s, which is in itself a finding of some significance in the light of the many upheavals that occurred in American education in the 1960s and 1970s.

Didn't Equal Opportunity in Higher Education Really Open Up During the 1960s?

The conventional wisdom holds that the revolution in higher education occurred in the last half of the 1960s, as part of the changes of the Great Society, especially its affirmative action policies. We note here that the proportion of youths going to college rose about as steeply in the 1950s as in the 1960s, as shown in the opening figure in this chapter and the accompanying discussion. Chapter 19 considers the role played by affirmative action in the changing college population of recent decades.

Meanwhile, the sorting process continued in college. College weeds out many students, disproportionately the least able. The figure below shows the situation as of the 1980s. The line for students entering college reproduces the one shown in the preceding figure. The line for students completing the B.A. shows an even more efficient sorting process. A high proportion of people with poor test scores -- more than 20 percent of those in the second decile (between the 10th and 20th centile), for example -- entered a two- or four-year college. But fewer than 2 percent of them actually completed a bachelor's degree. Meanwhile, about 70 percent of the students in the top decile of ability were completing a B.A.

So a variety of forces have combined to ensure that a high proportion of the nation's most able youths got into the category of college graduates. But the process of defining a cognitive elite through education is not complete. The socially most significant part of the partitioning remains to be described. In the 1950s, American higher education underwent a revolution in the way that it sorted the college population itself.

THE CREATION OF A COGNITIVE ELITE WITHIN THE COLLEGE SYSTEM

The experience of Harvard with which we began this discussion is a parable for the experience of the nation's university system. Insofar as many more people now go to college, the college degree has become more democratic during the twentieth century. But as it became democratic, a new elite was developing even more rapidly within the system. From the early 1950s into the mid-1960s, the nation's university system not only became more efficient in bringing the bright youngsters to college, it became radically more efficient at sorting the brightest of the bright into a handful of elite colleges.

The Case of the Ivy League and the State of Pennsylvania: The 1920s Versus the 1960s

Prior to World War II, America had a stratum of elite colleges just as it has now, with the Ivy League being the best known. Then as now, these schools attracted the most celebrated faculty, had the best libraries, and sent their graduates on to the best graduate schools and to prestigious jobs. Of these elite schools, Harvard was among the most famous and the most selective. But what was true of Harvard then was true of the other elite schools. They all had a thin layer of the very brightest among their students but also many students who were merely bright and a fair number of students who were mediocre. They tapped only a fragment of the cognitive talent in the country. The valedictorian in Kalamazoo and the Kansas farm girl with an IQ of 140 might not even be going to college at all. If they did, they probably went to the nearest state university or to a private college affiliated with their church.

One of the rare windows on this period is provided by two little-known sources of test score data. The first involves the earliest SATs, which were first administered in 1926. As part of that effort, a standardized intelligence test was also completed by 1,080 of the SAT subjects. In its first annual report, a Commission appointed by the College Entrance Examination Board provided a table for converting the SAT of that era to IQ scores. Combining that information with reports of the mean SAT scores for entrants to schools using the SAT, we are able to approximate the mean IQs of the entering students to the Ivy League and the Seven Sisters, the most prestigious schools in the country at that time.

Judging from this information, the entering classes of these schools in 1926 had a mean IQ of about 117, which places the average student at the most selective schools in the country at about the 88th percentile of all the nation's youths and barely above the 115 level that has often been considered the basic demarcation point for prime college material.
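That percentile can be checked directly (a quick calculation, assuming IQ is normally distributed on the conventional scale of mean 100 and standard deviation 15):

\[
z = \frac{117 - 100}{15} \approx 1.13, \qquad \Phi(1.13) \approx 0.87
\]

which places a mean IQ of 117 at roughly the 87th-88th percentile of the population, consistent with the figure in the text.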

In the same year as these SAT data were collected, the Carnegie Foundation began an ambitious statewide study of high school seniors and their college experience in the entire state of Pennsylvania. By happy coincidence, the investigators used the same form of the Otis Intelligence Test used by the SAT Commission. Among other things, they reported mean test scores for the sophomore classes at all the colleges and universities in Pennsylvania in 1928. Pennsylvania was (then as now) a large state with a wide variety of public and private schools, small and large, prestigious and pedestrian. The IQ equivalent of the average of all Pennsylvania colleges was 107, which put the average Pennsylvania student at the 68th percentile, considerably below the average of the elite schools. But ten Pennsylvania colleges had freshman classes with mean IQs that put them at the 75th to 90th percentiles. In other words, students going to any of several Pennsylvania colleges were, on average, virtually indistinguishable in cognitive ability from the students in the Ivy League and the Seven Sisters.

Now let us jump to 1964, the first year for which SAT data for a large number of Pennsylvania colleges are available. We repeat the exercise, this time using the SAT-Verbal test as the basis for analysis. Two important changes had occurred since 1928. The average freshman in a Pennsylvania college in 1964 was much smarter than the average Pennsylvania freshman in 1928 -- at about the 89th percentile. At the same time, however, the elite colleges, using the same fourteen schools represented in the 1928 data, had moved much further out toward the edge, now boasting an average freshman who was at the 99th percentile of the nation's youth.

Cognitive Stratification Throughout the College System by the 1960s

The same process occurred around the country, as the figure below shows. We picked out colleges with freshman SAT-Verbal means that were separated by roughly fifty-point intervals as of 1961. The specific schools named are representative of those clustering near each break point. At the bottom is a state college in the second echelon of a state system (represented by Georgia Southern); then comes a large state university (North Carolina State), then five successively more selective private schools: Villanova, Tulane, Colby, Amherst, and Harvard. We have placed the SAT scores against the backdrop of the overall distribution of SAT scores for the entire population of high school seniors (not just those who ordinarily take the SAT), using a special study that the College Board conducted in the fall of 1960. The figure points to the general phenomenon already noted for Harvard: By 1961, a large gap separated the student bodies of the elite schools from those of the public universities. Within the elite schools, another and significant level of stratification had also developed.

As the story about Harvard indicated, the period of this stratification seems to have been quite concentrated, beginning in the early 1950s. It remains to explain why. What led the nation's most able college age youth (and their parents) to begin deciding so abruptly that State U. was no longer good enough and that they should strike out for New Haven or Palo Alto instead?

If the word democracy springs to your tongue, note that democracy -- at least in the economic sense -- had little to do with it. The Harvard freshman class of 1960 comprised fewer children from low-income families, not more, than the freshman class in 1952. And no wonder. Harvard in 1950 had been cheap by today's standards: total costs for a year there were only $8,800 in 1990 dollars, as parents of today's college students will be saddened to learn. By 1960, total costs had risen to $12,200 in 1990 dollars, a hefty 40 percent increase. According to the guidelines of the times, the average family could, if it stretched, afford to spend 20 percent of its income to send a child to Harvard. Seen in that light, the proportion of families who could afford Harvard decreased slightly during the 1950s. Scholarship help increased but not fast enough to keep pace.

Nor had Harvard suddenly decided to maximize the test scores of its entering class. In a small irony of history, the Harvard faculty had decided in 1960 not to admit students purely on the basis of academic potential as measured by tests but to consider a broader range of human qualities. Dean Bender explained why, voicing his fears that Harvard would "become such an intellectual hot-house that the unfortunate aspects of a self-conscious 'intellectualism' would become dominant and the precious, the brittle and the neurotic take over." He asked a very good question indeed: "In other words, would being part of a super-elite in a high prestige institution be good for the healthy development of the ablest 18- to 22-year-olds, or would it tend to be a warping and narrowing experience?" In any case, Harvard in 1960 continued, as it had in the past and would in the future, to give weight to such factors as the applicant's legacy (was the father a Harvard alum?), his potential as a quarterback or stroke for the eight-man shell, and other nonacademic qualities.

The baby boom had nothing to do with the change. The leading edge of the baby boomer tidal wave was just beginning to reach the campus by 1960.

So what had happened? With the advantage of thirty additional years of hindsight, two trends stand out more clearly than they did in 1960.

First, the 1950s were the years in which television came of age and long-distance travel became commonplace. Their effects on attitudes toward college choices can only be estimated, but they were surely significant. For students coming East from the Midwest and West, the growth of air travel and the interstate highway system made travel to school faster for affluent families and cheaper for less affluent ones. Other effects may have reflected the decreased psychic distance of Boston from parents and prospective students living in Chicago or Salt Lake City, because of the ways in which the world had become electronically smaller.

Second, the 1950s saw the early stages of an increased demand that results not from proportional changes in wealth but from an expanding number of affluent customers competing for scarce goods. Price increases for a wide variety of elite goods have outstripped changes in the consumer price index or changes in mean income in recent decades, sometimes by orders of magnitude. The costs of Fifth Avenue apartments, seashore property, Van Gogh paintings, and rare stamps are all examples. Prices have risen because demand has increased and supply cannot grow. In the case of education, new universities are built, but not new Princetons, Harvards, Yales, or Stanfords. And though the proportion of families with incomes sufficient to pay for a Harvard education did not increase significantly during the 1950s, the raw number did. Using the 20-percent-of-family-income rule, the number of families that could afford Harvard increased by 184,000 from 1950 to 1960. Using a 10 percent rule, the number increased by 55,000. Only a small portion of these new families had children applying to college, but the number of slots in the freshman classes of the elite schools was also small. College enrollment increased from 2.1 million students in 1952 to 2.6 million by 1960, meaning a half-million more competitors for available places. It would not take much of an increase in the propensity to seek elite educations to produce a substantial increase in the annual applications to Harvard, Yale, and the others.

We suspect also that the social and cultural forces unleashed by World War II played a central role, but probing them would take us far afield. Whatever the combination of reasons, the basics of the situation were straightforward: By the early 1960s, the entire top echelon of American universities had been transformed. The screens filtering their students from the masses had not been lowered but changed. Instead of the old screen -- woven of class, religion, region, and old school ties -- the new screen was cognitive ability, and its mesh was already exceedingly fine.

Changes Since the 1960s

There have been no equivalent sea changes since the early 1960s, but the concentration of top students at elite schools has intensified. As of the early 1990s, Harvard did not get four applicants for each opening, but closer to seven, highly self-selected and better prepared than ever. Competition for entry into the other elite schools has stiffened comparably.

Philip Cook and Robert Frank have drawn together a wide variety of data documenting the increasing concentration. There are, for example, the Westinghouse Science Talent Search finalists. In the 1960s, 47 percent went to the top seven colleges (as ranked in the Barron's list that Cook and Frank used). In the 1980s, that proportion had risen to 59 percent, with 39 percent going to just three colleges (Harvard, MIT, and Princeton). Cook and Frank also found that from 1979 to 1989, the percentage of students scoring over 700 on the SAT-Verbal who chose one of the "most competitive colleges" increased from 32 to 43 percent.

The degree of partitioning off of the top students as of the early 1990s has reached startling proportions. Consider the list of schools that were named as the nation's top twenty-five large universities and the top twenty-five small colleges in a well-known 1990 ranking. Together, these fifty schools accounted for just 59,000 out of approximately 1.2 million students who entered four-year institutions in the fall of 1990 -- fewer than one out of twenty of the nation's freshmen in four-year colleges. But they took in twelve out of twenty of the students who scored in the 700s on their SAT-Verbal test. They took in seven out of twenty of students who scored in the 600s.

The concentration is even more extreme than that. Suppose we take just the top ten schools, as ranked by the number of their freshmen who scored in the 700s on the SAT-Verbal. Now we are talking about schools that enrolled a total of only 18,000 freshmen, one out of every sixty-seven nationwide. Just these ten schools -- Harvard, Yale, Stanford, University of Pennsylvania, Princeton, Brown, University of California at Berkeley, Cornell, Dartmouth, and Columbia -- soaked up 31 percent of the nation's students who scored in the 700s on the SAT-Verbal. Harvard and Yale alone, enrolling just 2,900 freshmen -- roughly 1 out of every 400 freshmen -- accounted for 10 percent. In other words, scoring above 700 is forty times more concentrated in the freshman classes at Yale and Harvard than in the national SAT population at large -- and the national SAT population is already a slice off the top of the distribution.
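The forty-fold figure follows directly from the numbers just given: Harvard and Yale enroll about 1 out of every 400 freshmen (0.25 percent) yet account for 10 percent of the 700-plus scorers, and

\[
\frac{0.10}{1/400} = 0.10 \times 400 = 40.
\]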

HOW HIGH ARE THE PARTITIONS?

We have spoken of "cognitive partitioning" through education, which implies separate bins into which the population has been distributed. But there has always been substantial intellectual overlap across educational levels, and that remains true today. We are trying to convey a situation that is as much an ongoing process as an outcome. But before doing so, the time has come for the first of a few essential bits of statistics: the concepts of distribution and standard deviation. If you are new to statistics, we recommend that you read the more detailed explanation in Appendix 1; you will enjoy the rest of the book more if you do.

A Digression: Standard Deviations and Why They Are Important

Very briefly, a distribution is the pattern formed by many individual scores. The famous "normal distribution" is a bell-shaped curve, with most people getting scores in the middle range and a few at each end, or "tail," of the distribution. Most mental tests are designed to produce normal distributions.

A standard deviation is a common language for expressing scores. Why not just use the raw scores (SAT points, IQ points, etc.)? There are many reasons, but one of the simplest is that we need to compare results on many different tests. Suppose you are told that a horse is sixteen hands tall and a snake is a quarter of a rod long. Not many people can tell from that information how the height of the horse compares to the length of the snake. If instead people use inches for both, there is no problem. The same is true for statistics. The standard deviation is akin to the inch, an all-purpose measure that can be used for any distribution. Suppose we tell you that Joe has an ACT score of 24 and Tom has an SAT-Verbal of 720. As in the case of the snake and the horse, you need a lot of information about those two tests before you can tell much from those two numbers. But if we tell you instead that Joe has an ACT score that is .7 standard deviation above the mean and Tom has an SAT-Verbal that is 2.7 standard deviations above the mean, you know a lot.

How big is a standard deviation? For a test distributed normally, a person whose score is one standard deviation below the mean is at the 16th percentile. A person whose score is a standard deviation above the mean is at the 84th percentile. Two standard deviations from the mean mark the 2nd and 98th percentiles. Three standard deviations from the mean mark roughly the bottom and top thousandth of a distribution. Or, in short, as a measure of distance from the mean, one standard deviation means "big," two standard deviations means "very big," and three standard deviations means "huge." Standard deviation is often abbreviated "SD," a convention we will often use in the rest of the book.
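These conversions between standard deviations and percentiles are easy to check. A minimal sketch in Python, assuming a normal distribution (the figures for Joe and Tom are the hypothetical ones from the example above):

    from scipy.stats import norm

    def pct(z):
        """Percentile rank of a score z standard deviations from the mean,
        assuming the scores are normally distributed."""
        return 100 * norm.cdf(z)

    print(pct(-1), pct(1))   # ~15.9 and ~84.1: one SD below/above the mean
    print(pct(-2), pct(2))   # ~2.3 and ~97.7: two SDs out
    print(pct(-3), pct(3))   # ~0.13 and ~99.87: roughly the bottom/top thousandth
    print(pct(0.7))          # Joe's ACT, .7 SD above the mean: ~76th percentile
    print(pct(2.7))          # Tom's SAT-Verbal, 2.7 SDs above: ~99.7th percentile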

Understanding How the Partitions Have Risen

The figure below summarizes the situation as of 1930, after three decades of expansion in college enrollment but before the surging changes of the decades to come. Each distribution represents people age 23, and the area under it is proportional to that group's share of the national population of such people. The vertical lines denote the mean score for each distribution. Around them are drawn normal distributions -- bell curves -- expressed in terms of standard deviations from the mean.

It is easy to see from the figure above why cognitive stratification was only a minor part of the social landscape in 1930. At any given level of cognitive ability, the number of people without college degrees dwarfed the number who had them. College graduates and the noncollege population did not differ much in IQ. And even the graduates of the top universities (an estimate based on the Ivy League data for 1928) had IQs well within the ordinary range of ability.

The comparable picture sixty years later, based on our analysis of the NLSY, is shown in the next figure, again depicted as normal distributions. Note that the actual distributions may deviate from perfect normality, especially out in the tails.

The college population has grown a lot while its mean IQ has risen a bit. Most bright people were not going to college in 1930 (or earlier) -- waiting on the bench, so to speak, until the game opened up to them. By 1990, the noncollege population, drained of many bright youngsters, had shifted downward in IQ. While the college population grew, the gap between college and noncollege populations therefore also grew. The largest change, however, has been the huge increase in the intelligence of the average student in the top dozen universities, up a standard deviation and a half from where the Ivies and the Seven Sisters were in 1930. One may see other features in the figure evidently less supportive of cognitive partitioning. Our picture suggests that for every person within the ranks of college graduates, there is another among those without a college degree who has just as high an IQ -- or at least almost. And as for the graduates of the dozen top schools, while it is true that their mean IQ is extremely high (designated by the +2.7 SDs to which the line points), they are such a small proportion of the nation's population that they do not even register visually on this graph, and they too are apparently outnumbered by people with similar IQs who do not graduate from those colleges, or do not graduate from college at all. Is there anything to be concerned about? How much partitioning has really occurred?

Perhaps a few examples will illustrate. Think of your twelve closest friends or colleagues. For most readers of this book, a large majority will be college graduates. Does it surprise you to learn that, were the dozen chosen at random from the American population, the odds of even half of them being college graduates would be only six in a thousand? Many of you will not think it odd that half or more of the dozen have advanced degrees. But the odds against finding such a result among a randomly chosen group of twelve Americans are actually more than a million to one. Are any of the dozen a graduate of Harvard, Stanford, Yale, Princeton, Cal Tech, MIT, Duke, Dartmouth, Cornell, Columbia, University of Chicago, or Brown? The chance that even one is a graduate of those twelve schools is one in a thousand. The chance of finding two among that group is one in fifty thousand. The chance of finding four or more is less than one in a billion.
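Those odds are ordinary binomial calculations. A minimal sketch in Python for the first of them, assuming a base rate of about 16 percent of adults holding a college degree around 1990 (the base rate is our assumption; the text does not say what figure the authors used):

    from scipy.stats import binom

    p = 0.16                         # assumed share of adults with a degree
    prob = 1 - binom.cdf(5, 12, p)   # P(at least 6 of 12 are graduates)
    print(prob)                      # ~0.006, i.e., about six in a thousand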

Most readers of this book -- this may be said because we know a great deal about the statistical tendencies of people who read a book like this -- are in preposterously unlikely groups, and this reflects the degree of partitioning that has already occurred.

In some respects, the results of the exercise today are not so different from the results that would have been obtained in former years. Sixty years ago as now, the people most likely to read a book of this nature would be skewed toward those whose friends had college educations, Ivy League educations, and advanced degrees. The differences between 1930 and 1990 are these:

First, only a small portion of the 1930 population was in a position to have the kind of circle of friends and colleagues that characterizes the readers of this book. We will not try to estimate the proportion, which would involve too many assumptions, but you may get an idea by examining the small area under the curve for college graduates in the 1930 figure and visualizing some fraction of that area as representing the people in 1930 who could conceivably have had the educational circle of friends and colleagues you have. They constituted the thinnest cream floating on the surface of American society in 1930. In 1990, they constituted a class.

Second, the people who obtained such educations changed. Suppose that it is 1930 and you are one of the small number of people whose circle of twelve friends and colleagues included a sizable fraction of college graduates. Suppose you are one of the even tinier number whose circle came primarily from the top universities. Your circle, selective and uncommon as it is, nonetheless will have been scattered across a wide range of intelligence, with IQs from 100 on up. Given the same educational profile in one's circle today, it would consist of a set of people with IQs where the bottom tenth is likely to be in the vicinity of 120, and the mean is likely to be in excess of 130 -- people whose cognitive ability puts them out at the edge of the population at large. What might have been a circle with education or social class as its most salient feature in 1930 has become a circle circumscribing a narrow range of high IQ scores today.

The sword cuts both ways. Although they are not likely to be among our readers, the circles at the bottom of the educational scale comprise lower and narrower ranges of IQ today than they did in 1930. When many youngsters in the top 25 percent of the intelligence distribution who formerly would have stopped school in or immediately after high school go to college instead, the proportion of high-school-only persons whose intelligence is in the top 25 percent of the distribution has to fall correspondingly. The occupational effect of this change is that bright youngsters who formerly would have become carpenters or truck drivers or postal clerks go to college instead, thence to occupations higher on the socioeconomic ladder. Those left on the lower rungs are therefore likely to be lower and more homogeneous intellectually. Likewise their neighborhoods, which get drained of the bright and no longer poor, have become more homogeneously populated by a less bright, and even poorer, residuum. In other chapters we focus on what is happening at the bottom of the distribution of intelligence.

The point of the exercise in thinking about your dozen closest friends and colleagues is to encourage you to detach yourself momentarily from the way the world looks to you from day to day and contemplate how extraordinarily different your circle of friends and acquaintances is from what would be the norm in a perfectly fluid society. This profound isolation from other parts of the IQ distribution probably dulls our awareness of how unrepresentative our circle actually is.

With these thoughts in mind, let us proceed to the technical answer to the question, How much partitioning is there in America? It is done by expressing the overlap of two distributions after they are equated for size. There are various ways to measure overlap. Here we use a measure called median overlap, which says what proportion of IQ scores in the lower-scoring group matched or exceeded the median score in the higher-scoring group. For the nationally representative NLSY sample, most of whom attended college in the late 1970s and through the 1980s, there is by this measure only about 7 percent overlap between people with only a high school diploma and people with a B.A. or M.A. And even this small degree of overlap refers to all colleges. If you went to any of the top hundred colleges and universities in the country, the measure of overlap would be a few percentage points. If you went to an elite school, the overlap would approach zero.
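A minimal sketch of how a median-overlap figure can be computed, using simulated scores rather than the NLSY data (the group means below are hypothetical, chosen only so that the example reproduces an overlap of roughly 7 percent):

    import numpy as np

    def median_overlap(lower, higher):
        """Share of the lower-scoring group at or above the median
        of the higher-scoring group."""
        return np.mean(np.asarray(lower) >= np.median(higher))

    rng = np.random.default_rng(0)
    hs_only = rng.normal(100, 15, 100_000)   # hypothetical high-school-only group
    college = rng.normal(122, 15, 100_000)   # hypothetical B.A./M.A. group
    print(median_overlap(hs_only, college))  # ~0.07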

Even among college graduates, the partitions are high. Only 21 percent of those with just a B.A. or a B.S. had scores as high as the median for those with advanced graduate degrees. Once again, these degrees of overlap are for graduates of all colleges. The overlap between the B.A. from a state teachers' college and an MIT Ph.D. can be no more than a few percentage points.

What difference does it make? The answer to that question will unfold over the course of the book. Many of the answers involve the ways that the social fabric in the middle class and working class is altered when the most talented children of those families are so efficiently extracted to live in other worlds. But for the time being, we can begin by thinking about that thin layer of students of the highest cognitive ability who are being funneled through rarefied college environments, whence they go forth to acquire eventually not just the good life but often an influence on the life of the nation. They are coming of age in environments that are utterly atypical of the nation as a whole. The national percentage of 18-year-olds with the ability to get a score of 700 or above on the SAT-Verbal test is in the vicinity of one in three hundred. Think about the consequences when about half of these students are going to universities in which 17 percent of their classmates also had SAT-Vs in the 700s and another 48 percent had scores in the 600s. It is difficult to exaggerate how different the elite college population is from the population at large -- first in its level of intellectual talent, and correlatively in its outlook on society, politics, ethics, religion, and all the other domains in which intellectuals, especially intellectuals concentrated into communities, tend to develop their own conventional wisdoms.
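To put those two figures on the same scale (our arithmetic, using only the numbers just cited): one in three hundred is about 0.33 percent, so a freshman class in which 17 percent scored in the 700s is about fifty times as dense with such students as the nation's 18-year-olds as a whole:

\[
\frac{0.17}{1/300} = 0.17 \times 300 \approx 51.
\]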

The news about education is heartening and frightening, more or less in equal measure. Heartening, because the nation is providing a college education for a high proportion of those who could profit from it. Among those who graduate from high school, just about all the bright youngsters now get a crack at a college education. Heartening also because our most elite colleges have opened their doors wide for youngsters of outstanding promise. But frightening too. When people live in encapsulated worlds, it becomes difficult for them, even with the best of intentions, to grasp the realities of worlds with which they have little experience but over which they also have great influence, both public and private. Many of those promising undergraduates are never going to live in a community where they will be disabused of their misperceptions, for after education comes another sorting mechanism, occupations, and many of the holes that are still left in the cognitive partitions begin to get sealed. We now turn to that story.

Copyright © 1994 by Richard J. Herrnstein and Charles Murray

About The Authors

Richard J. Herrnstein held the Edgar Pierce Chair in Psychology at Harvard University until his death in 1994.

Product Details

  • Publisher: Free Press (February 28, 1996)
  • Length: 912 pages
  • ISBN13: 9780684824291

Raves and Reviews

Michael Novak National Review Our intellectual landscape has been disrupted by the equivalent of an earthquake.

David Brooks The Wall Street Journal Has already kicked up more reaction than any social-science book this decade.

Peter Brimelow Forbes Long-awaited...massive, meticulous, minutely detailed, clear. Like Darwin's Origin of Species -- the intellectual event with which it is being seriously compared -- The Bell Curve offers a new synthesis of research...and a hypothesis of far-reaching explanatory power.

Milton Friedman This brilliant, original, objective, and lucidly written book will force you to rethink your biases and prejudices about the role that individual difference in intelligence plays in our economy, our policy, and our society.

Chester E. Finn, Jr. Commentary The Bell Curve's implications will be as profound for the beginning of the new century as Michael Harrington's discovery of "the other America" was for the final part of the old. Richard Herrnstein's bequest to us is a work of great value. Charles Murray's contribution goes on.

Prof. Thomas J. Bouchard Contemporary Psychology [The authors] have been cast as racists and elitists and The Bell Curve has been dismissed as pseudoscience....The book's message cannot be dismissed so easily. Herrnstein and Murray have written one of the most provocative social science books published in many years....This is a superbly written and exceedingly well documented book.

Christopher Caldwell American Spectator The Bell Curve is a comprehensive treatment of its subject, never mean-spirited or gloating. It gives a fair hearing to those who dissent scientifically from its propositions -- in fact, it bends over backward to be fair....Among the dozens of hostile articles that have thus far appeared, none has successfully refuted any of its science.

Malcolm W. Browne The New York Times Book Review Mr. Murray and Mr. Herrnstein write that "for the last 30 years, the concept of intelligence has been a pariah in the world of ideas," and that the time has come to rehabilitate rational discourse on the subject. It is hard to imagine a democratic society doing otherwise.

Prof. Eugene D. Genovese National Review Richard Herrnstein and Charles Murray might not feel at home with Daniel Patrick Moynihan and Lani Guinier, but they should....They have all [made] brave attempts to force a national debate on urgent matters that will not go away. And they have met the same fate. Once again, academia and the mass media are straining every muscle to suppress debate.

Prof. Earl Hunt American Scientist The first reactions to The Bell Curve were expressions of public outrage. In the second round of reaction, some commentators suggested that Herrnstein and Murray were merely bringing up facts that were well known in the scientific community, but perhaps best not discussed in public. A Papua New Guinea language has a term for this, Mokita. It means "truth that we all know, but agree not to talk about." ...There are fascinating questions here for those interested in the interactions between sociology, economics, anthropology and cognitive science. We do not have the answers yet. We may need them soon, for policy makers who rely on Mokita are flying blind.
