They’re making their lists, checking them twice, trying to decide who’s in and who’s not. It’s admissions season again, and tensions run high as university leaders struggle to make difficult decisions about the future of their schools. Chief among those tensions, in the past few years, has been the question of whether standardized tests should be central to the process.
In 2021, the University of California system stopped using standardized tests altogether in undergraduate admissions. California State University followed suit last spring, and in November, the American Bar Association voted to drop its requirement that the nation’s law schools use the LSAT in admissions, beginning in 2025. Many other schools have lately reached the same conclusion. Science magazine reports that among a sample of 50 U.S. universities, only 3 percent of Ph.D. science programs currently require applicants to submit GRE scores, compared with 84 percent four years ago. And colleges that dropped their testing requirements or made them optional in response to the pandemic are now feeling torn about whether to bring that testing back.
Proponents of these changes have long argued that standardized tests are biased against low-income students and students of color, and should not be used. The system, they say, serves to perpetuate a status quo in which children whose parents are in the top 1 percent of the income distribution are 77 times more likely to attend an Ivy League university than children whose parents are in the bottom quintile. But those who still endorse the tests make the mirror-image claim: Schools have been able to identify talented low-income students and students of color and give them transformative educational experiences, they argue, precisely because those students are tested.
These two perspectives, that standardized tests drive inequality and that they are a valuable tool for ameliorating it, are often pitted against each other in contemporary discourse. But in my view, they are not actually in opposition. Both things can be true at the same time: Tests can be biased against marginalized students, and they can be used to help those students succeed. We often forget an important lesson about standardized tests: They, or at least their outputs, take the form of data, and data can be interpreted, and acted upon, in multiple ways. That might sound like an obvious statement, but it is crucial to resolving this debate.
I teach a seminar on quantitative research methods that focuses on the details of how data are interpreted and applied. One of the readings I assign, Andrea Jones-Rooy’s article “I’m a Data Scientist Who Is Skeptical About Data,” contains a passage that is relevant to thinking about standardized tests and their use in admissions:
Data can’t say anything about an issue any more than a hammer can build a house or almond meal can make a macaron. Although data is essential for discovery, it takes humans to shape and transform it into insight.
When reviewing applications, admissions officials have to turn test scores into insights about each applicant’s potential for success at the university. But their ability to generate those insights depends on what they know about the broader data-generating process that led students to those scores, and on how they interpret what they know about that process. How they understand the biases in that system will shape how they use test scores, and whether they perpetuate or reduce inequality. Two features of that process deserve particular attention.
First, who takes these tests is not random. Obtaining a score can be so costly, in both time and money, that it is out of reach for many students. Public policy can at least help to reduce this source of bias. For example, research has found that when states implement universal testing policies in high schools, making the tests part of the regular curriculum rather than an add-on that students and parents must arrange for themselves, more disadvantaged students enter college and the income gap in college attendance narrows. But even if that problem is solved, a second, more difficult one remains.
The second issue concerns what the tests actually measure. Researchers have argued about this question for decades and continue to debate it in academic journals. To understand the tension, recall what I said earlier: Universities are trying to gauge applicants’ potential for success. Students’ ability to realize that potential depends both on what they know before they arrive on campus and on being in a supportive academic environment once there. The tests are meant to capture the former, but the unequal nature of schooling in America means they can end up capturing something else.
In the United States, we have a primary and secondary education system that is unequal because of historic and contemporary laws and policies. American schools continue to be highly segregated by race, ethnicity, and social class, and that segregation affects what students have the opportunity to learn. Well-resourced schools can afford to provide more enriching educational experiences to their students than underfunded schools can. When students take standardized tests, they answer questions based on what they have learned, but what they have learned depends on the kind of schools they were lucky (or unlucky) enough to attend. This presents a problem for test makers and for the universities that rely on their data. Both are attempting to assess student aptitude, but because students learn in unequal environments, the tests also capture those underlying disparities; that is one reason test scores tend to reflect larger patterns of inequality. When admissions officers see a student with low scores, they cannot tell whether that student lacks potential or has simply been deprived of educational opportunity.
How should colleges and universities use these data, given what they know about the factors that shape them? The answer depends on how they view their mission and broader purpose in society. From the beginning, standardized tests were used to screen students out. A congressional report on the history of testing in American schools describes how, in the late 1800s, elite colleges and universities had become disgruntled with the quality of high-school graduates and sought a better means of screening them. Harvard’s president first proposed a system of common entrance exams in 1890; the College Entrance Examination Board was formed 10 years later. That orientation toward exclusion led schools down the path of using tests to find and admit only those students who seemed likely to embody and preserve an institution’s prestigious legacy. At times, it also led them to adopt some rather unsavory policies. For example, a few years ago, a spokesperson for the University of Texas at Austin acknowledged that the school’s adoption of standardized testing in the 1950s grew out of its concerns over the effects of Brown v. Board of Education. UT looked at the distribution of test scores, found cutoff points that would eliminate the majority of Black applicants, and then used those cutoffs to guide admissions.
These days, universities often espouse goals of inclusion. They talk about the value of educating not just the children of the elite but a diverse cross-section of the population. Instead of seeking out students who have already enjoyed tremendous advantages and excluding nearly everyone else, these schools could try to recruit and educate the kinds of students who have not had remarkable educational opportunities in the past.
Careful analysis of test data could support this goal. Universities could use scores to identify which incoming students need the most support and invest resources accordingly, hiring more instructors or support staff to work with low-scoring students. And if schools notice alarming patterns in the data, consistent areas in which students have been insufficiently prepared, they could respond not with disgruntlement but with leadership, advocating for the state to provide K-12 schools with better resources.
Such investments would be in the nation’s interest, considering that one of the functions of our education system is to prepare young people for current and future challenges. These include improving equity and innovation in science and engineering, addressing climate change and climate justice, and creating technological systems that benefit a diverse public. All of these areas benefit from diverse groups of people working together–but diverse groups cannot come together if some members never learn the skills necessary for participation.
But universities, at least the elite ones, have not traditionally pursued inclusion, through standardized testing or otherwise. Research on university behavior suggests that, at the moment, they operate largely as if they were competing for prestige. If that is the goal, using test scores to exclude students makes sense: Enrolling students with high scores optimizes the metrics that determine a school’s market position, otherwise known as its ranking.
Which is to say, the tests themselves are not the problem; the same biases pervade most other components of an admissions portfolio. When it comes to favoring the rich, admissions essays are even worse than standardized tests, and the same goes for participation in extracurricular activities and legacy admissions. All of this information nonetheless provides universities with useful data about the students arriving on campus.
None of these data are indisputable. But the people who interpret the data and act on it, people whose decisions have historically benefited wealthy students, can now make better ones. Whether universities continue on their current path or become more inclusive does not depend on how their students fill out bubble sheets. Schools must decide for themselves what kind of business they are in, and whom they serve.