Building a Better Student Body

College admissions offices typically rely on two major cognitive measures to supplement prospective students’ applications: high-school grade point average (GPA) and SAT or ACT scores. But for too long, these measures have been given disproportionate weight as indicators of whether a student will thrive in a college environment and be an asset to the university, argued APS James McKeen Cattell Fellow Neal Schmitt in his award address at the 26th APS Annual Convention.

Schmitt, who is University Distinguished Professor Emeritus at Michigan State University, has spent the past decade developing alternative methods of measuring students’ abilities — and working to convince testing boards and admissions offices of their validity. These “noncognitive” measures, says Schmitt, may be more accurate predictors of which students will flourish or founder in an institution of higher learning.

In meetings with the College Board, “we told them that they couldn’t possibly improve on the SAT in combination with high-school GPA if the only outcome they were considering was first-year GPA. We said, ‘Broaden the scope of student outcomes that you’re considering and the set of capabilities considered in college admissions, and you may be able to do better,’” Schmitt said.

To that end, Schmitt and his colleagues have been working to develop noncognitive methods of measuring students’ abilities. These methods have three criteria: They must be “valid, practical in terms of time and effort required to assess, and less susceptible to faking.”

The researchers began by reviewing universities’ websites to see what college administrators hope to develop in their graduates.

“Obviously, you want them to graduate [and] to do well academically,” Schmitt said, “but most Web pages also mentioned things like developing leadership, social responsibility, ethics, perseverance, and adaptability. We took them to heart.”

The Standout Traits

By combining this anecdotal evidence, interviews with Michigan State University staff responsible for promoting student life on campus, and the available scientific literature, Schmitt’s team created a list of 12 characteristics that seemed to be important to admissions departments and resident life offices. The list included intellectual (knowledge and mastery of general principles, intellectual interest and curiosity, and artistic/cultural appreciation), interpersonal (appreciation for diversity, leadership, and interpersonal skills), and intrapersonal (social responsibility and citizenship, physical/psychological health, career orientation, adaptability/life skills, perseverance, and ethics/integrity) components.

The researchers then developed two noncognitive measures: situational judgment questions (e.g., “What would you do if faced with [a certain hypothetical situation]?”) and biodata (e.g., multiple-choice reports of past experiences and background or interests and preferences). These measures were designed to reflect the 12 dimensions relevant to admissions offices and resident life departments.

Schmitt and his colleagues found the situational judgment questions particularly useful for determining how prospective students would react to a variety of scenarios they might face in college. Admissions officers might, for example, present the following hypothetical situation to applicants to see how they would deal with a situation requiring leadership: “You are assigned to a group to work on a particular project. When you sit down together as a group, no one says anything. What would you do?” Answers might range from “Look at them until someone eventually says something” (the worst option, according to the answer key) to “Get to know everyone first and make sure the project’s goals are clear to everyone” (the best choice). Although these sorts of questions do not measure cognitive intelligence, Schmitt explained, they might give administrators a better idea of a student’s behavior when facing a variety of realistic problems, adding a valuable dimension to assessment measures that often fail to take things like leadership potential into account.
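To make the scoring logic concrete, here is a minimal sketch of how a keyed situational-judgment item might be turned into a numeric composite. The option labels, keyed values, and function names are illustrative assumptions, not Schmitt’s actual instrument: the idea is simply that experts assign each response option a score, and a student’s “judgment” composite averages the keyed scores of the options they chose.

```python
# Hypothetical scoring key for one situational-judgment item (values
# are illustrative; in practice they would be set by subject-matter experts).
ITEM_KEY = {
    "a": 1,  # "Look at them until someone eventually says something" (worst)
    "b": 3,  # an intermediate option
    "c": 5,  # "Get to know everyone first and clarify the goals" (best)
}

def judgment_composite(responses, keys):
    """Average the expert-keyed values of a student's chosen options."""
    return sum(key[choice] for key, choice in zip(keys, responses)) / len(responses)

# A student who picks the best option on one item and the worst on another
# lands in the middle of the keyed range:
print(judgment_composite(["c", "a"], [ITEM_KEY, ITEM_KEY]))  # 3.0
```

Averaging keyed options (rather than marking answers right or wrong) reflects that these items have better and worse responses, not a single correct one.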

“We wanted [these questions] to broadly represent student life as a student would experience it in our institution and hopefully in others,” he added. “What we ended up doing was to use them as a composite reflecting what we call judgment — common sense, perhaps.”

The researchers also asked first-semester college students open-ended questions about their activities, hobbies, and academic lives to learn more about how different kinds of students had behaved in high school. The questions that resulted from an analysis of their answers were also grouped to reflect the 12 behavioral dimensions and served as the second set of noncognitive measures. Consistent with the literature, Schmitt and his colleagues called these measures biodata.

To validate these noncognitive measures, the researchers collected a variety of student outcomes, including self-rated class attendance, grades, organizational citizenship behavior (e.g., mentoring students, attending extracurricular activities, participating in community service), deviance (e.g., cheating on exams, destroying school property), and continued enrollment in school.

“When administrators raised questions about the use of organizational citizenship as a relevant student outcome, we mentioned that it also could reflect the number of alumni who would give gifts to the university later, and they automatically quit objecting,” Schmitt said to laughter.

Getting Buy-In

Joking aside, Schmitt admitted one of the biggest obstacles to his research was convincing colleges and universities to try combining cognitive and noncognitive testing measures when evaluating potential students. But his research shows adopting such a strategy could help academic institutions build stronger student bodies: Although using only cognitive tests might originally have been a good method for evaluating a student’s potential, it’s now nearly universally recognized that being academically talented and intelligent is not the only important measure of whether a person will do well in college — or in life. Schmitt demonstrated this concept by correlating the scores on cognitive and noncognitive tests with student measures of performance for different demographic subgroups.

When Schmitt and his colleagues evaluated differences in just the cognitive measures (SAT/ACT and high-school GPA) among three subgroups — Caucasians, Hispanic Americans, and African Americans — they found large and significant differences among the three populations. In the SAT/ACT comparison, African Americans scored nearly 1.5 standard deviations lower than their Caucasian counterparts, and Hispanic Americans scored more than one standard deviation lower than Caucasians.
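The “1.5 standard deviations” figure is a standardized mean difference (Cohen’s d): the gap between two group means divided by the pooled standard deviation. The sketch below, with hypothetical SAT-scale numbers chosen only to illustrate the arithmetic (not Schmitt’s data), shows how a 150-point gap against a 100-point pooled SD works out to d = 1.5:

```python
def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_var = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / pooled_var**0.5

# Hypothetical groups: means 1050 vs. 900, both with SD 100, n = 500 each.
print(cohens_d(1050, 100, 500, 900, 100, 500))  # 1.5
```

Expressing gaps in SD units rather than raw points is what allows comparisons across measures scored on different scales, such as SAT scores and the noncognitive composites.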

By contrast, Schmitt said, most of the mean differences on the noncognitive measures were close to zero across all three subgroups. “In some cases, the minority group actually scores better … one reason to develop and use measures like this, with respect to adverse impact [on minority groups], is if they are included in a battery along with high school grades and ACT/SAT scores that contributes to admissions decisions, we will dampen the adverse impact on these groups,” he explained. Subgroup differences still exist, but they are smaller.

Schmitt emphasized that his team’s nontraditional measures were developed to give administrators a fuller picture of a student’s overall personality and abilities: “You can use these noncognitive measures, but they’re not going to have a great deal of impact on the quality of the student body overall as reflected [only] by GPA.”

Such measures, Schmitt added, could be helpful even after students have been admitted to a university — for example, advisors could use them to tailor their counseling based on individual needs. To that end, he and his team did a profile analysis to classify subgroups of students based on different configurations of scores on GPA, SAT/ACT, and noncognitive variables. They were able to identify five types of students — low academic but career-oriented, high ability but culturally limited, marginal, artistically able, and academically able and well-rounded — each with specific needs and abilities. Marginal students, for example, might need early academic intervention to prevent them from dropping out; academically able individuals would likely be good peer mentors.

Despite Schmitt’s substantial findings on the practicality of noncognitive measures, both before and after students are admitted to universities, such measures have not been widely implemented, he said. Among the reasons he listed were the costs of developing and administering new tests, a lack of consensus on their effectiveness, pressure on college admissions offices to continue business as usual, and reluctance to share responsibility during the early stages.

“Introducing something new like this is not only a lot of work, but it also can impact [the process] in unknown ways,” Schmitt said. Most importantly, he hopes to impart to his colleagues that “if we’re going to implement some of the products of our research, we need to engage in the political process that is instrumental in getting new things adopted.”

