The IAT: How and When It Works

“Left…right…left…right” could be heard echoing from the Hilton Washington’s Military Room during the APS 19th Annual Convention. And while the chorus may have sounded like boot camp exercises to curious passers-by, it was merely APS Fellow Anthony Greenwald administering the Implicit Association Test (IAT) en masse during his Psi Chi Distinguished Speakers address. Throughout his talk, the University of Washington professor educated his audience about the history and validity of the IAT and, of course, provided the opportunity to experience the IAT firsthand.

“So what is the IAT?” Greenwald asks his audience. “It’s a measure of associative knowledge. And I don’t describe it as a measure of prejudice or bias, although it can be used to measure implicit prejudice or bias.” Greenwald takes care to emphasize that “implicit prejudice” is not the overt discriminatory phenomenon that we usually associate with the word “prejudice.”

The IAT has risen to prominence since Greenwald first published his findings in 1998; the Race Attitude IAT has been taken over a million times. On the surface, the IAT asks participants to categorize words or images on a computer screen with a touch of the keyboard. These categorizations begin to require some cognitive gymnastics, however, as categories become combined. The time it takes participants to sort stimuli from the combined categories provides some insight into their mental associations.

“How does it work? Well, fairly simply. If two concepts are associated, it is easy to give the same response to exemplars of both,” says Greenwald. That’s a deceptively uncomplicated explanation behind one of contemporary psychology’s most influential research paradigms. “There’s little more theory underlying the IAT than the idea of association between concepts.”
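The latency logic can be made concrete with a simplified scoring sketch. The commonly used IAT D measure divides the difference in mean response latencies between the two combined-category blocks by the standard deviation of all latencies pooled across both blocks; the sketch below is a bare-bones illustration with hypothetical data, not the full published scoring algorithm (which also trims extreme latencies and penalizes errors), and the function name is ours:

```python
from statistics import mean, stdev

def iat_d_score(compatible_ms, incompatible_ms):
    """Simplified IAT D score: the mean latency difference between the
    'incompatible' and 'compatible' combined blocks, divided by the
    standard deviation of all latencies pooled across both blocks.
    (Real IAT scoring also trims outliers and penalizes errors.)"""
    pooled_sd = stdev(list(compatible_ms) + list(incompatible_ms))
    return (mean(incompatible_ms) - mean(compatible_ms)) / pooled_sd

# Hypothetical latencies in milliseconds: responses slow down when the
# category pairing conflicts with a participant's associations,
# yielding a positive D score.
compatible = [620, 580, 640, 600, 590]
incompatible = [760, 820, 780, 800, 740]
print(round(iat_d_score(compatible, incompatible), 2))  # 1.82
```

A larger D indicates a stronger association between the concepts paired in the “compatible” block, which is the sense in which sorting speed reveals associative knowledge.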

But the provocative implications of the IAT have sparked controversy in both research circles and the mainstream media.  So in his address “Assessing the Validity of Implicit Association Test Measures,” Greenwald came to the IAT’s defense and discussed its psychometric worthiness.

When discussing internal validity, for example, Greenwald says, “empirical research demonstrated that there are several things that might get in the way that in fact did not.” Possible confounds such as participants’ familiarity (or lack thereof) with the items, which side of the screen categories are presented on, and whether the test-taker is right- or left-handed have all been proposed, but none has been borne out in research.

Greenwald went on to illustrate the IAT’s convergent validity with self-report using an example from the 2004 presidential election. Implicit attitudes toward each candidate correlated .73 with self-report measures. “That’s quite high,” Greenwald says. “And that’s evidence of convergence.”

Conversely, research on IAT measures of age attitudes has demonstrated evidence of discriminant validity with self-report. Greenwald offers this explanation for the divergent results: “I think you get convergent validity when both implicit and explicit attitudes are shaped by the same influences, which means they are formed relatively late in life, such as political preferences.” For attitudes that are formed earlier in life (in particular racial/ethnic, young/old, and male/female stereotypes), IAT results are likely to diverge from explicit self-report measures.

At a time when social desirability confounds are of pervasive concern in psychological research, one of the IAT’s greatest merits appears to be its resistance to faking. Studies have demonstrated that participants rarely devise a successful faking strategy; deliberately slowing down appears to be the easiest way to doctor results. “It does work,” Greenwald says of the strategy, “but it also tends to be detectable statistically.”

But as with any test, the IAT has its psychometric vulnerabilities.  Greenwald describes the elasticity of the IAT, where experiences with exemplars of test categories shortly before the test can alter results.  So, according to Greenwald, having a friendly interaction with a black experimenter just before the test will temporarily reduce evidence of bias.

Greenwald is frank in his assertion that the test-retest reliability of the IAT leaves something to be desired.  “The test-retest reliability is okay for research, but not very good for an individual difference measure that you want to be diagnostic of a single person.”  In order to boost its reliability, Greenwald and his colleagues resort to the “standard trick” of administering the test several times.  At this point the reliability is “getting good, but still not good enough.”

The Brief IAT, and its less time-consuming administration, offers some promise in this arena.  “I think we have to look at the test-retest reliability of the Brief IAT in more studies than we have so far.  But the average of the first set of a relatively small set of studies that was done was around .50.  This is large enough so that four repetitions of the measure should have a satisfactory test-retest reliability of about .80.”
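The arithmetic behind Greenwald’s projection follows the Spearman-Brown prophecy formula, which predicts the reliability of an averaged (lengthened) measure from the reliability of a single administration. A minimal sketch, taking the .50 single-administration figure and the four-repetition scenario from the talk (the function name is ours):

```python
def spearman_brown(r: float, k: float) -> float:
    """Predicted reliability of a measure lengthened by factor k,
    given single-administration reliability r (Spearman-Brown)."""
    return k * r / (1 + (k - 1) * r)

# Brief IAT test-retest reliability of about .50; averaging four
# administrations is predicted to reach about .80:
print(round(spearman_brown(0.50, 4), 2))  # 0.8
```

This is why repeated administration, the “standard trick” mentioned above, raises reliability: averaging more parallel measurements shrinks the error component relative to the true-score component.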

“The last topic is the most interesting one,” Greenwald asserts. “Does the IAT predict anything interesting?” Pointing to a meta-analysis being conducted by recent Yale University PhDs Andy Poehlman and Eric Uhlmann, and long-time collaborator Mahzarin Banaji of Harvard, Greenwald says that the IAT performs better than self-report at predicting behavior in the critical domain of intergroup discriminatory behavior. “The IAT has incremental predictive validity relative to self-report. [The results are] statistically significant, and very clearly so in the meta-analysis.”

It’s a good start for a test that has garnered so much attention and will likely see increasing use in the near future. But there are few certainties when using psychological tools such as the IAT, and more research is needed. The jarring possibilities of how unconscious attitudes, as measured by the IAT, could affect behavior will ensure that the test receives the scrutiny it deserves.
