Editorial

Implicit Attitude Measures

Published Online: https://doi.org/10.1027/0044-3409/a000001

Arguably, one of the most thriving research areas in current psychology is the assessment of attitudes and related constructs with implicit measures, which we define as those indirect measures that rely on response latencies or other indices of spontaneous trait association, the activation of action semantics, or even real behavior. This research area is united by a shared excitement about the discoveries enabled by these measures, be they related to social attitudes and behavior, clinical disorders, consumer decisions, or self-representations, among others. As this enumeration suggests, in spite of the common excitement about the new research questions implicit measures allow us to investigate, there is much diversity in this research. First of all, these approaches bridge subdisciplines of psychology traditionally characterized by little cross-talk. Furthermore, the variety of implicit measures used is already broad and still growing, given variants and implementations of these implicit measures in different samples and research approaches. Given this diversity, we deemed it appropriate to summarize research that focuses either on the comparison of different implicit measures or on the mechanisms underlying one of the measures. Such knowledge is necessary and helpful to determine which measure to employ in a given research context, and also to be aware of the limitations of certain measures and the advantages of others. Thus, the articles collected in this special issue compare two or more different implicit measures, or they focus on the measurement properties of one.

One of the most widely used implicit measures, the Implicit Association Test (IAT; Greenwald, McGhee, & Schwartz, 1998), was introduced in a way that would have allowed researchers to implement it as if it were a standardized test. In spite of this, researchers not only used stimuli, numbers of trials, evaluation procedures, and other specifics different from those suggested; in the end, they even proposed their own variants of IATs, or implicit measures that keep certain aspects of IATs while eliminating or adding others (e.g., De Houwer, 2003; Nosek & Banaji, 2001; Olson & Fazio, 2004; Sriram & Greenwald, 2009; Steffens, Kirschbaum, & Glados, 2008). As a first consequence, the answer to the question of how a “good” IAT should be constructed is no longer as clear-cut as it appeared in 1998. As a second consequence of these methodological debates, IAT research has become so diverse that comparisons with other implicit measures have been neglected (the implicit-explicit relation, in contrast, has received attention; e.g., Hofmann, Gawronski, Gschwendner, Le, & Schmitt, 2005).

Matters get much more complicated if we add other implicit measures to this cocktail, the most prominent among them based on priming effects (Fazio, Sanbonmatsu, Powell, & Kardes, 1986; Payne, Cheng, Govorun, & Stewart, 2005), with many recent ones based on approach-avoidance reactions (see Reinecke, Becker, & Rinck, 2010). Although some research has systematically compared different implicit measures and found disturbing discrepancies among them (e.g., Bosson, Swann, & Pennebaker, 2000) and among the mechanisms underlying them (e.g., Gawronski & Bodenhausen, 2005; Wittenbrink, Judd, & Park, 2001), such research is the exception rather than the rule, compared to the enormous number of studies applying implicit measures. On top of that, some types of implicit measures have not yet reached a level of consolidation that would allow such comparisons, but are still concerned with analyzing the robustness and determinants of the respective effects. For instance, this is the case for automatic behavior activation following categorical primes, where it still needs to be figured out what kind of behavior is activated, and when (Jonas & Sassenberg, 2006). Consequently, we believe that few informed decisions can be made about which implicit measure is most appropriate for a given research question. Of course, this is not to say that we know nothing. For instance, we know that a measure such as subliminal priming is preferable if we want to assess social cognition in the absence of conscious perception of stimuli, and such measures may yield reliable interindividual differences (e.g., Bianchi, Mummendey, Steffens, & Yzerbyt, in press). In contrast, we know that even if participants do not control their responses during an IAT, they will afterwards have a pretty clear idea of which constructs were assessed (e.g., Steffens, 2004); thus, the latter measures are certainly less implicit than the former (cf. Dasgupta, 2010).
Whereas we know that subliminal measures may yield replicable effects (Draine & Greenwald, 1998), it is possible that they yield less reliable findings than other measures. What we do not know, then, is under what conditions and for which research questions an IAT outperforms a priming paradigm or an approach-avoidance measure, and vice versa. For example, when there was no consensus on the best implicit measure to use, a group from our laboratory decided to run a pretest. Much to our surprise, a subliminal affective priming measure showed the best validity, so it was used in the main experiment, with success (Heigener, Martiny, Steffens, & Kessler, 2009).

Similarly, we need to know to which research questions a given measure can be applied in principle, and which are precluded by features of the measure. Which findings can be interpreted with regard to implicitly assessed cognition, and which instead reflect features of the measure? Thus, the field badly needs experiments that focus on the features of the measures themselves, as well as experiments directly comparing the strengths and weaknesses of different implicit measures.

Ironically, there appears to be some publication bias against precisely the studies that we consider vital. Whereas in cognitive psychology there is consensus that understanding how response-compatibility effects or negative priming effects come about constitutes theoretical progress, in social psychology reviewers and editors often seem to detect “no contribution to theory” if a mechanism underlying an implicit measure is investigated (cf. also Degner, Wentura, & Rothermund, 2006). At the same time, cognitive journals often regard research on implicit attitude measures as appropriate for the journals of the outgroup (i.e., social psychologists). These are the main reasons why we were happy to edit a special issue on implicit attitude measures. The number of abstracts and submissions we received corroborates the timeliness of the idea.

Reflecting the diversity in the field, the articles in the present issue target attitudes toward social groups (Blair, Judd, Havranek, & Steiner, 2010; Popa-Roch & Delmas, 2010; von Stülpnagel & Steffens, 2010), self-attitudes (Popa-Roch & Delmas, 2010; Rudolph, Schröder-Abè, Riketta, & Schütz, 2010), consumer attitudes (Summerville, Hsieh, & Harrington, 2010), attitudes toward spiders (Reinecke et al., 2010), and those toward risk taking (Dislich, Zinkernagel, Ortner, & Schmitt, 2010). The hope underlying these investigations is that general conclusions about the respective measures can be drawn from their specific instantiations.

So what can we learn from the research collected in this issue? Reinecke and colleagues demonstrated encouraging reliabilities and validities for three very different implicit tasks. Particularly impressive were the obtained correlations with a behavioral measure (approaching a spider), all the more so as the sample consisted of university students with no clinical comparison groups that would increase variance. Implicit measures thus provide valuable additions to clinicians’ toolboxes. Similarly, using a double-dissociation approach, Rudolph et al. showed that implicit measures of self-esteem predict spontaneous behavior. This was true both for an IAT and a new measure capitalizing on self-judgments under cognitive load. Using a similar approach, the findings reported by Dislich et al. are compatible with the view that implicit (here: an IAT) and explicit measures predict different aspects of risk taking behavior.

These studies converge on finding indicators of the quality of different implicit measures, instead of demonstrating the superiority of one measure over another. It thus appears that for many research questions, it does not matter much which implicit measure one chooses. Going beyond this, Summerville and colleagues showed that two different implicit measures, evaluative movement assessment and evaluative priming, were related to each other, but not to purchase intentions, which were, however, predicted by a lexical decision task. In other words, these findings point to the differential validity of different implicit measures, a fruitful avenue for future research.

As is quite common, the current studies focusing on only a single implicit measure investigated IATs. The findings mirror both sides of the debate on IATs’ validity. Blair et al. add considerable weight to the evidence that IAT effects reflect what they are supposed to, demonstrating their discriminant validity. Whether the main finding of Popa-Roch and Delmas speaks against the validity of IATs is subject to debate, namely that apparent negative attitudes toward an outgroup can be based on positive attitudes toward the self (i.e., an ingroup member; cf. Dasgupta, 2010). In contrast, a short report by von Stülpnagel and Steffens points to a potential threat to IATs’ validity by showing that IAT effects are sometimes correlated with intelligence measures in the direction opposite to that suggested by explicit prejudice measures. Taken together, these findings are compatible with the view that IATs contain a large portion of variance related to the purpose of measurement (e.g., attitudes), but that some of their variance is related to other constructs. We hasten to add that the same may be true for other implicit measures, and this will go undiscovered until they are investigated with the same scrutiny as IATs. Similarly, the present and other findings show that there is no reason to automatically prefer IATs to other implicit measures independent of the research question. The current issue closes with a commentary by Dasgupta who, among other things, elaborates on “The next generation of unresolved questions.” In a nutshell, with the current issue we hope both to fuel the diversity in our research field and, at the same time, to highlight existing knowledge on the interrelations of implicit measures.

References

  • Bianchi, M., Mummendey, A., Steffens, M. C., & Yzerbyt, V. (in press). What do you mean by European? Evidence of spontaneous ingroup projection. Personality and Social Psychology Bulletin.

  • Blair, I. V., Judd, C. M., Havranek, E. P., & Steiner, J. F. (2010). Using community data to test the discriminant validity of ethnic/racial group IATs. Zeitschrift für Psychologie / Journal of Psychology, 218, 36–43.

  • Bosson, J. K., Swann, W. B., Jr., & Pennebaker, J. W. (2000). Stalking the perfect measure of implicit self-esteem: The blind men and the elephant revisited? Journal of Personality and Social Psychology, 79, 631–643.

  • Dasgupta, N. (2010). Implicit measures of social cognition: Common themes and unresolved questions. Zeitschrift für Psychologie / Journal of Psychology, 218, 54–57.

  • Degner, J., Wentura, D., & Rothermund, K. (2006). Indirect assessment of attitudes with response-time-based measures. Zeitschrift für Sozialpsychologie, 37, 131–139.

  • De Houwer, J. (2003). The extrinsic affective Simon task. Experimental Psychology, 50, 77–85.

  • Dislich, F. X. R., Zinkernagel, A., Ortner, T. M., & Schmitt, M. (2010). Convergence of direct, indirect, and objective risk taking measures in the domain of gambling: The moderating role of impulsiveness and self-control. Zeitschrift für Psychologie / Journal of Psychology, 218, 20–27.

  • Draine, S. C., & Greenwald, A. G. (1998). Replicable unconscious semantic priming. Journal of Experimental Psychology: General, 127, 286–303.

  • Fazio, R. H., Sanbonmatsu, D. M., Powell, M. C., & Kardes, F. R. (1986). On the automatic activation of attitudes. Journal of Personality and Social Psychology, 50, 229–238.

  • Gawronski, B., & Bodenhausen, G. V. (2005). Accessibility effects on implicit social cognition: The role of knowledge activation and retrieval experiences. Journal of Personality and Social Psychology, 89, 672.

  • Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. K. (1998). Measuring individual differences in implicit cognition: The Implicit Association Test. Journal of Personality and Social Psychology, 74, 1464–1480.

  • Heigener, M., Martiny, S., Steffens, M. C., & Kessler, T. (2009, March). Implicit and explicit group-based self-esteem – Dynamics in the prediction of in-group bias and identity management strategies. Poster presented at the 51. Tagung experimentell arbeitender Psychologen, Jena, Germany, March 29–April 1, 2009.

  • Hofmann, W., Gawronski, B., Gschwendner, T., Le, H., & Schmitt, M. (2005). A meta-analysis on the correlation between the Implicit Association Test and explicit self-report measures. Personality and Social Psychology Bulletin, 31, 1369–1385.

  • Jonas, K. J., & Sassenberg, K. (2006). Knowing how to react: Automatic response priming from social categories. Journal of Personality and Social Psychology, 90, 709–721.

  • Nosek, B. A., & Banaji, M. R. (2001). The Go/No-go Association Task. Social Cognition, 19, 625–666.

  • Olson, M. A., & Fazio, R. H. (2004). Reducing the influence of extrapersonal associations on the Implicit Association Test: Personalizing the IAT. Journal of Personality and Social Psychology, 86, 653–667.

  • Payne, B. K., Cheng, C. M., Govorun, O., & Stewart, B. D. (2005). An inkblot for attitudes: Affect misattribution as implicit measurement. Journal of Personality and Social Psychology, 89, 277–293.

  • Popa-Roch, M., & Delmas, F. (2010). Prejudice IAT effects: The role of self-related heuristics. Zeitschrift für Psychologie / Journal of Psychology, 218, 44–50.

  • Reinecke, A., Becker, E. S., & Rinck, M. (2010). Test-retest reliability and validity of three indirect tasks assessing implicit threat associations and behavioral response tendencies. Zeitschrift für Psychologie / Journal of Psychology, 218, 4–11.

  • Rudolph, A., Schröder-Abè, M., Riketta, M., & Schütz, A. (2010). Easier when done than said! Implicit self-esteem predicts observed or spontaneous behavior, but not self-reported or controlled behavior. Zeitschrift für Psychologie / Journal of Psychology, 218, 12–19.

  • Sriram, N., & Greenwald, A. G. (2009). The brief Implicit Association Test. Experimental Psychology, 56, 283–294.

  • Steffens, M. C. (2004). Is the Implicit Association Test immune to faking? Experimental Psychology, 51, 165–179.

  • Steffens, M. C., Kirschbaum, M., & Glados, P. (2008). Avoiding stimulus confounds in Implicit Association Tests by using the concepts as stimuli. British Journal of Social Psychology, 47, 217–243.

  • Summerville, A., Hsieh, B., & Harrington, N. (2010). A multi-measure investigation of the divergence of implicit and explicit consumer evaluations. Zeitschrift für Psychologie / Journal of Psychology, 218, 28–35.

  • von Stülpnagel, R., & Steffens, M. C. (2010). Prejudiced or just smart? Intelligence as a confounding factor in the IAT effect. Zeitschrift für Psychologie / Journal of Psychology, 218, 51–53.

  • Wittenbrink, B., Judd, C. M., & Park, B. (2001). Evaluative versus conceptual judgments in automatic stereotyping and prejudice. Journal of Experimental Social Psychology, 37, 244–252.

The writing of this article and the compiling of the special issue were supported by a grant from the German Research Foundation (DFG, Ste 938/9-1) to both authors.

Melanie C. Steffens, Institut für Psychologie, Friedrich-Schiller-Universität Jena, Am Steiger 3, Haus 1, D-07743 Jena, Germany, +49 3641 945111, +49 3641 945112