Student Notebook

Careless Responding on Internet-Based Surveys

Advances in technology have spurred extensive use of Internet-based surveys, assessments, and measures. As customers, we encounter Internet-based surveys asking for our feedback; as students of psychology, we administer surveys to people over the Internet. Internet-based survey data inform applied work in organizations and knowledge accumulation in research, and for good reason: Convenience for both survey researchers and respondents, ease of standardization, speed, and scalability make this mode of data collection attractive (Barak & English, 2002). The utility and popularity of Internet-based surveys will likely increase commensurate with widening access to the Internet among the general population.

Although Internet-based surveys clearly have advantages, there are challenges to this mode of data collection. These challenges stem from the physical disconnection of the researcher and respondent, which is inherent to Internet-based survey methodology. This limited human-to-human interaction is one factor related to what has been termed “careless responding” (CR; e.g., Johnson, 2005; Ward & Pond, 2013). CR occurs when survey respondents, regardless of their intentions, respond to the survey in a manner that does not accurately reflect their true scores.

There are three reasons you should care about CR. First, though the precise picture of CR depends on the indicators used to estimate it, CR is evident in many datasets derived from Internet-based surveys (Hardré, Crowson, & Xie, 2012). Second, CR is complicated, and researchers are still determining how to address it; best practices for detecting and dealing with CR remain under development. Third, and perhaps most compelling, CR can distort results and weaken conclusions via psychometric problems. CR can lead to problems in correlation and reliability estimates, scale development, and factor analysis, all of which underlie theoretical development and exploratory studies (Meade & Craig, 2012; Woods, 2006). For these reasons, prudent researchers in all domains of the social sciences need to address CR in their data.

Ways to Address CR in Your Internet-Based Surveys

There are a few main approaches to addressing CR. The first approach is to exclude data from respondents exhibiting CR. To do this, researchers can compute values of CR indicators for each respondent and exclude data from respondents whose CR indicator values fall beyond a cutoff score (see “Indicators of CR” below). The assumption is that removing respondent data is preferable to keeping low-quality data. Although this first approach is better researched than the alternatives, it is a limited solution to CR. Removing respondents reduces sample size and threatens random sampling, and in turn the generalizability of results. Therefore, it is imperative to find ways of preventing CR in addition to correctly identifying CR after it happens.
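To make the exclusion step concrete, the following sketch in Python (using pandas) flags and removes respondents whose value on a single CR indicator exceeds a cutoff. The data, the indicator column, and the cutoff value are hypothetical illustrations, not recommendations drawn from the studies cited here.

```python
import pandas as pd

# Hypothetical survey data: each row is a respondent; "longstring" is a
# previously computed CR indicator (see "Indicators of CR" below).
data = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "longstring":    [3, 5, 12, 4],  # longest run of identical responses
})

# Placeholder cutoff: flag respondents whose longest run of identical
# responses exceeds 10 items. Appropriate cutoffs depend on the survey.
CUTOFF = 10
flagged = data["longstring"] > CUTOFF

print(f"Excluding {flagged.sum()} of {len(data)} respondents")
cleaned = data.loc[~flagged].copy()
```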

The second approach attempts to prevent CR before it occurs. To this end, initial research has manipulated the perceived interaction between respondents and researchers. Changing instructions to warn respondents of the consequences of carelessness, to identify respondents (e.g., “On each page of the survey you will be asked to enter your name.”), or to promise feedback about respondent data quality (e.g., “You will receive feedback about the quality of your survey responses and whether we can use the information that you provided to us upon completion of the survey.”) has influenced some forms of CR (Meade & Craig, 2012; Ward & Pond, 2013). In one study, instructions that introduced respondents to the researchers increased the number of respondents who said they were diligent but did not change objective indicators of CR (Ward, Meade, Gasperson, & Pond, 2014). Thus, changing instructions can potentially reduce CR, but what if restrictions prevent you from manipulating your survey instructions?

Aside from manipulating instructions, there are other ways to increase perceived interaction in order to prevent CR. One potentially promising approach is rehumanizing Internet-based surveys by manipulating virtual presence. Adding a virtual human may increase the perceived human-to-human interaction between researchers and respondents. In a recent study, the presence of a virtual human did not show a significant main effect but did show significant interaction effects on CR with different types of instructions (Ward & Pond, 2013). The virtual human appeared for the duration of the survey in a space that was approximately 1 square inch. Future studies may reveal larger effects on CR by framing the virtual human as an agent that represents the researcher. Bigger reductions in CR might also be found using a virtual human with more salient features, such as increased interaction with the respondent or greater similarity to the physical appearance of the respondent (Behrend & Thompson, 2012). Taken together, using instructions and virtual presence to rehumanize Internet-based surveys can reduce CR to some extent.

Indicators of CR

This raises the question: How do you know whether you have been successful in reducing CR? The values of CR indicators estimate the amount of CR present in your data. There are various CR indicators because there are different types of CR, including inconsistent responding and long strings of identical responses. Fortunately, researchers can choose from numerous CR indicators, some of which are outlined below (see Meade & Craig, 2012, for a more complete discussion of CR indicators).

A commonly used CR indicator is the instructed-response item, for example, “Select ‘strongly disagree’ for this item.” The metric for scoring correct versus incorrect responses on instructed-response items is clear. Note that embedding instructed-response items too frequently (i.e., more than once every 50 items) can irritate respondents; currently, researchers use their best judgment to determine the appropriate frequency for a given survey.

Even–odd consistency is another CR indicator; it shows the extent to which participants choose equivalent response options on items measuring similar constructs. The rationale is that an individual respondent would not both agree strongly and disagree strongly with items assessing the same construct.

A third CR indicator, LongString, bears mentioning because it detects a notorious type of CR in student samples: answering survey items consistently with the same response option. The longest string of identical responses becomes the LongString value for a respondent. As it stands, researchers use their best judgment to determine cutoff values for the LongString indicator; more research is needed to determine the most useful cutoffs for different types of surveys.
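To make these three indicators concrete, the following Python sketch computes each one for a single respondent. The item names, scale groupings, instructed response value, and data are hypothetical illustrations rather than material from the studies cited above; in practice, each indicator would be computed for every respondent and compared against a cutoff, as in the exclusion sketch earlier.

```python
import numpy as np
import pandas as pd

# Illustrative 1-5 Likert responses for one respondent; the item names and
# their grouping into scales are hypothetical.
responses = pd.Series({
    "scaleA_1": 4, "scaleA_2": 4, "scaleA_3": 5, "scaleA_4": 4,
    "scaleB_1": 2, "scaleB_2": 2, "scaleB_3": 1, "scaleB_4": 2,
    "scaleC_1": 5, "scaleC_2": 4, "scaleC_3": 5, "scaleC_4": 5,
    "check_1":  1,                    # instructed-response item
})

# 1) Instructed-response item: "Select 'strongly disagree' (1) for this item."
instructed_ok = responses["check_1"] == 1

# 2) Even-odd consistency: for each scale, average the odd-numbered and the
#    even-numbered items separately, then correlate the two sets of subscale
#    means within this respondent (higher = more consistent responding).
scales = {
    "scaleA": ["scaleA_1", "scaleA_2", "scaleA_3", "scaleA_4"],
    "scaleB": ["scaleB_1", "scaleB_2", "scaleB_3", "scaleB_4"],
    "scaleC": ["scaleC_1", "scaleC_2", "scaleC_3", "scaleC_4"],
}
odd_means  = [responses[items[0::2]].mean() for items in scales.values()]
even_means = [responses[items[1::2]].mean() for items in scales.values()]
even_odd_r = np.corrcoef(odd_means, even_means)[0, 1]

# 3) LongString: length of the longest run of identical responses across the
#    substantive items, in the order they were presented.
substantive = responses.drop("check_1").to_numpy()
longest, current = 1, 1
for prev, curr in zip(substantive[:-1], substantive[1:]):
    current = current + 1 if curr == prev else 1
    longest = max(longest, current)

print(instructed_ok, round(even_odd_r, 2), longest)
```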

Aside from the three CR indicators just described, there are several other options, including self-report items (directly asking respondents at the end of the survey whether they think their survey responses are of adequate quality for use in the study), outlier analysis, and bogus items. These three alternatives, as well as the three indicators described, have differential utility in detecting various types of CR. Thus, the researcher must decide what indicators are most relevant (see Meade & Craig, 2012, for a more complete discussion).
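Outlier analysis can take several forms; one option discussed by Meade and Craig (2012) is the Mahalanobis distance of each respondent’s response vector from the sample centroid. The sketch below is a minimal illustration using randomly generated stand-in data; the chi-square flagging threshold is arbitrary and would need justification in a real study.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2

# Hypothetical matrix of item responses (rows = respondents, columns = items).
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(200, 10)),
                     columns=[f"item_{i}" for i in range(1, 11)])

# Squared Mahalanobis distance of each respondent from the sample centroid;
# large values suggest an unusual (possibly careless) response pattern.
centered = (items - items.mean()).to_numpy()
inv_cov = np.linalg.inv(np.cov(items.T))
d2 = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)

# Flag respondents whose squared distance exceeds a chi-square critical value
# (df = number of items); the alpha level here is purely illustrative.
flagged = d2 > chi2.ppf(0.999, df=items.shape[1])
print(f"{flagged.sum()} respondents flagged as multivariate outliers")
```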

Conclusions

In sum, the prevalence of CR and its potential detriment to the quality of survey data make this an important topic. Various CR indicators can identify CR post hoc, whereas instructions and virtual presence hold promise for preventing it. To address CR in your survey project, consider including carefully crafted instructions, virtual presence, and instructed-response items, and review post hoc CR indicators. In these ways, researchers can rehumanize Internet-based surveys to improve data quality.

References and Further Reading

Barak, A., & English, N. (2002). Prospects and limitations of psychological testing on the Internet. Journal of Technology in Human Services, 19, 65–89.

Behrend, T. S., & Thompson, L. F. (2012). Using animated agents in learner-controlled training: The effects of design control. International Journal of Training and Development, 16, 263–283. doi: 10.1111/j.1468-2419.2012.00413.x

Hardré, P. L., Crowson, H. M., & Xie, K. (2012). Examining contexts-of-use for Web-based and paper-based questionnaires. Educational and Psychological Measurement, 72, 1015–1038. doi: 10.1177/0013164412451977

Huang, J., Curran, P., Keeney, J., Poposki, E., & DeShon, R. (2012). Detecting and deterring insufficient effort responding to surveys. Journal of Business and Psychology, 27, 99–114. doi: 10.1007/s10869-011-9231-8

Johnson, J. A. (2005). Ascertaining the validity of individual protocols from Web-based personality inventories. Journal of Research in Personality, 39, 103–129. doi: 10.1016/j.jrp.2004.09.009

Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17, 437–455. doi: 10.1037/a0028085

Ward, M. K., Meade, A. W., Gasperson, S., & Pond, S. B. (2014, May). Manipulating instructions to reduce careless responding on Internet-based surveys. Paper presented at the 28th annual meeting of the Society for Industrial and Organizational Psychology, Honolulu, HI.

Ward, M. K., & Pond, S. B. (2013). Using virtual presence and survey instructions to minimize careless responding on Internet-based surveys. Computers in Human Behavior.

Woods, C. M. (2006). Careless responding to reverse-worded items: Implications for confirmatory factor analysis. Journal of Psychopathology and Behavioral Assessment, 28, 186–191. doi: 10.1007/s10862-005-9004-7

Yu, L. (2011). The divided views of the information and digital divides: A call for integrative theories of information inequality. Journal of Information Science, 37, 660–679.
