
Methods: Don’t Be Too Creative With Your Measures! Avoiding Questionable Measurement Practices 

In recent years, psychological science has faced significant pressure to adopt better research practices, including greater openness and transparency, to improve research quality. Measurement, however, has received far less attention: little has been done to strengthen specific measurement practices, and scientific manuscripts routinely omit important information about measurement decisions.

In a 2020 article in Advances in Methods and Practices in Psychological Science, Jessica Kay Flake (McGill University) and APS Fellow and Spence Awardee Eiko I. Fried (Leiden University) examined questionable measurement practices (QMPs) that raise doubts about the validity of measures and, ultimately, of psychological research and its conclusions. The authors defined the most common questionable practices and offered suggestions for avoiding them.

Defining and measuring what is being studied 

Before creating any study, researchers must define what they are studying and decide on appropriate measures of the construct under study (e.g., which scale to use to measure depression). These measures must adequately capture the construct; when they do not, the study’s validity is called into question. When researchers fail to define their constructs or draw wrong inferences from their measures, “neither rigorous research design, nor advanced statistics, nor large samples can correct such false inferences,” Flake and Fried wrote.

However, there is usually a vast array of possible approaches to measurement, forcing researchers to make many decisions. “We can think of no single psychological construct for which there exists exactly one rigorously validated measure that is universally accepted by the field, with no degrees of freedom regarding how to score it,” Flake and Fried explained. The many degrees of freedom possible in any measurement can render a study’s results questionable. Flake and Fried argue that transparency in measurement choices—reporting all measurement decisions—is the first step to reforming measurement standards. “Transparency does not automatically make science more rigorous, but it facilitates rigor by allowing thorough and accurate evaluation of the evidence,” they wrote. 
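To make those degrees of freedom concrete, consider a minimal Python sketch (the 5-item Likert scale, the responses, and the reverse-keyed item are hypothetical illustrations, not examples from the article). The same raw answers yield different scores, and potentially different conclusions, depending on which defensible scoring choice the researcher makes:

```python
import numpy as np

# One participant's answers on a hypothetical 5-item, 1-5 Likert scale.
responses = np.array([4, 2, 5, 3, 1])

# Scoring choice 1: simple sum of all items.
sum_score = responses.sum()        # 15

# Scoring choice 2: mean of all items.
mean_score = responses.mean()      # 3.0

# Scoring choice 3: treat item 5 as reverse-keyed, recode, then average.
recoded = responses.copy()
recoded[4] = 6 - recoded[4]        # a 1 becomes a 5 on a 1-5 scale
mean_recoded = recoded.mean()      # 3.8

print(sum_score, mean_score, mean_recoded)
```

None of these choices is wrong on its face; the questionable practice arises when the choice goes unreported, leaving readers unable to evaluate or reproduce the score.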

Six questions that can help researchers avoid QMPs

To help researchers identify, avoid, and confront QMPs when planning, conducting, reviewing, and consuming research, Flake and Fried suggested asking six questions. These questions are aimed at promoting the reporting of the information needed to evaluate the selection and use of measures in a study (Flake & Fried, 2020, Table 1):

1. What is your construct? 

  • Define the construct. 
  • Describe theories and research supporting the construct. 

2. Why and how did you select your measure? 

  • Justify the measure selection. 
  • Report existing validity evidence. 

3. What measure did you use to operationalize the construct? 

  • Describe the measure and administration procedure. 
  • Match the measure to the construct. 

4. How did you quantify your measure? (See the sketch following this list.)

  • Describe response coding and transformation. 
  • Report the items or stimuli included in each score. 
  • Describe the calculation of scores. 
  • Describe all conducted (e.g., psychometric) analyses. 

5. Did you modify the scale? And if so, how and why? 

  • Describe any modifications. 
  • Indicate if modifications occurred before or after data collection. 
  • Provide justification for modifications. 

6. Did you create a measure on the fly? 

  • Justify why you did not use an existing measure. 
  • Report all measurement details for the new measure. 
  • Describe all available validity evidence; if there is no evidence, report that. 
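
As one way to make the answers to question 4 auditable, here is a minimal Python sketch under assumptions that are not in the article (a hypothetical 5-item Likert scale with simulated respondents). Every quantification decision, including recoding, scoring, and a basic psychometric analysis (Cronbach's alpha, implemented from its standard formula), is written down where a reader can verify it:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 200 respondents, five 1-5 Likert items driven by one latent trait.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = np.clip(np.rint(3 + latent + rng.normal(scale=0.8, size=(200, 5))), 1, 5)
items[:, 4] = 6 - items[:, 4]   # item 5 is worded negatively (raw responses run opposite)

# Documented quantification decisions (question 4):
items[:, 4] = 6 - items[:, 4]   # decision: recode the reverse-keyed item before scoring
scores = items.mean(axis=1)     # decision: score = mean of all five items, none dropped

print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")          # reported psychometric analysis
print(f"M = {scores.mean():.2f}, SD = {scores.std(ddof=1):.2f}")  # reported descriptives
```

The point is not this particular analysis but the paper trail: each line corresponds to a decision that, per Flake and Fried, belongs in the manuscript.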

Systematic questionable practices 

Besides improving transparency in research, answering these questions can also help to detect systematic questionable practices that repeatedly appear in the literature.  

For example, reviewing findings from the meta-scientific measurement literature, Flake and Fried noted that the relationship between loneliness and extraversion has been shown to be moderated by the particular scales used. This is an example, they wrote, of how “lack of clarity in what scales measure, despite their names, muddies the interpretation of the relationship between constructs.” The review also indicated that measurement flexibility (along with missing details about how measures were quantified) and questionable measurement modifications appear to have been used to misrepresent and mislead, undermining studies’ validity. Finally, creating and using measures that have never been validated appears to be common practice, as shown by examples from the literature on emotions, education and behavior, and social and personality psychology.

Measurement practices can impact the whole field

Although Flake and Fried focused on transparency in this article, they noted that QMPs also stem from other issues, including ignorance, negligence, and misrepresentation. The authors urged researchers and the field to broaden scrutiny of current research practices to include measurement. As an example, they pointed to the study of depression, in which even research with large samples, adequate power, and preregistered methods may fail to measure the intended construct.

What effect might these reforms have? “If the field took the validity of measure use more seriously, researchers would report more information, provide better training to early-career colleagues, and demand rigorous practices during the review process,” wrote Flake and Fried. 

Reference 

Flake, J. K., & Fried, E. I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3(4), 456–465. https://doi.org/10.1177/2515245920952393 
