Individuals in Their Environments: Increasing Rigor in Assessment

People’s social and physical environments can cause their moods, thoughts, behaviors, and symptoms to fluctuate throughout the day. Over the past 30 years, researchers have developed techniques to measure these dynamic processes in participants’ daily lives (e.g., Trull & Ebner-Priemer, 2014). These ambulatory assessment methods include experience-sampling methods (ESM; Csikszentmihalyi & Larson, 1987) and ecological momentary assessment (EMA; Stone & Shiffman, 1994).  

Researchers often use the terms ESM and EMA interchangeably, although their historical antecedents and original aims differ (Stone & Shiffman, 1994). As Trull and Ebner-Priemer described in a 2014 article in Current Directions in Psychological Science, “Experience sampling emphasizes random-sampling schemes and often involves paper-and-pencil diaries and beepers, whereas ecological momentary assessment is most often associated with momentary self-report using electronic diaries.” These methods can provide valuable information, but their use requires the researcher to make many methodological and statistical decisions, increasing possibilities for variation and challenging the transparency, reproducibility, and replicability of the research.  

To increase the rigor of these methods, Olivia J. Kirtley, Ginette Lafit, Robin Achterhof, Anu P. Hiekkaranta, and Inez Myin-Germeys, of the Katholieke Universiteit Leuven, offered a template and tutorial for registration of studies using ambulatory assessment in a recent article in Advances in Methods and Practices in Psychological Science. The researchers defined ESM as a method in which participants complete brief questionnaires one or more times per day—most commonly via a smartphone app—to give in-the-moment reports on their thoughts, behaviors, contexts, and emotions. 

Challenges to the use of experience-sampling methods 

Kirtley and colleagues discussed threats to transparency, reproducibility, and replicability in ESM research. 

  • Calculations of power and sample size are more complex in ESM research because of the multilevel nature of the data. 
  • Compared with other methods, ESM requires additional considerations about item selection, psychometrics, and analytic strategy. 
  • Researchers using ESM must make many choices to develop a study, which can introduce more points of variation among studies and make them harder to reproduce and replicate.  
  • Many analytical choices may exist for the same data set, creating analytical flexibility, which also threatens reproducibility. 
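Because closed-form power formulas rarely cover the multilevel structure that the first bullet mentions, ESM researchers often estimate power by simulation. The sketch below is a minimal, hypothetical illustration (the function name, parameter values, and the simple two-stage slope analysis are this article's illustration, not Kirtley and colleagues' procedure): it simulates repeated measurements nested within persons, estimates each person's within-person slope by ordinary least squares, and tests the mean slope against zero.

```python
import numpy as np
from scipy import stats

def simulate_power(n_persons=40, n_obs=50, fixed_slope=0.2,
                   slope_sd=0.1, resid_sd=1.0, n_sims=500,
                   alpha=0.05, seed=0):
    """Monte Carlo power for a within-person effect in multilevel ESM data.

    Two-stage approach: (1) estimate each person's slope by OLS,
    (2) one-sample t-test of the mean slope against zero.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        slopes = np.empty(n_persons)
        for i in range(n_persons):
            x = rng.normal(size=n_obs)                    # momentary predictor
            b_i = fixed_slope + rng.normal(0, slope_sd)   # person-specific slope
            y = b_i * x + rng.normal(0, resid_sd, n_obs)  # momentary outcome
            slopes[i] = np.polyfit(x, y, 1)[0]            # stage 1: per-person OLS
        _, p = stats.ttest_1samp(slopes, 0.0)             # stage 2: test mean slope
        hits += p < alpha
    return hits / n_sims

power = simulate_power()
```

Varying `n_persons` and `n_obs` in such a simulation shows how both the number of participants and the number of measurements per person contribute to power, which is why both appear in the sampling-plan decisions discussed below.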

Registration as a means to improve rigor in ESM 

“Registration is a tool with great potential to increase transparency, reproducibility, and replicability within ESM research,” wrote Kirtley and colleagues. However, registering ESM studies might not be as straightforward as registering non-ESM studies. ESM uses complex models and requires a priori strategies for handling cases in which a model fails to converge on a solution. It also usually involves multiple researchers with varied research questions, making it difficult to detail all the prospective analyses before data collection.  

Kirtley and colleagues discussed ways to select models, account for potential model-convergence issues, use preexisting data sets, and document these sets in preregistration. They made several additions and modifications to the Preregistration Challenge template (Mellor et al., 2019) to create a new template that facilitates the registration of ESM studies. While developing this template, Kirtley and colleagues considered “(a) specific characteristics of ESM studies that may affect or even preclude their replicability and reproducibility and (b) aspects that may be vulnerable to questionable research practices or analytic flexibility, particularly after data have already been (partially) accessed.” 

What to register using the new template? 

Sampling plan 

  • Data-collection procedure. Kirtley and colleagues added this new subsection to increase transparency about data-collection decisions, including sampling schemes, methods, and participant incentives. 
  • Study duration. You may have to change the planned duration of an ESM study after data collection has begun (e.g., if participants show reduced compliance). You can address this issue in advance by specifying, and registering, a rule to follow in case more data are needed. 
  • Type of sampling scheme. The timing of questionnaire prompts in ESM (e.g., when a specific event occurs vs. randomly) should depend on the construct that you want to measure. Your registration should specify that temporal design. 
  • Total number and type of items (open-ended or closed-ended). Many reports of ESM studies have described only the variables analyzed. However, detailing the number, order, and type of all items is important to understand some patterns of results (e.g., the questionnaire’s length might affect the compliance rate) and facilitate future replications.  
  • Time-out specifications. The template has space for you to provide a theoretical rationale for and register decisions about how long participants have to begin responding to a questionnaire, to respond to one item, or to complete a full questionnaire. 
  • Instructions and practice questionnaires. Details about these materials are important for reproducing the study’s methods. 
  • Rationale for sample size and temporal design, including power analyses that calculate the sample size required to detect an effect. 
  • Stopping rule. You may want to implement a rule that specifies a threshold number of participants before data collection must stop. Usually, such rules are based on power analyses. Power analyses can also indicate the minimum number of measurements per person to reach a certain level of power; your stopping rule should account for this threshold, too. 
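A registration can make the "type of sampling scheme" concrete by specifying exactly how prompt times are generated. The sketch below is a hypothetical example of one common scheme (the function name, hours, and number of beeps are illustrative assumptions, not values from the template): a stratified-random design that splits the waking day into equal blocks and draws one random prompt per block, so beeps are unpredictable to participants but still spread across the day.

```python
import random
from datetime import datetime, timedelta

def daily_beep_schedule(n_beeps=10, start_hour=9, end_hour=21, seed=None):
    """Stratified-random sampling scheme: split the waking day into
    equal blocks and draw one prompt uniformly at random within each
    block, so prompts are unpredictable but evenly spread."""
    rng = random.Random(seed)
    day_start = datetime(2024, 1, 1, start_hour)  # date is a placeholder
    block = timedelta(hours=end_hour - start_hour) / n_beeps
    return [day_start + i * block + rng.random() * block
            for i in range(n_beeps)]

beeps = daily_beep_schedule(seed=42)
```

Registering the generator (scheme, window, number of prompts, and any minimum gap between prompts) lets others reproduce the temporal design exactly, which a verbal description such as "10 random beeps per day" does not.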


Kirtley and colleagues’ template asks researchers to specify measured variables and manipulated variables. It will ask you to describe in detail only the variables you will use in confirmatory analyses, but you must provide a full list of the ESM items as well. The template also includes subsections for measured non-ESM/time-invariant variables (e.g., gender) and measured ESM/time-variant variables (e.g., moment-dependent mood). Finally, given the multilevel structure of ESM data, the template asks you to specify varying levels of both measured and manipulated ESM variables. 

Prior knowledge of the data  

When researchers use preexisting data, any knowledge of the data can lead them to make data-dependent decisions. Report any prior knowledge, as well as the source of it (e.g., conference presentations, preprints). 

Analysis plan 

  • Model complexity and convergence issues. Register how you plan to evaluate model complexity and what you will do when data violate assumptions, the model does not converge, or other analytic problems arise. 
  • Model selection and robustness. Register the criteria used for model selection and all information needed to reproduce the analysis (including the software used). For instance, if you estimate a model using a Bayesian approach, your analysis plan can describe the distribution of the parameters as well as the priors. 
  • Excluding data and handling missing data. In ESM studies, factors such as compliance and technical issues can affect the data quality. Specify data-exclusion criteria in the analysis plan, including exclusion criteria related to technical problems. It is also important to state how missing data and outliers will be handled.
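Exclusion rules like those in the last bullet are easiest to preregister, and to audit later, when stated as executable criteria. The sketch below is a minimal, hypothetical illustration (the 50% compliance threshold and the toy data are assumptions for this example, not values from Kirtley and colleagues): it computes each participant's compliance from missed prompts and flags participants who fall below the preregistered threshold.

```python
import pandas as pd

# Toy ESM data: one row per delivered prompt; a missing mood rating
# (NaN) marks a prompt the participant did not answer.
df = pd.DataFrame({
    "person": ["a"] * 4 + ["b"] * 4 + ["c"] * 4,
    "mood":   [3, 4, None, 5,  None, None, None, 2,  4, 4, 3, 5],
})

MIN_COMPLIANCE = 0.5  # preregistered exclusion threshold (assumed value)

# Compliance = proportion of delivered prompts that were answered.
compliance = df.groupby("person")["mood"].apply(lambda s: s.notna().mean())
excluded = compliance[compliance < MIN_COMPLIANCE].index.tolist()
analysed = df[~df["person"].isin(excluded)]
```

Here participant "b" answered only 1 of 4 prompts and is excluded, while the remaining participants' rows (including their occasional missed prompts, to be handled by whatever missing-data strategy the plan registers) are retained for analysis.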


Csikszentmihalyi, M., & Larson, R. (1987). Validity and reliability of the experience-sampling method. Journal of Nervous & Mental Disease, 175, 526–537. 

Kirtley, O. J., Lafit, G., Achterhof, R., Hiekkaranta, A. P., & Myin-Germeys, I. (2021). Making the black box transparent: A template and tutorial for registration of studies using experience-sampling methods. Advances in Methods and Practices in Psychological Science, 4(1). 

Mellor, D. T., Esposito, J., Hardwicke, T. E., Nosek, B. A., Cohoon, J., Soderberg, C. K., Kidwell, M. C., Clyburne-Sherin, A., Buck, S., DeHaven, A., & Speidel, R. (2019). Preregistration Challenge: Plan, test, discover. OSF. https:// 

Stone, A. A., & Shiffman, S. (1994). Ecological momentary assessment (EMA) in behavioral medicine. Annals of Behavioral Medicine, 16(3), 199–202. 

Trull, T. J., & Ebner-Priemer, U. (2014). The role of ambulatory assessment in psychological science. Current Directions in Psychological Science, 23(6), 466–470. 
