When Research Findings and Social Norms Collide

A Look at the Role of Public Policy in Research Reporting

There have been a good many discussions of the complicated and challenging interrelationships between behavioral science research and social policy or program development, viewed from a variety of perspectives (White, 1996; Zigler & Hall, 1999). Some have considered the various ways in which research findings may be used (or misused) to inform or influence policy development (Bertenthal, 2002; Lerner, Fisher & Weinberg, 2000; McCall, Groark & Nelkin, 2004; Newcombe, 2002). Others deal with the important scientific benefits that frequently result from research addressed specifically to applied problems or social policy questions (Bronfenbrenner, 1974; McCall & Groark, 2000).

This commentary addresses a narrower issue regarding the relationship between the reporting of developmental research findings and social policy considerations. The main purpose is to raise awareness and promote discussion of some potential and often subtle constraints on the open reporting of research findings which arise from concerns about their possible social policy implications. More specifically, these are concerns that the findings may be viewed as undermining support for a particular social policy, program, or legislative initiative, or as potentially minimizing the significance of particular social problems. I will focus primarily on such concerns as they may arise on the part of investigators themselves or professional colleagues prior to reporting or publication, and also on the part of editorial or other reviewers.

The publication constraints resulting from these concerns can significantly limit the open exchange of knowledge generated by the research of various investigators, thus potentially impeding the normal growth of scientific knowledge as well as the development of sound social policies or programs. In the paragraphs which follow, I briefly summarize some selected examples, based on my own experience and that of a number of colleagues (as shared with me), in order to concretize some of the ways in which these constraining influences can play an inhibiting role. These examples raise several key questions concerning the role that consideration of the social implications of research findings should play when issues of publication or open reporting arise. An outline and brief discussion of these questions follows the specific illustrative situations outlined below.

Several types of socially relevant research findings seem to trigger resistance to open reporting or publication. The first three examples below illustrate concerns about the reporting of negative or questionable findings on the adverse effects of various risk factors assumed to threaten children’s development. Findings which question the effectiveness of particular intervention programs also may be seen as potentially threatening to continued policy or program support, as illustrated in the final three examples.

  1. A reviewer expressed serious concerns about publishing a paper I had submitted which reported that, under certain circumstances, single parenthood need not have adverse effects on children’s achievement. The reviewer worried that the take-home message for general readers would be that “single parenting doesn’t matter,” thus trivializing the problem. Similar concerns were expressed in conversations at various meetings, namely that such negative findings could minimize the significance of single parenthood as a social problem and thus weaken the case for supportive programs for single mothers. The paper was eventually published, but only after it was actively defended as adding to our understanding of the influence of the single-parenthood experience.
  2. A colleague presented and then published a study indicating that cognitive competencies at age 11 were not significantly impaired in children with a restricted variety of early experience, thus suggesting considerable developmental resilience despite previous environmental restrictions. The investigator was roundly criticized by others in the field for reporting these results at a time when they could be used to weaken the case for federal legislation supporting enriched early experience programs.
  3. Several colleagues and I were actively discouraged by other colleagues from planning a much-needed symposium intended to clarify some of the inconsistencies and uncertainties in the association between early malnutrition and intellectual development. They were concerned that raising questions about these linkages in a public symposium might threaten current efforts to increase funding for child nutrition programs. Because of their reluctance to participate, the symposium idea was subsequently dropped.
  4. On a small-group visit to a research program in Peru, several others and I heard a report on an important study of nutritional supplementation for infants and toddlers from rural low-income families, which had not resulted in significant enhancements of early growth and development. It was obvious that the investigators considered their research a failure, and not worth publishing. They were also concerned that releasing their findings could threaten continuing commitments to support supplementation programs for the neediest young children. The visiting team encouraged them to report their findings and the knowledge derived from their research, so that other investigators could profit from their experience. (One of the visitors was a distinguished nutrition scholar who commented that he was aware of a number of studies failing to demonstrate gains from particular interventions which were never published, to the great detriment of other investigators planning such interventions.)
  5. A colleague had difficulty publishing a paper which reported that the amount of time children spent in after school programs was associated with increased aggressive behavior. The author’s strong impression was that the major reason for rejection was concern about the potential damage that might be done to public perceptions of after school programs, even though the manuscript provided a reasonable explanation of the findings obtained for these particular programs. The final stated editorial view in declining publication was that the findings were too potentially controversial and detrimental. (The paper was subsequently published in a “lesser” journal.)1
  6. A technical advisory committee on which a colleague served was reviewing the early findings of an investigative group evaluating the quality of a government-funded early childhood program. The committee recommended that the investigators report additional important data which indicated that program quality could be significantly improved, to the overall benefit of the program and its continued development. There was much resistance on the part of the investigators to reporting these data, apparently based largely on their concerns that the findings would reflect unfavorably on the program and the government agency supporting the evaluation.

These brief illustrative anecdotes raise a number of questions about the proper role of concerns about the potential social consequences of research findings, and whether such concerns may, under some circumstances, inhibit research reporting to the disadvantage of research progress in the field. It is recognized that some of these questions are complex and may generate considerable difference of opinion in the scientific and policy communities, as the need for open communication of research findings is balanced against the possibility of undesirable social consequences of publication. Nevertheless, I suggest that it would be very helpful for these kinds of issues to be surfaced and openly discussed. Some of the questions raised, and a short commentary, are outlined below.

  1. Are the distinctions between technical or scientific concerns, and concerns about unfavorable policy implications of the findings, made explicit in the thinking of investigators and others who may be involved in making decisions about the reporting of research findings? (It is not uncommon for researchers, or reviewers, to have particular social policy interests or preferences, sometimes even finding themselves in the position of policy advocates. If these roles as researchers and advocates are blurred, it may be difficult for researchers to keep the evaluation of research findings, and the perceived implications of reporting them, free of undue influence from their policy priorities. As noted in the illustrative anecdotes above, investigators or reviewers were sometimes quite explicit in indicating their concerns about the potential policy implications of particular research findings. At other times, however, it has seemed less clear to authors that these social concerns were being separated from scientific concerns when judgments were made about research reporting.)
  2. How much weight should investigators, reviewers and editors give to the potential social policy consequences of a study’s findings in arriving at evaluative judgments about open reporting or publication?
  3. Are the technical/scientific “bars” to reporting or publication set higher in the case of findings that might be seen as having potentially unfavorable social or policy consequences? If so, is this appropriate? (It is recognized that the more significant the potential negative policy consequences of the research, the greater the felt need for assurance that the study is free of serious flaws and that its findings are reliable. At the same time, questions arise as to how to calibrate and weight the relevant “policy impact” and “scientific quality” scales while avoiding inappropriate constraints on research reporting. It has sometimes been suggested that the relevant scientific criteria for research reporting or publication should be applied equivalently, whether or not the research may have negative policy consequences. The likelihood of such negative consequences could be minimized by explicitly clarifying the boundaries of the social or policy implications suggested (or not suggested) by the research findings at the time of publication, and thereafter.2 For helpful comments on issues related to questions 2 and 3, see Newcombe (2002), Lilienfeld (2002), Phillips (2002), and Sher & Eisenberg (2002).)
  4. Should studies which fail to demonstrate the effectiveness of particular interventions be viewed as non-publishable because of their negative findings? Conversely, should such studies be seen as a potential source of valuable empirical information which can benefit subsequent related research by the investigators as well as by others in the field? (This is a particularly sensitive issue, since so much is often at stake in program evaluation research. Obviously, one would want to avoid reporting premature or weakly based findings, whether negative or positive. On the other hand, even negative findings from a well-designed intervention and evaluation can provide invaluable data and explanatory hypotheses which can meaningfully guide subsequent research or lead to modifications in the intervention.)
  5. Are some sponsoring agencies being unduly restrictive when determining whether technically sound research findings should be reported or published? If so, would a more open stance on publication of research results benefit both the science involved as well as the sponsoring agency?

(While this commentary has not focused on constraints from this source, which are sometimes institutionally determined, it seems useful to raise the question since some of the issues involved are related to those already discussed. Also, it is conceivable that institutional practices may be open to some modification through the efforts of internal investigators or science administrators.)

Moving beyond these specific questions, we return to the overall issue raised in this commentary, namely, the extent to which the types of constraints on research reporting discussed here are in fact adversely affecting open communication of research findings, with potentially detrimental effects on both the scientific enterprise and on policy or program development. My own experience, and that of a good many colleagues, suggests that these constraining influences, which are often subtle, occur commonly enough to be worrisome, and that they have the potential to significantly impede the free flow of research information among scholars in the field. It is important to note that concerns about potential social or policy implications may inhibit investigators from reporting or submitting their findings in the first place, or from persisting in efforts to see them through to publication after initial review. Also, there may be significant disagreements about reporting research findings even among collaborating investigators, based on differing views of the weight that should be given to such social or policy concerns.

Given this background, I believe that the questions raised are significant enough that the field would benefit from serious consideration and open discussion of these issues by research investigators and editorial reviewers, as well as by colleagues who regularly deal with social policy matters. A very useful early step would be a more systematic assessment of the scope of the research-reporting constraint problems which I’ve raised, perhaps through a survey of investigators and editorial reviewers by one of our professional organizations (APA or APS) or their editorial committees.

  1. Several of the examples presented involve situations in which papers were eventually published despite initial social or policy-related editorial concerns which had to be overcome. However, it is reasonable to assume that in a good many instances, investigators faced with such editorial reservations or initial rejection would be discouraged from further pursuing publication of their findings.
  2. In an effort to clarify such issues, some journals have begun to require authors to include a final section commenting specifically on the practical or social implications suggested by their research findings (e.g., Journal of Family Psychology, Early Childhood Research Quarterly).

References

  • Albee, G.W. (Ed.) (2002). Interactions among scientists and policymakers: Challenges and opportunities. American Psychologist (Special Issue), 57, 161-227.
  • Bertenthal, B.I. (2002). Challenges and opportunities in the psychological sciences. American Psychologist, 57, 215-218.
  • Bronfenbrenner, U. (1974). Developmental research, public policy, and the ecology of childhood. Child Development, 45, 1-5.
  • Lerner, R.M., Fisher, C.B., & Weinberg, R.A. (2000). Toward a science for and of the people: Promoting civil society through the application of developmental science. Child Development, 71, 11-20.
  • Lilienfeld, S.O. (2002). When worlds collide: Social science, politics, and the Rind et al. (1998) child sexual abuse meta-analysis. American Psychologist, 57, 176-188.
  • McCall, R.B., & Groark, C.J. (2000). The future of applied child development research and public policy. Child Development, 71, 197-204.
  • McCall, R.B., Groark, C.J., & Nelkin, R.P. (2004). Integrating developmental scholarship and society: From dissemination and accountability to evidence-based programming and policies. Merrill-Palmer Quarterly, 50, 326-340.
  • Newcombe, N. (2002). Five commandments for APA. American Psychologist, 57, 202-205.
  • Phillips, D. (2002). Collisions, logrolls, and psychological science. American Psychologist, 57, 219-221.
  • Sher, K.J., & Eisenberg, N. (2002). Publication of Rind et al. (1998): The editors’ perspective. American Psychologist, 57, 206-210.
  • White, S.H. (1996). The relationship of developmental psychology to social policy. In E. Zigler, S.L. Kagan, & N.W. Hall (Eds.), Children, families and government (pp. 409-426). New York: Cambridge University Press.
  • Zigler, E.F., & Hall, N.W. (1999). Child development and social policy: Theory and applications. New York: McGraw-Hill.
Observer, Vol. 18, No. 7, July 2005