Science in Educational Media: Taking it to the (Sesame) Street

There is a tremendous range in the degree to which research plays a role in the production of educational media for children. Many producers rely on little or no research input, limited, perhaps, to occasional consulting by educational advisors or a test of the appeal of a pilot episode. By contrast, a smaller number of producers use research more extensively, an approach that is typified (and was pioneered) by Sesame Workshop. Creator of classic television series such as Sesame Street, The Electric Company, and 3-2-1 Contact, as well as more recent successes such as Ghostwriter and Dragon Tales, the Workshop also produces books, magazines, outreach materials for use in schools and child care settings, and interactive online material and CD-ROMs.

Under the model of production that has come to be known as the “CTW Model” or “Sesame Workshop Model,” producers in a variety of media work hand-in-hand with educational content specialists and researchers throughout the life of a project, from the inception of the idea through the delivery of the finished product.

[Photo: Big Bird and Elmo. ©1995 John E. Barrett/CTW]

Production staff (e.g., producers, writers, editors, and actors) are responsible for the physical production of the material. Educational content specialists devise the educational curriculum that sets goals for the project (e.g., to encourage literacy, positive attitudes toward science, or social development among the target audience), and they help ensure that the material being produced is educationally sound. Researchers test material with the target audience (e.g., children of a certain age) and use the lessons learned from these data to suggest ways to maximize the appeal and educational effectiveness of the end product. In this way, empirical data and more general child development expertise become integral parts of the production process (e.g., Mielke, 1990).

FORMATIVE AND SUMMATIVE RESEARCH
Research conducted in support of educational television production falls into two broad categories. Formative research is conducted while material is being produced (or even before production begins) to investigate questions that arise out of production (e.g., Cronbach, 1963; Flagg, 1990; Scriven, 1967). These questions can include such diverse issues as:

  • Will a particular component of the project be comprehensible and appealing to its target audience?
  • How can material be presented to maximize its effectiveness (e.g., through the optimal placement of print on a television screen, or via online navigation structures that will be obvious to children and easy for them to use)?
  • Which of several potential visual designs for a character will be most appealing?
  • What do viewers already know about a particular topic and where do their misconceptions lie, so that subsequent scripts can address these misconceptions directly?

The second type, summative research, is conducted after production is complete and is intended to assess the impact of the materials on their target audience. Some of the questions addressed by summative research might include:

  • Are viewers better at reading and writing (or are they more motivated to read and write) after watching a television series about literacy?
  • Does exposure to an outreach program for child care settings result in measurable improvements in the care provided in those settings?
  • Are preschool children more likely to cooperate with their peers after watching a television series designed to promote social development?

Data from these types of studies provide a gauge of a project’s success in achieving its educational goals. For example, summative research on Sesame Street has shown that exposure to the series results in significant, long-term effects on children’s academic skills and knowledge, as well as their social behavior (see Fisch & Truglio, 2001, and Fisch, Truglio, & Cole, 1999, for reviews of the literature).

D IS FOR DECISIONS, DEVELOPMENT, DIVERSITY … AND DEADLINES
The kinds of applied research described above are similar in many ways to the research that might be conducted on viewers’ interaction with, or processing of, media in an academic setting. Yet the two types of research also differ in important ways. One of the chief distinctions between academic and applied research in this area lies in their ultimate purposes. Where the ultimate purpose of basic social science research typically is to expand our understanding of mental or physical processes, the ultimate purpose of formative research is to inform the creation or revision of a product.

While children’s interaction with television has been the subject of both basic and formative research, these two types of research have pursued different goals. The ultimate purpose of basic research in this area is generally to inform our understanding of children’s processing of, interactions with, and reactions to television (see, e.g., Huston & Wright, 1997, for a review). Although such concerns are also important in applied research on children and television, they are not the final goal of the research; rather, the ultimate purpose in this case is to use that information to inform the design of television programs that will be comprehensible, appealing, and age-appropriate for their target audience. In other words, the implications of the academic research focus on children; the implications of the applied research focus on the television program.

‘QUICK AND CLEAN’
As such, the success of basic research is judged by the degree to which the data inform our understanding of children. The success of formative research, on the other hand, is judged by the degree to which the data inform the production of effective educational materials. This fundamental distinction leads to several further points:

Formative research is oriented toward practical purposes or questions. Formative research is rarely, if ever, done simply out of the researcher’s own interest or curiosity in a particular topic. Rather, formative research speaks to concrete questions that are essential to informing specific production decisions; it is what Stufflebeam and Webster (1980), among others, term “decision-oriented research.”

Formative research must be done quickly. Because the purpose of formative research is to inform production decisions, the data must be available by the time those decisions are made, or they will be useless. As a result, the schedule for conducting formative research must fit into the larger production schedule. Unlike basic research, in which a single study may be conducted over a period of months, formative researchers typically have a turnaround time of no more than one or two weeks for a study, from the posing of the initial question to the reporting of the data. To take an admittedly extreme example, a last-minute production issue once led my research team to conduct a study in a total of 27 hours, from the posing of the question through analysis and verbal report of the data (Big Bag Research Department, 1998). While this type of turnaround is by no means typical, the underlying principle is a constant.

Formative research must be generalizable. Because of the speed with which it is typically conducted, formative research has occasionally been called “quick and dirty” research (although the term is only sometimes intended to be disparaging). However, this term is actually a misnomer; formative research may be “quick,” but it cannot be “dirty” if it is to be useful. Formative data must be clean enough to be generalizable beyond the sample tested to the larger target audience. Otherwise, the data are likely to mislead producers into decisions that will hurt, rather than help, the material being developed. Indeed, studies have shown that formative data, when collected and analyzed with appropriate care and controls, can be highly consistent across geographically and demographically different samples, and over a gap of as much as three years between studies (Big Bag New Mexico Research, 1995; Fisch, McCann, Body, & Cohen, 1993).

In addition to being generalizable across children, formative data must also be generalizable beyond the material tested to help inform decisions about other, untested material. For example, an early study on one animated Sesame Street segment about the letter J found that, because the children shown on screen were moving while the J was static, viewers tended to look at the characters and not the J. Producers then revised the segment by animating the J as well, which was successful in drawing viewers’ attention to the letter (Lesser, 1974). Clearly, this finding was helpful, not only in revising the particular segment that had been tested, but in approaching subsequent print-based segments as well, so that these segments could be produced effectively without the need for revision. This sort of generalizability to other material is achieved, not only through care in methodology and analysis, but by carefully selecting the material to be tested; usually, the tested material is chosen because it is representative of a larger body of material as well.

Formative research is conducted for an audience of non-researchers. Where basic research is generally conducted for an audience of other researchers, via journals, books, and conferences, the primary audience for formative research data consists of non-researchers: producers, writers, editors, animators, and others with little or no technical background in research. Moreover, this audience must find the research to be credible and convincing if it is to inform their decisions (particularly if the data conflict with the producers’ own instincts). When the news is good, it is easy for a researcher to tell producers that viewers love the material. However, when a researcher informs the production staff that viewers disliked something they created and suggests revisions, the researcher is in a position analogous to telling parents that their baby is ugly, but could be made more attractive with a few improvements.

Formative research must be seen as relevant to the production team, speaking to issues that they consider important; after all, they cannot be expected to make costly changes unless they feel that the underlying problems justify the expense. In addition, the data must be reported concisely and clearly for a lay audience, avoiding the sorts of jargon and statistics that are more typically reported in papers for academic audiences. For example, while inferential statistics are certainly important in analyzing formative data, the production team needs to know whether girls liked something more than boys did, not that the difference was found via a t-test. Finally, the findings must be presented in a way that is persuasive and carries concrete implications for production. In many ways, this is one of the most difficult aspects of formative research to master.

Like all human relationships, the relationship between producers and researchers must be handled with sensitivity, tact, and mutual respect. If either side comes to the relationship with the attitude that it is always right, then the relationship cannot function productively. Instead, the relationship between production and research must be approached from the standpoint that all of the players are on the same team, and that it is not a case of “us versus them.” The ultimate goal is not to be right; it is to produce the best product possible. Such a goal can only be reached by all of the parties involved working hand-in-hand.

At its best, the relationship between producers and researchers becomes truly collaborative, with each side contributing its own unique perspective and expertise. The result is a whole that is greater than either could have created alone, and that can make significant contributions to the lives of children.

REFERENCES

Big Bag Research Department. (1998). Kids-on-screen study. Unpublished research report. New York: Sesame Workshop.
Big Bag New Mexico Research. (1995). Acquired materials: New Mexico study. Unpublished research report. New York: Sesame Workshop.
Cronbach, L.J. (1963). Course improvement through evaluation. Teachers College Record, 64, 672-683.
Fisch, S.M. (in press). Researchers for educational television programs. In Schement, J.R. (Ed.), Encyclopedia of communication and information. New York: Macmillan Reference.
Fisch, S.M., & Bernstein, L. (2001). Formative research revealed: Methodological and process issues in formative research. In Fisch, S.M., & Truglio, R.T. (Eds.), “G” is for growing: Thirty years of research on children and Sesame Street (pp. 39-60). Mahwah, NJ: Lawrence Erlbaum Associates.
Fisch, S.M., McCann, S.K., Body, K.E., & Cohen, D.I. (1993, April). Can formative research achieve reliability? In Martin, L., & Fisch, S.M. (Chairs), Meeting the challenges of formative research: Lessons from the Children’s Television Workshop. Symposium conducted at the annual meeting of the American Educational Research Association, Atlanta, GA.
Fisch, S.M., & Truglio, R.T. (Eds.). (2001). “G” is for growing: Thirty years of research on children and Sesame Street. Mahwah, NJ: Lawrence Erlbaum Associates.
Fisch, S.M., Truglio, R.T., & Cole, C.F. (1999). The impact of Sesame Street on preschool children: A review and synthesis of thirty years’ research. Media Psychology, 1, 165-190.
Flagg, B.N. (Ed.). (1990). Formative evaluation for educational technology. Hillsdale, NJ: Lawrence Erlbaum Associates.
Huston, A.C., & Wright, J.C. (1997). Mass media and children’s development. In Damon, W., Sigel, I.E., & Renninger, K.A. (Eds.), Handbook of child psychology: Vol. 4. New York: John Wiley.
Lesser, G.S. (1974). Children and television: Lessons from Sesame Street. New York: Vintage Books/Random House.
Mielke, K.W. (1990). Research and development at the Children’s Television Workshop. Educational Technology Research and Development, 38 (4), 7-16.
Scriven, M. (1967). The methodology of evaluation. In Tyler, R., Gagné, R., & Scriven, M. (Eds.), Perspectives of curriculum evaluation (pp. 39-83). Chicago: Rand McNally.
Stufflebeam, D.L., & Webster, W.J. (1980). An analysis of alternative approaches to evaluation. Educational Evaluation and Policy Analysis, 3 (2), 5-19.
