
Science in Service: Making Noise That Can’t Be Missed

My career applying psychological science to policy

Tom Hilton

If a tree falls in the woods and everyone is around to hear it, does it make a sound? Of course it does—but the event won’t change much unless the right people are listening. Like that tree in the woods, behavioral research does not seem to lead to much change either. I think that is because we too often focus our attention on scholarship and the implications for theory rather than for daily life. For example, although most psychologists understood the implications of Philip Zimbardo’s prisoner study and Albert Bandura’s Bobo doll study, it took many years for that work to impact public policies and practices. My career as an industrial/organizational (I/O) psychologist has shown me time and again that it is possible for research to affect public and organizational policies and practices—if we make an effort to help policymakers apply it to everyday problems in the public and private sectors.

My career started when I was drafted right out of college and spent 4 years as a Navy officer. Thanks to a research fellowship at the Texas Christian University Institute of Behavioral Research (IBR), I completed my doctorate in January 1980. At the IBR, we tailored research projects to the needs of clients (large corporations and federal agencies) and strove to explain findings in ways that could help clients readily identify changes to make things better. That approach never interfered with scholarly productivity; if anything, it enhanced it. The late Saul Sells, IBR’s director, had been a military researcher during World War II, a fact that attracted contract studies from the armed services as well as federal agencies and private-sector corporations. All of IBR’s research had direct implications for clients’ organizational policies and practices, and that became my orientation as well.

[B]ehavioral research does not seem to lead to much change either … because we too often focus our attention on scholarship and the implications for theory rather than for daily life.

The year before graduating, I joined the faculty of the University of Texas at its Dallas campus. After a couple of years, I began to think about a career change because none of the trees falling around me were being heard. In 1982, I got a call out of the blue from the Chief of the Navy Medical Service Corps asking me to get back into uniform to lead a Navy-wide study to improve the delivery of health services aboard ships. Senior hospital corpsmen on independent duty (serving in lieu of physicians) were failing in their jobs at an unacceptable rate. Figuring that I could return to academia afterward, I decided to give the Navy another try and reported for duty at the Naval Health Research Center in San Diego. I surveyed every shipboard corpsman in the fleet and interviewed most of the training faculty. The project’s results enabled data-based improvements to policies and practices that were easily implemented in the fleet and at the schools. Within 18 months, job failures had dropped off dramatically.

In 1985, the surgeon general ordered me to Bethesda to conduct career-development studies for both the Navy Medical Corps (physicians) and the Navy Nurse Corps. In addition, I was tasked with evaluating the leadership and management training program for senior medical staff. In each project, I made sure that results and recommendations informed policies and practices, enabling constructive changes by senior commanders and staff. In 1988, I began working at the Pentagon for the Chief of Naval Personnel, in charge of Navy studies addressing recruitment, screening, advancement, training, career development, and many of the other topics I/O psychologists study. It was my best job ever, in part because the admiral was very data-oriented. Though only a lieutenant commander at the time, I soon became the only non-admiral participating in weekly board meetings. From that perch, I was able to help the admirals make changes to Navy personnel policies and practices, and I commissioned studies and analyses to address emerging problems the admirals faced.

After my Pentagon tour, the admiral arranged for me to manage a Federal Aviation Administration (FAA) laboratory that conducted organizational research, including an annual survey of the workforce. We also evaluated leadership and technical training programs. Though my lab was in Oklahoma City, I was in Washington regularly to advise the administrator and his executive staff on human-resource policy issues. While in DC, I often met with Pentagon officials to discuss the implications of studies I was doing for them on the side.

In 1992, the Department of Defense assigned me a 3-year part-time detail to the Clinton White House to oversee a national security project. Three years later, the FAA administrator arranged a second detail to help evaluate Vice President Al Gore’s Reinventing Government program. I was able to assemble a dream team of federal research psychologists from across the government, drawn especially from the U.S. Merit Systems Protection Board and the Office of Personnel Management (OPM). We developed a 40-item survey that my FAA lab distributed to a random sample of 40,000 military and civilian federal employees, stratified by agency size. With a 40% response rate, the results confirmed that Gore’s program was very popular among federal employees. However, Clinton’s elimination of all first-level supervisors (intended to make government more efficient) was seen as having the opposite effect. The most consequential outcome of our project was that OPM liked our survey so much that its director decided to adopt it as an annual government-wide event. For the past 20 years, U.S. presidents have used results from OPM’s Federal Employee Viewpoint Survey to gauge the relative success of all federal departments, and Cabinet members have used them to gauge how well their agencies are doing.

By late 1999, my career was at a crossroads. My next promotion would take me out of research, so I was considering retirement from the Navy. I shared my quandary with an old friend and his wife over dinner one night, and he reminded me that I had a standing invitation to join his team at the National Institutes of Health (NIH), which I decided to do. My official role at NIH was as a program official (PO) at the National Institute on Drug Abuse, overseeing a large portfolio of grants focused on addiction health service delivery systems.

A key role of NIH POs is to help researchers shape their grant applications in ways that not only advance health science but also inform public health policy. We could freely do that because grant applications are independently peer-reviewed. Once applications are scored, POs advocate for funding the best ones. Very costly projects often hinge on POs convincing other NIH institutes, federal agencies, and even private-sector foundations to co-fund them. Thus, the size and scope of a PO’s grant portfolio often depend on their awareness of the current research interests of other agencies and organizations. Once grants are funded, POs administer them by monitoring progress and authorizing project modifications as situations demand. As projects start producing information, POs also help ensure that findings are disseminated to policymakers as well as to the scientific community.

The thing I loved the most about being a PO was the freedom and resources to continue to engage in what I call “creative mischief.” Every grant initiative had its own unique challenges. It was not uncommon for more-senior POs to form small cabals of like-minded colleagues to convince NIH top management to authorize new grant initiatives promoting novel research designs, emerging topics, and promising methodologies that could advance health science while also informing health policy. When such initiatives were approved, NIH issued new program announcements, and POs then worked to encourage researchers to apply for funding.

The thing I loved the most about being a program officer was the freedom and resources to continue to engage in what I call “creative mischief.”

The biggest “cabal project” I worked on promoted the use of computerized adaptive testing (CAT) technology, which uses Item Response Theory (IRT) to select only the most informative questions for each patient, thereby reducing response burden. Sick and recovering people do not like filling out lengthy questionnaires to monitor their recovery progress, yet both healthcare providers and researchers need that information. CAT can cut self-report response times from, say, 40 minutes to 10 minutes while still validly measuring the same symptoms and capabilities. Our end goal was to create a gold-standard tool that would both increase patient participation and have high enough validity generalization to broaden its use in research studies and clinical trials.
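For readers curious how the item-selection step actually saves time, here is a minimal sketch in Python, assuming a two-parameter logistic (2PL) IRT model, a simulated item bank, and a fixed 10-item stopping rule. All of those choices are illustrative assumptions on my part; this is the general CAT technique, not the PROMIS implementation.

```python
# A minimal CAT sketch using a two-parameter logistic (2PL) IRT model.
# Hypothetical throughout: the item bank, the starting estimate, and the
# fixed 10-item stopping rule are assumptions, not the PROMIS design.
import math
import random

def prob_endorse(theta, a, b):
    """2PL model: probability of endorsing an item, given trait level
    theta, item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of an item at theta; CAT administers the
    not-yet-asked item with the highest information."""
    p = prob_endorse(theta, a, b)
    return a * a * p * (1.0 - p)

def estimate_theta(responses, bank, lo=-4.0, hi=4.0, steps=400):
    """Crude maximum-likelihood estimate of theta via grid search
    (production systems use EAP or Newton-Raphson estimators)."""
    best_theta, best_ll = 0.0, -math.inf
    for i in range(steps + 1):
        theta = lo + (hi - lo) * i / steps
        ll = 0.0
        for idx, endorsed in responses:
            p = prob_endorse(theta, *bank[idx])
            ll += math.log(p if endorsed else 1.0 - p)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

def run_cat(bank, answer, max_items=10):
    """Adaptively administer items until max_items have been asked."""
    theta, asked, responses = 0.0, set(), []
    for _ in range(max_items):
        # Pick the unasked item most informative at the current theta.
        idx = max((i for i in range(len(bank)) if i not in asked),
                  key=lambda i: item_information(theta, *bank[i]))
        asked.add(idx)
        responses.append((idx, answer(idx)))
        theta = estimate_theta(responses, bank)
    return theta

# Simulate 50 hypothetical items and a respondent with true theta = 1.2.
random.seed(0)
bank = [(random.uniform(0.8, 2.0), random.uniform(-2.0, 2.0))
        for _ in range(50)]
true_theta = 1.2
def simulated(i):
    return random.random() < prob_endorse(true_theta, *bank[i])
print(f"Estimated theta after 10 adaptive items: {run_cat(bank, simulated):.2f}")
```

In a real deployment, the loop would typically also stop early once the standard error of the theta estimate fell below a threshold; asking only the handful of items that are informative for a given patient is what produces the 40-to-10-minute reduction described above.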

Starting in late 2002, I joined a half dozen POs from different institutes to brainstorm and draft a convincing proposal. We estimated the cost at about $100 million over 10 years, too high for any single NIH institute to fund. Thus, we hoped to convince the NIH director to get all 27 NIH institutes and centers to help fund the project. We did, and the Patient-Reported Outcomes Measurement Information System (PROMIS) was the result. The project was approved in 2004, and the system is now in global use in more than a dozen languages. My entire career shows that it is possible for behavioral research not only to advance science but also to influence policies and practices.


Science in Service highlights psychological scientists who work in government or apply their research to policymaking. Would you be a good fit for this column? Write to adesoto@psychologicalscience.org.

Feedback on this article? Email apsobserver@psychologicalscience.org.

