A panel from the University of Chicago and Brookings Institution explored what goes wrong — and what can go right — when researchers and policymakers undertake the science of scaling.
By Sarah Steimer
When evidence-based programs and policies show positive results for a small sample, there’s a strong desire to scale so that many more may see the benefits. Citizens and lawmakers can enthusiastically latch onto the adoption of such interventions on a larger scale — but expansions don’t always deliver the anticipated societal benefits. The result can instead be a waste of resources, a missed opportunity to improve lives and a loss of public trust in the role of science in policymaking.
To explore scalability in policymaking, a June 17 panel co-hosted by Governance Studies at the Brookings Institution and the University of Chicago’s TMW Center for Early Learning + Public Health and Griffin Applied Economics Incubator brought together researchers, practitioners, and government innovation experts.
The discussion led off with an acknowledgement that scaling a program or policy means involving far more stakeholders than the original study required. Elaine Kamarck, a senior fellow in the Governance Studies program and director of the Center for Effective Public Management at Brookings, pointed to the myriad variables outside the research that must be considered. She argued for viewing policymaking as a battlefield: Look not just at the idea and whether it’s good, but assess the inside players and the outside players. Then consider the public and whether there will be a public reaction, along with strategy, tactics, and conflicts.
“We are only at the beginning of the process when we're looking at good, solid research that maybe should be scaled,” Kamarck said. “Then we get into a very murky world, particularly when we're talking about government.”
The panel — moderated by Omar Woodard, vice president of solutions at Results for America — explored the roadblocks to scaling programs and policies, along with some ways to overcome barriers to success.
Obstacles to scaling
Woodard pointed to four sources of challenges around scaling evidence-based programs, as identified by University of Chicago researchers, including panelists John List and Dana Suskind, co-founders and co-directors of the TMW Center for Early Learning + Public Health.
The first challenge is representation: Individuals studied in the research setting often aren't representative of the population at large, making the results difficult to replicate. Second, the specifics of the program may not be representative of how it’s actually delivered in a broader, real-world context. Third, the initial promising results may be interpreted incorrectly, meaning there isn't sufficient evidence to support scaling in the first place. Fourth, there are often spillover effects in the initial study that weren't accounted for, so the effects of the program may be stronger than the initial research suggested. Because the research didn't measure those effects, there’s not a clear sense of the benefits or harms.
List added a fifth challenge: diseconomies of scale. If you expand a program or policy, does it just cost a lot of money? And will keeping it within a budget mean downgrading to lower-quality inputs?
“If they don't satisfy these five threats to scalability, they'll fail,” List said, “regardless of how ingenious the people who are carrying them out are.”
When programs don’t scale, both science and policymakers take a hit to their credibility. Kamarck said policymakers have a difficult time knowing when evidence is actionable, because it requires them to predict the future. “There are people who may admit to the science, but who are skeptical that if you look down the street and around the corner, that the science will not hold,” she said.
Policy-based evidence and momentum
List described what he calls three links of knowledge creation. The first link he calls the philanthropy of science, or how to generate dollars to fund science. The second link is related to overcoming scalability concerns among policymakers.
“We always talk about evidence-based policy; we need to be thinking about policy-based evidence,” List said. “What are the incentives in the system right now that researchers, policymakers, and individuals are doing the right thing to put forward actionable evidence that will really work?”
From here, he points to prescriptions such as including the research’s original scientists on the program implementation team. These experts would spell out all negotiables and non-negotiables in their original research design.
The third link in the chain is political will. And policymakers will feel more driven to enact research-based policies at a broader scale if researchers can provide greater certainty about what’s down the street and around the corner.
Speaking from the policymaker perspective, Michael Nutter, the former mayor of Philadelphia, emphasized the benefits of scaling up gradually with a pilot project, for example. “That's the government's level of risk-taking: We're willing to do this little thing,” he said. “It may or may not work, but it’s not so big that it’ll cause the place to collapse if it doesn't work. The challenge, like most other things, is quality control.”
Nutter also underscored the need for momentum, as scaling a project may take time. To this, Kamarck recommended making the bureaucracy the audience for scaling programs. At the federal level, she said, bureaucrats can stay for 20 or 30 years, versus an elected official who may only hold office for one term.
Don’t just read data, tell stories
Even when there’s certainty around the scalability of a program or policy, people need to believe in the possibilities. As Nutter explained it, you need to bring people down the street with you through storytelling.
What moves academic studies and research forward is press coverage, Kamarck argued. “Press coverage requires stories, it requires anecdotes, it requires something that makes the data come alive,” she said. “If you can get press coverage, and if you can get stories, you can get a politician to talk about it. And politicians are experts at turning complicated things into kitchen table language.”
Stories are so powerful that they can even create false positives. List referenced the D.A.R.E. program, which had only one set of results, from Honolulu, that could be neither scaled nor replicated. But the narrative was so powerful that the program was rolled out nationally.
To convince a policymaker to adopt a research story to tell their audience, the study itself must be believably scalable. Kamarck said this is where researchers need to put themselves in the shoes of policymakers and practitioners and understand what barriers to adoption could exist. “You do no good by developing programs that are the Cadillac of all programs that are unattainable and incredibly expensive,” she said.
The importance of equity in scaling
Scaling studies into evidence-based policies and programs must include a conversation about equity, List said.
“There needs to be evidence both based on how big the pie is, but also on how that pie is divided,” he said. He urged researchers to consider multi-site trials or how — over space and time — the pieces of the pie may go. “That's what we should be doing as researchers in the very beginning in the petri dish, figuring out: Does this work? And who does it work for? Is that the value proposition we want society to take?”
Kamarck echoed List, saying researchers need to build equity into the data collection. Oversample to ensure your study shows how a policy could affect a wide range of people.
List also noted the importance of rolling out policies and programs in a way that continues to track the data. That way, reviewers can measure continuously to ensure the program delivers on the original research’s promises on equity.
It can be done
The barriers and solutions of scaling research may seem onerous, but the panelists emphasized the benefits of the work.
“The science of science and understanding scaling is, without a doubt, one of the most critical issues that we face in having a fiscally responsible government and truly giving our population a better quality of life,” Suskind said.
And while scaling research into larger policies and programs can make an outsize difference in many lives, smaller-scale studies that also enact positive change shouldn’t be discounted. Some of a study’s ingredients, such as particular people, can’t scale. List likened it to a successful restaurant: If the chef is the magic ingredient, you can’t reproduce them at other locations — but that doesn’t take away from the positive experience the restaurant provides.
“I don't want people walking away from this conversation thinking big change can only happen at scale,” List said. “And that a program has to be large-scale to matter. That’s not true.”