Natural Experiments

1 August 2022

Kizzy Gandy, National Director of our Program Evaluation team in Australia, discusses the use of natural experiments in our evaluations. 

Natural experiments can be a cost-effective way to evaluate long-term program outcomes. In Australia and elsewhere, program evaluation budgets are typically small and dedicated to measuring outcomes over one to two years at most. However, we know that many social problems take decades or even generations to turn around, so unless evaluations measure long-term outcomes, the full benefits of a program may never be known.

Natural experiments are unplanned opportunities to evaluate how a program influenced outcomes, either in the short term or the long term. They arise where natural variation in exposure to a program creates groups that resemble a treatment group and a control group. In other words, those exposed and not exposed to a program are comparable. Natural experiments differ from randomised controlled trials because variation in exposure is not manipulated by a researcher.

An illustrative example

A recent academic paper (yet to be peer reviewed) highlights how a natural experiment helped to build evidence about whether cash assistance to families has long-term benefits for children when they grow up.

In 1993, in the Southeastern United States, 1,420 children aged 9, 11 and 13 years were enrolled in a longitudinal study which lasted until 2016. The study cohort was recruited from 11 contiguous counties in the Appalachian Mountains of western North Carolina. Four years into the study, an American Indian tribe within the study cohort opened a casino and distributed the profits to every tribal member unconditionally (i.e., regardless of age or work status). This resulted in an average payment of $5,000 per person per year. Children’s allocations were paid into a trust fund until they graduated high school. Therefore, the total amount of extra cash households had available in any given year depended on the number of American Indian adults that lived there – households with two American Indian adults had twice as much extra cash available as households with only one American Indian adult.

When the children in the longitudinal study were aged 25 and 30, researchers compared the mental health and wellbeing of those from families who had received the cash transfers versus those from families who hadn’t, while adjusting for pre-existing differences. They found that the larger and longer the cash transfers in childhood, the better the outcomes: fewer anxiety and depressive symptoms; better physical health and financial well-being; and fewer risky/illegal behaviours in adulthood. This ‘dose response’ relationship – associated with the number of American Indian parents in the household and the age of the child when the casino opened – indicated that individuals from the American Indian tribe did not simply benefit from features of the tribe itself (e.g., social capital, community investments); rather, their long-term outcomes were influenced by the household cash transfers during childhood.

How can we use natural experiments in our evaluations?

At Kantar Public, we see lots of opportunities for our government and not-for-profit clients to use natural experiments to make evidence-informed program decisions. For example, there is often a large gap between the number of people who register for a program and the number who actually participate, owing to factors such as inertia and hassle. Comparing outcomes between these two naturally occurring groups enables evaluation of the program’s impact.

However, in any natural experiment, it’s important to consider potential systematic differences between the two groups that could influence their outcomes. For example, program ‘no-shows’ are likely to be less motivated than those who participate. Controlling for these differences can be achieved using methods such as:

  • matching people in each group with similar baseline characteristics (propensity score matching);
  • comparing the post-program outcomes between the two groups, after adjusting for their pre-program outcomes (difference-in-differences);
  • only comparing groups where it’s reasonable to assume they have roughly equivalent backgrounds because they fall just either side of an arbitrary cut-off point that determined program participation (regression discontinuity); or
  • if data are collected at multiple and equally spaced time points (e.g., weekly, monthly, or yearly), observing whether the data pattern changed after the program was introduced (interrupted time series).
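To make the difference-in-differences idea above concrete, the sketch below walks through the arithmetic on invented numbers (all values are hypothetical, chosen only to illustrate the calculation):

```python
# Hypothetical example: mean outcome scores before and after a program
# for participants ("treated") and registered no-shows ("control").
# All numbers are invented for illustration.

def diff_in_differences(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences estimate of a program effect.

    Subtracting each group's pre-program level removes fixed differences
    between the groups (e.g., motivation); subtracting the control
    group's change then removes trends common to both groups over time.
    """
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Participants improved from 50 to 62; no-shows drifted from 48 to 52,
# so the estimated program effect is 12 - 4 = 8 points.
effect = diff_in_differences(50.0, 62.0, 48.0, 52.0)
print(effect)  # 8.0
```

Real analyses would estimate this in a regression with standard errors and covariates; the subtraction above is just the core logic.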

We predict there will be lots of natural experiment papers published in the near future. The COVID-19 pandemic led to the sudden introduction of many new programs in Australia, often with major variation in implementation. For example, workers can claim the Pandemic Leave Disaster Payment if they are required to isolate for seven days. The payment is $750 if you lose 20 hours or more of work due to isolation, but only $450 if you lose one day to 19 hours of work. This eligibility cut-off creates a natural experiment suited to a regression discontinuity design.

For example, if we wanted to find out whether giving insecure workers cash improves or worsens their attachment to employment, in a few years’ time Australian Tax Office data could be used to compare the employment status of those who lost 19 hours versus 20 hours of work, controlling for pre-payment employment history. We can be confident that any differences in employment status between these two groups were influenced by receiving the extra $300. This is because it’s reasonable to assume that individuals who narrowly fall either side of the cut-off for the extra $300 are comparable in ways that could influence their attachment to employment in the long term (e.g., motivation, capacity to work). However, it would be harder to make this assumption about workers who lost 8 hours versus 40 hours of work, highlighting that careful thought is needed to identify the conditions under which a natural experiment provides valid and reliable findings.
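A highly simplified sketch of that cut-off comparison is below. Real regression discontinuity analyses fit local regressions on each side of the threshold; this version just compares mean outcomes within a narrow window, and every data point is invented for illustration:

```python
# Simplified regression-discontinuity-style comparison around a cut-off.
# A real analysis would fit local regressions on each side; here we
# compare mean outcomes in a narrow bandwidth. All data are hypothetical.

def rdd_estimate(records, cutoff=20.0, bandwidth=2.0):
    """Estimate the jump in outcomes at the cut-off.

    `records` is a list of (hours_lost, outcome) pairs, where outcome
    might be an indicator for being employed some years later. Only
    observations within `bandwidth` hours of the cut-off are used,
    since those individuals are assumed to be comparable.
    """
    below = [y for hours, y in records if cutoff - bandwidth <= hours < cutoff]
    above = [y for hours, y in records if cutoff <= hours <= cutoff + bandwidth]
    mean = lambda values: sum(values) / len(values)
    return mean(above) - mean(below)

# Hypothetical workers near the 20-hour cut-off:
# (hours of work lost, employed a few years later? 1 = yes, 0 = no).
workers = [(18.5, 1), (19.0, 0), (19.5, 0), (20.0, 1), (21.0, 1), (21.5, 1)]
print(rdd_estimate(workers))
```

The estimate is the difference in employment rates just above versus just below the cut-off, which under the comparability assumption is attributed to the larger payment.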

Thoughts for the future

Last year’s Nobel Prize winner in economics, David Card, showed the benefits of natural experiments to mainstream audiences with his evaluation of how changes to the minimum wage affect employment levels. We hope that more of our clients will be eager to explore natural experiments as a pragmatic, cost-effective evaluation design, especially for understanding long-term program outcomes.

If you want to read more, Liam Delaney, Professor of Behavioural Science and Head of Department of Psychological and Behavioural Science at the LSE, has a great blog post listing 19 natural experiments.

This article was issued under our former global brand name: Kantar Public.  

Kizzy Gandy
National Director,
Program Evaluation
Australia
