A simple explanation of Process Tracing and a tip for using it

03 February 2023

Process Tracing can be complex to execute. Kantar Public Australia explain the approach and share a key tip: thoroughly plan how you will conduct the four tests of causality before commencing data collection.

As evaluators, an ongoing challenge is balancing the provision of clear-cut findings to inform decision-making with the caution required when making causal claims (X caused Y). Many outcome evaluations cannot confidently make causal claims because they cannot establish a counterfactual, for ethical and/or practical reasons. Process Tracing is one alternative approach for demonstrating causal links.

1. Establishing causality in evaluation

There are two main ways to approach causal attribution in evaluation:

1. Using experimental or statistical methods to examine whether X caused Y.

2. Using mixed methods within a Theory-Based Evaluation approach to examine whether X contributed to M which contributed to Y (and the contexts or settings in which this occurs).

M represents a causal ‘mechanism’. Mechanisms are defined in many different ways in the literature, but the most practical definition is that they exist inside people’s heads: mechanisms are the cognitive and emotional reactions people have to the cause (X). It’s these reactions that create change – interventions don’t create change. Therefore, mechanisms are the force that produces the outcome (Y). For example, a manager training program (X) might build managers’ confidence to discuss mental health (M), which in turn increases staff help-seeking (Y). In a Theory of Change diagram, mechanisms sit between outputs and outcomes.

One reason we have previously argued that behavioural science can improve evaluation is that it offers important insight into mechanisms.

2. How Process Tracing works

Process Tracing is a Theory-Based Evaluation approach and therefore makes causal claims by examining evidence for theorised causal pathways – the links between causes, mechanisms, and outcomes. It can make more rigorous causal claims than some other theory-based approaches, such as Contribution Analysis, but it also requires more data collection and analysis. It proceeds in three steps.

Step 1

First, evaluators generate a set of hypotheses and counterfactual hypotheses about causes, mechanisms, and outcomes. Ideally, they should specify the evidence needed to confirm or disconfirm each causal pathway before any data are collected.
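For example, imagine an evaluation of a workplace mental-health training program for managers (the hypotheses, indicators, and data sources below are invented for illustration):

  • Hypothesis: The training (X) built managers’ confidence to discuss mental health (M), which increased staff help-seeking (Y).
    Confirmatory evidence: managers describe increased confidence (interviews); help-seeking rises after the training (HR records).
    Disconfirmatory evidence: managers report no change in confidence (interviews); help-seeking stays flat (HR records).
  • Counterfactual hypothesis: A concurrent awareness campaign, not the training, drove the increase in help-seeking.
    Confirmatory evidence: staff attribute their help-seeking to the campaign (staff survey).
    Disconfirmatory evidence: staff attribute their help-seeking to conversations with their manager (staff survey).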

Step 2

Next, evaluators collect data to ‘trace’ these hypotheses and counterfactual hypotheses within a single case study (e.g., one workplace) or multiple case studies (e.g., multiple workplaces).

Step 3

Finally, causal claims are made by applying four tests to the evidence (a short sketch of this logic follows the list):

  1. Straw-in-the-wind (neither confirmatory nor disconfirmatory): If the evidence doesn’t strongly confirm or disconfirm a hypothesis, then no claims about causal inference can be made.
  2. Hoop test (disconfirmatory): If evidence that is necessary to confirm a hypothesis (i.e., the hypothesis must jump through a hoop to stay in the running) is not observed, then the hypothesis can be eliminated, but we can’t draw conclusions about rival hypotheses.
  3. Smoking gun (confirmatory): If the evidence strongly confirms a hypothesis but can’t eliminate rival hypotheses, then we can be somewhat confident about making a causal claim.
  4. Doubly decisive (both confirmatory and disconfirmatory): If the evidence strongly confirms a hypothesis and eliminates all rival hypotheses, then we can be reasonably confident about making a causal claim.
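The four tests can be read as statements about whether a piece of evidence is necessary and/or sufficient to confirm a hypothesis. The short Python sketch below is one way to make that logic concrete (the EvidenceTest class, the interpret function, and the message wording are invented for illustration; the test logic follows the descriptions above):

from dataclasses import dataclass

@dataclass
class EvidenceTest:
    name: str
    necessary: bool   # must this evidence be observed for the hypothesis to survive?
    sufficient: bool  # does observing this evidence strongly confirm the hypothesis?

def interpret(test: EvidenceTest, observed: bool) -> str:
    """Return the inference a single test licenses for one hypothesis."""
    if test.necessary and test.sufficient:        # doubly decisive
        return ("hypothesis confirmed and rival hypotheses eliminated" if observed
                else "hypothesis eliminated")
    if test.necessary:                            # hoop test
        return ("hypothesis survives but is not yet confirmed" if observed
                else "hypothesis eliminated; no conclusions about rivals")
    if test.sufficient:                           # smoking gun
        return ("hypothesis strongly confirmed; rivals not eliminated" if observed
                else "no conclusion either way")
    return "weak evidence; no causal inference"   # straw-in-the-wind

# Example: a hoop test the hypothesis fails
hoop = EvidenceTest("Hoop test", necessary=True, sufficient=False)
print(interpret(hoop, observed=False))  # hypothesis eliminated; no conclusions about rivals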

3. A tip for using Process Tracing
Process Tracing can be complex to execute, so the best tip we can offer is to thoroughly plan how you will conduct the four tests of causality before commencing data collection. After all, you don’t want to waste time and resources collecting ‘straw-in-the-wind’ evidence when a little planning could ensure you collect data that enables stronger causal claims.

Start with a detailed Theory of Change to develop your hypotheses and counterfactual hypotheses. Create a table (or structured list, like the example shown above) listing the indicators and sources of data for the confirmatory and disconfirmatory evidence. You can then design your data collection instruments knowing that you’re not just accumulating data which you hope will generate clear-cut findings. Instead, you’re efficiently gathering data for testing purposes, so that you can support or refute the larger claims that are most helpful to decision-makers.

This article was issued under our former global brand name: Kantar Public.  

Kizzy Gandy
National Director
Program Evaluation, Australia
