Abt’s Daniel Litwok used an experimental test of providing emergency financial assistance to participants in a job training program to establish a benchmark estimate of the assistance’s impact on educational progress; he then compared that benchmark to nonexperimental estimates of the impact of receiving assistance to assess how closely the experimental and nonexperimental evidence correspond. Litwok constructed two comparison groups for those who received emergency assistance: members of the experimental treatment group who did not receive assistance, and members of the control group, who could not receive it because of their random assignment. With each comparison group, Litwok estimated impacts using three strategies: unadjusted mean comparison, regression adjustment, and inverse propensity weighting. He then compared these estimates to the experimental benchmark.
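To make the three nonexperimental strategies concrete, the sketch below shows one way each estimate could be computed from participant-level data in Python (using statsmodels and scikit-learn). The function name, column names, and data layout are hypothetical illustrations, not Litwok’s actual code.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

def nonexperimental_estimates(df, outcome, treat, covariates):
    """Three nonexperimental impact estimates of a binary 'treatment'
    (here, receipt of emergency assistance) on an outcome.
    df: one row per participant; all column names are hypothetical.
    """
    y = df[outcome].to_numpy(dtype=float)
    d = df[treat].to_numpy(dtype=int)
    X = df[covariates].to_numpy(dtype=float)

    # 1. Unadjusted mean comparison: raw difference in mean outcomes
    #    between recipients and the comparison group.
    unadjusted = y[d == 1].mean() - y[d == 0].mean()

    # 2. Regression adjustment: OLS of the outcome on treatment status
    #    and covariates; the treatment coefficient is the estimate.
    design = sm.add_constant(np.column_stack([d, X]))
    regression_adjusted = sm.OLS(y, design).fit().params[1]

    # 3. Inverse propensity weighting: weight each observation by the
    #    inverse of the estimated probability of its observed status.
    p = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    w = np.where(d == 1, 1.0 / p, 1.0 / (1.0 - p))
    ipw = (np.average(y[d == 1], weights=w[d == 1])
           - np.average(y[d == 0], weights=w[d == 0]))

    return {"unadjusted": unadjusted,
            "regression_adjusted": regression_adjusted,
            "ipw": ipw}
```

In a within-study comparison such as this, each of the three estimates would then be set against the benchmark impact computed from the randomized assignment itself.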
The nonexperimental approaches to addressing selection bias suggested large, positive impacts of the assistance on educational progress. This diverged from the experimental benchmark, which indicated that receipt of emergency assistance did not improve educational progress. Given this divergence, unless a stronger set of predictors is available, future evaluations of such interventions should be wary of relying on these nonexperimental methods.