Causal Inference

ANALYSIS WORKGROUP
Our mission is to foster improvements in the analysis of health-related phenomena among older minorities; to encourage the development of methods and measures that better capture the health, and the determinants of health, of diverse elders; to promote collaboration among RCMAR sites on analysis issues; and to disseminate new knowledge in this area.

Strengthening Causal Inference in Nonrandomized Health Disparity Designs

Created by the Michigan Center for Urban African American Aging Research (MCUAAAR), University of Michigan, Ann Arbor, and Wayne State University

Research on racial/ethnic health disparities is increasingly concerned with identifying the causal mechanisms by which social inequities are translated into disparities in health outcomes. Of necessity, much of this research is observational or naturalistic in design and relies on correctly specified causal models to explicate the structural paths and to control for confounding. Inherent methodological problems in these designs include preexisting group differences, specification of relevant control variables (potential confounders), and measurement error. Some helpful readings are described below.


  • Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues for field settings. Chicago: Rand McNally.

Chapters 2 and 3 describe threats to internal and external validity in categories of sample selection, research design, measurement, and statistical conclusions. Chapter 4 by Charles Reichardt covers essentials in statistical control of confounding including problems posed by unreliable measurement.

  • Cochran, W. G. (1965). The planning of observational studies of human populations. Journal of the Royal Statistical Society, Series A (General), 128(2), 234-266.

Cochran makes a strong case for causal analysis and offers recommendations on the planning, analysis, and interpretation of observational data from a statistician’s perspective.

This article describes four types of causal models for health-sciences research: structural-equation models, graphical models based on the work of Judea Pearl, potential-outcome (counterfactual) models (Rubin's causal model), and the sufficient-component cause model. Causal analysis is, in some sense, a field of study not tied to a particular analytic technique.

  • Hayduk, L., Cummings, G., et al. (2003). Pearl's D-separation: One more step into causal thinking. Structural Equation Modeling, 10(2), 289-311.

This is a good introduction to some of Pearl's key ideas.
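
To make d-separation concrete, the short simulation sketch below (written in Python with NumPy; the variable names and effect sizes are invented for illustration, not taken from Hayduk et al.) generates data from a simple chain X → M → Y. X and Y are marginally associated because the path through M is open, but conditioning on M blocks that path, so the partial association of X and Y given M is essentially zero.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000

    # Chain DAG: X -> M -> Y (linear-Gaussian for simplicity).
    x = rng.normal(size=n)
    m = 0.8 * x + rng.normal(size=n)
    y = 0.8 * m + rng.normal(size=n)

    # Marginally, X and Y are correlated: the path X -> M -> Y is open.
    print("corr(X, Y):", round(np.corrcoef(x, y)[0, 1], 3))

    # Conditioning on M (here via residualization, i.e., partial correlation)
    # blocks the path; X and Y are d-separated by {M} in this graph.
    rx = x - np.polyval(np.polyfit(m, x, 1), m)
    ry = y - np.polyval(np.polyfit(m, y, 1), m)
    print("partial corr(X, Y | M):", round(np.corrcoef(rx, ry)[0, 1], 3))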

A cohesive presentation of concepts of, and methods for, causal inference. [http://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/]

The authors use examples from epidemiology to show how blind statistical adjustment can go wrong. A priori knowledge and causal models are essential to determine the appropriate covariate adjustment model.
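
As one hypothetical illustration of how blind adjustment can go wrong, the sketch below simulates a collider: an exposure and an outcome that are truly unrelated both influence a third variable (say, selection into the sample), and "adjusting" for that variable manufactures a spurious association. The variable names and cutoff are invented for the example; only a causal model tells the analyst that this covariate should be left out.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000

    # Exposure and outcome share no causal connection at all...
    exposure = rng.normal(size=n)
    outcome = rng.normal(size=n)

    # ...but both affect a third variable, e.g., selection into the sample.
    collider = exposure + outcome + rng.normal(size=n)

    # Crude association: essentially zero, as it should be.
    print("crude corr:", round(np.corrcoef(exposure, outcome)[0, 1], 3))

    # "Adjusting" for the collider (here, stratifying on high values)
    # manufactures a spurious negative association.
    kept = collider > 1.0
    print("corr within stratum:",
          round(np.corrcoef(exposure[kept], outcome[kept])[0, 1], 3))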

The counterfactual model of causality for observational data analysis is presented, and methods for causal effect estimation are demonstrated using examples from sociology, political science, and economics.
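
A minimal sketch of the counterfactual idea, under invented data-generating values: both potential outcomes are constructed explicitly, so the true average causal effect is known, while the naive treated-versus-untreated contrast is distorted by confounded treatment assignment. In real observational data only one potential outcome per unit is observed, which is why identification assumptions carry the weight.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000

    # A confounder that raises both treatment uptake and the outcome.
    confounder = rng.normal(size=n)
    treated = rng.random(n) < 1 / (1 + np.exp(-confounder))

    # Potential outcomes Y(0) and Y(1); the true average effect is 2.0.
    y0 = confounder + rng.normal(size=n)
    y1 = y0 + 2.0

    # Observed outcome: only one potential outcome is seen per unit.
    y_obs = np.where(treated, y1, y0)

    # The naive treated-vs-untreated contrast is confounded (biased upward).
    print("naive difference:",
          round(y_obs[treated].mean() - y_obs[~treated].mean(), 2))

    # With both potential outcomes in hand (never true in practice),
    # the average causal effect is recovered exactly.
    print("true average effect:", round((y1 - y0).mean(), 2))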

  • Pearl, J. (2000). Causality: Models, reasoning, and inference. New York, NY: Cambridge University Press.

Pearl's work represents the most comprehensive new development in thinking about causal modeling. It is somewhat difficult because so much of it is new.

This is a very informative discussion of covariate selection and of the need for well-explicated causal models to guide analysis. Randomized designs face many of the same problems in analysis as nonrandomized designs do when moderating (or mediating) variables are involved.
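
The point about randomized designs can be illustrated with a small simulated example (hypothetical variable names and effect sizes): when a randomized treatment works entirely through a post-treatment mediator, the unadjusted contrast recovers the total effect, but "controlling" for the mediator removes the very pathway of interest.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 200_000

    # Randomized treatment whose entire effect on the outcome runs
    # through a mediator (total effect = 0.8 * 1.0 = 0.8).
    treated = rng.random(n) < 0.5
    mediator = 0.8 * treated + rng.normal(size=n)
    outcome = 1.0 * mediator + rng.normal(size=n)

    # The unadjusted contrast recovers the total effect, as randomization promises.
    print("unadjusted:",
          round(outcome[treated].mean() - outcome[~treated].mean(), 2))

    # "Controlling" for the post-treatment mediator removes the pathway
    # of interest, so the estimated treatment effect collapses toward zero.
    X = np.column_stack([np.ones(n), treated.astype(float), mediator])
    beta = np.linalg.lstsq(X, outcome, rcond=None)[0]
    print("adjusted for mediator:", round(beta[1], 2))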

Shadish covers recent developments in quasi-experimental and observational designs, including multilevel analyses, propensity-score adjustment methods, and Rubin’s causal model.
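
For readers new to propensity-score adjustment, here is a minimal inverse-probability-weighting sketch, assuming a single measured confounder and using scikit-learn’s logistic regression; it illustrates the general idea rather than reproducing any method from the reading, and all names and values are invented.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n = 50_000

    # One measured confounder; the true treatment effect is 1.0.
    x = rng.normal(size=n)
    treated = rng.random(n) < 1 / (1 + np.exp(-1.5 * x))
    y = 1.0 * treated + 2.0 * x + rng.normal(size=n)

    # The naive treated-vs-untreated contrast is badly confounded by x.
    print("naive contrast:", round(y[treated].mean() - y[~treated].mean(), 2))

    # Estimate propensity scores and form inverse-probability weights.
    X_conf = x.reshape(-1, 1)
    ps = LogisticRegression().fit(X_conf, treated).predict_proba(X_conf)[:, 1]
    w = np.where(treated, 1 / ps, 1 / (1 - ps))

    # The weighted contrast approximately recovers the true effect of 1.0.
    ipw = (np.average(y[treated], weights=w[treated])
           - np.average(y[~treated], weights=w[~treated]))
    print("IPW-adjusted contrast:", round(ipw, 2))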

  • Wainer, H. (1991). Adjusting for differential base-rates: Lord's paradox again. Psychological Bulletin, 109, 147-151.

Wainer proposes guidelines for choosing between difference-score adjustment and regression adjustment. This is very useful because methodological problems with difference scores have steered analysts away from what often seems sensible on intuitive grounds. Wainer’s analysis supports the use of difference scores when the expected change in the control group is nil.
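
A small simulation can make that point concrete (the setup and numbers are invented for illustration): two nonrandomized groups differ at baseline, the expected change in every group is nil, and there is no treatment effect. The difference-score estimate is essentially zero, while regression adjustment for the unreliable pretest leaves part of the baseline gap to be absorbed by the group term.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 100_000

    # Two preexisting groups that differ at baseline; no treatment effect.
    group = rng.random(n) < 0.5
    true_status = rng.normal(size=n) + 1.0 * group

    # Pre- and post-test are noisy measures of the same stable quantity,
    # so the expected change in every group is nil.
    pre = true_status + rng.normal(size=n)
    post = true_status + rng.normal(size=n)

    # Difference-score adjustment correctly finds ~0 group effect.
    change = post - pre
    print("difference-score estimate:",
          round(change[group].mean() - change[~group].mean(), 3))

    # Regression (ANCOVA) adjustment for the unreliable pretest: the
    # attenuated pretest slope leaves part of the baseline gap to be
    # absorbed by the group term, producing a spurious "effect".
    X = np.column_stack([np.ones(n), pre, group.astype(float)])
    beta = np.linalg.lstsq(X, post, rcond=None)[0]
    print("ANCOVA group estimate:", round(beta[2], 3))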

The authors apply causal modeling ideas to solve a difficult problem in developmental research.


Last updated April 2013