Monday 23 January 2017

Face value

This is actually a not-so-recent paper, but I've only discovered it now and I think it's very interesting. The underlying issue is about trying to do "causal inference" from observational data $-$ perhaps one could see this in a simpler way by considering the idea of "balancing" observational data, to mimic as far as possible an experimental setting (and so be able to estimate "causal" effects). [There's a lot more on the philosophical aspects behind this problem, which I'm conveniently sweeping under the carpet, here...]

Anyway, one of the most popular ways of dealing with this issue of unbalanced background covariates (or, more generally, confounding) is to use propensity score matching. But, while I think the idea is somewhat neat and clearly important, what has always bothered me (among other things) is the fact that the resulting outcome model assumes that the estimate of the propensity score (PS) is "perfect" $-$ known with absolute precision $-$ even though the basic assumption is that "the PS model needs to be correct". But of course, there's no way of knowing for sure that the PS model is correct...
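
Just to fix ideas, here's a minimal sketch (in Python, on made-up simulated data $-$ nothing to do with the paper itself) of what the standard procedure looks like: the PS gets estimated once, typically via logistic regression, and is then plugged into the matching and the outcome analysis as if it were known exactly.

```python
# Minimal sketch of standard PS matching on simulated data. The point
# is that ps_hat is a single point estimate whose uncertainty simply
# disappears once we move to the matching and outcome steps.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 1000

# Simulated confounders; treatment assignment depends on them
X = rng.normal(size=(n, 2))
true_ps = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.8 * X[:, 1])))
t = rng.binomial(1, true_ps)

# Outcome with a true treatment effect of 2.0, plus confounding
y = 2.0 * t + X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

# Step 1: estimate the PS -- one point estimate, no uncertainty kept
ps_hat = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]

# Step 2: 1-to-1 nearest-neighbour matching on the estimated PS
treated = np.where(t == 1)[0]
control = np.where(t == 0)[0]
dist = np.abs(ps_hat[treated][:, None] - ps_hat[control][None, :])
matches = control[np.argmin(dist, axis=1)]

# Step 3: the outcome analysis treats ps_hat as if it were the truth
att = np.mean(y[treated] - y[matches])
print(f"Estimated ATT: {att:.2f} (true effect: 2.0)")
```

Notice that nothing in step 3 "remembers" that the PS was estimated in step 1 $-$ which is exactly the issue I'm complaining about.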

So the idea of combining model selection with the propagation of uncertainty through the outcome model is actually very interesting. I've only flipped through the paper, but I did have some very preliminary ideas of my own on this, so I really want to have a proper look at it!
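
For instance (and this is just my own crude sketch, certainly not the method in the paper, which couples this with model selection), the most naïve way of propagating the uncertainty in the estimated PS would be to bootstrap the whole pipeline $-$ re-estimating the PS, re-matching and re-estimating the effect on each resample $-$ instead of conditioning on a single point estimate. This reuses the simulated data and imports from the sketch above.

```python
# Crude uncertainty propagation via the bootstrap: the PS estimation,
# the matching and the effect estimate are all redone on each resample,
# so the variability of ps_hat feeds through to the final estimate.
def att_on(idx):
    """Re-run PS estimation, matching and ATT on a bootstrap resample."""
    Xb, tb, yb = X[idx], t[idx], y[idx]
    psb = LogisticRegression().fit(Xb, tb).predict_proba(Xb)[:, 1]
    tr, co = np.where(tb == 1)[0], np.where(tb == 0)[0]
    m = co[np.argmin(np.abs(psb[tr][:, None] - psb[co][None, :]), axis=1)]
    return np.mean(yb[tr] - yb[m])

boot = [att_on(rng.integers(0, n, n)) for _ in range(200)]
print(f"ATT: {np.mean(boot):.2f}, 95% interval: "
      f"({np.quantile(boot, 0.025):.2f}, {np.quantile(boot, 0.975):.2f})")
```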
