### Abstract

Statistics seeks to understand data by means of mathematical models, but these models are only approximations of reality: 'All models are wrong, but some are useful.' In many situations, a researcher cannot obtain data for all the variables they consider necessary to answer a given question. The statistical models that can be fitted are therefore incomplete, and researchers may wonder how reliable the conclusions drawn from them are.
This work develops sensitivity analysis methods for the two most widely used types of statistical models in practice, linear regression and instrumental variables. We characterise the bias stemming from the omission of an unmeasured variable in terms of $R^2$-values, also called coefficients of determination, which express the proportion of variance in one variable that another can explain. These are part of any basic statistics syllabus, widely adopted by applied researchers, and easy to interpret. Based on this result, we provide several ways for users to quantify their knowledge and beliefs about the maximal effect that the unmeasured confounding variable has on the model. Subject to such a bound, we develop a generalised version of confidence intervals that accounts for the omission of potentially influential variables. Moreover, our methodology provides a framework for researchers to have a principled discussion about the effect of unmeasured confounding on their models.
In summary, we present an easily applicable sensitivity analysis method for linear regression and instrumental variable models that enables researchers to quantify their domain knowledge flexibly and yields more robust conclusions.
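The abstract does not give the underlying formulas, but the classical omitted-variable bias it addresses is easy to demonstrate numerically. The sketch below is purely illustrative and not the method presented in the talk: it simulates an unmeasured confounder $Z$ that affects both $X$ and $Y$, and shows that the "short" regression of $Y$ on $X$ alone is biased by exactly $\gamma \, \mathrm{Cov}(X,Z)/\mathrm{Var}(X)$, the quantity a sensitivity analysis must bound. All coefficients and variable names are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unmeasured confounder z affects both the regressor x and the outcome y.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 2.0 * x + 1.5 * z + rng.normal(size=n)  # true causal effect of x is 2.0

# "Long" regression of y on (x, z): recovers the true coefficient ~2.0.
beta_full = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)[0][0]

# "Short" regression omitting z: biased estimate of the effect of x.
beta_short = (x @ y) / (x @ x)

# Classical omitted-variable bias: gamma * Cov(x, z) / Var(x).
bias_pred = 1.5 * np.cov(x, z)[0, 1] / np.var(x)

print(f"full:  {beta_full:.3f}")
print(f"short: {beta_short:.3f}")
print(f"observed bias: {beta_short - beta_full:.3f}, predicted: {bias_pred:.3f}")
```

Here the short regression overestimates the effect by roughly 0.73, matching the predicted bias; a sensitivity analysis in the spirit of the talk replaces the unknown quantities in this bias term with interpretable $R^2$-based bounds supplied by the researcher.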

Date

2022-03-12 11:45 — 13:00

Event

Jesus College Graduate Conference 2022