As expressed in #42 and #43, we would like to check whether normalization behaves properly.
"Normalization" refers to probability weighting normalization, as described in "Identifying causal mechanisms (primarily) based on inverse probability weighting", Huber (2014), DOI: 10.1002/jae.2341
We normalize the probabilities involved in the score functions by dividing them by their average over the whole sample, separately for treated and non-treated units. I describe this on page 22 of my internship report.
As far as I understand, it is a trick meant to eliminate selection bias and make the estimation robust to extreme probability values.
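For illustration, here is a minimal sketch of that normalization (the function name and signature are hypothetical, not the package's actual code): raw inverse-probability weights are divided by their group-wise average, so the weights in each group average to one.

```python
import numpy as np

def normalized_ipw_weights(t, p):
    """Inverse-probability weights, normalized within each group
    (Huber 2014-style stabilization). Hypothetical helper, for
    illustration only.

    t : binary treatment indicator (0/1)
    p : estimated propensity scores P(T = 1 | X)
    """
    t = np.asarray(t, dtype=float)
    p = np.asarray(p, dtype=float)
    raw = t / p + (1 - t) / (1 - p)  # raw IPW weights
    w = np.empty_like(raw)
    # Divide by the average weight of the treated (resp. control) units,
    # so each group's weights average to one.
    w[t == 1] = raw[t == 1] / raw[t == 1].mean()
    w[t == 0] = raw[t == 0] / raw[t == 0].mean()
    return w
```

Because each group's weights are rescaled by their own mean, a single extreme propensity score can no longer dominate the weighted average.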
A possible test would be to compare the normalized and non-normalized estimators, or simply to check the relative error as for all the other estimators (just add it to the tolerance dictionary).
As of now, normalization is implemented in `med_dml` and `multiply_robust`, which both give consistent results.
Normalization is also the core idea behind the IPW estimator formulas.
By the way, @bthirion suggested implementing normalization as a function.
It is a bit tricky because it really depends on the estimator, but I guess it is doable.
Sorry, my suggestion was actually a bit different (apologies if that was unclear): when you have 10-15 lines of code running a computation (any computation), you should split it out into an ancillary function. The point is that you can test that small function independently, and it makes the code more readable.
This has been implemented and is covered by the tests, since normalization is the default behavior, so I am not sure extra exploration or tests are needed, as the tests are already too long (#52). This should probably be folded into a general discussion on tests.