About classifier normalization? #10

Hi, I benefited a lot from reading your paper. I have a question: the paper mentions normalizing the logits, borrowing the idea of propensity scores. What is the rationale behind this? And if the normalization is left out, how much does it affect the de-confounding?

Comments
You can refer to the article "An Introduction to Propensity Score Methods for Reducing the Effects of Confounding in Observational Studies". Skipping the normalization amounts to incomplete de-confounding, which makes the subsequent TDE fail to work: because the confounding path still exists, you cannot obtain the direct effect.
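For readers less familiar with the terminology in the reply above, this is the standard mediation-analysis definition of the total direct effect (TDE) that the discussion refers to; here $X$ is the input feature, $M$ the mediator, and $x_0$ a null/baseline input (the variable names are generic, not taken from the repo):

$$\mathrm{TDE} = Y_{x,\,m}(u) - Y_{x_0,\,m}(u), \qquad m = M_x(u)$$

That is, the mediator is held fixed at the value it takes under the observed input while the input itself is switched off. An unblocked confounding path between $X$ and $Y$ contaminates both terms, which is why incomplete de-confounding breaks the subtraction.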
I followed the reference. So you treat the currently observed samples as if they were drawn from the post-do(x) distribution, and therefore assign each of them a weight of $1/P(x \mid M=m)$, right? Then why can $P(x \mid M=m)$ be approximated by the L2-norm? What is that approximation based on?
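Written out, the inverse-propensity reading in the question above would be the following (this is the questioner's interpretation; the norm approximation is exactly the assumption under discussion, not an established fact):

$$\hat{Y}(x) = \frac{w^{\top} x}{P(x \mid M=m)}, \qquad P(x \mid M=m) \propto \lVert x \rVert_2 \;\Longrightarrow\; \hat{Y}(x) \propto \frac{w^{\top} x}{\lVert x \rVert_2}$$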
As for how to design this normalization, we are still fairly empirical. The propensity-score idea only tells us that the effect needs to be balanced; the concrete design is actually quite open.
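As a concrete illustration of the kind of normalized logit being discussed, here is a minimal PyTorch sketch of a cosine-style classifier in which dividing by the feature norm plays the role of the $1/P(x \mid M=m)$ weight. `tau` and `gamma` are illustrative hyperparameter names (a temperature and a stabilizing constant), and their values here are placeholders, not necessarily what the repo uses:

```python
import torch

def normalized_logits(x, W, tau=16.0, gamma=0.03125):
    """Cosine-style normalized logits of the kind discussed in this thread.

    x:     (batch, dim)        feature vectors
    W:     (num_classes, dim)  classifier weight matrix
    tau:   temperature rescaling the bounded cosine logits
    gamma: small constant keeping the denominator away from zero

    Dividing the raw logit w^T x by ||x|| is the propensity-style
    balancing term from the discussion above; dividing by ||w_c|| as
    well removes the per-class weight-norm bias.
    """
    x_norm = x.norm(dim=1, keepdim=True)        # (batch, 1)
    w_norm = W.norm(dim=1, keepdim=True).t()    # (1, num_classes)
    return tau * (x @ W.t()) / ((x_norm + gamma) * w_norm)

# Toy usage
x = torch.randn(4, 512)
W = torch.randn(10, 512)
print(normalized_logits(x, W).shape)  # torch.Size([4, 10])
```

Note that normalizing by the class weight norm as well keeps head classes from dominating simply through larger weight norms, which is a separate bias from the one the feature-norm division addresses.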
@KaihuaTang |
I haven't tried that, but it sounds fairly plausible. You could give it a try and see whether it works.
@KaihuaTang |