2024-04-18-ahuja24a.md

---
title: Multi-Domain Causal Representation Learning via Weak Distributional Invariances
software:
abstract: Causal representation learning has emerged as the center of action in causal machine learning research. In particular, multi-domain datasets present a natural opportunity for showcasing the advantages of causal representation learning over standard unsupervised representation learning. While recent works have taken crucial steps towards learning causal representations, they often lack applicability to multi-domain datasets due to over-simplifying assumptions about the data; e.g., each domain comes from a different single-node perfect intervention. In this work, we relax these assumptions and capitalize on the following observation: there often exists a subset of latents whose certain distributional properties (e.g., support, variance) remain stable across domains; this property holds when, for example, each domain comes from a multi-node imperfect intervention. Leveraging this observation, we show that autoencoders that incorporate such invariances can provably identify the stable set of latents from the rest across different settings.
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: ahuja24a
month: 0
tex_title: Multi-Domain Causal Representation Learning via Weak Distributional Invariances
firstpage: 865
lastpage: 873
page: 865-873
order: 865
cycles: false
bibtex_author: Ahuja, Kartik and Mansouri, Amin and Wang, Yixin
author:
- given: Kartik
  family: Ahuja
- given: Amin
  family: Mansouri
- given: Yixin
  family: Wang
date: 2024-04-18
address:
container-title: Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
volume: '238'
genre: inproceedings
issued:
  date-parts:
  - 2024
  - 4
  - 18
pdf:
extras: []
---