Date: 24 July 2025, Thursday
Location: S16-06-118, Seminar Room
Time: 2pm, Singapore
In causal inference, the true causal order and the graph of causal interactions can be uniquely determined given sufficient interventional data. Interventions are local randomized controlled trials (RCTs) in which different variables in a causal graph, taken one or a few at a time, are randomized. We consider the harder setting in which the causal variables are not directly observed but are "latent": instead, we observe a high-dimensional transformation (e.g., images) of the true causal variables. The central problem in causal representation learning is to invert the unknown transformation between the true causal variables and the observations, up to coordinate-wise scaling and permutation. We show that this is possible with enough interventional diversity by exploiting two key ideas: (a) represent interventional distributions in terms of their scores (gradients of the log-likelihoods); (b) the encoder-decoder pair that minimizes reconstruction loss while sparsifying the score differences in the latent space is the optimal pair. We establish various versions of these results for linear transformations, and for general transformations under mild regularity assumptions on the diversity of interventions. We will also discuss empirical results on some simple image datasets. Time permitting, this will be followed by preliminary results from ongoing work on extending the approach to robotic simulators.
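The intuition behind the score-difference idea can be illustrated with a toy sketch (the isotropic Gaussian setup, the shift intervention, and all variable names here are illustrative assumptions, not the talk's actual construction): an intervention on a single latent variable changes the score only at that coordinate, so the score difference is sparse in latent space, but becomes dense after an unknown linear mixing of the latents.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5  # number of latent causal variables (hypothetical)

# Score of an isotropic Gaussian N(mu, I): grad_z log p(z) = -(z - mu)
def score(z, mu):
    return -(z - mu)

mu_obs = np.zeros(d)        # observational mean
mu_int = mu_obs.copy()
mu_int[2] = 1.5             # hypothetical intervention shifting latent variable 2

z = rng.standard_normal(d)  # an arbitrary evaluation point
diff_latent = score(z, mu_int) - score(z, mu_obs)
# The score difference is sparse: nonzero only at the intervened coordinate.
print(np.nonzero(diff_latent)[0])  # -> [2]

# Under an invertible linear mixing x = A z, scores transform as
# grad_x log p(x) = A^{-T} grad_z log p(z), so the difference is
# generically dense in observation space.
A = rng.standard_normal((d, d))
diff_obs = np.linalg.inv(A).T @ diff_latent
print(np.count_nonzero(np.abs(diff_obs) > 1e-12))
```

An encoder that undoes the mixing maps the dense observation-space difference back to the sparse latent one, which is why sparsifying score differences in the learned latent space can identify the right representation.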
Joint work with Burak Varici (CMU), Emre Acarturk (RPI), Abhishek Kumar (Amazon, ex-GDM), and Ali Tajer (RPI).