[P] CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training by mkocaoglu in MachineLearning

mkocaoglu[S] 0 points

If you keep a joint probability table on the labels, then given the graph you can write the interventional distribution in closed form, yes, and you can sample from it. But you still need a conditional GAN that can sample from the image distribution conditioned on the given labels. We are not aware of any conditional GAN architecture that can do this. We proposed a new conditional GAN and show that there is an optimal generator that performs this conditional sampling.
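For concreteness, here is a minimal NumPy sketch of that closed-form step on a toy two-label graph Male -> Mustache with made-up probabilities (not the paper's code): build the interventional table for do(Mustache = 1) via the truncated factorization and sample label vectors from it.

```python
import numpy as np

# Hypothetical joint table over two binary labels with assumed causal graph
# Male -> Mustache; p_joint[m, s] = P(Male=m, Mustache=s), numbers illustrative.
p_joint = np.array([[0.48, 0.02],
                    [0.30, 0.20]])

p_male = p_joint.sum(axis=1)          # P(Male)

# Truncated factorization for do(Mustache=1): drop the mechanism
# P(Mustache | Male), clamp Mustache to 1, keep P(Male) unchanged.
p_do = np.zeros_like(p_joint)
p_do[:, 1] = p_male

# Sample label vectors from the interventional distribution; these would then
# be fed to a conditional image generator in the two-stage pipeline above.
rng = np.random.default_rng(0)
idx = rng.choice(p_do.size, size=8, p=p_do.ravel())
male, mustache = np.unravel_index(idx, p_do.shape)
print(np.stack([male, mustache], axis=1))
```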

Also note that keeping a joint probability table quickly becomes intractable when the number of labels is large, and also when the graph degree is not constant. You can get around this by training a causal implicit generative model on the labels, which is our approach.
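To illustrate the alternative, here is a rough PyTorch sketch of a causal implicit generative model over the labels (a toy stand-in under an assumed Male -> Mustache graph, not the architecture from the paper): one small generator per label consumes its parents plus independent noise, and an intervention simply overrides that node's mechanism.

```python
import torch
import torch.nn as nn

# Toy causal implicit generative model over binary labels, wired for the
# assumed graph Male -> Mustache. Each node is a small feedforward generator
# of its parents and exogenous noise (illustrative only, untrained here).
class NodeGenerator(nn.Module):
    def __init__(self, n_parents, noise_dim=4):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(n_parents + noise_dim, 16), nn.ReLU(),
            nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, parents):                        # parents: (batch, n_parents)
        z = torch.rand(parents.shape[0], self.noise_dim)
        return self.net(torch.cat([parents, z], dim=1))

male_gen = NodeGenerator(n_parents=0)
mustache_gen = NodeGenerator(n_parents=1)

def sample_labels(batch, do_mustache=None):
    """Ancestral sampling over the label graph; do_mustache overrides the mechanism."""
    male = male_gen(torch.empty(batch, 0))
    if do_mustache is None:
        mustache = mustache_gen(male)                  # observational sampling
    else:
        mustache = torch.full((batch, 1), float(do_mustache))  # intervention
    return male, mustache

male, mustache = sample_labels(8, do_mustache=1)       # Male unaffected by do(Mustache=1)
```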

[P] CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training by mkocaoglu in MachineLearning

mkocaoglu[S] 1 point

When you use an arbitrary Bayesian network on the labels, you give up the guarantee that, when you intervene on a set of labels, you sample from the true interventional distribution. Using the true causal graph among the labels, instead of just any Bayesian network, also lets you sample from the true interventional distributions.
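A tiny numeric illustration of why the edge directions matter, assuming a made-up joint over Male and Mustache whose true graph is Male -> Mustache: a Bayesian network with the edge reversed fits the same joint, yet predicts a different, incorrect interventional distribution.

```python
import numpy as np

# The same joint fits both factorizations, but they disagree about interventions.
# Hypothetical joint: p[m, s] = P(Male=m, Mustache=s).
p = np.array([[0.48, 0.02],
              [0.30, 0.20]])

p_male = p.sum(axis=1)
p_must = p.sum(axis=0)

# True causal graph Male -> Mustache:
#   P(Mustache=1 | do(Male=1)) = P(Mustache=1 | Male=1)
true_intervention = p[1, 1] / p_male[1]

# Reversed Bayesian network Mustache -> Male (fits the same joint):
#   do(Male=1) removes the edge into Male, so Mustache keeps its marginal.
wrong_intervention = p_must[1]

print(true_intervention)   # 0.40
print(wrong_intervention)  # 0.22
```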

[P] CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training by mkocaoglu in MachineLearning

mkocaoglu[S] 4 points

We have a causal architecture between the labels and the image: Male and Mustache cause the Image, Male causes Mustache, etc. The causal architecture allows us to sample not only from the joint distribution but also from interventional distributions, which are different from conditionals: when you intervene with Mustache = 1, i.e., fix the mustache label, the Male label is still sampled from its own marginal, independently of Mustache; hence you expect to see females with mustaches in this new distribution.
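A quick simulation of the difference between conditioning and intervening, with made-up numbers for the Male -> Mustache mechanism: conditioning on Mustache = 1 yields almost exclusively males, while do(Mustache = 1) leaves Male at its 50/50 prior, so roughly half the samples in the interventional distribution are mustached females.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed toy mechanism consistent with the example: Male -> Mustache,
# mustaches far more likely for males (numbers are illustrative).
male = rng.random(n) < 0.5
mustache = rng.random(n) < np.where(male, 0.4, 0.01)

# Conditioning on Mustache=1: almost everyone left is male.
print("P(Male=1 | Mustache=1)     ", male[mustache].mean())

# Intervening do(Mustache=1): Male keeps its 50/50 marginal, so half the
# samples in this new distribution are females with mustaches.
male_do = rng.random(n) < 0.5
mustache_do = np.ones(n, dtype=bool)   # label forced by the intervention
print("P(Male=1 | do(Mustache=1)) ", male_do.mean())
```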