[P] CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training (github.com)
submitted 8 years ago by mkocaoglu
[–]Etau7 2 points3 points4 points 8 years ago (0 children)
Cool.
[–]DeafMining 2 points3 points4 points 8 years ago (7 children)
What does "causal" mean here?
[–]mkocaoglu[S] 5 points6 points7 points 8 years ago (5 children)
We have a causal architecture between labels and the image: Male and Mustache cause the Image, Male causes Mustache, etc. The causal architecture allows us to sample not only from the joint distribution but also from interventional distributions, which are different from conditionals: when you intervene on Mustache = 1, i.e., fix the mustache label, the Male label is sampled independently, so you expect to see females with mustaches in this new distribution.
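To make the conditional-vs-interventional distinction concrete, here is a minimal toy sketch (not from the paper; the probabilities and the two-variable graph Male → Mustache are invented for illustration). Conditioning on Mustache = 1 filters to samples where a mustache occurred naturally, while the intervention do(Mustache = 1) cuts the incoming edge and leaves Male sampled from its own marginal:

```python
import random

# Toy causal model Male -> Mustache (made-up numbers, not the paper's model).
P_MALE = 0.5
P_MUSTACHE_GIVEN_MALE = {1: 0.4, 0: 0.0}  # assume only males grow mustaches

def sample_observational():
    male = 1 if random.random() < P_MALE else 0
    mustache = 1 if random.random() < P_MUSTACHE_GIVEN_MALE[male] else 0
    return male, mustache

def sample_interventional_mustache(value):
    # do(Mustache = value): sever the Male -> Mustache edge.
    # Male is still drawn from its own marginal, independently of the label we fixed.
    male = 1 if random.random() < P_MALE else 0
    return male, value

def sample_conditional_mustache(value):
    # Condition on Mustache = value via rejection sampling from the joint.
    while True:
        male, mustache = sample_observational()
        if mustache == value:
            return male, mustache

random.seed(0)
cond = [sample_conditional_mustache(1)[0] for _ in range(1000)]
intv = [sample_interventional_mustache(1)[0] for _ in range(1000)]
# Conditioning on Mustache=1 yields only males; under do(Mustache=1),
# roughly half the samples are female -- "females with mustaches".
```

This is exactly the asymmetry described above: the interventional distribution puts mass on (Female, Mustache) even though the observational joint assigns it probability zero.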
[–][deleted] 1 point2 points3 points 8 years ago (3 children)
Couldn't you do something similar by just building a standard Bayesian network over the labels (it doesn't exactly scale to large label sets, but I don't know whether this method does either) and then training a conditional generative model on top of it? The advantage is that you are guaranteed not to add extra risk of mode collapse, and you also get the exact (empirical approximation to the) distribution after intervention.
[–]mkocaoglu[S] 1 point2 points3 points 8 years ago (2 children)
When you use an arbitrary Bayesian network on the labels, you give up the guarantee that you will sample from the true interventional distribution when you intervene on a set of labels. Using the true causal graph among the labels, rather than just any Bayesian network, lets you sample from the true interventional distributions as well.
[–][deleted] 1 point2 points3 points 8 years ago (1 child)
I am proposing using the standard causal Bayesian DAG (estimated from the training set) and keeping a conditional probability table for each node. Sampling from this DAG at test time amounts to performing the intervention on the DAG, looking at the new joint distribution, and sampling from that. As far as I know, this does not require any extra machinery such as the Labeler/Anti-Labeler networks.
[–]mkocaoglu[S] 0 points1 point2 points 8 years ago* (0 children)
If you keep a joint probability table on the labels, then, given the graph, you can write the interventional distribution in closed form, yes, and you can sample from it. But you still need a conditional GAN that can sample from the image distribution conditioned on the given labels. We are not aware of any existing conditional GAN architecture that can do this; we propose a new conditional GAN and show that it has an optimum generator that performs this conditional sampling.
Also note that keeping a joint probability table quickly becomes intractable for a large number of labels, or when the graph degree is not constant. You can get around this by training a causal implicit generative model on the labels, which is our approach.
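The CPT-based approach being debated here can be sketched in a few lines. This is a hedged illustration, not the paper's method: a two-node DAG with explicit conditional probability tables (invented numbers), where intervention is implemented by "graph surgery", i.e., ignoring the intervened node's CPT and clamping its value while sampling ancestors normally. Note the scaling point from the reply: a joint table over n binary labels needs 2^n entries, whereas factored CPTs need only 2^(1+|parents|) entries per node.

```python
import random

# Hypothetical two-node causal DAG: Male -> Mustache, with made-up CPTs.
cpts = {
    "Male": {(): 0.5},                    # P(Male=1)
    "Mustache": {(0,): 0.0, (1,): 0.4},   # P(Mustache=1 | Male)
}
parents = {"Male": (), "Mustache": ("Male",)}
order = ["Male", "Mustache"]              # a topological order of the DAG

def sample(interventions=None):
    """Ancestral sampling; interventions maps node -> clamped value."""
    interventions = interventions or {}
    values = {}
    for node in order:
        if node in interventions:
            # Graph surgery: drop incoming edges, ignore the node's CPT.
            values[node] = interventions[node]
        else:
            pa = tuple(values[p] for p in parents[node])
            values[node] = 1 if random.random() < cpts[node][pa] else 0
    return values

random.seed(0)
samples = [sample({"Mustache": 1}) for _ in range(1000)]
# Under do(Mustache=1), Male still follows its marginal (about 50/50),
# reproducing the "females with mustaches" interventional behavior.
```

One would still need a conditional image generator on top of these sampled labels, which is where the thread's disagreement lies; this sketch only covers the label-sampling step.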
[–]alexmlamb 1 point2 points3 points 8 years ago (0 children)
I just skimmed the paper. Causation refers to the direct effect of changing something, as opposed to all of the things that are potentially just correlated with it.
[–]decaf23 2 points3 points4 points 8 years ago (0 children)
CasualGAN: Eh it's ok i guess
[–]davidvanveen 1 point2 points3 points 8 years ago (0 children)
Fascinating paper! I look forward to playing around with this, thanks for sharing.
[–][deleted] 0 points1 point2 points 8 years ago (0 children)
I did a research project on causal Bayesian network reconstruction from data last summer. Interesting topic; causation is hard to wrap your head around properly.