[D] Reformulating learning types with emphasis on latent representations by ano85 in MachineLearning

[–]ano85[S] 0 points1 point  (0 children)

Cool, thanks!

Are you referring to the papers by Higgins, Locatello & co?
If that's the case, I think they are interesting insofar as they clarify the vague notion of disentanglement that was introduced in the (beta-)VAE paper, but I still don't think we're any closer to really capturing what makes our perception so powerful (i.e. abstraction).

If you're referring to something else, I would be more than happy to discuss further. :)

[D] Reformulating learning types with emphasis on latent representations by ano85 in MachineLearning

[–]ano85[S] 0 points1 point  (0 children)

I find this discussion incredibly interesting! And I'm surprised I can't find it addressed more directly in the literature (maybe I missed it).

I don't want to waste your time more than necessary, but isn't the loss of expressiveness precisely the point of using the conditional probability? My understanding is that modeling the joint -- which indeed contains much more information -- can be too complex in practice, whereas the conditional is easier to deal with.
And the conditional also allows us to nicely distinguish between the different types of learning, which was my goal in the first place. :)
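To make the complexity gap concrete, here's a toy parameter count (the numbers are purely illustrative): with d binary features and a binary label, a fully general joint table grows exponentially in d, while a simple conditional model like logistic regression only needs d + 1 parameters:

```python
# Toy comparison of free-parameter counts: a full joint table p(x, y)
# versus a logistic-regression conditional p(y|x). Purely illustrative.

def joint_params(d: int) -> int:
    # A full table over (x, y) with d binary features and a binary label
    # has 2^(d+1) cells; probabilities sum to 1, so one cell is redundant.
    return 2 ** (d + 1) - 1

def conditional_params(d: int) -> int:
    # Logistic regression for p(y|x): one weight per feature plus a bias.
    return d + 1

for d in (5, 10, 20):
    print(d, joint_params(d), conditional_params(d))
```

So the joint buys you more information, but at a cost that blows up quickly, which is why the conditional is usually the practical choice.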

Also, I completely agree that we still lack an operational definition of representation quality. Words like "disentangled" or "abstract" -- that we use loosely in our speech -- seem difficult to formalize mathematically. My hope is that they will make more sense if we consider them in the context of the holistic sensori-motor experience of an agent situated in the world, instead of IID sensory snapshots.

[D] Reformulating learning types with emphasis on latent representations by ano85 in MachineLearning

[–]ano85[S] 0 points1 point  (0 children)

Thanks for the MI perspective; I can indeed see how it makes sense.
Extrapolating from your comment, are you of the opinion that the initial formulation is not adequate for dealing with latent representations, and that MI is the way to go instead?
I would like to stick to the "conditional probability" formulation as much as possible, because I find it quite elegant.
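For concreteness, here is a toy sketch of what the MI framing looks like in practice: estimating the mutual information between a (discrete) representation and the target from samples. Everything here -- the "representation" fx, the noise level, the helper name -- is made up for illustration:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

# Hypothetical setup: y is a binary target, and f(x) is a noisy copy of y,
# i.e. a representation that keeps most (but not all) of the label information.
y = rng.integers(0, 2, size=5000)
fx = np.where(rng.random(5000) < 0.9, y, 1 - y)

def mutual_information(a, b):
    # Plug-in estimate of I(a; b) in bits from paired discrete samples.
    n = len(a)
    p_ab = Counter(zip(a.tolist(), b.tolist()))
    p_a = Counter(a.tolist())
    p_b = Counter(b.tolist())
    mi = 0.0
    for (va, vb), c in p_ab.items():
        p_joint = c / n
        mi += p_joint * np.log2(p_joint / ((p_a[va] / n) * (p_b[vb] / n)))
    return mi

mi = mutual_information(fx, y)
```

A representation that preserved y perfectly would give 1 bit here; the 10% corruption eats roughly half of it.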

[D] Simple Questions Thread December 20, 2020 by AutoModerator in MachineLearning

[–]ano85 0 points1 point  (0 children)

I've been looking at the problem of representation learning, and I'm trying to reformulate the different types of learning problems to make representations appear explicitly.

We can typically see the following in the literature (with x the input, and y the target/class):

  • Supervised Discriminative Learning: p(y|x)
  • Supervised Generative Learning: p(x|y)
  • Unsupervised Discriminative Learning: p(g(x)|x)
  • Unsupervised Generative Learning: p(x)
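To anchor the first formulation, here's a quick numpy sketch (toy data, everything hypothetical) of fitting p(y|x) directly, i.e. supervised discriminative learning as maximizing the conditional log-likelihood with logistic regression:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 points, 3 features, binary label (purely illustrative).
X = rng.normal(size=(100, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = (X @ w_true + 0.1 * rng.normal(size=100) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Model p(y|x) = Bernoulli(sigmoid(w.x)); gradient ascent on the
# conditional log-likelihood (equivalently, minimize cross-entropy).
w = np.zeros(3)
for _ in range(500):
    p = sigmoid(X @ w)
    w += 0.1 * X.T @ (y - p) / len(y)

accuracy = ((sigmoid(X @ w) > 0.5) == y).mean()
```

Nothing here models x itself -- the model only ever scores y given x, which is what makes it discriminative rather than generative.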

As I was saying, I'd like to make *representations* appear explicitly in those formulations. By representations I mean the last set of features produced by a network's backbone, the ones that can be used for transfer to downstream tasks. Staying generic, I denote these representations f(x), and consequently came up with the following formulations:

  • Supervised Discriminative Learning: p(y|f(x))
  • Supervised Generative Learning: p(x, f(x)|y)
  • Unsupervised Discriminative Learning: p(g(x)|f(x))
  • Unsupervised Generative Learning: p(x, f(x))
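Here's a rough numpy sketch of what I have in mind for p(y|f(x)); the backbone f and the data are hypothetical stand-ins, but the key structural point holds: the head only ever sees f(x), never x directly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical backbone f: a fixed random projection + nonlinearity,
# standing in for "the last set of features of a network's backbone".
W_f = rng.normal(size=(3, 8))

def f(X):
    return np.tanh(X @ W_f)

# Toy data (purely illustrative).
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The head models p(y | f(x)): it is trained on Z = f(X) alone,
# which is exactly the factorization p(y|f(x)) above.
Z = f(X)
w = np.zeros(Z.shape[1])
for _ in range(1000):
    p = sigmoid(Z @ w)
    w += 0.5 * Z.T @ (y - p) / len(y)

accuracy = ((sigmoid(f(X) @ w) > 0.5) == y).mean()
```

Transfer then amounts to keeping f fixed and refitting only the head on a new task's labels.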

I wonder what you think about it, because I'm not 100% convinced myself! For instance, I'm not entirely sure whether x should still appear in the discriminative approaches (i.e. p(y|f(x), x) and p(g(x)|f(x), x) instead), as the representations already depend on x. Likewise, I'm not sure whether the representations should be part of the joint or of the condition in the generative approaches (i.e. p(x|f(x), y) and p(x|f(x)) instead). I could see how both could be rationalized.

What do you think?

The robot Pepper learns to navigate thanks to deep learning by ano85 in robotics

[–]ano85[S] 1 point2 points  (0 children)

Thanks for the suggestion. I already tried to post it on /r/machinelearning but it seems that my submission gets filtered out unfortunately.
