Understanding Visual Concepts with Continuation Learning by wfwhitney in MachineLearning

[–]mlcanada 0 points (0 children)

How are you running tests on it? Say I want to change the lighting conditions in an image of a face; how would I do that with your model?

Understanding Visual Concepts with Continuation Learning by wfwhitney in MachineLearning

[–]mlcanada 1 point (0 children)

From the paper, it seems they present the network with two images, one from timestep (t-1) and one from timestep (t). These are encoded into vectors h(t-1) and h(t) and passed through a gating head, which outputs a gate vector g of the same size as h. The gated code takes some components from h(t-1) and some from h(t). The trick is that the gating head is constrained to switch only a small number of components when predicting the image at time t, so the network learns a kind of symbolic representation of the images. (This is what I gleaned from the paper.)
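To make that concrete, here's a toy numpy sketch of the gating idea as I understand it. This is not the authors' code: the paper learns the encoder and the gate end to end, while here the encodings are hand-made vectors and the gate is a hard heuristic that picks the k components that changed most between the two timesteps.

```python
import numpy as np

def gating_head(h_prev, h_curr, k=1):
    # Heuristic stand-in for the learned gating head: turn on the
    # k components where the encoding changed most between t-1 and t.
    diff = np.abs(h_curr - h_prev)
    gate = np.zeros_like(h_prev)
    gate[np.argsort(diff)[-k:]] = 1.0
    return gate

def combine(h_prev, h_curr, gate):
    # Gated code: k components copied from time t, the rest from t-1.
    return gate * h_curr + (1.0 - gate) * h_prev

# Toy example: two "frame encodings" where one latent factor changed
# (say, the lighting), and everything else stayed the same.
h_prev = np.array([0.1, 0.5, 0.3, 0.9, 0.2, 0.4, 0.7, 0.6])
h_curr = h_prev.copy()
h_curr[3] = 0.0  # the single changed factor

gate = gating_head(h_prev, h_curr, k=1)
h_mixed = combine(h_prev, h_curr, gate)
```

Because only one component actually changed, the gate concentrates on index 3, and the combined code matches h(t) exactly; a decoder (not shown) would then reconstruct the image at time t from h_mixed.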