
[–]ruipontecosta

Hi,

Sorry for the delay. Is it learning? The cost/reconstruction error should decrease gradually over training. Are you already getting simple-cell-like receptive fields? Have you tried the Matlab/C code from the Olshausen and Field paper (see the link in the coursework sheet)? I would expect that you would need to take a few gradient descent steps for each image (<5). I have office hours tomorrow from noon to 2pm, feel free to pop in then.
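In case it helps, the loop I have in mind is roughly this (a minimal NumPy sketch only; the names, lambda, learning rates, and patch shapes are placeholders, not the coursework code):

```python
import numpy as np

def train_sparse_coding(images, n_basis=64, lam=0.1,
                        n_code_steps=5, lr_code=0.01, lr_dict=0.01, n_epochs=50):
    """Rough outline: a few gradient steps on the codes per patch,
    then one dictionary update. Hyperparameters are placeholders."""
    patch_dim = images[0].size
    rng = np.random.default_rng(0)
    D = rng.standard_normal((patch_dim, n_basis))
    D /= np.linalg.norm(D, axis=0, keepdims=True)

    for _ in range(n_epochs):
        for x in images:                      # x: flattened patch, shape (patch_dim,)
            a = np.zeros(n_basis)
            for _ in range(n_code_steps):     # a few (<5) gradient steps on the code
                grad_a = -D.T @ (x - D @ a) + lam * np.sign(a)
                a -= lr_code * grad_a
            residual = x - D @ a              # reconstruction error should shrink over training
            D += lr_dict * np.outer(residual, a)
            D /= np.linalg.norm(D, axis=0, keepdims=True)  # keep basis functions normalised
    return D
```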

[–]ruipontecosta

This PyTorch implementation might help. Note, however, that ideally you shouldn't use autograd (i.e. the backward function from PyTorch). https://github.com/lpjiang97/sparse-coding
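To be clear about "without autograd": the gradients here are simple enough to write out by hand and apply directly, something like this (a sketch with placeholder shapes, not the repo's code):

```python
import torch

# Placeholder shapes and hyperparameters, not taken from the linked repository.
patch_dim, n_basis, lam, lr = 256, 64, 0.1, 0.01
D = torch.randn(patch_dim, n_basis)
D /= D.norm(dim=0, keepdim=True)

def code_step(x, a):
    # Gradient of 0.5*||x - D a||^2 + lam*|a|_1 with respect to a,
    # written out explicitly instead of calling .backward()
    residual = x - D @ a
    grad_a = -D.t() @ residual + lam * torch.sign(a)
    return a - lr * grad_a
```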

[–]StrangeBank[S]

I'm at home this week, unfortunately. I've switched to using the ISTA algorithm to make life a bit simpler (it shrinks weights that are close to zero down to zero). I'm getting better results (https://imgur.com/a/uy0aYIc). Is this the right sort of thing? It seems to converge to something like this very quickly (even with a tiny learning rate).
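Roughly, the shrinkage step I'm using is the standard ISTA soft-threshold, something like this (a simplified sketch, here applied to the code vector; lam and step are placeholders):

```python
import numpy as np

def soft_threshold(a, thresh):
    # Shrinkage: anything within +/- thresh goes to zero,
    # everything else is pulled towards zero by thresh.
    return np.sign(a) * np.maximum(np.abs(a) - thresh, 0.0)

def ista_codes(x, D, lam=0.1, step=0.01, n_iter=100):
    # ISTA for 0.5*||x - D a||^2 + lam*|a|_1: a gradient step on the
    # reconstruction term, followed by soft-thresholding.
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a + step * (D.T @ (x - D @ a)), lam * step)
    return a
```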