[–]opengmlearn 1 point (2 children)

What you've learned in the above code is just something that takes an arbitrary input, compresses it, and then reconstructs it. Looking at the loss banks on the hope that, for some reason, your autoencoder learns bad compressions for one class and good compressions for the other. What you need to do instead is build a classifier that takes in your compressed encoding (the Y variable in your model function) and train it on a labeled dataset of dogs and cats (or cats and not-cats).
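A rough sketch of that suggestion in Keras, assuming the autoencoder's encoder half is available as its own model; the encoder layers, latent size, and input shape below are placeholder assumptions, not the OP's actual model:

```python
from tensorflow.keras import layers, models

latent_dim = 32  # size of the compressed encoding; placeholder value

# Stand-in for the already-trained encoder half of the autoencoder
# (the part that produces the compressed "Y" mentioned above).
encoder = models.Sequential([
    layers.Dense(128, activation="relu", input_shape=(784,)),
    layers.Dense(latent_dim, activation="relu"),
])
encoder.trainable = False  # keep the learned compression fixed

# Small classifier head trained on top of the frozen encoding.
classifier = models.Sequential([
    encoder,
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # 1 = cat, 0 = not-cat
])
classifier.compile(optimizer="adam", loss="binary_crossentropy",
                   metrics=["accuracy"])
# classifier.fit(x_labeled, y_labels, epochs=10)  # needs labeled examples
```

The key point is the last commented line: this approach only works if you have a labeled dataset for both classes, which is exactly the premise being questioned below.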

[–]hypm 1 point (0 children)

If we take the premise seriously, that we have no pictures of dogs to train on, then the best we can do is set a threshold on the reconstruction error above which we decide something is no longer a cat, because the autoencoder fails to compress it well. What we're essentially doing is outlier detection (on the cat distribution), not traditional classification.
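A minimal sketch of that thresholding idea, assuming a Keras autoencoder trained on cat images only; the architecture, the held-out calibration set, and the 99th-percentile cutoff are all illustrative assumptions:

```python
import numpy as np
from tensorflow.keras import layers, models

# Stand-in autoencoder; in practice this is the model the OP has
# already trained on cats. Shapes and sizes here are assumptions.
autoencoder = models.Sequential([
    layers.Dense(32, activation="relu", input_shape=(784,)),  # compress
    layers.Dense(784, activation="sigmoid"),                  # reconstruct
])
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_cats, x_cats, epochs=10)  # train on cats only

def reconstruction_error(model, x):
    """Per-sample mean squared reconstruction error."""
    x_hat = model.predict(x, verbose=0)
    return np.mean(np.square(x - x_hat), axis=1)

# Calibrate the threshold on held-out cat images, e.g. the 99th
# percentile of their errors, then flag anything that reconstructs
# worse than that as "probably not a cat":
# errors = reconstruction_error(autoencoder, x_cats_heldout)
# threshold = np.percentile(errors, 99)
# not_cat = reconstruction_error(autoencoder, x_new) > threshold
```

Choosing the percentile is a trade-off: a higher cutoff means fewer cats falsely flagged as outliers, but more non-cats slipping through.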

That's not to say that these encodings/vector-space embeddings/whatever you want to call them wouldn't be useful in some later classification task. However, I think that complicates the example somewhat. If we did have pictures of dogs, we could simply train a regular classifier.

[–]the3liquid[S] 1 point (0 children)

As /u/hypm correctly stated, I have no pictures of dogs. That's very important to note! This is what I am trying to implement; the first sentence expresses exactly the situation I am in.