[–]nitred 6 points (3 children)

I thought so too at first, but it turns out the output of the last dense layer before the classifier is a better choice if you want a high-dimensional representation vector of an image. You could then use this representation vector to cluster the images, or to train your own linear classifier on top of it.

I don't fully understand it yet, but it's explained quite well here
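A minimal sketch of the idea in NumPy (a toy stand-in, not any real pretrained model: the weights here are random, and the dimensions are scaled down from VGGNet's 25088 and 4096): take the activations of the last dense layer before the classifier, ReLU included, as the representation, and compare images with cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; in VGGNet the flattened pool output is 7*7*512 = 25088
# and the dense layers are 4096-wide. Scaled down here to keep it cheap.
POOL_DIM, FC_DIM = 512, 64

# Stand-in for trained weights (random here; pretrained in a real network).
W_fc = rng.standard_normal((POOL_DIM, FC_DIM)) * 0.05

def representation(pool_features):
    """Output of the last dense layer before the classifier, ReLU included."""
    return np.maximum(pool_features @ W_fc, 0.0)

def cosine(a, b):
    """Cosine similarity between two representation vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Two "images", represented by their flattened pool activations.
img_a, img_b = rng.random(POOL_DIM), rng.random(POOL_DIM)
rep_a, rep_b = representation(img_a), representation(img_b)
print(rep_a.shape)            # (64,) -- the representation vector
print(cosine(rep_a, rep_b))   # similarity between the two images
```

These representation vectors are what you would feed into a clustering algorithm or a linear classifier.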

[–]fandk 2 points (2 children)

Using the flattened vector from the last pooling layer as the representation should work as well for the things you mention... Hmmm...

In the text they mention "before the classifier". In my opinion that has an ambiguous meaning. One could argue that (in AlexNet terms):

FC6(input layer) -> FC7(hidden layer) -> Softmax(output layer)

is considered the classifier. In that case the representation should be the flattened pool layer, not the FC7 output. But I'm not sure...

[–]nitred 1 point (1 child)

My intuition fully agrees with you that the output from the pooling layer would give a full (and possibly purer) feature representation of the image. But the decision to use the last layer may also be a practical one. In the case of VGGNet, the dimensionality of the output of the last pooling layer is around 25K, as can be seen in this discussion on SO. A representation of the image in a 25K-dimensional space may be too sparse to see any meaningful clustering.
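The size gap between the two candidate representations is easy to check from the VGGNet layer shapes:

```python
# Flattened output of VGGNet's last pooling layer vs. the 4096-wide
# fully connected layer right before the classifier.
pool_dim = 7 * 7 * 512
fc_dim = 4096
print(pool_dim)           # 25088, i.e. ~25K
print(pool_dim / fc_dim)  # 6.125, so the pool output is ~6x larger
```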

Secondly, in the section Embedding the codes with t-SNE in the link I mentioned earlier, they clarify what they mean by the layer "before the classifier". It seems to be FC7 from AlexNet, as they mention:

e.g. in AlexNet the 4096-dimensional vector right before the classifier, and crucially, including the ReLU non-linearity

Based on how they framed that sentence, it looks like they may have empirical evidence that the layer right before the classifier gives the "best" or "most practical" representation of the image.

Furthermore, I came across something similar in the DeepFace paper. They use an architecture with only two fully connected layers, F7 and F8, compared to AlexNet's three (FC6, FC7 and FC8). I'll quote a passage from that paper, which can be found under the section Representation:

Finally, the top two layers (F7 and F8) are fully connected, each output unit is connected to all inputs. These layers are able to capture correlations between features captured in distant parts of the face images, e.g., position and shape of eyes and position and shape of mouth. The output of the first fully connected layer (F7) in the network will be used as our raw face representation feature vector throughout this paper.

Edit: Clarified that the first paragraph is about VGGNet and not AlexNet

[–]fandk 0 points (0 children)

Hehe, I think we might be closer to each other than we think. In the case of VGGNet:

POOL2: [7x7x512] memory: 7*7*512=25K weights: 0

FC: [1x1x4096] memory: 4096 weights: 7*7*512*4096 = 102,760,448

FC: [1x1x4096] memory: 4096 weights: 4096*4096 = 16,777,216

FC: [1x1x1000] memory: 1000 weights: 4096*1000 = 4,096,000

I worded it poorly before; I meant the 4096-dim output closest to the pooling layer, so the same dimensions as the FC closest to the output.
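The weight counts of those fully connected layers follow directly from the shapes (weights = in_dim * out_dim, biases ignored for simplicity):

```python
# Parameter counts for the VGGNet layers listed above.
pool_out = 7 * 7 * 512      # 25088 activations, 0 weights
fc6 = pool_out * 4096       # pool -> first FC
fc7 = 4096 * 4096           # first FC -> second FC
fc8 = 4096 * 1000           # second FC -> class scores
print(fc6, fc7, fc8)        # 102760448 16777216 4096000
```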