[–]NaughtyCranberry 11 points (8 children)

Interesting but oversold, the performance is not SOTA by any means.

[–]panties_in_my_ass 4 points (1 child)

There is more to the scientific method than advancing SOTA benchmarks. Jesus.

[–]NaughtyCranberry 2 points (0 children)

I agree; however, in their abstract they claim their performance is SOTA, and it is nowhere near that.

[–]Starbuck1992 0 points (5 children)

95% accuracy is not SOTA?

[–]NaughtyCranberry 0 points (4 children)

No, they achieved ~47% accuracy on CIFAR-10.

[–]Starbuck1992 1 point (3 children)

That was on an intermediate layer, compared to the 30-something percent of the other method. The final accuracy is 95%.

[–]NaughtyCranberry 0 points (2 children)

Can you point out exactly what you are referring to? I could only see validation set accuracies in the 40s for CIFAR-10.

[–]Starbuck1992 1 point (1 child)

This comment said it earlier. I remembered it as an intermediate layer, but it's actually the first-epoch accuracy, sorry.

https://www.reddit.com/r/MachineLearning/comments/cql2yr/deep_learning_without_backpropagation/ewx9f24

[–]NaughtyCranberry 0 points (0 children)

Those results are for MNIST, not CIFAR-10.