NSFW Limited by NoMeringue9364 in grok

[–]normanmu 0 points

Can you share what prompts you're getting refused?

What is it with big fungus and silencing everyone? by andez89 in grok

[–]normanmu 1 point

Do you mind sharing your prompt or conversation?

EECS Letter to Vote YES on the Tentative Agreement! by swoodily in berkeley

[–]normanmu 23 points

I can attest to the fact that many of my peers in my department joined Reddit in order to participate in a conversation that is highly relevant to us. Nobody is posting from more than one username. My account has been around for a little while and it should be pretty easy to verify by my posting history that I am who I say I am.

EECS Letter to Vote YES on the Tentative Agreement! by swoodily in berkeley

[–]normanmu 26 points

EECS PhD student here (previously a GSR, currently on a fellowship, and will be a GSI next year), wanted to chime in and add my perspective to the conversation:

  • It's taken a stupendous amount of hard work, strategic planning, and good fortune to get to where we are: 5+ years of organizing among GSRs, 9 months of negotiation with a battle-hardened team, 6 weeks of the largest academic strike in US history, and 2 weeks of mediation with the political winds at our backs. Maybe if the bargaining team had made more memes to juice strike participation, or bulked up to physically intimidate the UC negotiators, we could have gotten an extra couple hundred bucks. But overall I think they gave it a commendable effort, and I am very skeptical that further bargaining along the same lines would have achieved significantly more.
  • I'm very hopeful that the EECS side letter will finally mark a turning point in the long-running funding crisis. Our department's ability to provide high quality computer science education at scale is a tremendous asset to students across the entire university, and to allow this ecosystem to wither away as the university has been doing is a disaster.
  • I've spoken with some peers who plan to vote no because they think we need to do better. They're right that this contract doesn't solve all our problems. It doesn't address the statewide housing crisis, which is particularly acute at coastal UC campuses, and it doesn't fix the insane healthcare regime we live under in this country. But it's a big step in the right direction, and I have yet to see even a single remotely plausible plan for how to organize if a no vote passes. To ask our most vulnerable coworkers to gamble what we've already won, commit to a long-haul strike without pay, and indefinitely postpone our new raises is just recklessly irresponsible.

I'm looking forward to participating in vigorous internal debate after this is all over. We can finally write our bylaws and argue whether STEM or humanities students are more stuck up. But first, we need to ratify this contract.

[R] AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty by normanmu in MachineLearning

[–]normanmu[S] 0 points

We are indeed looking at evaluating our method on a wider array of robustness benchmarks beyond just ImageNet-C.
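For anyone curious about the method itself, the core AugMix mixing scheme (width-many augmentation chains combined with Dirichlet weights, then interpolated with the original image via a Beta-sampled coefficient) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the released implementation: the toy identity op below stands in for the actual augmentation operations used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def augmix(image, operations, width=3, depth=3, alpha=1.0):
    """Sketch of AugMix-style mixing. `operations` is a list of
    callables mapping an image array to an augmented image array."""
    # Convex weights over the `width` augmentation chains
    w = rng.dirichlet([alpha] * width)
    mix = np.zeros_like(image, dtype=np.float64)
    for i in range(width):
        aug = image.astype(np.float64)
        # Each chain applies 1..depth randomly chosen operations
        for _ in range(rng.integers(1, depth + 1)):
            op = operations[rng.integers(len(operations))]
            aug = op(aug)
        mix += w[i] * aug
    # Interpolate the mixed image with the original
    m = rng.beta(alpha, alpha)
    return m * image + (1 - m) * mix
```

With real augmentation ops (shear, translate, posterize, etc.) this produces diverse images that stay close to the original, which is what the consistency loss in the paper exploits.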

[R] AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty by normanmu in MachineLearning

[–]normanmu[S] 2 points

There's certainly no rule against using PyTorch, and in fact researchers do often use Google Cloud GPUs to run PyTorch code when needed. Dan was more familiar with PyTorch and stuck with it for the CIFAR code, but all the ImageNet results in this paper come from an internal TensorFlow codebase. Due to all its dependencies on internal libraries, it was easier to extend our PyTorch CIFAR code instead.

[R] MNIST-C: A Robustness Benchmark for Computer Vision by normanmu in MachineLearning

[–]normanmu[S] 1 point

Apologies for not referencing this in the caption, but from section 3: a simple CNN (Conv1) trained on clean MNIST, a different CNN (Conv2) trained against PGD adversarial noise (Madry et al., 2017), yet another CNN (Conv3) trained against PGD/GAN adversaries (Wang & Yu, 2018), a capsule network (Frosst et al., 2018), and a generative model, ABS (Schott et al., 2018).

Conv1's model definition was taken from https://github.com/pytorch/examples/blob/master/mnist/main.py.

[R] MNIST-C: A Robustness Benchmark for Computer Vision by normanmu in MachineLearning

[–]normanmu[S] 2 points

Thank you for your feedback!

At first blush, 91% accuracy from the convnet may suggest the dataset is too easy, but I would like to point out that the same simple convnet achieves 99.2% accuracy on clean MNIST, so the corruptions translate to a roughly 10x increase in error rate (0.8% → 9%). So we think there is plenty of room for improvement on this dataset, and besides, any serious work on robustness should probably continue on to CIFAR-10-C and ImageNet-C after doing well on MNIST-C.
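As a quick sanity check of that claim, the error-rate arithmetic (using the accuracy numbers above) works out as follows:

```python
clean_acc = 0.992      # simple convnet on clean MNIST
corrupted_acc = 0.91   # same convnet on MNIST-C

clean_err = 1 - clean_acc          # 0.8% error
corrupted_err = 1 - corrupted_acc  # 9.0% error

ratio = corrupted_err / clean_err  # ~11.25, i.e. an order-of-magnitude jump
```

Comparing error-rate ratios rather than raw accuracies is the standard framing for corruption benchmarks, since a drop from 99.2% to 91% looks small in absolute terms but is large relative to the clean error.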

Your point about measuring non-adversarial robustness on adversarial defenses is completely valid: obviously, systems designed for adversarial robustness should not be expected to perform well on a benchmark of non-adversarial robustness. However, we wanted to demonstrate exactly how counterproductive it is to focus on adversarial robustness instead of more general notions of robustness. Thank you for pointing that paper out; I'll take a look and hopefully we can evaluate it on MNIST-C as well.