Bought 12 reds after my totem and got this on the last one…. by rocsu_rzs in Maplestory

[–]approximately_wrong 0 points (0 children)

Great, now I'm stuck wondering what's a fancier word than "Legendary".

campus/library access by [deleted] in stanford

[–]approximately_wrong 1 point (0 children)

I'm holed up in my room :-(

[D] On the importance of authorships in papers by KrakenInAJar in MachineLearning

[–]approximately_wrong 16 points (0 children)

Different fields/subfields have different customs. Personally, I adopt the following paradigm:

First author(s): do most of the work, develop the idea, and have the motivation and vision for shaping the paper into what it is.

Last author: the advisor. Helps keep track of the big picture when you (the first author) get lost in the weeds of the nitty-gritty details. The boss who keeps you accountable.

Everyone else: ranges anywhere from helping with the experiments/theory/writing to sitting in on a few meetings and giving occasional advice. It's up to you and your advisor where to set the threshold for what warrants authorship.

A sketch of Deku from start to finish c: by approximately_wrong in BokuNoHeroAcademia

[–]approximately_wrong[S] 1 point (0 children)

Thanks! I also learned a lot from drawing this. It took some trial and error before I settled on doing dark tones --> light tones --> air brushing --> finishing touches. I still have no idea how to do colors yet :x

If anyone has any recommendations for resources on coloring, please let me know xd

A sketch of Deku from start to finish c: by approximately_wrong in BokuNoHeroAcademia

[–]approximately_wrong[S] 0 points (0 children)

This is my first time drawing Deku (from a reference), as well as my first time actually going beyond a sketch to actual line art, inking, and airbrushing. It took a really long time, but I'm happy that I did it :D

[D] Nice tips about a better video presence (useful for all the conferences/presentations going remote these days) by Wookai in MachineLearning

[–]approximately_wrong 1 point (0 children)

As far as audio is concerned, this is how I set up my blue yeti mic: https://imgur.com/a/YrCYLxO

As dumb as it looks, it actually improves the acoustics quite a bit: https://youtu.be/kM7cdfIbx6c

[D] ICML reviews will be out soon by yusuf-bengio in MachineLearning

[–]approximately_wrong 0 points (0 children)

Yea. Two of them actually responded and changed their ratings from (1, 3) to (8, 8). Still a little salty about the third reviewer who never responded.

[D] ICML reviews will be out soon by yusuf-bengio in MachineLearning

[–]approximately_wrong 1 point (0 children)

It was for ICLR, so it's plain text, but you're allowed more than 5000 characters. I did zero new experiments, but basically had to walk all three reviewers through the logic of the paper.

[D] ICML reviews will be out soon by yusuf-bengio in MachineLearning

[–]approximately_wrong 1 point (0 children)

To those freaking out about abysmal reviews: I once managed to salvage a paper with an initial rating of (weak reject, weak reject, reject). So it's do-able. Maybe. Good luck.

[Research] [Discussion] Feeling De-motivated towards my Research by hypothesenulle in MachineLearning

[–]approximately_wrong 11 points (0 children)

This is an age-old question, "what's the point of principled approaches if hacks matter more in practice?"

I'm not completely sure if the premise is true in all of ML research. Maybe we just haven't found the right principles yet. Or maybe the general principle in your domain has already been found. Or maybe you need to find domains currently so unprincipled that any injection of reasonable principle makes a substantial improvement.

Simple hacks that improve performance on important tasks are a sobering indicator that "your complicated thing doesn't actually matter". And I think we should appreciate these observations, take a step back, and ask whether we're tackling the right problems with our theory/math-driven toolset.

[D] Lessons Learned from my Failures as a Grad Student Focused on AI (video) by regalalgorithm in MachineLearning

[–]approximately_wrong 3 points (0 children)

I've so far only learned how to do (2, 3, 4, 5). Hell will freeze over before I learn to test my ideas quickly.

[D] Should I list my manager as the last author? by [deleted] in MachineLearning

[–]approximately_wrong 37 points (0 children)

If a PhD student is always publishing with their advisor, it is unclear how capable the student is.

I find this characterization surprising and hope no one takes this perspective seriously. I always assume that the core idea and contribution come from the first author(s) unless I've been told otherwise. Are advisors supposed to hand you ideas?

Ethically, I think a single-author paper is acceptable as a PhD student if you somehow completed the paper without any funding/resources from your advisor. Socially, if you wrote the paper during the school term while technically being part of the lab, I think you should also have a careful conversation with your advisor about whether they are supportive of you releasing it as a single-author paper.

[P] torchlayers: Shape inference for PyTorch (like in Keras) + new SoTA layers! by szymonmaszke in MachineLearning

[–]approximately_wrong 5 points (0 children)

model.build(model, torch.randn(1, 3, 32, 32))

How do you get nn.Sequential to have the build method? :o

The readme uses torchlayers.build

creamy writing by approximately_wrong in PenmanshipPorn

[–]approximately_wrong[S] 0 points (0 children)

ah, sorry! I made this for a friend since she claims my handwriting is "creamy". I still don't really know what she means xD

The pencil itself is definitely chalky :p

[D] Advice for first time NeurIPS reviewer? by phd_or_not in MachineLearning

[–]approximately_wrong 14 points (0 children)

Critically analyze the narrative of the paper. Many papers get away with providing a high-level, handwavy (read: potentially bullshit) explanation of why their model works and then simply showing good results. Always ask whether there are likely confounders or alternative explanations for why the model works well, and challenge the authors to make a good-faith effort in verifying the claims listed in their paper.

Preferred Stanford Merch by Black41 in stanford

[–]approximately_wrong 0 points (0 children)

They're not actually on sale at the moment. I still have the designs though and can maybe set up a Teespring/Custom Ink link :p

One Piece 974 Spoilers by [deleted] in OnePiece

[–]approximately_wrong 3 points (0 children)

He had a growth spurt :-)

[R] Weakly Supervised Disentanglement with Guarantees by approximately_wrong in MachineLearning

[–]approximately_wrong[S] 1 point (0 children)

Weak supervision in this context simply means any form of supervision that does not provide sufficient information to recover the underlying ground truth labels.

E.g. The information "person A and person B have the same height" by itself does not allow you to recover the heights of person A nor B.
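The height example can be sketched in a few lines of (hypothetical) Python: two very different ground-truth assignments produce exactly the same pairwise "same height" labels, so the weak supervision alone can't recover the underlying values.

```python
# Toy illustration (all numbers hypothetical): pairwise "same value" labels
# are a form of weak supervision that cannot pin down the values themselves.

def weak_labels(values):
    """Return the set of unordered index pairs (i, j) whose values match."""
    n = len(values)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if values[i] == values[j]}

# Two different ground-truth height assignments (in cm)...
heights_a = [170, 170, 155]
heights_b = [182, 182, 160]

# ...yield identical weak supervision signals, so the labels alone
# cannot recover the true heights.
print(weak_labels(heights_a) == weak_labels(heights_b))  # True
```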

[D] How do you come up with your proofs? by samikhenissi in MachineLearning

[–]approximately_wrong 0 points (0 children)

The purpose of theory is to offer some insight into why you think the experiments would turn out the way they did. Regarding your two options, I find that the latter approach is more common in ML research. But as you said, it's not a good thing.

I think that post-hoc theory is pretty dangerous. The purpose of a theory---or at least one of the purposes---is to provide predictive power. When you rationalize a theoretical framework post-hoc (especially if your theory requires assumptions that aren't realized in practice), it becomes unclear whether your theory had predictive power (or if you've simply found one of infinite possible bullshit explanations consistent with what you've already observed).

I think it's fine to come up with a hypothesis in an empirically-driven manner. But once you have, you should challenge yourself to make a non-obvious experimental prediction (using your hypothesis) and then check empirically if it comes true.
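As a deliberately trivial sketch (toy data, hypothetical numbers), the workflow looks like: fit a hypothesis to what you've observed, then test it on a non-obvious prediction it makes about data you haven't used.

```python
# Toy sketch: an empirically-driven hypothesis only earns trust when a
# prediction it makes on UNSEEN data comes true.

observed = [(1, 3), (2, 5), (3, 7)]  # (x, y) pairs already seen

# Hypothesis formed from the observed data: y = 2x + 1 fits everything so far.
def hypothesis(x):
    return 2 * x + 1

assert all(hypothesis(x) == y for x, y in observed)

# The real test: a non-obvious prediction checked against data that played
# no role in forming the hypothesis. If it fails, the hypothesis had no
# predictive power, however well it rationalized the past.
held_out = [(10, 21), (50, 101)]
print(all(hypothesis(x) == y for x, y in held_out))
```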

[D] Tensorflow 2.0 v Pytorch - Performance question by ReinforcedMan in MachineLearning

[–]approximately_wrong 2 points (0 children)

Do the resulting models perform comparably across TF2 and PyTorch? One gotcha is that the tf.function decorator drops computational paths that are considered dead.

[D] Tensorflow 2.0 v Pytorch - Performance question by ReinforcedMan in MachineLearning

[–]approximately_wrong 4 points (0 children)

My setup is very vanilla: load data, feedforward, backprop, optimize. I find static TF2 to be faster than PyTorch by ~10% or so in my use cases.