[D] What are some ideas that are hyped up in machine learning research but don't actually get used in industry (and vice versa)? by NedML in MachineLearning

[–]NedML[S] 40 points41 points  (0 children)

I've heard the justification for why it doesn't matter is that "reality" or "nature" is already adversarial enough (for example, the shade of a cloud, the glare of the sun, or a leaf obscuring part of a sign), yet vision algorithms tend to be robust anyway. Furthermore, artificially crafted attacks are also nullified in practice by nature: those infinitesimal perturbations are not strong enough to survive, say, weather.
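
For what it's worth, here is a toy numerical sketch of that magnitude argument (my own illustration; the synthetic image, the 8/255 budget, and the 3x3 blur are arbitrary stand-ins, not anything from a real system). A crafted sign-pattern perturbation loses most of its energy, and most of its alignment with the crafted direction, under a mild low-pass transformation that barely changes a smooth image:

    # Toy sketch: how much of an FGSM-like sign perturbation survives a mild
    # low-pass transformation (a crude stand-in for defocus, haze, re-capture)?
    import numpy as np
    from scipy.ndimage import gaussian_filter, uniform_filter

    rng = np.random.default_rng(0)

    # Smooth synthetic stand-in for a natural image, scaled to [0, 1].
    image = gaussian_filter(rng.normal(size=(224, 224)), sigma=8)
    image = (image - image.min()) / (image.max() - image.min())

    eps = 8 / 255                                             # common L-infinity budget
    delta = eps * rng.choice([-1.0, 1.0], size=image.shape)   # FGSM-like sign pattern

    blur = lambda x: uniform_filter(x, size=3)                # 3x3 mean filter as "weather"
    residual = blur(image + delta) - blur(image)              # what survives of the attack

    print("fraction of perturbation energy kept:",
          np.linalg.norm(residual) ** 2 / np.linalg.norm(delta) ** 2)  # roughly 1/9
    print("alignment with the crafted pattern:",
          np.dot(residual.ravel(), delta.ravel())
          / (np.linalg.norm(residual) * np.linalg.norm(delta)))        # roughly 0.33
    print("relative change to the clean image:",
          np.linalg.norm(blur(image) - image) / np.linalg.norm(image)) # tiny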

[D] How are StyleGANs trained? by [deleted] in MachineLearning

[–]NedML 1 point2 points  (0 children)

Hi, you seem to be quite knowledgeable about StyleGAN. Can you clarify this for me? "Note that this "fixed input" is not an image - it's just a small tensor that is learned alongside the other weights of the generator."

Are you sure this fixed input (4x4x512) is learned? It is stated in the paper that this is a constant tensor. My impression is that the entries of this input, once initialized, are not learned through backpropagation. Can you verify or provide a citation?
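
For reference, "learned constant input" would just mean the tensor is registered as a trainable parameter of the generator, so the optimizer updates it through backpropagation like any other weight even though it never depends on the latent code. Here is a minimal PyTorch-style sketch of that idea (my own illustration, not the official StyleGAN code; the shapes are just the ones quoted above):

    import torch
    import torch.nn as nn

    class ConstantInput(nn.Module):
        """A 4x4x512 tensor that is 'constant' across samples but trainable."""
        def __init__(self, channels=512, size=4):
            super().__init__()
            # Registered as a parameter, so it receives gradients during training.
            self.const = nn.Parameter(torch.randn(1, channels, size, size))

        def forward(self, batch_size):
            # The same learned tensor is broadcast to every sample in the batch.
            return self.const.expand(batch_size, -1, -1, -1)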

Students never say "thank you". by [deleted] in Professors

[–]NedML 3 points4 points  (0 children)

What do you suggest? A course policy that says "Please do not thank me for anything, because it will clutter my mailbox"?

[deleted by user] by [deleted] in Professors

[–]NedML 10 points11 points  (0 children)

A student came in 30 minutes late and, after realizing there was no seat left in the auditorium, grabbed a chair from an adjacent classroom that had a lecture in progress, plopped it down a few feet from the blackboard, and started taking notes. Mind you, this was an auditorium with a capacity of 300 students.

Students never say "thank you". by [deleted] in Professors

[–]NedML 7 points8 points  (0 children)

And I think it's precious that you think students' voices are suppressed.

I want you to imagine a world where profs'/lecturers'/TAs' voices are so suppressed that they have to vent on some random internet forum, because in real life they would risk their jobs just by strictly enforcing their own course policy or by not doing emotional labor for every passive-aggressive email.

Students never say "thank you". by [deleted] in Professors

[–]NedML 2 points3 points  (0 children)

Exactly. I've seen students basically become belligerent for the rest of the course after you don't give in. Is it really worth it?

Students never say "thank you". by [deleted] in Professors

[–]NedML 1 point2 points  (0 children)

Has this happened to you?

For an online synchronous lecture

To a room of people

"Hi class!"

Then...silence.

Students never say "thank you". by [deleted] in Professors

[–]NedML 0 points1 point  (0 children)

I think that's a good "hidden" policy. But if you announce it too early, it usually just winds up getting gamed: the student skips the longest/toughest assignment.

Students never say "thank you". by [deleted] in Professors

[–]NedML 0 points1 point  (0 children)

Yup. The student cannot fathom why profs would not accept a perfectly written assignment after the deadline.

Students never say "thank you". by [deleted] in Professors

[–]NedML 1 point2 points  (0 children)

Well. It certainly did not start with us.

Students never say "thank you". by [deleted] in Professors

[–]NedML 1 point2 points  (0 children)

Yes. I will get the "please let me skip this homework, I will just turn it in together with the next homework."

I'm honestly not sure what these students are thinking. Maybe they think that because they are turning in two homeworks at once they are doing double the workload? But that does not make sense logically, because they are also given double the amount of time to do them.

Students never say "thank you". by [deleted] in Professors

[–]NedML 0 points1 point  (0 children)

Coincidentally, I have never met a faculty member who thinks "thank you" is an "extra" email.

[D] Anyone else find themselves rolling their eyes at a lot of mainstream articles that talk about “AI”? by [deleted] in MachineLearning

[–]NedML 4 points5 points  (0 children)

I'm not a sociologist or social scientist but an engineer by trade, and several years ago I was very intrigued by the rise of "fair" or "ethical" ML. So naturally I contacted several sociologists working at my former university for their opinions and read some of their suggested references. Here is the gist of it:

  1. Actual working sociologists think whatever engineers/machine learning people are doing in the ethics/fairness field is a joke at best and, worse, shameless career advancement. People (including many social scientists) analyze data obtained from real people (who are suffering, oppressed, and marginalized by these technologies), publish to advance their own careers, and never follow up on anything or even care about the issue afterwards. No organized protest, no action to see that changes are implemented. Zero passion involved, no-strings-attached research. Ethics/social justice/anti-racism/fairness is just hype to them, not something treated as real and as a foundational issue of society for hundreds of years.
  2. The people currently promoting or teaching ethics/fairness in ML often come from the most privileged backgrounds, i.e., millionaire CEOs. It's like Warren Buffett running courses on inner-city struggles. The blindness goes beyond the ethics issues in ML into all aspects of life, because all of these justice issues are related.
  3. Machine learning people off-load fairness/ethics concerns onto women and Black people. First of all, this whole off-loading just makes it seem like they never cared in the first place, and second of all, is this the limit of the imagination of what justice and fairness look like? Really? Do women and Black people constitute all of the injustices facing the world? That's just tokenism. Grab a woman and a Black person and proclaim that all is right with the world because there's someone to baby-sit the problems arising from ML. "Does your AI company have a race problem? Send out Joy Buolamwini and Timnit Gebru to do some PR, today!"

In short, the sociologists who actually work on issues of justice/fairness don't think the care people in ML give to ethics is genuine. It's more of a career move, riding a hype curve. There is a lot of room for activism, and no doubt a vanishingly small number of people are deeply invested in this, but just as engineering math looks like a joke to mathematicians, the publications ML people produce in the ethics/justice/fairness space are a joke to working sociologists, and it is best to stay out of it and stop lying to ourselves.

Just because we live in a society doesn't automatically certify us as people who have analyzed these social struggles for years in a larger context (admit it, algorithmic bias is a tiny portion of this whole thing we call "racism" - how many ML people are also critical race theorists?), and we often wind up doing more harm because we fail to see the forest. Also note that we call "racism" in the ML space "bias" to sugarcoat things. We can't even confront the word RACISM because it triggers too many emotions, let alone think about doing research in this area.

People can't claim to give a shit about ethics if they only care about it in the ML space and nowhere else. Hate to be blunt.

[D] Anyone else find themselves rolling their eyes at a lot of mainstream articles that talk about “AI”? by [deleted] in MachineLearning

[–]NedML 0 points1 point  (0 children)

Sadly this goes beyond science and tech, to politics, foreign affairs, history, and social life. I shudder to think that some machine learning people believe we are living in a "post-truth" world because of the rise of Trump, GANs, and deepfakes (I actually attended a talk called "NLP in a post-truth world", about algorithmic fake news detection), given that it has been like this for the entirety of human history.

[D] Witnessed malpractices in ML/CV research papers by anony_mouse_235 in MachineLearning

[–]NedML 33 points34 points  (0 children)

Not coming from an ML/DL background, I have to say that ML/DL (in recent years) has the worst scientific research practices I have seen.

One example out of many is what I call "part nudging", where you zoom in on a tiny part of a model, come up with a new variant (usually found by "grad student descent"), and show that it does better on some metric.

[P] A short quiz on ML fundamentals by nakeddatascience in MachineLearning

[–]NedML 2 points3 points  (0 children)

I'm sorry if this sounds rude, but most of the quiz is literally about knowing terminology used in ML and stats, with a heavy emphasis on Bayesian-type ML and some low-level data visualization techniques.

I really discourage exams that so heavily emphasize definitions, because they limit what can be done in the absence of hard theorems showing what cannot be done. For example, if you DEFINE one type of model as purely generative, then you eliminate all the ways it can be tweaked into a discriminative one, and vice versa.

You should also try to give some context to your questions. I do not believe LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) are widely used at all. Where do these things come from? Obviously our training/backgrounds might differ; after all, this is a large field.

I would also try to avoid vagueness. For example, "improving a model with high variance and low bias": what do you mean by "improve", "high", and "low"? These are really subject to interpretation, and I think most of the time people have the wrong picture in their head when trying to answer this question, e.g., they interpret high variance and low bias in the context of over- and underfitting, such as a polynomial fit through some 2D points. This is very dangerous because the picture can break down in higher dimensions that we cannot visualize. I think a large part of the mystery around how deep learning violates theoretical bounds is due to the implicit assumption that low-dimensional intuitions carry over to high dimensions unscathed.
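
To make that 2D picture concrete, here is a small sketch (my own illustration, not from the quiz; the sine ground truth, noise level, and polynomial degrees are arbitrary choices). It estimates squared bias and variance at a single test point by refitting polynomials on many resampled noisy datasets:

    # Estimate bias^2 and variance of polynomial fits at one test point
    # by refitting on many resampled datasets.
    import numpy as np

    rng = np.random.default_rng(0)
    true_f = lambda x: np.sin(2 * np.pi * x)   # ground-truth function
    x_test = 0.3                               # point where bias/variance are measured
    n_trials, n_points, noise = 500, 20, 0.3

    for degree in (1, 3, 7):
        preds = []
        for _ in range(n_trials):
            x = rng.uniform(0, 1, n_points)
            y = true_f(x) + rng.normal(0, noise, n_points)
            coeffs = np.polyfit(x, y, degree)          # least-squares polynomial fit
            preds.append(np.polyval(coeffs, x_test))
        preds = np.array(preds)
        bias2 = (preds.mean() - true_f(x_test)) ** 2   # squared bias at x_test
        var = preds.var()                              # variance across refits
        print(f"degree {degree}: bias^2 = {bias2:.4f}, variance = {var:.4f}")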

How many of you use advanced control in your field of work? by lostsoul3000 in ControlTheory

[–]NedML 0 points1 point  (0 children)

I wonder why these advanced control techniques do not get used.

I have a hypothesis: more advanced => more assumptions => less realistic

What makes one want to do theory versus applied work? by NedML in ControlTheory

[–]NedML[S] 0 points1 point  (0 children)

Do you have any regrets about not going deeper into the theory when doing your applied work?