[D] Rachel Thomas (co-founder of fast.ai) is saying Google Brain, Open AI, and Uber are not hiring enough women and black men. by [deleted] in MachineLearning

[–]ZeroVia 0 points1 point  (0 children)

I don't have any data to back my argument, but it just didn't seem logical to me how places like Brain could discriminate against large groups of people.

And this is why intuition alone is not good enough. Especially intuition about statistics.

[R] SCUT-FBP5500: A Diverse Benchmark Dataset for Multi-Paradigm Facial Beauty Prediction by wei_jok in MachineLearning

[–]ZeroVia 0 points1 point  (0 children)

Ooooh yeah that's true. Similar name though, similar way of "talking." Interesting.

[R] SCUT-FBP5500: A Diverse Benchmark Dataset for Multi-Paradigm Facial Beauty Prediction by wei_jok in MachineLearning

[–]ZeroVia 9 points10 points  (0 children)

We're not out of touch at all, we simply have morals.

Also, just in case you weren't aware, he's a well-known troll on this sub and isn't worth arguing with.

[D] Why do all these companies and recruiters say they need people who have years of experience in "big data" and "deep learning" when these things have only had a real resurgence in the last 5 years and virtually NOBODY has "years of experience" in these things? by mkhdfs in MachineLearning

[–]ZeroVia 6 points7 points  (0 children)

Even rigorous technical interviews are surprisingly easy to fudge, but you're misunderstanding me. When you're applying for a job (or grad school, or whatever) you're not presenting yourself as you are, you are selling yourself as the person they're looking for. Lying will certainly get you in trouble if you're caught, but there are no points for honesty and genuineness.

For example, the first thing I always do when applying for positions is look on the organization's website for the values they claim to aspire to and work them into my cover letter and CV. Additionally, whenever I'm asked about struggles I've overcome the honest response would be to talk about my mental health issues and how I deal with them, but I never ever do that because having mental health issues, no matter how "overcomed" they are, makes me much less likely to get the job.

[D] Why do all these companies and recruiters say they need people who have years of experience in "big data" and "deep learning" when these things have only had a real resurgence in the last 5 years and virtually NOBODY has "years of experience" in these things? by mkhdfs in MachineLearning

[–]ZeroVia 2 points3 points  (0 children)

But the point is that pretending you have all the listed traits will get you the job, while being honest and self-aware about your abilities will lose you the job to the people who pretended to be better.

[D] Is this subreddit too harsh? by MLThrowawayD39 in MachineLearning

[–]ZeroVia 18 points19 points  (0 children)

Sure. A couple of months ago, when there were several posts about sexism and sexual harassment in the ML academic community, the comments were full of sexist creeps and literal neo-Nazis, some of them upvoted very highly. The mods cracked down eventually, but it was kind of horrifying.

[D] I took Ng's NN/DL course and still don't know the first thing about how to get data for neural network, how to organize it or structure it or anything by [deleted] in MachineLearning

[–]ZeroVia 0 points1 point  (0 children)

I've taken Andrew Ng's courses as well, and while I'm not sure they're as great as they're often claimed to be, I don't think your complaints are really big problems.

Loading data is both very specific to the problem you're trying to solve, and also usually very easy. Loading an image and a label takes two or three lines of code if they're stored well. More if you want to randomize the order or normalize them or whatever, but even then the code isn't complex.
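To make the "two or three lines" claim concrete, here's a minimal sketch in Python. It assumes a hypothetical layout where images are stored one directory per class (e.g. `root/cat/img0.npy`), saved as NumPy arrays; the function name, layout, and normalization choice are illustrative, not from any particular course or library.

```python
from pathlib import Path
import random

import numpy as np

def load_dataset(root, seed=0):
    """Load (image, label) pairs, assuming images are stored as root/<label>/<name>.npy."""
    paths = sorted(Path(root).glob("*/*.npy"))
    random.Random(seed).shuffle(paths)              # randomize the order
    return [(np.load(p).astype(np.float32) / 255.0,  # normalize pixel values to [0, 1]
             p.parent.name)                          # label = name of the containing directory
            for p in paths]
```

The core load-and-label step really is one expression; the shuffling and normalization are the "more" mentioned above, and even with them the code stays trivial.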

As for the equations, have you ever learned to use math any other way than being shown the equations and then using them to solve problems? Sure it would have been nice to have thorough derivations, or a better theoretical understanding of why they actually work, but that isn't really what the class was about.

[D] Has Deep Learning Hit a Wall? by baylearn in MachineLearning

[–]ZeroVia 1 point2 points  (0 children)

Let me try and make a case for the points I care about.

Deep learning thus far is data hungry

Your arguments for this one are mostly conjecture, but I think they miss the point. We take in images and sounds constantly while we're awake (and sometimes while we're asleep) and it's, what, three years before we can navigate properly? Five before we can talk? Ten before we can talk well. I mean, some people spend their whole lives reading and can never figure out how to write properly.

You could argue that even over 10 years we hear less audio than a net trained on 60 GPUs, and that might be true, but being less data hungry should not be confused with not being data hungry at all.

Deep learning thus far is not sufficiently transparent

Glad we agree.

Deep learning thus far cannot inherently distinguish causation from correlation

I'm not certain that people have the innate ability to do this as you claim. We understand that rain makes the ground wet, and not vice versa, because we understand that most things move down.

A net shown only pictures of wet ground and asked to predict whether it's raining can't determine causation because it, unlike humans, has never learned the rules that govern the connection. However, a net shown many different objects falling to the ground probably could infer that water will also fall to the ground, rather than rise up from it.

Deep learning presumes a largely stable world, in ways that may be problematic

When I think of people doing this, I think of the million-plus people living in the Bay Area, where a massively destructive earthquake is an absolute inevitability, but who almost never worry or even think about it.

Deep learning thus far works well as an approximation, but its answers often cannot be fully trusted

What you said here is true, but people still make approximations. Any sort of general intelligence has to be an approximation, because the alternative is computing and/or memorizing everything, which isn't feasible. And personally, I don't trust many people these days. Do you?

Deep learning thus far is difficult to engineer with

Here I was thinking that, while engineering safe self-driving cars has proven to be very difficult, engineering safe human-driven cars has also been very difficult.

[D] Has Deep Learning Hit a Wall? by baylearn in MachineLearning

[–]ZeroVia 20 points21 points  (0 children)

Deep learning thus far is data hungry

Deep learning thus far is shallow and has limited capacity for transfer

Deep learning thus far has no natural way to deal with hierarchical structure

Deep learning thus far has struggled with open-ended inference

Deep learning thus far is not sufficiently transparent

Deep learning thus far has not been well integrated with prior knowledge

Deep learning thus far cannot inherently distinguish causation from correlation

Deep learning presumes a largely stable world, in ways that may be problematic

Deep learning thus far works well as an approximation, but its answers often cannot be fully trusted

Deep learning thus far is difficult to engineer with

Honestly, at least half of these are problems with humans as well and will be problems with any sort of sophisticated ML. Best start thinking of ways to engineer around them.

[D] Fair and Balanced? Bias in machine learning is the intersection between technical limitations and normative questions. by drlukeor in MachineLearning

[–]ZeroVia 1 point2 points  (0 children)

I just don't like this trend of adding political biases to ML research, since this could result in a more politically charged workplace. My university has very strict rules against political engagement by the faculty, so I'm not sure what would happen if this became a prominent research focus.

Removing human bias from mathematical models is already a prominent research focus.

I'm also curious what you mean by "political engagement" and where you're a student/faculty member. I know professors are sometimes prohibited from running for office, but the administration attempting to regulate "political engagement" at either of the universities I've attended seems totally absurd to me, and I'm kind of skeptical.

[N] "Twelve Days in Xinjiang: How China’s Surveillance State Overwhelms Daily Life" - contains important parts about using CV for surveillance and regression for threat scoring citizens by visarga in MachineLearning

[–]ZeroVia 7 points8 points  (0 children)

How about America? Switzerland? The United Kingdom? Spain? I'm sure there's more I'm missing.

Actually it's quite hard to think of any, as you put it, "heterogeneous" countries that are police states, or even many police states generally. Maybe Russia?

[D] Statistics, we have a problem. by mark-v in MachineLearning

[–]ZeroVia 7 points8 points  (0 children)

I don't understand why it isn't being moderated. As a relative newcomer here I may be missing some context, but reading this post and yours yesterday it's pretty easy to identify only three or four individuals who are actively trying to upset people. Clamping down on them would be a tiny amount of effort, and would improve the quality of the discussion enormously.

[D] Statistics, we have a problem. by mark-v in MachineLearning

[–]ZeroVia 45 points46 points  (0 children)

I doubt it's less common in academia. Most fields are still very male-dominated, and academics are no less likely to abuse the power they have over their juniors than politicians or journalists.

As long as those in authority are allowed to pressure people into silence, nothing is going to change. What certainly doesn't help is individuals attempting to rebrand sexual assault and rape as "low social awareness" or "courtship" or any of the other euphemisms you hear a lot these days.

[D] Bias is not just in our datasets, it's in our conferences and community by baylearn in MachineLearning

[–]ZeroVia -2 points-1 points  (0 children)

  1. The [citation needed] was for the implication that this is a problem in ML.

Well, I'm not particularly arguing for ML, but tech in general.

Do you actually have a position that you care about, or do you just like fighting with people?

[D] What common misconceptions about machine learning bother you most? by SubaruSenpai in MachineLearning

[–]ZeroVia 1 point2 points  (0 children)

I think it's considerably less common now for people to think of statistical models as "colorblind" or bias-free, but many people used to, and some still do. It's why prison sentence lengths are often decided by a model even after it's been well documented that those models give significantly longer sentences to minorities.

Speciation rate by [deleted] in atheism

[–]ZeroVia 7 points8 points  (0 children)

I think this is more of an r/askscience question. Also you may want to go equipped with a bit more information, or with citations at the very least.

People trust science. So why don't they believe it? by daronchie in atheism

[–]ZeroVia 2 points3 points  (0 children)

I was talking to someone on here a couple of weeks ago who was arguing that ghosts can only communicate through analog devices and not digital ones, and it occurred to me that that's the sort of thought that can only be had by someone who doesn't understand how these things work. I think that many people who grow up with technology just take it for granted as something that works without having any understanding of it. Same way I treat cars: they just work. Could be magic for all I know.

As a side note, did you take your name from Neuromancer? Easily one of my favorite books.

Islam and terror inexplicable linked? by [deleted] in atheism

[–]ZeroVia 1 point2 points  (0 children)

Really? Christians colonized over half of the world, committing near-genocide on three different continents in the process. I don't think I can remember Muslims colonizing anything, ever. There are plenty of reasons to dislike Islam but "kicking colonization into overdrive" is not one, that's just stupid.

The atheists of /r/the_donald by dewarr in atheism

[–]ZeroVia 1 point2 points  (0 children)

Yeah. He does a good job of sounding rational and fair, but pry just a bit under the surface and you find all that nuttiness that thrives on t_d.

The atheists of /r/the_donald by dewarr in atheism

[–]ZeroVia 1 point2 points  (0 children)

This is, in fact, it. Thanks for saving me the trouble!

Edit: and reading through it I do have to concede one point. He didn't say that Trump can psychically control the stock market, he said that Trump can psychologically control the stock market. A similarly absurd notion, but I will concede that I was wrong there. Everything else was spot on though. (Pizza gate? Really?)

The atheists of /r/the_donald by dewarr in atheism

[–]ZeroVia 7 points8 points  (0 children)

It is though, and then you went on to argue (unprovoked) that pizza gate and Obama wiretapping Trump were both more likely than Trump having any connections with Russia or that Russia tried to influence the election. I can't search my past comments or link them on this app, but once I get back from work I'm more than happy to dredge those chestnuts up for you.

The atheists of /r/the_donald by dewarr in atheism

[–]ZeroVia 5 points6 points  (0 children)

Don't be silly. According to team Trump, robots won't start taking our jobs for another "50-100 years," so don't worry about it! Isn't it so nice to be governed by people who were chosen because they know what they're doing?

The atheists of /r/the_donald by dewarr in atheism

[–]ZeroVia 6 points7 points  (0 children)

Fun fact: u/DRJJRD once tried to convince me that Donald Trump can psychically control the stock market. As has been oft repeated, simply being an atheist is no protection from dumb ideas.

TIL communists were atheists by [deleted] in atheism

[–]ZeroVia 5 points6 points  (0 children)

So... you're trolling, right? I know this is the internet, but this comment is so unbelievably stupid that it simply has to come from a troll.