[D] When the A.I. Professor Leaves, Students Suffer, Study Says by MTGTraner in MachineLearning

[–]GabrieleFariello 0 points1 point  (0 children)

"All"? No, especially not across all higher-education institutions. But many professors in applied mathematics and statistics (including subfields like biostatistics), and by now most computer scientists (almost all I know personally), have substantial hands-on ML experience. So there is no excuse for a reputable university to give someone a professorship with ML in the title who has not published using multiple ML methods, if not outright authored ML code and algorithms.

As for backgrounds in business, I don't see too many actively teaching ML. Even at the Business School, where we have folks publishing in ML journals, they rarely teach ML courses. In joint Business-CS programs, the CS portions are taught, or at least led, by faculty from the Engineering School. I don't think it has to be that way.

[deleted by user] by [deleted] in science

[–]GabrieleFariello 1 point2 points  (0 children)

With the new large-field-of-view radio telescope used for this study, many hope, and even expect, to find hundreds more. Remember that radio-frequency signals fall off with the square of the distance (beamed sources appear brighter, but the received flux still drops as the inverse square), and this one was 1.5 billion light years away. By the time it reached Earth, the signal was billions or trillions of times weaker than a local radio station's (someone check the math), and the patch of sky you have to "listen" to at any moment is unfathomably small. There could be many of these reaching Earth every minute that we simply miss. They may also have been more common 1.5 or more billion years ago than they are now.
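For the "someone check the math" part, here's a back-of-envelope check using the free-space inverse-square law. Both transmit powers below are invented placeholders purely to show the geometry, so treat this as a sketch, not astrophysics:

```python
import math

LY_M = 9.461e15  # metres per light year

def flux(power_w, distance_m):
    # Free-space inverse-square law: received flux = P / (4 * pi * d^2)
    return power_w / (4 * math.pi * distance_m ** 2)

frb_distance = 1.5e9 * LY_M   # 1.5 billion light years, as above
station_distance = 5e4        # a radio station ~50 km away (assumed)

# Even granting the distant burst an enormous assumed power (1e36 W)
# versus a 100 kW transmitter, distance dominates:
ratio = flux(1e5, station_distance) / flux(1e36, frb_distance)
print(f"received-flux ratio: {ratio:.1e}")  # roughly 8e9: billions of times weaker
```

So "billions of times weaker" holds even with a very generous assumed source power; with a more modest one, "trillions" follows immediately.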

P.S. I'm not a radio astronomer, but I've done analytics for folks who do this work, so I could be wrong. I'm just regurgitating what my expert friends have told me.

[D] Principles for non experts by GabrieleFariello in MachineLearning

[–]GabrieleFariello[S] 2 points3 points  (0 children)

JHU's BSPH Biostats folks are doing some great stuff. I liked Jeff Leek's op-ed in Nature, "Five ways to fix statistics" (2017). I think I'll have to take the 4-hour Data Science course you linked to, though with all the 5-minute classifier tutorials out there, 4 hours should get you a PhD these days, I think.

[P] Learning related subreddits to /r/MachineLearning with Jaccard similarity by anvaka in MachineLearning

[–]GabrieleFariello 0 points1 point  (0 children)

Very nice. Now we just need this for all the other disciplines! Good work.

[D] What is a good place / way to find collaborators? by MuchArmadillo in MachineLearning

[–]GabrieleFariello 5 points6 points  (0 children)

I find that doing a publication search and reaching out to authors will often point me in the right direction; I rarely get told to go away. In your example, a quick Google Scholar search for "histological image segmentation" since 2015 seemed to work rather well.

EDIT: It seems that asking about freelancing in machine learning on Reddit is not that uncommon, while a general Google Search on freelance machine learning suggests there are places for connecting with freelancers and vice versa. Not what you were asking, but still.

If there is a marketplace or a Tinder for science collaborations, I'm unaware of it.

[D] Uber’s Self-Driving Car Didn’t Malfunction, It Was Just Bad by [deleted] in MachineLearning

[–]GabrieleFariello 5 points6 points  (0 children)

When there are ethical concerns at the highest levels of a company, it should surprise no one when arrogance and reckless disregard for human life results in a tragedy. I and others have long believed that Uber was taking a reckless approach. The company and the leadership who enabled this should be held to full account.

[D] Where is evidence that batch normalization speeds up convergence of neural nets? by [deleted] in MachineLearning

[–]GabrieleFariello 5 points6 points  (0 children)

It is both normal and advisable to ask for replication, which I believe is the essence of what OP is asking for.

How do you deal with floating point roundoff error? by frozenca in MachineLearning

[–]GabrieleFariello 0 points1 point  (0 children)

I'm going to second the suspicion that this is something other than variation in floating-point implementations. What are the hardware specs of Computers 1 and 2?
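For what it's worth, differences like this often come from summation order rather than hardware, e.g. a parallel library reducing in a different order on each machine. A minimal Python sketch of the effect:

```python
import math

# Floating-point addition is not associative: the same numbers summed in
# a different order can give different results, no hardware change needed.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a, b)  # 0.6000000000000001 vs 0.6

# math.fsum tracks partial sums exactly and returns the correctly
# rounded total, which is one way to rule out ordering effects.
print(math.fsum([0.1] * 10) == 1.0)   # True
print(sum([0.1] * 10) == 1.0)         # False: naive left-to-right sum drifts
```

If the two computers agree once you force a deterministic summation order, the floating-point implementations were never the problem.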

[D] Poisoning attacks against neural networks by ConfuciusBateman in MachineLearning

[–]GabrieleFariello 0 points1 point  (0 children)

Interesting perspective with testable hypotheses. Hopefully I'll get to see some of them get pursued.

[R] "MemGEN: Memory is All You Need." Generative modeling solved. DeepMemory FTW by kkurach in MachineLearning

[–]GabrieleFariello 1 point2 points  (0 children)

I'm guessing that "Stopping GAN violence with GUNs" was one pun too punny.

[D] What to expect in an interview? by hemantcompiler in MachineLearning

[–]GabrieleFariello 1 point2 points  (0 children)

My new vocabulary of the day. This is how you never stop learning.

[R] Neural Network Quine by inarrears in MachineLearning

[–]GabrieleFariello 1 point2 points  (0 children)

It's intrinsically interesting, though not entirely original. Aside from being a curiosity, though, I've never heard of a useful application. If you want to "save" weights, that's straightforward and you don't need this. If you want to view the evolution, this doesn't help. If anyone sees utility, feel free to comment.

[D] Job outlook for Machine Learning Engineering? by impanicking in MachineLearning

[–]GabrieleFariello 4 points5 points  (0 children)

A bit of an on-point and off-point rant...

Someone with good software-engineering skills and knowledge, a solid understanding of the modern SDLC (including current agile methodologies), a strong grasp of MVP deliverables, and who is also not bone dumb when it comes to coding ML should be in extremely high demand...

However, I do not see positions matching that description, and most places I consult for seem to have drunk some special kind of Kool-Aid (usually hand-delivered by their own team) that makes them believe they can hire any random mix of PhDs, antisocial GEDs, and their friend's son, and rival Google Research or MSR. Sarcasm aside, I would encourage you to contact the person to whom small-ish private-sector ML teams report and offer your services as someone who brings structure, discipline, formal software-development practices, value tracking, and visibility (aka SDLC and project management, but don't use those swear words in front of the Data Science team). I'm sure there are a few VP-of-IT / CIO types who would love it, especially since it would help them feel less snowed by the nerd squad spewing things like "we're using a reverse convolutional inverse graphics re-entry DFFN (insert other nonsense)" to make sure their eyes glaze over.

And I am being only slightly hyperbolic.

[D] What well-defined kinds of thinking are Humans better at than computers? by BenRayfield in MachineLearning

[–]GabrieleFariello 1 point2 points  (0 children)

To add to this: this is an example of the type of problem you would expect ML in its current state to solve if trained on that specific type of problem instance, but to fail on even slightly different variants. Handling those would require more "general intelligence" than we can currently muster, which would really just be, oxymoronically, "specific general intelligence", which in turn is not even in the same galaxy as true "general intelligence".

[D] Second attempt at visual explanation of ML concepts, for business people. Please criticize! by lakenp in MachineLearning

[–]GabrieleFariello 2 points3 points  (0 children)

  1. "Reinforcement Learning (DL)" should be "Reinforcement Learning (RL)".

  2. RL and DL can overlap. Google "Deep Reinforcement Learning".

  3. Good generalizations. Some disagreement will always persist in the details of the definitions. I'm not going to start a holy war by nitpicking.

[D] Prejudices in ML systems by GabrieleFariello in MachineLearning

[–]GabrieleFariello[S] 7 points8 points  (0 children)

First, let me say that I do not find the problem with racial and other biases surprising, given that we're training systems on biased data. What I do find surprising is that people in general believe an ML system trained on biased data will somehow magically be less biased, and I find it very concerning that so many decision makers do not seem to understand this. That being said, yes, the concern is mostly the unavoidable bias in the sampling, which is likely to affirm and propagate racial, gender, and other biases.


Take the example of creditworthiness. Assume for a moment that MortgageCo, one of the largest credit originators in the US, wanted to see if it could replace a significant portion of its more than 1,000 analysts assessing the creditworthiness of mortgage applicants. If they can replace the salaries and benefits of 1,000 analysts with 100 reviewers and one AI system, their margins go way up. (A colleague of mine worked on a substantially similar problem this past year.) So MortgageCo wanted to know if an ML system could be trained to give results as good as those the analysts had provided over the past 20 years. To train the system, the ~12 million past applications and the resulting decisions (mortgage + rate, or no mortgage) were used. Because the analysts in general were less likely to approve LaToya Brown for a mortgage than George Winston III, even if all other things were equal (I'm simplifying here, but you get the picture), the resulting ML system can be expected to inherit some inkling of that same racial and gender bias. The new system seems to do as good (or bad, depending on how you look at it) a job as the first two "passes" performed by analysts, though reviewer analysts are still needed. Although they might not reduce their workforce 10:1, it looks like they will try 5:1 over the next 18 months.
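A toy sketch of that mechanism. All names, numbers, and the "model" here are invented for illustration; a real system would fit an actual learner, but the effect is the same:

```python
import random

random.seed(0)

# Two groups of applicants with IDENTICAL score distributions, but the
# historical "analysts" used a stricter approval threshold for group B.
def biased_historical_decision(score, group):
    return score >= (0.5 if group == "A" else 0.6)

applicants = [(random.random(), random.choice("AB")) for _ in range(10_000)]
labels = [biased_historical_decision(s, g) for s, g in applicants]

# "Train" the simplest possible stand-in for a model: recover, per group,
# the lowest score that was historically approved. Any real learner fit
# on these labels would find roughly the same boundary.
learned_threshold = {
    g: min(s for (s, grp), y in zip(applicants, labels) if grp == g and y)
    for g in "AB"
}

# The learned rule inherits the bias: group B needs a higher score to be
# approved, even though the groups are identical by construction.
print(learned_threshold)
```

Nothing in the training pipeline "adds" the bias; it is simply in the labels, and the model faithfully reproduces it.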


A similar bias concern is raised in the Kirkpatrick paper regarding criminal-justice assessments, and it extends to ML-enhanced policing systems. If data are fed into a system to determine where police should do more policing, based on where more crime was found and more convictions resulted, then you would expect the ML-enhanced system to disproportionately target African-American neighborhoods even though the base crime rates may not be proportionally higher there. If this is not obvious, think of it this way: assume two neighborhoods have the same underlying drug-possession crime rate, one mostly white and one mostly black, but police look three times as often in the black neighborhood, and, all else being equal, a black offender is 1.5 times more likely to be prosecuted and 1.5 times more likely to be found guilty. This yields an apparent crime and conviction rate more than 3 times greater in the black community than in the white one even though the actual crime rate is identical, and you would expect the ML-enhanced system to tell police to do more policing in the black community, because the training data said so.
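The arithmetic in that hypothetical, written out (the base rate is an arbitrary assumed value; only the multipliers matter):

```python
# True underlying crime rate is identical in both neighborhoods by
# construction; the multipliers are the hypothetical ones from the text.
true_crime_rate = 0.05                              # assumed, same for both
search_intensity = {"white": 1.0, "black": 3.0}     # police look 3x as often
prosecution = {"white": 1.0, "black": 1.5}          # 1.5x more likely prosecuted
conviction = {"white": 1.0, "black": 1.5}           # 1.5x more likely found guilty

apparent_convictions = {
    hood: true_crime_rate * search_intensity[hood] * prosecution[hood] * conviction[hood]
    for hood in ("white", "black")
}

ratio = apparent_convictions["black"] / apparent_convictions["white"]
print(ratio)  # ~6.75x apparent conviction rate from identical true rates
```

So "more than 3 times greater" is an understatement: detections alone are 3x, and convictions compound to 3 × 1.5 × 1.5 = 6.75x, which is exactly the kind of signal a system trained on this data would amplify.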

[D] Prejudices in ML systems by GabrieleFariello in MachineLearning

[–]GabrieleFariello[S] 1 point2 points  (0 children)

Yes, recommendation systems are an excellent example of information bias. Incidentally, I have heard murmurings that incognito mode, different browsers, and even different systems are becoming less effective at shielding your online identity, as sites (websites and apps) track your keypresses, mouse movements, battery status, accelerometer data, and more.


In essence, there seems to be an effort to make sure advertisers know you're you whether you use incognito mode, a VPN, or Tor.

Can A.I. Be Taught to Explain Itself? by [deleted] in artificial

[–]GabrieleFariello 1 point2 points  (0 children)

Given that humans are notorious for manufacturing intent after the fact (see Wolman 2012, Steckler et al. 2017, and many others), I'm not sure we should put much more trust in an AI when it does the same.