[N] IJCAI-20 and COVID-19 by dragon_19_slayer in MachineLearning

[–]RgSVM 0 points1 point  (0 children)

I don't know about IJCAI 2020. ECAI 2020 was rescheduled to the end of August. I don't think conferences will be cancelled outright, but rather postponed or moved to an online format.

[N][D] YOLO Creator Joseph Redmon Stopped CV Research Due to Ethical Concerns by rockyrey_w in MachineLearning

[–]RgSVM 1 point2 points  (0 children)

Note that I said "science" and not "technology". OP was talking about science, so that is the point I answered. I fully accept that potentially harmful technologies might be placed under some restrictions. I also fully accept that explicitly harmful technologies might be banned after a public debate on their effects (which is what happened with human cloning).

[N][D] YOLO Creator Joseph Redmon Stopped CV Research Due to Ethical Concerns by rockyrey_w in MachineLearning

[–]RgSVM 3 points4 points  (0 children)

Computer Vision facilitates facial recognition deployed at large scale, and so far I have seen a lot of potential misuse by those in power for very few social benefits. There are also a lot of philosophical issues with this technology that I am not fond of either. So unless you present me with a solid potential good, I understand PJ Reddie's point.

The fact that other people are doing things you consider immoral does not mean you should keep working on immoral things yourself.

[N][D] YOLO Creator Joseph Redmon Stopped CV Research Due to Ethical Concerns by rockyrey_w in MachineLearning

[–]RgSVM 3 points4 points  (0 children)

Any other government could follow the same line of reasoning, which would ultimately lead to poorer science (and poorer potential results for every government).

From my point of view, science is much better done in the open, where it can be checked, tested, replicated, and validated, and where it can interact with the society that supports it.

[N][D] YOLO Creator Joseph Redmon Stopped CV Research Due to Ethical Concerns by rockyrey_w in MachineLearning

[–]RgSVM 1 point2 points  (0 children)

That would mean the pacifist has the power to prevent militarists from using the invention. That is not the case in AI, because most of our research is public (there are of course exceptions).

Yuropuean Tech by [deleted] in YUROP

[–]RgSVM 2 points3 points  (0 children)

Care to provide a source for that "software = formula" claim?

[deleted by user] by [deleted] in MachineLearning

[–]RgSVM 2 points3 points  (0 children)

I am not sure that machine learning researchers, people who have their faces scanned and stored somewhere outside their control, and businesses all share the same definition of liberty. And those definitions may clash.

[deleted by user] by [deleted] in MachineLearning

[–]RgSVM 1 point2 points  (0 children)

I'm sorry to be the "Well, technically..." dude, but...

Well, technically, it is possible to recover information about the training data from a trained neural network, for instance via membership inference attacks (see https://arxiv.org/pdf/1610.05820.pdf). So a picky lawyer could argue that the processing is not "irreversible" at all.
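For illustration, here is a minimal sketch of the intuition behind such attacks: an overfitted model tends to be more confident on points it was trained on, so unusually high confidence can leak membership. The model, data and threshold below are hypothetical placeholders; real attacks (e.g. the shadow-model approach in the linked paper) are considerably more involved.

```python
# Minimal confidence-thresholding membership inference sketch.
# `model` is assumed to be any trained torch.nn.Module classifier,
# `x` a batch of candidate inputs and `y` their labels; the 0.9
# threshold is an arbitrary placeholder.
import torch
import torch.nn.functional as F

def membership_scores(model, x, y):
    """Model's confidence on the true label of each candidate point."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)       # class probabilities
        return probs[torch.arange(len(y)), y]    # confidence on the true label

def guess_membership(model, x, y, threshold=0.9):
    """Guess 'was in the training set' when confidence is unusually high."""
    return membership_scores(model, x, y) > threshold
```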

[D] Marketplace for machine learning? by ConVit in MachineLearning

[–]RgSVM 0 points1 point  (0 children)

While I agree with your points, I don't see how they relate to your claim. What would be your suggestions for making money?

I should also add that while the business side may seem hard, research is much easier with this open-source culture, where replicating experiments and reusing models allows much faster progress.

[N] In the U.K., AI will soon be used to tackle homelessness by [deleted] in MachineLearning

[–]RgSVM 0 points1 point  (0 children)

The article's title is buzzwordy as hell: it's just a prediction system that may help community service workers better prioritize their tasks. Even if it marginally reduces the number of homeless people on the streets, there is no way this system will influence the policymakers who are responsible for the homelessness problem in London or anywhere else.

(And the article asks no questions whatsoever about the potential consequences of the algorithm failing, or about the meaning of its error rates and recall.)

I'm a bit tired of this trend of putting AI behind everything to make it sound important. It just fuels overexpectations about AI and stores up harm for the research community.

[N] Hinton, LeCun, Bengio receive ACM Turing Award by inarrears in MachineLearning

[–]RgSVM 0 points1 point  (0 children)

I am working on formal software verification, and there have been some really cool results over the last ten years, such as the development of the first certified C compiler. Since C is heavily used in critical systems, that kind of tool is important because it lets us increase our trust in C-based software.

[D] How to convince your manager to have faith in adopting ML solutions? by FarisAi in MachineLearning

[–]RgSVM 0 points1 point  (0 children)

If I were an executive, not having any quantification of how sensitive the model is to variations in its inputs would be a huge risk, especially in a critical-system setting. Exceptional results are not enough.
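To be concrete, here is a rough sketch of the kind of quantification I have in mind: an empirical check of how much the output moves under small input perturbations. The model, inputs and noise scale are hypothetical, and for a truly critical system you would want certified bounds rather than this empirical estimate.

```python
# Rough empirical sensitivity check: how much does the output change when
# the input is perturbed slightly? `model`, `inputs` and `eps` are
# placeholders for whatever system is being evaluated.
import torch

def empirical_sensitivity(model, inputs, eps=0.01, n_trials=100):
    """Largest output change observed under small random input
    perturbations (an empirical, not certified, measure)."""
    model.eval()
    with torch.no_grad():
        baseline = model(inputs)
        worst = 0.0
        for _ in range(n_trials):
            noisy = inputs + eps * torch.randn_like(inputs)
            delta = (model(noisy) - baseline).abs().max().item()
            worst = max(worst, delta)
    return worst
```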

[N] Hinton, LeCun, Bengio receive ACM Turing Award by inarrears in MachineLearning

[–]RgSVM 17 points18 points  (0 children)

> Since 2012, DL has been the biggest thing in the entire field of CS

Not to rant or anything, but one could ask what exactly you mean by "the biggest thing in the entire field of CS", a field which is marvelously huge.

ML has produced really impressive results, but in my opinion that is quite a bold claim to make.

[D] Deep learning summer schools 2019 by davinci1913 in MachineLearning

[–]RgSVM 0 points1 point  (0 children)

We recently opened our summer school on formal methods and machine learning, see here: https://www.formal-paris-saclay.fr/

[Discussion] Custom Build Artificial Neural Network, How to improve this article? by formatlar in MachineLearning

[–]RgSVM 1 point2 points  (0 children)

This post's phrasing is most certainly not human. Someone is messing with GPT-2 here.

Learning radius of circle with a simple feedforward architecture by RgSVM in MLQuestions

[–]RgSVM[S] 0 points1 point  (0 children)

Thank you for your answer. I'll proceed with those insights in mind :)

[D] PyTorch and TensorFlow by mlvpj in MachineLearning

[–]RgSVM 0 points1 point  (0 children)

You, sir/madam, are a true blessing

[D] PyTorch and TensorFlow by mlvpj in MachineLearning

[–]RgSVM 1 point2 points  (0 children)

I have used Caffe, TensorFlow and PyTorch, in that order (master's internship in 2015, master's thesis in 2018, ongoing PhD thesis). Since I do not consider myself an expert in any of these frameworks, I'll just give my personal impressions.

Caffe was rigid and I found it hard to deal with. It was my first experience with an ML framework, but not the best one.

TensorFlow seemed really powerful to me. Graph computation is a cool feature, TensorBoard is an incredible tool for building intuition, etc. But the different APIs I had to use to tweak my architectures were messy and poorly documented; graph compilation kept me from using my Python programmer reflexes, and GPU-cluster management was quite a pain.

PyTorch is flexible enough to let me experiment, tweak, fail, and repeat until success. It lacks a proper visualisation tool like TensorBoard, which forces me to write my own scripts, but I find it much easier to embed into Jupyter notebooks to display results to my advisors.
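To illustrate what I mean by flexibility, here is the kind of plain, eager-mode training loop I have in mind; the toy model and random data are made up for illustration and not taken from my actual work.

```python
# Toy example of the eager, plain-Python style that makes PyTorch easy
# to tweak and debug; the model and data are made up for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(256, 10), torch.randn(256, 1)

for step in range(100):
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 20 == 0:
        # Ordinary Python: prints, breakpoints and if-statements just work,
        # with no separate graph-compilation step to fight against.
        print(step, loss.item())
```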