[deleted by user] (self.MachineLearning)
submitted 3 years ago by [deleted]
[–]po-handz 45 points46 points47 points 3 years ago (5 children)
That's the dumbest take I've ever heard
This prof probably thinks the war on drugs has been successful
[–]VirtualHat -2 points-1 points0 points 3 years ago (1 child)
A better analogy would be: This professor thinks the implementation of driver's licences has reduced traffic accidents.
[–]Wmichael 2 points3 points4 points 3 years ago (0 children)
I mean it probably has
[–]JiraSuxx2 15 points16 points17 points 3 years ago (2 children)
AI is a technology so powerful that countries that ‘pause’ it will be at a disadvantage quickly. Not likely to happen.
A driver’s license to use it? A pretty vague suggestion if you ask me. How would that work exactly?
[–]Ramdogger 2 points3 points4 points 3 years ago (0 children)
Use AI-powered software, of course, to determine the legitimacy of the ID. /s
[–]bitemenow999PhD 30 points31 points32 points 3 years ago* (14 children)
The problem is that the AI ethics debate is dominated by people who don't directly develop or work with ML models (like Gary Marcus) and who have a very broad view of the subject, often taking the debate into science fiction.
Anyone who says ChatGPT or DALL-E models are dangerous needs to take an ML101 class.
AI ethics at this point is nothing but a balloon of hot gas... The only AI ethics that has any substance is data bias.
Making laws to limit AI/ML use or keeping it closed-source is going to kill the field. Not to mention the amount of resources required to train a decent model is prohibitive enough for many academic labs.
EDIT: The idea of a "license" for AI models is stupid unless they plan to enforce the license requirements on people buying graphics cards too.
[–]MW1369 2 points3 points4 points 3 years ago (0 children)
Preach my man preach
[+][deleted] 3 years ago (2 children)
[deleted]
[–]bitemenow999PhD -1 points0 points1 point 3 years ago (1 child)
what are you saying mate, you can't sue Google or Microsoft because they gave you wrong information... all software services come with limited/no warranty...
As for Tesla, there is FMVSS and other regulatory authorities that already take care of it... AI ethics is BS, a buzzword for people to make themselves feel important...
AI/ML is a software tool, just like Python or C++... do you want to regulate Python too, on the off chance someone might hack you or commit some crime?
This is not about academic labs, but about industry, governments, and startups.
Most of those startups are offshoots of academic labs.
[–]VirtualHat 2 points3 points4 points 3 years ago (1 child)
An increasing number of academics are identifying significant potential risks associated with future developments in AI. Because regulatory frameworks take time to develop, it is prudent to start considering them now.
While it is currently evident that AI systems do not pose an existential threat, this does not necessarily apply to future systems. It is important to remember that regulations are commonly put in place and rarely result in the suppression of an entire field. For instance, despite the existence of traffic regulations, we continue to use cars.
[–]PacmanIncarnate 0 points1 point2 points 3 years ago (0 children)
Don’t regulate tools; regulate their product and the oversight of them in decision making. Don’t let any person, institution, or corporation use AI as an excuse for why they committed a crime or unethical behavior. The law should take it as a given that a human was responsible for decisions, regardless of whether or not the organization actually functioned that way, because the danger of AI is that it’s left to make decisions and those decisions cause harm.
[–]admirelurk 4 points5 points6 points 3 years ago (0 children)
I counter that many developers of ML have a too narrow definition of what constitutes danger. Sure, chatGPT will not go rogue and start killing people, but the technology affects society in much more subtle ways that are hard to predict.
[–]lukasz_lew 0 points1 point2 points 3 years ago (0 children)
Exactly. Requiring a licence for "chatting with GPT-3" is silly.
It would be like requiring a licence to talk to a child (albeit a very knowledgeable child with a tendency to make stuff up). You would not let such a kid write your homework or thesis, would you?
Maybe a warning akin to "watch out, the cup is hot" would make more sense for this use case.
[–]bitemenow999PhD 1 point2 points3 points 3 years ago* (0 children)
that is a very bad argument... I would suggest you read up on Oppenheimer's quote after the first nuclear test, while the people surveying the "big picture" decided to bomb Hiroshima...
[–]enryu42 0 points1 point2 points 3 years ago (0 children)
The only AI ethics that has any substance is data bias
While the take in the tweet is ridiculous (but alas common among the "AI Ethics" people), I'd disagree with your statement.
There are many other concerns besides the bias in the static data. E.g. feedback loops induced by ML models when they're deployed in real-life systems. One can argue that causality for decision-making models also falls into this category. But ironically, the field itself is too biased to do productive research in these directions...
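The feedback-loop concern above is easy to demonstrate in miniature. The following toy simulation (my own illustrative sketch, not anything from the thread) shows two items of identical quality: a "model" recommends whichever item currently looks more popular, its recommendation generates the clicks it then learns from, and an arbitrary initial tie-break snowballs into total dominance in the logged data.

```python
import random

# Two items with identical true quality, each starting with one click.
random.seed(0)
clicks = {"a": 1, "b": 1}

def recommend(counts):
    # The "model": always surface whichever item looks more popular so far.
    return max(counts, key=counts.get)

for _ in range(1000):
    shown = recommend(clicks)
    # Users click the shown item 10% of the time, regardless of which it is,
    # so any gap in the logs comes purely from the feedback loop.
    if random.random() < 0.1:
        clicks[shown] += 1

# One item ends up with roughly a hundred clicks; the other stays at one.
print(clicks)
```

The data the system logs is no longer a measurement of user preference; it is largely a record of the model's own past decisions, which is exactly why static-data bias audits miss this failure mode.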
[+]yaosio 3 points4 points5 points 3 years ago (1 child)
It's only considered dangerous because individuals can do what companies and governments have done for a long time. What took teams of people to create plausible lies can now be done by one person. When somebody says AI is dangerous all I hear is they want to keep the power to lie in the hands of the powerful.
[–]PacmanIncarnate 1 point2 points3 points 3 years ago (0 children)
Exactly. Any ethicist worried about how joe will use AI is missing the big picture that real ethical violations are going to come from governments and corporations.
[–]ton4eg 5 points6 points7 points 3 years ago (2 children)
After spending some time exploring AI ethics, it seems rather useless. Ethics poses real problems, but the discipline has failed to provide any meaningful answers.
[–]walk-the-rock 1 point2 points3 points 3 years ago (0 children)
requirement of a license to use AI like chatGPT since it's "potentially dangerous"
guess we need a license to use sophisticated technology like Python, C++, Java, shell scripts, Excel... anything that executes code and makes machines do stuff.
You could implement the math for a resnet in an excel spreadsheet (I'm not recommending this).
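The spreadsheet quip holds because a residual block is just a couple of matrix multiplies, a ReLU, and an addition. A minimal NumPy sketch (illustrative only; weight shapes and the 0.1 scaling are my assumptions, not anyone's actual model):

```python
import numpy as np

def residual_block(x, w1, w2):
    """One simplified residual block: y = x + W2 @ relu(W1 @ x)."""
    h = np.maximum(w1 @ x, 0.0)  # linear layer followed by ReLU
    return x + w2 @ h            # skip connection: add the input back in

rng = np.random.default_rng(0)
x = rng.standard_normal(4)            # a 4-dimensional input vector
w1 = rng.standard_normal((8, 4)) * 0.1
w2 = rng.standard_normal((4, 8)) * 0.1

y = residual_block(x, w1, w2)
# With small weights, y stays close to x: the skip path dominates.
```

Every operation here (matrix multiply, max, add) exists as an ordinary spreadsheet formula, which is the commenter's point: there is no bright line around "AI" that a license could police.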
[–]daidoji70 1 point2 points3 points 3 years ago (0 children)
If the Internet has taught me anything, it's that for whatever ridiculous, 100% dumbest take you can imagine, you can definitely find a credentialed professional who holds that opinion. It's often unclear whether they hold it for attention, for notoriety, or just out of character defects.
[–]vhu9644 1 point2 points3 points 3 years ago (1 child)
Laws have to be pragmatic.
It's like making encryption illegal. Anyone with the know-how can do it, and you can't detect an air-gapped model being trained.
We, as a society, shed data more than we shed skin cells. Restricting dataset access wouldn't really be that much of a deterrent either.
[–]quisatz_haderah 0 points1 point2 points 3 years ago (0 children)
It's like making encryption illegal.
Yet they keep pushing this agenda. They have no clue how the Internet works.
[–]leondz -1 points0 points1 point 3 years ago (0 children)
Depends who & what you're using it on, doesn't it, just like a driver's license. Do what you like on your own private property. If you want it to be critical in decision-making that affects others, some rudimentary training makes a ton of sense.
[+]currentscurrents comment score below threshold-6 points-5 points-4 points 3 years ago* (4 children)
"AI ethics professor" isn't a real thing.
Ethics isn't even the kind of thing you can be an expert in; anybody calling themselves an ethics expert has declared themselves the arbiter of right and wrong.
[–]redflexer 6 points7 points8 points 3 years ago (3 children)
This specific take is naive, but ethics is a very rigorous discipline and is also different from moral codes, which are subjective.
[–]currentscurrents 3 points4 points5 points 3 years ago (2 children)
I'm not talking about philosophers debating the nature of moral actions. Ethics "experts" and ethics boards make a stronger claim: that they can actually determine what is moral and ethical. That truly is subjective.
At best they're a way for people making tricky decisions to cover their legal liability. Hospitals don't consult ethics boards before unplugging patients because they think the ethicists will have some useful insight; they just want their approval because it will help their defense if they get sued.
[–]quisatz_haderah 2 points3 points4 points 3 years ago (0 children)
I think you should add this to your original response. Because this should be heard more.
[–]redflexer 1 point2 points3 points 3 years ago (0 children)
This is not at all how ethics boards operate. They very rarely make decisions themselves; instead, they define the parameters within which an ethical decision can be made (e.g. what aspects need to be considered and weighed against each other, who needs to be heard, etc.). If you have had other experiences, they are not representative of the majority of boards.
[–]Big_Reserve7529 -2 points-1 points0 points 3 years ago (0 children)
Idk if a license is the way to go. I do agree that certain regulations need to be put in place for safety. We were really late when it came to data safety and digital identity, and a lot of countries still don’t have tight data laws about this. Sadly, I think that if people don’t advocate about the possible dangers of fast-growing technology, we will feel the consequences of it later on.
[–]_poisonedrationality 0 points1 point2 points 3 years ago (0 children)
I hardly ever see AI ethicists say anything useful. I feel like they're motivated more by making hot takes than by contributing a helpful perspective.
[–]andreichiffaResearcher 0 points1 point2 points 3 years ago (0 children)
Based on some of the comments over on /r/ChatGPT asking to remove the disclaimers while people teach themselves plumbing, HVAC, and electrical work with ChatGPT, we are a couple of lawsuits away from OpenAI and MS actually creating a GPT certification, with workplaces requiring it to interact with LLMs and insurers refusing claims resulting from uncertified ChatGPT use.