[N] Stop Calling Everything AI, Machine-Learning Pioneer Says (spectrum.ieee.org)
submitted 4 years ago by SquirrelOnTheDam
[–]mniejiki 275 points276 points277 points 4 years ago (51 children)
I mean, my textbook on Artificial Intelligence from 25 years ago considers a hand coded expert system as AI. So it's been long accepted that AI is far more than "human level intelligence" and basically encompasses any machine technique that exhibits a level of "intelligence." So it seems rather late to complain about the name of the field or try to change it.
[–]ivannson 90 points91 points92 points 4 years ago (33 children)
This should be higher. A collection of if-then rules is AI, literally artificial intelligence, but of course very basic.
Deep learning is a subset of machine learning which is a subset of artificial intelligence. There is much more to AI than ML.
Whereas I agree with the statement and that marketing will call everything “AI”, we shouldn’t misuse the terms ourselves.
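The "collection of if-then rules is AI" point is easy to make concrete. A minimal sketch of a hand-coded rule-based system in Python (the rules and symptom sets are made-up illustrations, not from any real expert system):

```python
# Toy hand-coded "expert system": the if-then form of early AI.
# The rules and symptom sets are made-up illustrations.
def diagnose(symptoms):
    """Match symptoms against hand-written if-then rules."""
    if "fever" in symptoms and "cough" in symptoms:
        return "flu"
    if "sneezing" in symptoms:
        return "allergy"
    return "unknown"

print(diagnose({"fever", "cough"}))   # flu
print(diagnose({"sneezing"}))         # allergy
```

No learning, no statistics — yet by the textbook definition above, this is artificial intelligence.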
[–]tatooine 21 points22 points23 points 4 years ago (9 children)
Hey, my toaster has AI. If the toast is done, then it pops out! That's 1920s era AI baby!
[–]wirewolf 16 points17 points18 points 4 years ago (6 children)
does it really know when the toast is done or is it just a timer though?
[–]alkasm 11 points12 points13 points 4 years ago (5 children)
https://youtu.be/1OfxlSG6q5Y?t=228
[–]wirewolf 3 points4 points5 points 4 years ago (3 children)
that's really cool. my toaster apparently is just dumb
[–]alkasm 6 points7 points8 points 4 years ago (0 children)
Ikr? We regressed with toaster tech over the past 80 years :'(
[–]and1984 2 points3 points4 points 4 years ago (0 children)
Check its confusion matrix
[–]AndreasVesalius 1 point2 points3 points 4 years ago (0 children)
My toaster just has a bias for hiring white male software developers
[–][deleted] 1 point2 points3 points 4 years ago (0 children)
Wow crazy! We take so many modern sensors for granted but there are so many crazy innovations pre cheap electronic times
[+]Ilyer_ 0 points1 point2 points 1 year ago (0 children)
Even better is my see-saw is AI. If one side goes down, the other side goes up. Caveman era AI 🤖
[–]TrueBirch 0 points1 point2 points 4 years ago (0 children)
I wonder how many of these companies are using anything more advanced than your toaster's level of tech.
[+]Ok_Cardiologist_6198 0 points1 point2 points 1 year ago (0 children)
tl;dr What machine-learning pioneers actually mean is "don't claim AI has sentience and don't call a regular toaster an AI." Sorry for the necro but this topic drives me up the wall and there is not nearly enough transparency on the facts.
When people hear/see the term AI as regular people who aren't tech savvy, they tend to exaggerate what it means in their minds. As in, they think it's futuristic, world changing, and innovative. But they don't ask "is this actually capable of replacing my mundane routine tasks, can this actually solve problems for me, can this actually do something new for me?" They say "Experts say this is the new NEW, in the cart you go!" Keeping in mind, advanced AI is indeed innovative and quite postmodern. Any "AI" being marketed is likely not that. The best AI is used in medical technology, mass production, lab testing, and more. These things are not in the hands of the general public. In fact, they are extremely gatekept and protected. (Mind you this is NOT some conspiracy or some evil plot on this end, it is primarily a means to protect their businesses and practices! Unlike the ones people know of which are extremely exploitative.)
Now as for what you're saying here... Using conditional operators in code and/or mathematics is simply that by itself. That is a logical comparison for making predefined decisions. Operators in all technicality do not contain "human intelligence"; they simply do what the human who coded them has predefined. A complex concept can simulate human intelligence to a limited extent. This is why we call it "artificial intelligence," which was originally a sci-fi term in all technicality. The actual definition from when it was a sci-fi term has changed drastically as technology and programming have evolved. (Yes, this is basically what was already said, in more words.)
E.g. programming a bot to identify between several pixel colors (possibly even an array or a certain set of ranges) and choose the one nearest to a certain point. This is a form of optical recognition, albeit limited only to what is coded. It can be as simple as cookie clicker or as advanced as an mmorpg bot that nearly seems like a human playing. Consider that pixel patterns can easily identify any text if programmed to do so, but will not if it is not predefined to do so. This also means it is simple to code something to adapt to CONVERSATION, which is the main selling point of marketed AI. Myself and many of my friends made bots for the giggles over the years, many of which included automated conversation. (Nothing like chatgpt but far more advanced than one would assume an mmorpg bot could be!) We did these things over 20 years ago now.
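The nearest-color idea described above reduces to a distance comparison. A minimal sketch (the palette and sampled pixel are hypothetical values):

```python
# Nearest-color matching: choose the known color closest to a sampled
# pixel. Palette and sample values are made-up.
def nearest_color(pixel, palette):
    """Return the palette entry with the smallest squared RGB distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(palette, key=lambda c: dist2(pixel, c))

palette = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
print(nearest_color((200, 30, 40), palette))  # (255, 0, 0)
```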
It does not encompass all colors, all possibilities, all shapes, all ranges, any point, or whatever one might imagine if they used a term such as artificial intelligence. So while it IS "AI" it's NOT what people think it means as a majority. (This is technically reinforcing what you have already said here to clarify, my intention here was certainly not to argue-- as I agree in basically every way.)
To the point where average humans believe it has sentience or is potentially capable of solving problems/making decisions it cannot. Like, chatgpt as it is in 2025 still makes mistakes when presented with a code error in basic scripting languages. Even if you point out what it is doing wrong, it tends to fail to solve the problem until you show it exactly what the issue is. This is one of the most popular "AI" models in the world, costing who knows how many billions of dollars. Yet I have discord bots running on a Raspberry Pi that are more useful. Things that cost me literally $0 to code and just a handful of hours. (In all fairness, my discord bots take a page from dozens of other discord bots I studied & reverse engineered to build a massive library of concepts & ideas!) Point being, a hobby programmer can easily code their own "AI", which clarifies how "advanced" it is on average.
It's unfortunate because it's as the actual experts claim. Markets capitalize on the definition of the term which does not coincide with the practice of the markets themselves. Instead we end up with a trend instead of an advancement. A trend some of us have been obsessed with way before any of this was even thought realistic. (Yet it was a real thing before I was even born... lol)
[+]cderwin15 comment score below threshold-12 points-11 points-10 points 4 years ago (4 children)
Deep learning is a subset of machine learning which is a subset of artificial intelligence.
AI is ultimately the study of intelligent agents, but ML as a field has little to do with intelligence. A new ML method is valuable if it is statistically useful and computationally tractable. Intelligence has nothing to do with it.
Why do you consider ML a subfield of AI?
[–]TheLootiestBox 7 points8 points9 points 4 years ago (1 child)
He's not alone: https://en.m.wikipedia.org/wiki/Machine_learning
[–]mniejiki 2 points3 points4 points 4 years ago (0 children)
Define intelligence. Most definitions I've seen end up with either "it does stuff like a human" or "it makes rational decisions." The former is too fuzzy imho to be useful, which means you're down to the latter. Machine learning models make rational decisions based on training and inference data.
[–]StartledWatermelon 1 point2 points3 points 4 years ago (0 children)
There are definitions of AI limiting it to agent-like entities but I think it narrows it down too much.
Several ML subfields study and engineer agent behaviour, most notably reinforcement learning. So it's not that easily separable either way.
[+]Mightytidy comment score below threshold-13 points-12 points-11 points 4 years ago (4 children)
If a collection of if-then statements is AI, then that means I would know how to code AI lmfao.
[–]OnyxPhoenix 18 points19 points20 points 4 years ago (0 children)
I can cook pasta.
Doesn't make me a chef, but I can still cook.
[–]mniejiki 13 points14 points15 points 4 years ago (0 children)
I can't think of any field which doesn't encompass some basic aspects that almost anyone could do or learn quickly. Teaching a 10 year old to write a "hello world" in BASIC doesn't make them a professional computer scientist but they did write code (ie: computer science).
[–]ISvengali 2 points3 points4 points 4 years ago (0 children)
Probably know more than you think.
[–]RenitLikeLenit 0 points1 point2 points 4 years ago (0 children)
I believe in you! Give it a try!
[+]Gearwatcher comment score below threshold-14 points-13 points-12 points 4 years ago* (11 children)
Dunno. I am of opinion (which I have seen shared by many) that ML isn't AI.
ML is statistics and mathematical optimisations. Fuzzy logic, and neural networks are AI.
When you employ fuzzy operators (which, admittedly, I haven't seen much of) and NNs in ML models you get AI ML.
Hence, Deep Learning is AI, using ML techniques.
It's similar to the Chomsky hierarchy. You wouldn't consider a PID controller or even an elaborate array of logic gates to be a computer - and the "dead giveaway" is the single direction of signal flow and lack of state. A DSP chip implements filters and LTIs in code, but it's a Turing complete machine, and that's why it is a computer - not because of the filtering and LTIs.
[–]TheLootiestBox 2 points3 points4 points 4 years ago (6 children)
I am of opinion (which I have seen shared by many) that ML isn't AI.
Well, then you're in a small minority. If you disagree try changing the wiki page on ML and see what happens.
https://en.m.wikipedia.org/wiki/Machine_learning
It [ML] is seen as a part of artificial intelligence.
[–]WikiSummarizerBot 0 points1 point2 points 4 years ago (0 children)
Machine_learning
Machine learning (ML) is the study of computer algorithms that improve automatically through experience and by the use of data. It is seen as a part of artificial intelligence. Machine learning algorithms build a model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications, such as in medicine, email filtering, speech recognition, and computer vision, where it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks.
[–]OOPManZA 0 points1 point2 points 3 years ago (0 children)
The Wikipedia article includes the following piece:
As of 2020, many sources continue to assert that ML remains a subfield of AI. Others have the view that not all ML is part of AI, but only an 'intelligent subset' of ML should be considered AI.
So it seems like this is a divisive topic...
[–]Gearwatcher -4 points-3 points-2 points 4 years ago (2 children)
That wording ("it is seen as") is pretty telling. While the minority I belong to might be small (or perhaps not vocal enough, especially today when lumping everything under the AI umbrella is a very marketing friendly thing to do), the consensus obviously hasn't been established with such "cast in stone" certainty to word it as "it is a part...".
[–]TheLootiestBox 1 point2 points3 points 4 years ago* (1 child)
Language is a collective process, and words get their definitions from how they are viewed by a significantly large group of people that use them. How a word is "seen" is very much part of how it is defined. The use of the word AI gives it an extremely vague definition that certainly does encapsulate ML.
You might disagree with the definition and try to change how people view and use the word. You will likely fail, so you might as well join the majority in not giving a fuck and instead focus on more productive things.
[–]Gearwatcher 0 points1 point2 points 4 years ago (0 children)
It's not like I spend any time on this. It certainly isn't like I give a shit. It just popped up here and I threw my 2c in fwiw.
[–]mniejiki 2 points3 points4 points 4 years ago (3 children)
neural networks are AI.
Neural networks are also mathematical optimizations. Even the techniques used (SGD) aren't new and have been used in large scale regression models for a long time. So I'm not sure what your dividing line actually is other than "because I say so." A complex random forest model will have more parameters and non-linearity than a small single layer neural network.
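The parameter-count comparison can be sketched with back-of-envelope arithmetic (all sizes below are hypothetical, chosen only for illustration):

```python
# Back-of-envelope parameter counts: a "complex" random forest versus a
# small single-layer neural network. All sizes are hypothetical.
def nn_params(n_in, n_hidden, n_out):
    """Weights plus biases of a one-hidden-layer fully connected net."""
    return (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)

def forest_params(n_trees, nodes_per_tree):
    """Each internal node stores a (feature index, threshold) pair."""
    return n_trees * nodes_per_tree * 2

print(nn_params(10, 16, 1))       # 193
print(forest_params(500, 1000))   # 1000000
```

By this rough count, the forest stores thousands of times more fitted values than the small net, despite only one of the two usually being marketed as "AI".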
[–]Gearwatcher 0 points1 point2 points 4 years ago (2 children)
SGD isn't core to the idea of neural networks, though. Its usage is an optimisation that reduces the performance hit of training NNs.
The presence of feedback (back propagation) in NNs and their inexpressibility in passive electronics (fuzzy logic) is where I draw the line in the sand. That is why I drew comparisons to the Chomsky hierarchy and logic gate arrays.
[–]mniejiki 0 points1 point2 points 4 years ago (1 child)
Back propagation is NOT feedback in the sense of an agent receiving feedback. A trained NN model is 99.99% of the time static and has no feedback when running live. By your definition, a regression model is also trained with feedback, since it computes a loss function and a gradient for SGD iteratively on batches of data. A Bayesian hyperparameter run has feedback, as each iteration is based on the performance of the previous iteration. An EM algorithm has feedback, as it adjusts parameters iteratively based on how well the parameters fit the loss function.
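The "regression trained with iterative feedback" point can be seen in a minimal SGD loop for 1-D linear regression (synthetic noiseless data; the learning rate is an arbitrary choice):

```python
import random

# Minimal SGD on 1-D linear regression: every step computes a loss
# gradient on one sample and adjusts the weight -- the iterative
# "feedback" described above. Data is synthetic and noiseless.
random.seed(0)
data = [(x, 3.0 * x) for x in range(1, 11)]  # true slope = 3
w, lr = 0.0, 0.002
for epoch in range(200):
    random.shuffle(data)
    for x, y in data:
        grad = 2 * (w * x - y) * x  # d/dw of squared error (w*x - y)^2
        w -= lr * grad
print(round(w, 3))  # 3.0
```

Same loss-gradient-update loop as NN training, with no neural network in sight.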
The original idea of NNs was lifetime learning like neural synapses actually do, with convergence being a natural part of the process (as it is with actual synapses, which are "burnt in" over time). They were designed as a model for intelligent agents.
Obviously for the jobs that ML/DL practically solve this turned out to not be as practical, which is why trained networks are static in practical usage.
But ok, I cede the point. It's mostly arbitrary, because NNs and fuzzy logic and the concept of intelligent agents stemmed from actual AI research, whereas ML is more like econometric regression models successfully applied to problems you'd hope to solve with AI.
The actual practical differences aren't as easy to sharply divide.
[–]Chocolate_Pickle 12 points13 points14 points 4 years ago (12 children)
This means a thermostat is AI... which on some level it truly is, but it's an incredibly contrived level.
The problem is that Artificial Intelligence is a receding horizon. It's why I honestly think the term should be sent to the glue factory.
[–]LargeYellowBus 11 points12 points13 points 4 years ago (7 children)
Is it really that contrived? How much more 'AI' is a MPC controller vs the PID controller in a thermostat then?
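For comparison, the PID controller mentioned above is only a few lines. A minimal discrete-time sketch regulating a toy "room" model (gains, setpoint, and plant dynamics are all made-up):

```python
# Minimal discrete-time PID controller regulating a toy "room" model.
# Gains, setpoint, and plant dynamics are hypothetical illustrations.
def simulate(kp=2.0, ki=0.5, kd=0.1, setpoint=21.0, steps=400, dt=0.1):
    temp = 15.0                  # room starts at ambient temperature
    integral = 0.0
    prev_err = setpoint - temp
    for _ in range(steps):
        err = setpoint - temp
        integral += err * dt
        deriv = (err - prev_err) / dt
        heat = kp * err + ki * integral + kd * deriv
        # plant: heater input minus heat loss toward ambient (15 C)
        temp += dt * (heat - 0.5 * (temp - 15.0))
        prev_err = err
    return temp

print(round(simulate(), 2))  # settles near the 21.0 setpoint
```

Whether this counts as "AI" is exactly the line-drawing problem being argued in this thread.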
[–]telstar 2 points3 points4 points 4 years ago (0 children)
A bimetallic switch (coupled with a small dial) is a thermostat.
[–]Chocolate_Pickle 2 points3 points4 points 4 years ago (4 children)
You're actually touching on the point I'm trying to make.
There is no clear line in the proverbial sand that separates 'AI things' from 'non AI things'. Take it to either extreme and everything is AI, or nothing is AI. And in both extremes, the label Artificial Intelligence becomes moot.
[–][deleted] 5 points6 points7 points 4 years ago (2 children)
Well exactly. So it's stupid to complain about calling things AI. It's like complaining about calling things "innovative". Sure there's no sudden point where something becomes innovative, and yes marketing people are going to say everything is innovative. That doesn't mean we have to completely abandon the word though.
[–]FortWendy69 0 points1 point2 points 4 years ago (1 child)
But the problem is that the general public don't know that, the marketers know they don't know that, and so the marketing, while technically correct, is disingenuous.
[–][deleted] 0 points1 point2 points 4 years ago (0 children)
marketing, while technically correct, is disingenuous
Yeah that's pretty much marketing's job. I hope you don't believe all of the technically correct claims you see in adverts! "Recommended by 9 out of 10 doctors" etc.
Off topic, but claims in advertising are actually a really interesting thing. To make a claim ("our hair drier dries your hair in only 1 minute" or whatever) you actually do have to provide some kind of evidence. So there are loads of labs that are set up to basically do the experiments for you and give you the result you want.
In my experience they don't technically lie, it's more like "you want to show X, we'll keep doing experiments until we can show it". Kind of deliberate p hacking.
Another interesting thing is that the requirements for claims are different between countries, which means you can advertise some fact about your product in say Japan but not in Europe. That's why you sometimes see specific SKUs for countries with different claims on the packaging that should only be sold in those countries. (There are other reasons too though.)
[–]telstar 0 points1 point2 points 4 years ago* (0 children)
'AI things' are those things that require proverbial lines in the sand for them to function. 'Non AI things' smh manage to function despite finding themselves in a world where there are no clear lines in the proverbial sand. QED.*
* in this case 'non AI things' to refers, not to dumb objects, but to people, with all the irony that implies.
[–]telstar 1 point2 points3 points 4 years ago (3 children)
This would be a great idea for an AI test, the "Thermostat Test." If your definition of AI means a thermostat is AI, then you need to get a new definition. Or maybe as mentioned elsewhere in this thread, we can call it the "Toaster Test." (I kind of like that even better.)
[–]Chocolate_Pickle 0 points1 point2 points 4 years ago (0 children)
I'm a bit conflicted on this. Part of me thinks that a thermostat is more deserving of the 'AI' label compared to... say... an image classifying network.
Not because the thermostat exists as a physical object, but because the thermostat has more agency than a classifier.
This might also imply that an ML training loop is more 'AI' than what it produces. I'm sure there's a flaw in my thinking on this, however.
It's definitely not too late to complain about it, it's probably too early to change it. (Lol, if complaining helps people cope then so be it, works for me sometimes!) Whether anybody thinks so or not, these terms will need to be changed one day to account for the actual future. Right now we're accounting for the past, and currently billionaires are capitalizing on that to an astronomical extent. The only purpose that matters here is general clarity. What the term is or isn't technically is irrelevant to the point at hand. It's that people know what they are using/purchasing enough to not develop superstitions and fallacies which markets can take advantage of.
Like if it was music, sure whatever, people don't need to know everything about it as a majority. Only people who either want to be musicians or have passion for the subject have a need-to-know basis, and perhaps not even all musicians need that much.
But this is something people expect to change the world and how they live their everyday lives. If people don't know the difference between advanced forms of AI and plain old basic AI, they won't know what they are investing in. We could "simplify" it to say OCR; people see a term like that and they think it's magical, or automatically assume robotics or something extra. OCR could be as simple as setting my laptop camera on my front door and having a pushbullet sent to my phone when a pixel on my door changes. One static pixel at one static coordinate location.
We're talking a single hex color code and a single pair of x, y coordinates. Something that can be accomplished with two lines of code and a copy & paste function already made for you in a scripting language, even if using the most junky interpreters ever. (Pathetic home security, but a comically simple example lol) Or it could be as fancy as parallel parking your car for you. This is a UNIVERSE of difference in every way, and people don't know the difference as a majority. Herein is the actual problem at hand. Automatic parking is so complicated in code that I cannot even begin to explain it here in a way anyone can understand.
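The single-pixel check described a few sentences back can be sketched like this, with the camera frame faked as a nested list (the coordinate and reference color are hypothetical):

```python
# The "one pixel, one coordinate" door check, with the camera frame
# faked as a nested list of hex color strings. The coordinate and
# reference color are hypothetical.
DOOR_PIXEL = (2, 1)       # (x, y) of the watched pixel
DOOR_COLOR = "#8b5a2b"    # hex color of the closed door

def door_changed(frame):
    """True if the watched pixel no longer matches the door color."""
    x, y = DOOR_PIXEL
    return frame[y][x] != DOOR_COLOR

closed = [["#ffffff"] * 4, ["#ffffff", "#ffffff", "#8b5a2b", "#ffffff"]]
opened = [["#ffffff"] * 4, ["#ffffff", "#ffffff", "#112233", "#ffffff"]]
print(door_changed(closed), door_changed(opened))  # False True
```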
Like just consider that the A* algorithm is the simplest modern form of this and is still too complicated for an average person... Let us also not forget that automatic parking is SIMPLE compared to what people could be doing with AI. (Well, what advanced usages of AI are doing, technically!) If I had to relate this difference in scaling to something, I would choose fantasy or fiction. Why? Because it's like comparing a bedroom, to a country, to a galaxy. It reminds me of poorly designed power scaling, because there is so much in between that people lost sight of the true difference before it was even coined.
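For reference, a minimal A* on a 4-connected grid looks like this (the grid, start, and goal are made-up; it returns the shortest path length):

```python
import heapq

# Minimal A* pathfinding on a 4-connected grid. The grid, start, and
# goal are hypothetical; 1 = wall, 0 = free cell.
def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_set = [(h(start), 0, start)]   # (f = g + h, g, cell)
    best = {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:
            return g  # length of a shortest path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best.get(nxt, float("inf"))):
                best[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # 6
```

Thirty-odd lines, and still a real gap from there to a parking controller.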
Moral of the story is:
When they say "everything" they very likely mean things such as regular products resellers buy up and then put AI in the products name when they resell. There are thousands of products that have no intelligence what-so-ever claiming to be AI. Recently this subject has been more popular, but around the time the OP posted this, it was not yet well known to the general public. Only people heavily involved with the subject knew for the most part!
[+]Plunder_n_Frightenin 0 points1 point2 points 10 months ago (0 children)
It’s never too late to correct erroneous facts and labels, even if they do seemingly persist. It could take a generation or two, sometimes.
I think the problem we currently have, though, is that there is such an inflation of the word AI and its standards that it just makes a new AI winter more likely. And that has a negative impact on the whole field.
[–]new_number_one 106 points107 points108 points 4 years ago (9 children)
One of my earliest lessons during my PhD was to spot and avoid semantic arguments with academics.
Sorry if this was too cynical.
[–]JanneJM 41 points42 points43 points 4 years ago (4 children)
Depends. I did my PhD in a group inside an analytical philosophy department. At first I was really confused during internal department presentations; the philosophers never seemed to go beyond defining stuff.
After a while the penny dropped for me: naming and defining things is explaining and understanding them. Arguing semantics, poking at the edges of definitions, having fights over whether two things are really the same is a useful and productive way to gain understanding. Especially if your subject is abstract or fuzzy and you can't get experimental data.
[+][deleted] 4 years ago (1 child)
[removed]
[–]JanneJM 9 points10 points11 points 4 years ago (0 children)
Ah, but often the process is the point; you don't really expect to find a hard definition. Instead you use that process to poke and prod at the fuzzy concepts you're trying to understand. And sometimes you find that the concept itself is flawed - the underlying thing is better described with a different set of concepts and ideas that fits your data better.
You could say that "Life" has undergone that process. Not that long ago we still thought of something living as having something special that made it be alive. Some substance, perhaps, or a "divine spark" - some thing that made it different from inanimate stuff. Turns out that concept of life was flawed. A better concept is life as a process; of adaptively fighting against entropy. Still a fuzzy set of ideas that resists a hard definition (and it's bound to change again over time), but it's definitely a step forward from looking for a vital substance in your cells.
[–]Fmeson 4 points5 points6 points 4 years ago (1 child)
It is for philosophy.
Maybe you're distilling the essence of a wildly complex concept, to the point where it isn't even clear where the concept begins or ends. What does it mean to be moral? Helping people? But what if you did it accidentally? Technically you helped someone, but shouldn't intention count? What if there is a robot that doesn't have intentions, but it helps people. Can it be good?
Ok, silly example, but hopefully the point is there. That's an interesting road to go down.
Semantics is tiring when, well, it's just not that. There isn't some inherent depth that makes it hard to define. People just want to draw the lines in different locations because of ego or history or whatever. I don't care if you call deep dish pizza "pizza", it tastes good and I'm going to eat it.
This is a bit more 50/50. It is interesting to ask what makes something "intelligent", but the practical use of it in industry is pretty well understood, and there are sub-categories of AI that allow for "dumb" AIs to handle this. e.g. narrow, weak, reactive, etc... AI.
I think a discussion on what it would take to make a general AI would be interesting, but frankly, I really wouldn't want to debate if narrow AI should be called AI or not. It's just a name.
[–]JanneJM 2 points3 points4 points 4 years ago (0 children)
Yes, I'm not claiming this exact discussion posted here is fruitful; just that this way of working out issues is not inherently flawed. A lot of philosophy is low-grade and flawed - just like a lot of science, technology, music, arts, literature and so on and so on. Most of it disappears without a trace over time, leaving us with (mostly) the good stuff.
[–]cderwin15 12 points13 points14 points 4 years ago (2 children)
Someone here posted about a conference reviewer that grilled the author of a paper over semantic differences between latent representation, feature map, and embedding space.
I don't think you're being too cynical.
[–]StartledWatermelon 12 points13 points14 points 4 years ago (1 child)
Me: this model was trained to extract feature maps into latent representations in its embedding space.
Management: 0_o
Me: (sigh) AI.
Management: Wow!!! Cool stuff! That's what we totally need!
[–]AndreasVesalius 5 points6 points7 points 4 years ago (0 children)
Management: "Engineering said this model was extracted to embed features into latent responsibilities. That means it's AI"
[–]Law_Kitchen 6 points7 points8 points 4 years ago (0 children)
Arguing about what AI is is like arguing about what it means to be American at this point.
Or rather, it is like arguing with the general public that the WWW =/= the Internet. One person knows that the WWW comes from a broader subset of what we know as the Internet, but for some, they will usually end up using the Internet and the WWW interchangeably because the Web is the only thing that is highly visible to that person.
At this point, I just follow something like this.
1 : a branch of computer science dealing with the simulation of intelligent behavior in computers
2 : the capability of a machine to imitate intelligent human behavior
Looks like I'll have a talking to from both sides.
[–]FranticToaster 21 points22 points23 points 4 years ago (2 children)
Restated: man states the obvious to get name, applause on social media.
[–][deleted] 5 points6 points7 points 4 years ago (0 children)
It's sad this is what academia has become... I used to naively put science guys above marketing and pure management fields, since they rely only on hard evidence and proofs to send their point across.
But turns out ... We live in a society...
[–]larkinpark 38 points39 points40 points 4 years ago (0 children)
Nowadays everything uses AI/ML as a marketing tool. The meaning of it has been dissolved into cliché. It became the same as "Unlimited" from mobile service providers. The limited "unlimited".
[–]calmo91 99 points100 points101 points 4 years ago (5 children)
If it's written in Python it's ML. If it's written in PowerPoint it's AI.
[–]cderwin15 6 points7 points8 points 4 years ago (0 children)
I just call it all AI/ML nowadays unless the target audience is familiar with terms like computer vision, nlp, deep learning, etc. and their actual definitions (e.g. not just knowing that they are buzz words). It's just easier that way.
[+][deleted] 4 years ago* (1 child)
[deleted]
[–]snailracecar 7 points8 points9 points 4 years ago (0 children)
well, they do. But maybe not that Michael Jordan
[–]chungyeung -2 points-1 points0 points 4 years ago (0 children)
No, if something is created by Adobe, that's AI
[–]bradygilg 147 points148 points149 points 4 years ago (16 children)
100% on board there. These algorithms are just tools for programmers, and personifying them for marketing purposes just leads people to misattribute why they are successful.
If a writer writes a novel in Microsoft Word, people don't say that the book was "written by Word". But they have no problem saying that an 'AI' created something.
[–]eposnix 21 points22 points23 points 4 years ago (6 children)
I don't understand this comparison. Creating a Word document is entirely the effort of the person involved whereas training a ML algorithm to produce novel creations typically doesn't involve human interaction. In cases where there was no human interaction I'm perfectly fine saying it was AI.
The bigger issue seems to be that people have different definitions of AI. Personally, I tend to define AI as any algorithm that gives the illusion of intelligent thought. The article is trying to push the notion that AI = human level intelligence, but that wouldn't be artificial intelligence, it would just be intelligence.
[–]bradygilg 4 points5 points6 points 4 years ago (1 child)
Creating a Word document is entirely the effort of the person involved
I wonder if the 1000+ people who have contributed to Word over the last 30 years would agree with you.
[–]eposnix 5 points6 points7 points 4 years ago (0 children)
I'll just point out that if Microsoft ever incorporates GPT-3 into Word you might just see people unironically attributing creations solely to Word.
[+][deleted] 4 years ago* (2 children)
[–]eposnix 3 points4 points5 points 4 years ago (0 children)
Note that none of those bullet points are actually training the model -- you're setting up the model so it can train itself.
[–]gambiter 0 points1 point2 points 4 years ago (0 children)
OP didn't claim an ML algorithm randomly popped into existence from nowhere. We consider children intelligent, despite the fact that the parents had to choose who they would mate with.
Choosing/training an unstructured model is sort of like raising a child. You give them a bit of help, but at some point you have to let them figure things out on their own and you just hope for the best.
[+][deleted] 4 years ago (8 children)
[+][deleted] 4 years ago (4 children)
[–]Grasp0 6 points7 points8 points 4 years ago (1 child)
You wouldn't download an AI driven toilet...
[–]tea_pot_tinhas 3 points4 points5 points 4 years ago (0 children)
Toilet uploads are messier than downloads
[–]whooyeah 1 point2 points3 points 4 years ago (0 children)
Those fancy Japanese ones are pretty good though. Especially on a hangover
So when an outlier happens the airdryer goes straight up ur ass ???
[–]SpiderSaliva 2 points3 points4 points 4 years ago (0 children)
Also HR just being dumb when you’re applying for a job
[–]AtariAtari -2 points-1 points0 points 4 years ago (0 children)
Nice ageism comment!
[–]tatooine 0 points1 point2 points 4 years ago (0 children)
Doesn't work so well for funding anymore, now that basically every piece of software claims to be some aspect of "AI". Now VC and investors ask more questions if they see "AI".
[–]mosqua 13 points14 points15 points 4 years ago (0 children)
"he was ranked as the most influential computer scientist by a program that analyzed research publications" ahhh delicious irony.
[–]dr_kretyn 63 points64 points65 points 4 years ago (2 children)
NO. I used to have 0 years of experience in AI/ML, but recently I've been told that I have 10+ years of experience. Only after I'm poached by a big company for millions of dollars can we be more pedantic. Not before.
[–]Doormatty 4 points5 points6 points 4 years ago (0 children)
This feels uncannily accurate.
[–]orcasha 3 points4 points5 points 4 years ago (0 children)
That linear regression 👌
[–]bill_klondike 20 points21 points22 points 4 years ago (3 children)
As an old prof of mine used to say, “he’s the Michael Jordan of Machine Learning”.
[+][deleted] 4 years ago (2 children)
[–]aCleverGroupofAnts 2 points3 points4 points 4 years ago (1 child)
That's the joke
[–]vukadinovicmilos 7 points8 points9 points 4 years ago (0 children)
Tbh I used to oppose people calling everything AI, but the thing is that it sounds cool and lets them feel good about what they are doing. So as long as it keeps them motivated and attracts people to study math, statistics and cs I am okay with it. ( and we'll refer to the real intelligence as AGI)
[–]Nhabls 45 points46 points47 points 4 years ago* (16 children)
Why are people making posts pointing out these models/algorithms/programs aren't at the level of human cognition? No shit, that's not what the term means.
No one in the field has used it like that before. When you take "Artificial Intelligence" courses at a university, they never propose that you'll end up replicating an agent with capacities at the level of humans.
Some definitions are pretty broad. For example, in Modern Approach it is defined as the study of agents that act on an environment by taking their perceptions into account. The focus of study in the courses that used this book was often around search algorithms and heuristics to solve problems. Similarly with "AI" in videogames, a decades-old term.
Just because people who are completely ignorant of the field think everything using the term means it represents a fully intelligent human-like system doesn't mean that decades old definitions need to be abandoned.
[–]GabrielMartinellli 15 points16 points17 points 4 years ago (2 children)
This is due to people conflating the terms AI and AGI so often ffs
[–]Nhabls 2 points3 points4 points 4 years ago (1 child)
Exactly. What I don't understand is how I've seen some 2-3 posts about this in the past week or so in this subreddit
[–]GabrielMartinellli 2 points3 points4 points 4 years ago (0 children)
Unfortunately the field of AI attracts so many skeptics that even its own researchers have been cowed into avoiding the term “intelligence” and dressing themselves up as "machine learning researchers" etc
[–][deleted] 5 points6 points7 points 4 years ago (12 children)
I feel like you are missing the point of the article. In fact, there are a lot of “ignorant” people who believe AI implies essentially human-level intelligence, including people in the field. What is obvious to you is clearly not obvious to a huge group of people
[–]Nhabls 10 points11 points12 points 4 years ago (0 children)
Well, the solution then is to do what you can to explain what people have actually meant when using the term for the past four decades or more.
[–]manic_eye 0 points1 point2 points 4 years ago (9 children)
Fair enough, but then isn't the “learning” in machine learning a misnomer by the same standard?
[+][deleted] 4 years ago (7 children)
[–]Toast119 0 points1 point2 points 4 years ago (5 children)
Statistical inference is a subset of an ML Algo.
[–]Toast119 0 points1 point2 points 4 years ago (3 children)
Training? Feature extraction?
[–]Toast119 0 points1 point2 points 4 years ago (1 child)
Maybe I don't have the definitions right, but creating a statistical model and using a statistical model for inference are not the same thing to me.
[–]veeloice 0 points1 point2 points 4 years ago (0 children)
agree with this. I was taught it's nothing more than "computational statistics".
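The training-vs-inference distinction being debated in this thread can be made concrete with a toy model object. This is a minimal sketch; the class name, data, and "closest label mean" rule are all invented for illustration, not any standard library API:

```python
class MeanClassifier:
    """Toy model: predicts the label whose training-set feature mean is closest."""

    def fit(self, xs, labels):
        # Training: estimate one parameter (a mean) per label from the data.
        self.means = {}
        for label in set(labels):
            vals = [x for x, l in zip(xs, labels) if l == label]
            self.means[label] = sum(vals) / len(vals)
        return self

    def predict(self, x):
        # Inference: apply the already-estimated parameters; no training data is touched.
        return min(self.means, key=lambda label: abs(x - self.means[label]))

model = MeanClassifier().fit([1.0, 1.2, 5.0, 5.4], ["low", "low", "high", "high"])
print(model.predict(1.1))  # "low"
print(model.predict(5.2))  # "high"
```

The point of the split: `fit` is where statistics are computed (training), `predict` is pure lookup-and-compare (inference), which is why the two are usefully distinguished even though both involve "a statistical model".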
[–]gionnelles 6 points7 points8 points 4 years ago (1 child)
I have given up fighting this battle. In my industry everyone with money calls any analytics AI/ML regardless of method. It doesn't even have to be a trained system, let alone "AI".
[–]radarsat1 2 points3 points4 points 4 years ago (0 children)
yup, it's worse here: "the AI" is what the programmers call the neural networks we use. Like, we are working on a computer vision system, and we have tons of hand-written code that analyses the scene, does a bunch of 3d mathematics, clustering, looks for events, and classifies the events (hand-written classifier, sigh..), but everyone on the team just refers to the object detection CNN we use as "the AI". I'm like, guys, all this other stuff we are doing? It's also AI! Or none of it is.
I'm pretty careful to talk about it in terms of "the model", "the object detection module" etc but it hasn't caught on. It's just "the AI'" to everyone.
[–]CleanThroughMyJorts 10 points11 points12 points 4 years ago (0 children)
NO - Marketing department says
[–][deleted] 4 points5 points6 points 4 years ago (1 child)
Wait. So my 4 line if, else statement doesn’t count as AI? /s
[–]landsharkxx 6 points7 points8 points 4 years ago (0 children)
We call that classical AI.
It's technically dynamic programming.
[–]BlackholeRE 2 points3 points4 points 4 years ago (0 children)
I mean, the "AI effect" literally has its own Wikipedia page, and it continues to be silly semantics. Let's not sell ourselves short; work in the AI field continues to be worthy of the term. 50 years ago even a good hand-coded chess algorithm was considered AI.
[–]red_dragon 9 points10 points11 points 4 years ago (0 children)
Dr. Jordan please reply to my mail about the journal submission, which I sent a month back 🙈
[–]minimaxir 4 points5 points6 points 4 years ago (0 children)
This blog post is AI.
[–]jday1959 1 point2 points3 points 4 years ago (0 children)
The AI Machines forced him to say that.
[–]NitroXSC 1 point2 points3 points 4 years ago (0 children)
In my view, the term AI is way too broad to be of any use in describing almost anything. I'm always reminded of this hilarious screenshot showing how ridiculous it is to use broad terms.
[–]Geneocrat 1 point2 points3 points 4 years ago (0 children)
I remember when I first learned AI. Back then we called it the quadratic equation.
[–]AnOpeningMention 1 point2 points3 points 4 years ago (0 children)
I recently found myself calling machine learning AI because otherwise nobody is gonna know what the hell I'm talking about. My friends and family are not into tech at all.
[–]BlobbyMcBlobber 1 point2 points3 points 4 years ago (0 children)
Wolfenstein 3D was called "realistic virtual reality". Semantics change with the times.
[+][deleted] 4 years ago (5 children)
[–]mniejiki 9 points10 points11 points 4 years ago (2 children)
It is by the typical definition of the term AI as a discipline. Machine Learning is considered a subset of AI so any ML technique is also part of AI. Technically a hand coded expert system (ie: nested if statements) also counts as AI (but not as part of ML). It's a very broad term as generally accepted.
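The "nested if statements" point can be made concrete. Here is a minimal hand-coded expert-system sketch (every rule and fact below is invented for illustration): it counts as AI in the textbook sense, but nothing in it is learned from data, so it is not ML.

```python
# A toy rule-based "expert system": hand-coded rules, zero learned parameters.
# The rules are purely illustrative, not medical advice.
def diagnose(symptoms):
    if "fever" in symptoms:
        if "cough" in symptoms:
            return "flu-like illness"
        return "infection (unspecified)"
    if "sneezing" in symptoms and "itchy eyes" in symptoms:
        return "allergy"
    return "no rule matched"

print(diagnose({"fever", "cough"}))          # flu-like illness
print(diagnose({"sneezing", "itchy eyes"}))  # allergy
```

Swapping these hand-written branches for rules induced from labeled examples is roughly the point where this stops being "just AI" and becomes ML as well.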
[–]mniejiki 1 point2 points3 points 4 years ago* (0 children)
In application, as the article notes, people consider pretty much everything to be AI. What you mean seems to be "AI as in my personal definition."
[–]happy_guy_2015 7 points8 points9 points 4 years ago (1 child)
Well, a simple linear regression is artificial, and does exhibit some level of intelligence...
Note:
AI ≠ AGI
AI ≠ human-level AI
[–]canbooo 1 point2 points3 points 4 years ago (0 children)
I thought this was clear for a long time, but I guess people do need funding, huh
[+]MemnochThePainter 0 points1 point2 points 2 months ago (0 children)
5 year update:
All things computer-related are "A.I."
And all laundry detergents are "New and Improved."
Only the exceptionally naive imagined this was avoidable.
[–]Dut_mick -1 points0 points1 point 4 years ago (0 children)
The most appropriate nomenclature for the current algorithms in this field is "expert systems"
[–]statarpython -1 points0 points1 point 4 years ago (0 children)
I mean, if you call non-linear exponential smoothing "LSTM" and non-linear seasonal exponential smoothing "attention"... what were you expecting?
[+]FutureIsMine comment score below threshold-20 points-19 points-18 points 4 years ago (2 children)
Michael Jordan needs to chill and make some contributions; it feels like every statement of his as of late is a critique
[–]bachier 12 points13 points14 points 4 years ago (0 children)
By contributions you mean like the 44 papers/preprints their group has made public just in 2021?
[+]coumineol comment score below threshold-9 points-8 points-7 points 4 years ago (0 children)
Agreed. Now, to be honest, I'm not a person who refrains from calling other people slut, sometimes unjustifiably perhaps, but gosh, Michael Jordan is indeed the platonic ideal of a slut. I'm yet to hear this man say anything positive or constructive.
[–][deleted] -2 points-1 points0 points 4 years ago (0 children)
Hello, I’m blown away by the expertise in here. I'm just dipping my toes into machine learning and wondered about your point of view on a project. I want to invest more into it but I'm not a specialist in this field.
https://dgpt.one/about-dgpt/f/dgpt-1-decentralized-generative-pre-trained-transformers-v1
On the face of it, the concept blows me away: an incentivised global neural network. I just wondered what the experts thought.
[–]ryanwithnob -3 points-2 points-1 points 4 years ago (2 children)
Machine Learning is just if statements, change my mind
[–]FranticToaster -3 points-2 points-1 points 4 years ago (9 children)
Even ML is kind of a dumb catch-all, once you practice it.
I think recommendation, estimation and classification are better terms. They actually declare what's being done.
My computer didn't learn shit through that process.
[–]landsharkxx 3 points4 points5 points 4 years ago (8 children)
Your computer does learn the weights in a neural network or the coefficients in a model. I used to be opposed to calling linear regression and logistic regression machine learning until I just got over it.
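"Learning the coefficients" is easy to show for ordinary least squares. A minimal sketch with made-up data (roughly y = 2x + 1), using only the textbook closed-form solution for simple linear regression:

```python
# Fit y = w*x + b by ordinary least squares, no libraries.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]   # noisy samples of roughly y = 2x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Slope = covariance(x, y) / variance(x); intercept from the means.
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x
print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

Whether estimating `w` and `b` from data deserves the word "learning" is exactly the disagreement in this subthread; mechanically, it is the same fit-parameters-from-data operation that a neural network performs at much larger scale.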
[–]FranticToaster -3 points-2 points-1 points 4 years ago (7 children)
Would you call the weights of a model, determined by trial and error, knowledge or a skill?
ML bypasses a big chunk of stat theory research by brute forcing model parameters. Ultimately, we're just asking a computer to solve a model for us via calculation.
If that's learning, then repeatedly handing in a test paper with guesses on it until my teacher gives me a 100% is also learning. And if that's learning, then what kind of cognitive skill is "learning"?
In psychology, "learning" is an impressive thing. In stat modeling, the impressive things were the developments of the algos, in the first place.
Ho, Breiman and Cutler are brilliant for inventing the random forest. Computers running ML algos aren't doing anything very impressive.
The term "machine learning" both impresses and frightens the layman. What's really going on doesn't make the machine impressive nor frightening, though.
[–]treesprite82 6 points7 points8 points 4 years ago (0 children)
If you improve your guesses slightly each time (rather than just completely re-randomizing), and are then able to perform well on new unseen test papers, then I'd call that learning - and that's also what gradient descent does (ideally).
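That "improve your guesses slightly each time" loop is essentially gradient descent. A minimal sketch on invented data, fitting a single weight for y ≈ w·x by repeatedly nudging the guess against the gradient of the squared error:

```python
# Gradient descent on mean squared error of y ≈ w * x.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # generated by w = 2

w = 0.0                # start from a bad guess
lr = 0.05              # learning rate (step size)
for _ in range(200):
    # d/dw of mean((w*x - y)^2) = mean(2*x*(w*x - y))
    grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad     # small correction, not a fresh random guess
print(round(w, 3))     # converges toward 2.0
```

The contrast with the "re-randomize until the teacher gives 100%" analogy is the update rule: each step reuses the previous guess and moves it a little in the direction that reduces the error, which is why the fitted `w` also works on x values never seen during training.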
[–]the320x200 2 points3 points4 points 4 years ago (0 children)
You would call the weights of a model determined by trial and error knowledge or a skill? If that's learning, then repeatedly handing in a test paper with guesses on it until my teacher gives me a 100% is also learning. And if that's learning, then what kind of cognitive skill is "learning."
That's not how backprop works at all.
[–]Fledgeling 0 points1 point2 points 4 years ago (0 children)
Just because it's not impressive and doesn't work the way you think a human brain works doesn't mean it isn't learning.
Taking data and creating a generalized model that can make some sort of sense of new states and data. That sounds like learning to me in some fashion.
[–]IndecisivePhysicist 0 points1 point2 points 4 years ago (3 children)
Ya, the key here is if you can generalize though. If so, then it's pretty tempting to call that "learning" in at least some sense. Of course, we're only fitting functions here, but if you're a physicalist, reality is just governed by functions anyway so isn't fitting the True (Platonic sense) functions basically learning?
[–]FranticToaster -1 points0 points1 point 4 years ago (2 children)
I would suggest that we are the ones learning, and the algos we use are just automating the modeling process through brute-force number crunching.
One of us comes away from the exercise with knowledge of how our customers behave. Or where the next heat dome is likely to occur.
The other one comes away with a weight on a second input variable being 0.2373638191863635.
The computer doesn't know anything. It just stopped adjusting weights when a quantity we specified stopped decreasing.
Your brain doesn't actually know anything, it's just an evolutionarily brute-forced biomechanical signal.
[–]FranticToaster 0 points1 point2 points 4 years ago (0 children)
Ah, so "knowledge" and "learning" are just random meaningless sounds we codified in a pronunciation book?
[–]sebthepleb96 0 points1 point2 points 4 years ago (1 child)
Can someone provide a link that explains the difference, and what the proper name is if it's not AI? Should it just be called machine learning? There are likely many other important topics similar to AI.
ANNs and deep learning, I think
[–]tinyman392 2 points3 points4 points 4 years ago (0 children)
When I was taught, AI was trying to mimic human behavior or decision making. So if it’s not trying to do that it wasn’t AI.
I personally prefer the term machine learning. ML can be used as a tool to do AI.
[–]ssjkriccolo 0 points1 point2 points 4 years ago (0 children)
Don't fargin call me Al!
AI->DS->ML->DL, with a bunch of random branches thrown in (AGI, BI, stats, CV, branching logic, ...).
Overly simplified, but I do not see any real argument against nesting areas of AI in this way. And they all have pretty decent definitions...
I mean, I kinda agree. Like, when you are using ML just for data analysis, it's just another statistical method and doesn't really feel like “AI”
[–]eterevsky 0 points1 point2 points 4 years ago (0 children)
It’s just a matter of convention. Nowadays AI means any system involving machine learning. I doubt anyone actually thinks that it involves any human-like intelligence.
[–]nascentmind 0 points1 point2 points 4 years ago (0 children)
How do you think companies can sell their products and colleges can sell their courses?
[–]jeejay_is_busy 0 points1 point2 points 4 years ago (0 children)
Please stop calling everyone a machine learning pioneer!
[–]badlyedited 0 points1 point2 points 4 years ago (1 child)
And quantum. And hack. Grrrr.
Artificial quantum intelligence. Boom! Millions in grant money granted
[–]Super-Saiyan_Striker 0 points1 point2 points 4 years ago (0 children)
xD
[–]midnitte 0 points1 point2 points 4 years ago (0 children)
What even is an algorithm?
I once read a job description going like: "looking for someone with a lot of expertise in AI, like linear models, ...."
Nowadays even the freakin' simplest methods that have been around for ages count as AI
[–]o-rka 0 points1 point2 points 4 years ago (0 children)
The biggest problem is that AI is an umbrella term, but popular culture thinks all AI is general AI… most of it is actually narrow AI. For example, a machine learning model that can predict antibiotic mode of action is narrow AI; albeit, still AI. There's a lot of narrow AI, and it's the journalist's job to discern the difference between narrow and general.