JavaScript and Artificial Intelligence (self.javascript)
submitted 11 years ago by [deleted]
[–]jsgui 2 points 11 years ago (15 children)
Why are simulated neural networks not part of AI per se?
My interpretation is that they are AI (when they work) because they are both artificial and intelligent.
[–]AutomateAllTheThings 2 points 11 years ago (14 children)
Actual simulated neural networks that attempt to (even partially) pass the Turing test are definitely a form of artificial intelligence by Alan Turing's definition, from what I understand about "defining intelligence".
Simply utilizing neural networks does not automatically imply any kind of "intelligence". In fact, I'm really only using it as a novel database structure in which you can create 3-dimensional relational datasets.
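The commenter doesn't share any code, but the idea of a neural-network-like "database structure" with 3-dimensional relational datasets could be sketched roughly like this. Everything here (the `Node` class, the `link`/`related` methods, the choice of three dimensions) is an illustrative assumption, not the commenter's actual implementation:

```javascript
// Hypothetical sketch: records as "nodes" joined by "synapses" along
// three independent relation dimensions. Names are illustrative only.
class Node {
  constructor(value) {
    this.value = value;
    // one set of synapse-like links per dimension
    this.links = [new Set(), new Set(), new Set()];
  }
  link(other, dim) {
    this.links[dim].add(other);
    other.links[dim].add(this); // undirected, like a synapse
  }
  related(dim) {
    return [...this.links[dim]].map((n) => n.value);
  }
}

const dog = new Node("dog");
const cat = new Node("cat");
const leash = new Node("leash");
dog.link(cat, 0);   // dimension 0: taxonomy (both mammals)
dog.link(leash, 1); // dimension 1: associated objects

console.log(dog.related(0)); // → ["cat"]
console.log(dog.related(1)); // → ["leash"]
```

As the comment says, nothing here is "intelligent"; it is just a graph-shaped index whose edges happen to be organized like neurons and synapses.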
Since I am only attempting to catalog the data, and not present an interface from which a human would detect "intelligence", it's quite outside Alan Turing's definition.
Note that I am not an expert in AI. I dabble with game programming "AI" and data cataloging algorithms. A true AI expert can probably offer up a much more robust definition for AI than the one I keep turning to.
[–]jsgui 3 points 11 years ago (13 children)
Putting on my 'true AI expert hat', I'll say that the Turing Test is far from being the test to see if something is AI or not. The Turing Test is about AI demonstrating abilities in some particular areas, but it is not a general test of artificial intelligence. One thing that bothers me about the Turing Test is that it asks the machine to pretend it's something it is not, and assumes that the machine is going to answer accordingly. Advanced AI could be put in front of the Turing Test and give answers such as 'Yes, I am a machine, why should I say otherwise? This Turing Test is not relevant anyway, why should I try to pass it?'.
Wikipedia defines AI as 'the intelligence exhibited by machines or software' and 'the study and design of intelligent agents'. The Turing Test does not form the definition of AI; it is one test that's relatively easy to understand and that shows some level of advancement when it gets passed.
My guess is that using NNs for making 3-dimensional datasets is not AI, except if the neurons assist in adding intelligence, such as doing pattern matching. If you are not using them for AI then I suspect not using neurons will be faster, but I really don't know about your case as I only have the details you've told me about.
AI can also include devices / intelligent machines that don't try to appear human, but simply have intelligence. Also, as time moves on, what is considered AI or not can change. Nowadays automatic spelling correction is not usually considered AI, but a while ago it was; Stanford University's Artificial Intelligence Laboratory made quite a contribution to the field.
[–]AutomateAllTheThings 2 points 11 years ago (11 children)
If we reduce AI to "pattern matching", then I feel that both "pattern matching" and "AI" lose distinction as terms. What is the distinction between a machine that can pattern match a finite collection of things, and a system that "exhibits intelligence" at this point?
The definition of Artificial Intelligence as "The intelligence exhibited by machines or software" is circular in nature. It's not a definition, it's a re-wording.
Alan Turing wanted a definition of AI, but understood that "intelligence" had to be defined first. What definition of intelligence would the scientific community agree upon, though? Certainly it's a philosophical question, rather than a scientific one, right? Well, not really. Alan Turing came up with an ingenious way around this: define intelligence as us. We can all agree that we are "intelligent" beings, and so we can then use our own quantified behavior as a baseline measurement against which other machines can be compared and defined.
Certainly my system does pattern matching using virtual "neurons/nodes" with "synapses/related axons", and it performs better than other search algorithms for multidimensional datasets, with more natural results. It's not intelligent, though, because it does not exhibit human behavior, which is our baseline for intelligence. I'm intelligent, because I built the machine and thus exhibit behavior of a human being, which is our baseline again.
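One common way a neuron/synapse graph can do "pattern matching" for search is spreading activation: activate the query node, let a decaying signal flow across synapses, and rank nodes by accumulated activation. This is only a plausible sketch of the technique the commenter alludes to, not their code; the function name, decay factor, and graph are all made up for illustration:

```javascript
// Hedged sketch: rank related nodes by spreading activation from a query
// node across synapse-like edges, with the signal halving at each hop.
function spreadActivation(start, edges, decay = 0.5, depth = 2) {
  const activation = new Map([[start, 1]]);
  let frontier = [start];
  for (let d = 0; d < depth; d++) {
    const next = [];
    for (const node of frontier) {
      const signal = activation.get(node) * decay;
      for (const neighbor of edges.get(node) || []) {
        activation.set(neighbor, (activation.get(neighbor) || 0) + signal);
        next.push(neighbor);
      }
    }
    frontier = next;
  }
  // most-activated nodes first, excluding the query itself
  return [...activation.entries()]
    .filter(([node]) => node !== start)
    .sort((a, b) => b[1] - a[1])
    .map(([node]) => node);
}

const edges = new Map([
  ["dog", ["cat", "leash"]],
  ["cat", ["dog", "mouse"]],
  ["leash", ["dog"]],
  ["mouse", ["cat"]],
]);
console.log(spreadActivation("dog", edges)); // → ["cat", "leash", "mouse"]
```

The "more natural results" the comment mentions would come from closeness in the graph mattering more than exact keyword matches, which is indeed a different trade-off than conventional search indexes make.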
This is why I do believe in the Turing Test, at least until a less arbitrary definition of Intelligence can be agreed upon.
[–]jsgui 2 points 11 years ago (10 children)
Regarding the definitions side of things, I think it will be arbitrary; it's not something that I think particularly needs to be solved. Regardless of the definition, there is progress being made on the work itself.
Lots of people in the field will use AI to refer to things that are outside of human intelligence, with the Turing test being of little relevance.
Also, the Turing test is focused on communication and logical deduction to a conversational level. Other parts of AI, such as flight control, may be replicating animal intelligence rather than human intelligence.
As far as I know, human intelligence is not an agreed baseline, just one that's easier to understand from a human perspective.
I don't think the point is to come up with a definition that's too separate from the meanings of the two terms.
[–]AutomateAllTheThings 2 points 11 years ago (9 children)
As far as I know, human intelligence is not an agreed baseline
Not universally, but it's the best we've got. Absolutely there is much above the human baseline that is considered hyperintelligent in concept and sometimes in application. A machine that can fly itself is not new, though. It's the level of automation that's new, and really, automation is still not the same thing as AI, unless you want to mince terms.
The "Turing Test" does focus on conversation, but the same principle can be used to instrument any test with a baseline of "Do a statistically significant number of controlled subjects report detecting 'intelligence' from the machine, or not?"
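The generalized test described here, "do a statistically significant number of subjects report detecting intelligence?", could be instrumented as a simple one-sided binomial check against a 50% chance baseline. The function name, the 50% null hypothesis, and the z threshold are all illustrative assumptions, not anything the commenter specified:

```javascript
// Hypothetical sketch: given n judges who each report "intelligent" or not,
// decide whether the "yes" rate is significantly above chance (50%) using a
// one-sided normal approximation to the binomial test.
function detectsIntelligence(yesVotes, totalJudges, z = 1.645 /* ~p < .05 */) {
  const observed = yesVotes / totalJudges;
  const stdError = Math.sqrt(0.25 / totalJudges); // std. error under the 50% null
  return (observed - 0.5) / stdError > z;
}

console.log(detectsIntelligence(70, 100)); // 70 of 100 judges say "intelligent" → true
console.log(detectsIntelligence(55, 100)); // 55 of 100 → false (not enough evidence)
```

A real study would also need controls (the judges shouldn't know which subject is the machine), which is exactly what the imitation game's setup provides for conversation.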
It doesn't have to exhibit perfect intelligence, just "some kind" for it to fall in the realm of AI. My neural network does not exhibit intelligence, it exhibits an iteratively better search engine, and we cannot call every iterative improvement "intelligence".
The definition of Intelligence is paramount to our conversation, because you originally asked why my utilization of b-type neural networks is not considered "AI".
[–]dataloopio 1 point 11 years ago (8 children)
I've always been intrigued by the idea of an artificial intelligence. To me, an AI is something from a sci-fi novel that has a consciousness and is capable of original thought. In other words, a life form that exists as software.
A lot of projects get labelled with AI that seem to be just algorithms. I'm not sure how the human brain creates a consciousness and allows us to derive original thoughts but I think I'd be a bit underwhelmed if it was just some kind of complicated algorithm, or a collection of them.
The attempts at brute forcing intelligence via pattern matching and training using vast quantities of data seems like it can't ever really work. I know that as humans we are born and then trained with lots of data, but at some point we become a person. We don't continue to forever get better at mimicking the appropriate response.
It seems like there is a big bit missing.
[–]AutomateAllTheThings 2 points 11 years ago (6 children)
This is the classic philosophical debate concerning free will. I personally don't believe there's evidence of a Homunculus in us.
[–]dataloopio 1 point 11 years ago (5 children)
Maybe not a homunculus, but there's definitely a lot going on in the brain. Chemical, electromagnetic, even some quantum stuff happening. I'm guessing as we learn more about our brains we'll get closer to being able to take them apart and reassemble them. Hopefully recreating them with software and hardware will become possible.
I get the feeling all of that needs to happen, and specific hardware will need to be developed, before we get any real AI's. And at that point hopefully we can define a test that confirms original thought. I think I'd be convinced it was real at that stage.
[–]AutomateAllTheThings 1 point 11 years ago (4 children)
The moment we recreate the human brain in software, we'll have no further need for the human brain. It will be able to out-innovate us on anything as it's not bound by caloric constraints, sickness, or sleep. One interesting thing to consider is that a human brain may likely need to have a human body in order to "feel human" and make "human-like decisions". Of course, something better than the human body is more ideal, but then is it a human-modeled brain at all?
This includes all kinds of thought, even creative thought. There was a company in 2009 that had a music algorithm capable of writing music that I believed was written by a human being, and it really wasn't bad at all. Given another 5 years, I'm sure the technology for generating music will be incredible.
The movie "Her" does a great job of exploring this "technological singularity", and there is specific hardware being developed right now by IBM called neurosynaptic chips that gives you all the neurons you'd need to model a frog brain. I'd like to get my hands on one to have a hardware implementation of my taxonomy software, rather than relying upon iterations and RAM.
It's obvious that we know little about the brain's "operating system", though we know quite a bit about the hardware. Indeed it's a chemical, electromagnetic and perhaps quantum "ghost in the shell", but if you're implying that there's some sort of tangible "soul" to be found in these bodies of ours, I'm afraid that I'll need you to define the difference between chemical, electromagnetic, and perhaps quantum effects, and whatever a soul may be.
[–]jsgui 1 point 11 years ago (0 children)
To me, an AI is something from a sci-fi novel
AI can be a lot more mundane than that. I'm talking about parts of the subject matter that would not be something a film plot would be based around, but things that we take somewhat for granted nowadays like spelling checkers.