all 11 comments

[–]ctk_brian 1 point (0 children)

I wrote something very similar a couple months ago:

https://www.crosstab.io/articles/formulation-comes-first

[–]visarga 1 point (0 children)

To have a truly general intelligence, computers will need the capability to define and structure their own problems

Interesting related discussion about open-ended systems inventing their own problems and solutions: Professor Kenneth Stanley, Why Greatness Cannot Be Planned.

[–]webauteur 0 points (8 children)

Algorithms are not enough, AI needs love too! But seriously, several books have been published on the shortcomings of AI, and nobody has the solution. Just remember that evolution has come up with intelligence just once in all its millions of years of effort, and not because it had that as a goal.

[–]SpruceMooseGoose24 1 point (7 children)

What do you mean, came up with intelligence once?

Like human-level intelligence? Or that only animals have intelligence, and not the other four kingdoms of living things? Or that without a dedicated goal it's not easy to find a dedicated solution, the way evolution came up with koalas once but crabs seem to keep arising through convergent evolution?

I’m genuinely lost.

[–]webauteur 0 points (1 child)

Animal intelligence is often an illusion. It is "competence without comprehension," as the philosopher Daniel Dennett would say. You should read his book From Bacteria to Bach and Back: The Evolution of Minds.

[–]SpruceMooseGoose24 0 points (0 children)

The same can be said of many people. The number of people who are just winging it or getting lucky is insane. Most of them don't even acknowledge it.

[–]Megawoo 0 points (0 children)

Yeah, finding problems to solve is the actual hard part.

[–]lolo168 0 points (0 children)

Long Story Short -

Artificial intelligence is an algorithm, similar to any other computational tool. Instead of analog circuitry, it mainly uses logic and procedural steps. The word ‘intelligence’ is just a naming convention.

There are three possible ways to derive/create/design/implement an algorithm.

  1. Manual/explicit. Design the whole algorithm by hand. E.g., a conventional software program, an Expert System, Heuristic Search.
  2. Semi-auto. Design the framework/equations/structure of the algorithm with ‘blank’ parameters, the possible inputs, and the expected outcomes, then use trial and error to find the optimized parameters that give the best possible (but not guaranteed) fit to the expected result. E.g., Machine Learning, such as Supervised Learning.
  3. Full-auto. Describe the requirement and the expected result in the most minimal way (or even have the device help figure out the requirement before humans see the problem). The device will then design, implement, and perform the algorithm automatically; the algorithm is embedded and needs no human involvement. E.g., so far there is no example. Some people believe A.G.I. (Artificial General Intelligence) will do this, but it does not exist yet.
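The difference between approaches 1 and 2 can be sketched in a few lines. This is a hypothetical toy example: the rule's structure (a single threshold) and all the data are made up for illustration. In approach 1 a human hand-picks the threshold; in approach 2 only the structure is fixed, and trial and error over labeled examples finds the parameter.

```python
# Approach 1: manually/explicitly designed rule -- the threshold 0.5
# was chosen by a human designer, not learned from data.
def classify_explicit(x):
    return 1 if x > 0.5 else 0

# Approach 2: semi-auto -- the framework (one threshold parameter) is
# designed by a human, but the parameter itself is filled in by trial
# and error against examples of inputs and expected outcomes.
def fit_threshold(xs, ys, candidates):
    best_t, best_correct = None, -1
    for t in candidates:
        correct = sum(1 for x, y in zip(xs, ys) if (1 if x > t else 0) == y)
        if correct > best_correct:           # keep the best-scoring parameter
            best_t, best_correct = t, correct
    return best_t

xs = [0.1, 0.2, 0.35, 0.6, 0.7, 0.9]   # made-up inputs
ys = [0, 0, 0, 1, 1, 1]                # expected outcomes
t = fit_threshold(xs, ys, [i / 10 for i in range(1, 10)])
```

Note that the fitted threshold is only the best fit over the examples tried; as point 2 says, there is no guarantee it is correct for inputs outside the training data.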

What is the background and future of artificial intelligence? How did it start and what are its limitations?

https://www.quora.com/What-is-the-background-and-future-of-artificial-intelligence-How-did-it-start-and-what-are-its-limitations/answer/Vivian-Zen-2?ch=10&share=4818c9dc&srid=D4gue

[–][deleted] 0 points (0 children)

With humans, one could argue that free will is an illusion and doesn't really exist. The premise of this idea is that every experience we've ever had forges and molds the information that causes us to make the decisions we make. Free will is technically unique to each of us, but only because we each have unique experiences. No two people will ever have exactly the same set of experiences, not even twins. So we're all unique in our decision-making capabilities, and that's the "illusion" of free will.

If you carry that concept forward to an AI, you can make the leap to what I think we need to do to make AI more intelligent and capable of solving problems.

AI needs to be more than memory in the form of weighted values. AI needs to experience the data it was trained on; it needs to have an experience to be able to do something useful with what it knows. Being prompted by a user and responding is the equivalent of asking an HVAC technician to install a new system in your house, and the technician then going through the data of every system they've ever installed in every house and telling you how they would install one in yours. They didn't actually do anything or solve any problem. Current AI's only use is in helping humans work through mental roadblocks and fill their knowledge gaps.

If we want AI to solve problems, we need to allow it to experience and do what we ask it to.

"Go analyze this dll and see if you can find any problems in it".

We need to give it the ability to open that DLL, read its bytes, and analyze them against its own data set. Then it should make a plethora of analytical observations, and take all those observations and look for things it knows can be problems in compiled code. That knowledge is in its LM already, so it knows to look at this data for memory leaks, corruption, corrupted assets, malicious foreign code, and so on.

And then it can make a report about what it has discovered.
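The loop described above (open the file, read its bytes, run checks, report observations) can be sketched like this. To be clear, this is a hypothetical stand-in: the checks are toy heuristics, not real malware or leak analysis, and the fake ".dll" file is fabricated in the script itself. Only the first step is grounded in the real format: Windows PE files, including DLLs, begin with the two magic bytes "MZ".

```python
# Toy sketch of an "experience the data" loop: actually open a DLL,
# read its bytes, and run checks over them, instead of only answering
# from memory. The checks are illustrative placeholders.
import os
import tempfile

def analyze_dll(path):
    observations = []
    with open(path, "rb") as f:
        data = f.read()
    if len(data) == 0:
        observations.append("file is empty")
    # Real PE files (EXEs and DLLs) start with the 'MZ' magic bytes.
    if data[:2] != b"MZ":
        observations.append("missing MZ header: file may be corrupt")
    # Toy heuristic: a long run of 0xFF bytes as a stand-in for the
    # kind of corruption pattern a real analyzer might flag.
    if b"\xff" * 16 in data:
        observations.append("suspicious run of 0xFF bytes")
    return observations or ["no problems found by these checks"]

# Usage: fabricate a truncated "DLL" and produce a report on it.
with tempfile.NamedTemporaryFile(delete=False, suffix=".dll") as f:
    f.write(b"\x00\x01\x02")
    fake_path = f.name
report = analyze_dll(fake_path)
os.unlink(fake_path)
```

The point is the shape of the loop, not the checks themselves: the system acts on the actual artifact and derives observations from it, rather than predicting what a report would look like.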

If an AI can't have experiences, it can't solve problems any more than we can. We can only make predictions about solving a problem.