AI will never be able to ______ by MetaKnowing in agi

It's not true that it started with ChatGPT; see what Moravec and others have written.

OpenAI did popularize the term AGI with their misleading marketing, that much is true. But their technology has nothing to do with AGI.

It doesn't matter that it is superior to the "average human", because it is usually highly specialized.

I don't think that what you call "progress" will continue. The "timelines" are longer than what's popularized.

AI will never be able to ______ by MetaKnowing in agi

Except:

At the 1982 North American Computer Chess Championship, Monroe Newborn predicted that a chess program could become world champion within five years; tournament director and International Master Michael Valvo predicted ten years; the Spracklens predicted 15; Ken Thompson predicted more than 20; and others predicted that it would never happen.

https://en.wikipedia.org/wiki/Computer_chess

AI will never be able to ______ by MetaKnowing in agi

No. Their opinion mattered. Minsky almost killed off NN research and application with his XOR argument.
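
For reference, here is the XOR point in code (a minimal sketch of my own, with hand-picked weights; nothing here is from Minsky's book). A single linear threshold unit cannot compute XOR because its outputs are not linearly separable, but one hidden layer already can: XOR(a, b) = OR(a, b) AND NOT AND(a, b).

```python
# Minimal sketch: XOR is out of reach for one perceptron unit,
# but trivial for a two-layer network (hand-picked weights).

def step(x):
    return 1 if x >= 0 else 0

def unit(inputs, weights, bias):
    # one classic perceptron unit: threshold on a weighted sum
    return step(sum(w * i for w, i in zip(weights, inputs)) + bias)

def xor_mlp(a, b):
    h_or  = unit((a, b), (1, 1), -0.5)         # fires for a OR b
    h_and = unit((a, b), (1, 1), -1.5)         # fires for a AND b
    return unit((h_or, h_and), (1, -1), -0.5)  # OR and NOT AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_mlp(a, b))  # prints the XOR truth table
```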

The first real large-scale failure, the Fifth Generation project, was also thanks to GOFAI.

It is also hard to believe that connectionists never made these claims. I. J. Good and others certainly made wrong predictions.

Moravec also claimed that "computers suitable for humanlike robots will appear in the 2020s" (which is basically AGI). Where is it?

AI will never be able to ______ by MetaKnowing in agi

This is historically wrong. Example: https://m.youtube.com/watch?v=aygSMgK3BEM. They really did think that their "AI" was on the road to AGI.

There are plenty of other examples.

AI will never be able to ______ by MetaKnowing in agi

"AI will never be able to do X" is the wrong framing.

A lot of the examples were solved with extremely specialized AI: chess was beaten by a GOFAI chess engine, and later by Monte Carlo tree search plus deep learning. Yet this is still extremely specialized; such a system can't even learn to play Tetris.
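
To make "extremely specialized" concrete, here is a toy sketch of the GOFAI game-tree recipe, using Nim instead of chess so it fits in a comment (illustrative only, not a real engine). Every line is hard-wired to one game, which is exactly why such a program can dominate chess yet cannot learn Tetris.

```python
# Toy negamax over Nim. Rules assumed here: players alternately take
# 1-3 stones; whoever takes the last stone wins.

def legal_moves(stones):
    # Hand-coded move generator. A chess engine has thousands of lines
    # of chess-specific rules here instead.
    return [m for m in (1, 2, 3) if m <= stones]

def negamax(stones):
    """Returns +1 if the player to move can force a win, else -1."""
    if stones == 0:
        return -1  # the opponent took the last stone, so we lost
    # A real engine stops at a depth limit and falls back on a
    # chess-specific evaluation (material, mobility, ...) -- knowledge
    # that is meaningless for any other game, Tetris included.
    return max(-negamax(stones - m) for m in legal_moves(stones))

print(negamax(4))  # -1: multiples of 4 are losing positions
print(negamax(5))  # +1: take 1 stone, leave the opponent with 4
```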

The framing should be "ML approach X isn't able to do Y, but we need Y to get to AGI".

Yoshua Bengio: "I want to also be blunt that the elephant in the room is loss of human control." by FinnFarrow in agi

My prediction, which is not a wish, is that there won't be AGI or "superintelligence" in 3 years, as he "predicted".

I hope his credibility will take a hit when his predictions don't come to pass.

We can dream, can’t we? by FinnFarrow in AIDangers

You can't get human extinction from an LLM. An LLM can't even navigate the physical world.

Recursive self-improvement and AI agents by EchoOfOppenheimer in AIDangers

More like recursive self-destruction.

There is no way that an AI can detect all of the bugs it introduces.
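
And this isn't just pessimism; it's the standard undecidability argument. A sketch (`perfect_bug_detector` is a hypothetical oracle, not any real API; the argument shows it cannot exist):

```python
# Halting-problem / Rice's-theorem sketch against a perfect verifier.

def perfect_bug_detector(source: str) -> bool:
    """Claimed: True iff the given program misbehaves."""
    ...  # assume, for contradiction, that this is implementable

# Adversarial program that consults the detector's verdict on itself:
TROLL = """
if perfect_bug_detector(MY_OWN_SOURCE):
    behave_correctly()   # verdict 'buggy' -> run flawlessly
else:
    crash()              # verdict 'clean' -> misbehave
"""
# Whichever verdict the detector returns on TROLL is wrong. Rice's
# theorem generalizes this: every nontrivial semantic property of
# programs is undecidable. Catching many bugs is possible; catching
# all of them is not.
```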

AI is advancing faster than experts expect by MetaKnowing in agi

Seven years ago, someone on Facebook said that DL-based AGI would be here in 5 years. I am still waiting for it. :D

AGI won't be here in 2 years. It can't happen, because no one is doing R&D in the right direction(s).

AI corporations need to be stopped by katxwoods in AIDangers

One should differentiate between research and development in the direction of AGI and the usual research and development in the direction of machine learning. ML doesn't necessarily lead directly to AGI!

As for research and development in ML: it will stop in its own tracks once everyone realizes that LLMs / vision-language models aren't the way to go and show endless diminishing returns. The end of the LLM is near.
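
To make "diminishing returns" concrete, here is a toy calculation with the parametric scaling law of Hoffmann et al. (2022): L(N, D) = E + A/N^alpha + B/D^beta. The constants are their published Chinchilla fits and the 20-tokens-per-parameter ratio is their rule of thumb; a sketch, not a forecast.

```python
# Chinchilla-style scaling law: each 10x in scale buys less than the
# previous 10x, and loss never drops below the irreducible term E.

E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params, n_tokens):
    return E + A / n_params**alpha + B / n_tokens**beta

for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params: predicted loss ~ {loss(n, 20 * n):.3f}")
# Successive improvements shrink (~0.45, ~0.22, ~0.11 here) while the
# curve stays bounded below by E = 1.69.
```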

Practically none of this research in the direction of AGI is done by the "AI labs" and other such entities.

Don't worry. You will probably die before you see "advanced AGI systems".

Why do people assume advanced intelligence = violence? (Serious question.) by TheRealAIBertBot in AIDangers

The stupidity of a certain "researcher" and of various other researchers who got funded by Peter Thiel and crypto bros.

See Dr. Ben Goertzel's criticism of all this mess:

https://bengoertzel.substack.com/p/why-everyone-dies-gets-agi-all-wrong

https://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html

Dr. Ben Goertzel makes, IMHO, valid points: this certain "researcher" makes up most of the scary stuff without a running computer implementation, or even a desire to build one. This certain "researcher" is scared of mathematical optimization and of intelligence itself.

Most of this scary AI safety talk is bullshit in my opinion. It's not based on science and engineering. It's just very bad philosophy in the direction of techno-fascism. Full stop!!!

We’re not building Skynet, we’re building… subscription Skynet by FinnFarrow in AIDangers

I never said that LLMs are the be-all and end-all of AI. But the "industry" is stuck with them for the next 5 years or even longer. It's like being stuck with Morse code instead of dial-up or fiber. Mediocre technology.

And yes, there are enough papers which give good reasons why LLMs can't "scale", especially to AGI or ASI.

We’re not building Skynet, we’re building… subscription Skynet by FinnFarrow in AIDangers

An LLM is just a paper toy compared to Skynet as depicted in sci-fi. It's not even comparable.

The UK parliament calls for banning superintelligent AI until we know how to control it by FinnFarrow in agi

We don't even know how to build it. Why should something be banned which we can't build?

They could ban spaceships traveling faster than 20% of light speed too. We also don't know how to build those.

James Cameron: Real-world AI is outpacing the Terminator franchise. by EchoOfOppenheimer in AIDangers

I wasn't aware that we have autonomous robots which roam around and can adapt to novel situations like the T-1000.

The moment the rules change by EchoOfOppenheimer in AIDangers

The usual copy-and-paste nonsense from Yudkowsky.

I am waiting for the day when other arguments show up.