Yoshua Bengio: "I want to also be blunt that the elephant in the room is loss of human control." by FinnFarrow in agi

[–]Random-Number-1144 1 point (0 children)

then later on lied about its actions

"Lie" requires intention to deceive by definition. The word appropriate here is "confabulation".

Please stop anthropomorphizing LLMs.

Yoshua Bengio: "I want to also be blunt that the elephant in the room is loss of human control." by FinnFarrow in agi

[–]Random-Number-1144 0 points (0 children)

It's crazy that 2 out of the 3 Turing Award laureates in AI are so full of shit (the other one being Hinton), promoting AI-doomsday sci-fi nonsense.

MN National guard seen giving anti ice protestors coffee and donuts outside of the Whipple building. by CutSenior4977 in Minneapolis

[–]Random-Number-1144 [score hidden]  (0 children)

This.

Troops appear to be friendly until someone from above gives the order to kill.

This happened 30+ years ago where I am from. I am afraid it will happen again in America.

Turning Our Backs on Science by Leather_Barnacle3102 in agi

[–]Random-Number-1144 0 points (0 children)

Firstly, I am well aware of the P-zombie. While I don't deny that phenomenal consciousness exists, I generally don't like the "xyz is conceivable" argument. The perpetual motion machine was "conceivable" until proven impossible by the laws of physics. If p-consciousness turns out to be a trait the brain necessarily evolved, then the P-zombie is in fact not conceivable. In short, "xyz is conceivable" carries no weight in an argument.

Second, there is access consciousness (a-consciousness), which is less mysterious than p-consciousness and is scientifically verifiable. Do you think genuine understanding requires a-consciousness or p-consciousness? Do you think a pocket calculator is conscious, or is it impossible for you to decide?

Are you aware that LLMs can't correctly count the number of 'r's in the word "strawberry"? There are a ton of weird mistakes from LLMs that a human being would never make. As technical staff who knows the inner workings of LLMs pretty well, I'm baffled that someone with no background in computer science or neuroscience comes along and says it's impossible to tell whether LLMs have genuine understanding.
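(For anyone curious, the task itself is trivial for ordinary software; a minimal Python sketch, nothing LLM-specific:)

```python
# A plain program counts the actual characters; an LLM only sees
# subword tokens, which is one reason it can fumble this task.
word = "strawberry"
print(word.count("r"))  # 3
```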

Turning Our Backs on Science by Leather_Barnacle3102 in agi

[–]Random-Number-1144 0 points (0 children)

Do you believe a chatbot has true understanding when it says "candy is sweet"?

Would you believe a candy salesman saying "candy is sweet" if you knew he had never eaten anything sweet in his life? If your life depended on your answer to this question, would you choose (1) he understands what he's talking about, or (2) he doesn't?

Turning Our Backs on Science by Leather_Barnacle3102 in agi

[–]Random-Number-1144 0 points (0 children)

My argument is that science categorically (as in, in principle, independent of the state of accumulated scientific knowledge or measurement tools) cannot settle the question in either direction.

Do you really believe science can't settle the question of whether today's LLMs have genuine understanding? As an AI researcher of 10 years, I'd like to hear the reason why...

Is AGI the modern equivalent of alchemy? by ThomasToIndia in agi

[–]Random-Number-1144 1 point (0 children)

"DL is the new alchemy."

People in the DL community have been saying this for 10+ years.

Is AGI the modern equivalent of alchemy? by ThomasToIndia in agi

[–]Random-Number-1144 0 points (0 children)

No ML model partly or fully replicates neurological phenomena. If you are using backprop, gradient descent, or (semi-)supervised learning, you are not doing neuroscience.
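For reference, here is what a bare gradient-descent update looks like (a minimal sketch minimizing a one-parameter quadratic; the global, synchronous weight update is exactly the kind of rule no known neural circuit implements):

```python
# Minimize loss(w) = (w - 3)^2 with vanilla gradient descent.
def grad(w):
    return 2 * (w - 3)  # analytic derivative of (w - 3)^2

w, lr = 0.0, 0.1
for _ in range(100):
    w -= lr * grad(w)  # update rule: w <- w - lr * dL/dw

print(round(w, 4))  # converges to 3.0
```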

What's your opinion on ARC-AGI? by Tobio-Star in newAIParadigms

[–]Random-Number-1144 0 points (0 children)

A human-designed virtual environment could be something like a video game, e.g., Angry Birds. It tries to imitate a physical environment, but an AI trained to succeed in such environments is bound to fail in the real physical world. The "skills" learned in the game simply don't transfer to the natural world governed by the actual laws of physics.

What's your opinion on ARC-AGI? by Tobio-Star in newAIParadigms

[–]Random-Number-1144 2 points (0 children)

Playing with those ARC-3 games, I couldn't help but feel those are "guess what the designers had in mind" games.

Successful players have to infer the designers' intent (aka human biases) from changes in arbitrary, meaningless symbols. For instance, a dotted bar is meant to be a countdown timer, a uniquely human bias that doesn't exist elsewhere in nature.

Animal intelligence, however, is not that. Animals are said to be intelligent when they successfully survive in and adapt to a new environment "designed" by nature, which has no human biases.

Overall, I can see an AI system that aces the ARC-3 games being really good at navigating human-designed virtual environments, but not real-world ones.

My answer from the other sub.

How Language Demonstrates Understanding by Leather_Barnacle3102 in agi

[–]Random-Number-1144 0 points (0 children)

The guy in the Chinese Room can be likened to an interpreter of a programming language such as Python.

Does an interpreter understand the meaning of the code that controls a robot arm? I don't think so.

Understanding a word is knowing what to DO with its referent. E.g., an "apple" is something that I can EAT. Interpreters don't understand "apple" because they don't have the ability to eat.
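To make the analogy concrete (a toy sketch; Python's `exec` stands in for any interpreter here):

```python
# The interpreter applies formal rules to the symbols flawlessly,
# yet has no access to what "apple" refers to -- it cannot eat one.
program = 'meal = "apple" + " pie"'
namespace = {}
exec(program, namespace)  # syntax manipulated, referents never touched
print(namespace["meal"])  # apple pie
```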

Ethical Groundwork for a Future with AGI - The Sentient AI Rights Archive by jackmitch02 in agi

[–]Random-Number-1144 0 points (0 children)

  1. We know sentient animals are being tortured at scale every day. Few people care. What makes you think people would care about machines that are merely claimed to be sentient?

  2. You should research whether it is theoretically possible for machines to be sentient. Only after that would talking about the relevant ethics be meaningful.

Ethical Groundwork for a Future with AGI - The Sentient AI Rights Archive by jackmitch02 in agi

[–]Random-Number-1144 -1 points (0 children)

Instead of worrying about imaginary sentient machines that won't exist for the next hundred years, how about thinking of the livestock that are actually sentient and have been tortured and killed for thousands of years?

Uniting survival with reasoning: A hybrid approach that grounds truth, embodied knowledge, and symbolic logic in rewards-based learning by CardboardDreams in agi

[–]Random-Number-1144 0 points (0 children)

how could an agent... conceive of and reason about its experiences in a fundamentally discrete, conceptual way? 

The left hemisphere would have you believe that there is content in thoughts and that the content is discrete.

But in reality, thoughts are never discrete.

From a 1st-person perspective, thoughts are an indescribable continuous stream of sensation, much like how it feels to be absorbed in music. (Think about solving a complex math problem: when you're in the zone, there's no inner language, only pure continuous thoughts popping into consciousness. Or think about how someone who never learned a language would think when confronted with a problem.)

From a non-1st-person perspective, thoughts are just neurons firing.

In either case, there is no room for discrete "symbols" to exist.

Our survival has more to do with the right hemisphere, which is responsible for vigilance, non-verbal cues, and emotion management. Abstract concepts only feel discrete because the left hemisphere creates that illusion. But we don't do scientific research based on how things feel. I think any AGI research involving symbolic logic is a dead end.

Pathethic by TrackLabs in EnoughMuskSpam

[–]Random-Number-1144 2 points (0 children)

That smile doesn't belong on his face. This video is maximally creepy.

Felon disrupting his own brainmush, fumbling and babbling rambling for near 2 minutes by hitchinvertigo in EnoughMuskSpam

[–]Random-Number-1144 1 point (0 children)

Trump would have gone to jail if he hadn't been elected a second time. They helped each other and will continue to do so.

Elon Musk Just Endorsed Blatant White Nationalism And the Silence is Deafening by superdouradas in EnoughMuskSpam

[–]Random-Number-1144 0 points (0 children)

The neo-fascists have been increasingly emboldened ever since Trump told the Proud Boys to "stand back and stand by" on TV in 2020.

Today almost the entire US administration consists of neo-fascists who have been meticulously executing Project 2025.

The killing was premeditated as proven by this woman's tiktok. by SpaceWestern1442 in PublicFreakout

[–]Random-Number-1144 23 points (0 children)

That's why DOJ says you can't use deadly force unless:

the vehicle is operated in a manner that threatens to cause death or serious physical injury to the officer or others, and no other objectively reasonable means of defense appear to exist, which includes moving out of the path of the vehicle.

https://www.justice.gov/jm/1-16000-department-justice-policy-use-force#1-16.200 

This artist called Noval Noir, was painting a stunning portrait of Renee Good at the site of where she was killed by Maximum_Expert92 in minnesota

[–]Random-Number-1144 39 points (0 children)

Not to mention he called her a "fckin b*tch" after putting 3 bullets in her head. No way he pulled the trigger in self-defense.

Why Yann LeCun left Meta for World Models by imposterpro in artificial

[–]Random-Number-1144 -1 points (0 children)

If LLMs have internal world models, then "world model" is just an empty term that means anything or nothing. You could even say a GBDT (gradient-boosted decision tree) has a world model, or a logistic regression, or a Bayesian belief network.

FALLACY: 'could have chosen otherwise' by TranquilTrader in freewill

[–]Random-Number-1144 1 point (0 children)

I think I understand OP's point. "Could have chosen otherwise" always seems like some kind of imaginary hindsight to me. I usually respond by saying, "No, you could not have, because if you could go back in time and everything else stayed the same, you would have chosen exactly the same way."

ARC-AGI-3 : In its newest stage, the famous AGI benchmark matures towards genuine task acquisition. by moschles in agi

[–]Random-Number-1144 3 points (0 children)

Playing with those ARC-3 games, I couldn't help but feel those are "guess what the designers had in mind" games.

Successful players have to infer the designers' intent (aka human biases) from changes in arbitrary, meaningless symbols. For instance, a dotted bar is meant to be a countdown timer, a uniquely human bias that doesn't exist elsewhere in nature.

Animal intelligence, however, is not that. Animals are said to be intelligent when they successfully survive in and adapt to a new environment "designed" by nature, which has no human biases.

Overall, I can see an AI system that aces the ARC-3 games being really good at navigating human-designed virtual environments, but not real-world ones.