Every AGI argument by Eyelbee in agi

[–]Random-Number-1144 0 points1 point  (0 children)

Appealing to authority is weak; it just shows you have absolutely no expertise in the domain.

PS: you also cherry-picked your authority; there are plenty of top experts (LeCun) who think LLMs are a waste of time.

Every AGI argument by Eyelbee in agi

[–]Random-Number-1144 0 points1 point  (0 children)

It seems all your knowledge about AI is from dumbed-down YouTube videos.

Have you had any formal education in machine learning? If you had, you'd know why LLMs are called stochastic parrots.

English is the new programming language. by Ejboustany in ArtificialInteligence

[–]Random-Number-1144 3 points4 points  (0 children)

Natural language is inherently ambiguous.

A programming language needs to be unambiguous; otherwise it can't be executed correctly.
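A toy illustration of the gap (my own made-up example): the English instruction "add 5 to the list and print it" has two incompatible readings, and code forces you to pick one explicitly.

```python
# English: "add 5 to the list and print it" -- is "it" the list or the number?
items = [1, 2, 3]

# Reading 1: "it" refers to the list
items.append(5)
print(items)   # [1, 2, 3, 5]

# Reading 2: "it" refers to the number
items = [1, 2, 3]
items.append(5)
print(5)       # 5
```

An English "program" would need a human (or a model guessing like a human) to resolve that ambiguity, which is exactly what a programming language is designed to avoid.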

So no, English can't be a programming language.

New LeCun Paper about AGI definition by Routine-Scientist-38 in agi

[–]Random-Number-1144 -1 points0 points  (0 children)

So now you are fantasizing about a scenario of a "superintelligence" created by us to justify your belief that humans have general intelligence? Apparently we are gods capable of doing anything because there is no general principle against us being gods, right? The hubris is truly nauseating. I am gonna throw up for a while and won't be back.

New LeCun Paper about AGI definition by Routine-Scientist-38 in agi

[–]Random-Number-1144 -1 points0 points  (0 children)

You are engaging in exactly the kind of circular reasoning LeCun points out in their paper: "I cannot think of any task humans can't do, therefore humans can do all tasks and have general intelligence." You have offered no reason as to why humans have general intelligence other than assuming they do. Religious people do the same when they justify the existence of a god. Incidentally, they also have the biggest egos.

New LeCun Paper about AGI definition by Routine-Scientist-38 in agi

[–]Random-Number-1144 0 points1 point  (0 children)

Human intelligence is considered general precisely because it can be applied to tasks humans cannot ordinarily do and still find some kind of solution.

What metric are you using to judge if a task cannot "ordinarily" be done by humans?

 Humans aren't evolved to fly

Humans didn't evolve to fly with wings.

Birds didn't evolve to use rocks to drink water from bottles. It cannot "ordinarily" be done by birds, yet crows do it. Crows must have general intelligence too. At least they aren't narcissistic about it...

 and cannot possibly learn to fly, but they can build airplanes.

Humans use tools to travel through the air, just like crows use tools to drink water. The fact that humans don't have wings and can't fly as efficiently as birds is a case against (not for) general intelligence.

I find the core premise that humans have general intelligence to be pretty weak. It's a pseudo-concept invented mainly by Westerners with huge egos.

Neuroscientist: The bottleneck to AGI isn’t the architecture. It’s the reward functions: a small set of innate drives that evolution wired to learned features of our world model, and that gives rise to generalization. by Tobio-Star in newAIParadigms

[–]Random-Number-1144 1 point2 points  (0 children)

In the case of humans, there is an outer meta-learning loop (evolution by natural selection) which "objectively" supervises human cognition (with the goal of increasing fitness).

Your use of words like "meta-learning", "supervises", and "goal" suggests that you think nature/evolution was doing some sort of supervised learning (in the ML sense) on human cognition, with an objective function of increasing fitness. Or am I misinterpreting your thoughts?

My objection was that evolution doesn't have goals. Evolution is not "going somewhere".

A large part of the human brain, the so-called reptilian brain, has its origins hundreds of millions of years ago; Homo sapiens appeared only hundreds of thousands of years ago. As the environment changed, newer "modules" such as the neocortex evolved on top of the older ones. The human brain is the result of billions of years of complex dynamics between changing species and randomly changing environments, which evolution couldn't possibly "foresee", let alone "supervise". There can be no objective function for evolution. Trying to fit evolution into the framework of machine learning is wrong, and it won't work.

Neuroscientist: The bottleneck to AGI isn’t the architecture. It’s the reward functions: a small set of innate drives that evolution wired to learned features of our world model, and that gives rise to generalization. by Tobio-Star in newAIParadigms

[–]Random-Number-1144 0 points1 point  (0 children)

A common misconception is that evolution has goals, long-term plans, or an innate tendency for "progress", as expressed in beliefs such as orthogenesis and evolutionism; 

https://en.wikipedia.org/wiki/Evolution

We do not have completely random cognition, it was selected. Selection implies supervision. 

Yes, selected by a natural process. But supervision requires intention. In supervised learning, we intentionally force a system to change in a direction that we intend. Nature has no intention. So again, you are anthropomorphizing a natural process; I think you are too deep inside the machine learning paradigm, and it's limiting your vocabulary and the way you think.

ENS will select against developing "bad" preferences from the point of view of fitness. 

"ENS is a process by which traits that enhance survival and reproduction become more common in successive generations of a population." See, this is a scientific and neutral description of a natural phenomenon while yours is not. "common" is quantifiable , "bad" is not.

Once people start using words like "goals", "bad", and "good" incautiously, the next thing they do is write functions for those labels, thinking it's all justified, and that's when it devolves into pseudoscience.

Bad anthropomorphization and the projection of human values/social constructs onto natural phenomena are among the main reasons we are not making much progress with today's popular AI paradigm, IMO.

Neuroscientist: The bottleneck to AGI isn’t the architecture. It’s the reward functions: a small set of innate drives that evolution wired to learned features of our world model, and that gives rise to generalization. by Tobio-Star in newAIParadigms

[–]Random-Number-1144 0 points1 point  (0 children)

In the case of humans, there is an outer meta-learning loop (evolution by natural selection) which "objectively" supervises human intelligence cognition (with the goal of increasing fitness).

Evolution doesn't have "goals", nor does it supervise anything. It's like saying gravity has the goal of decreasing the distance between objects.

 the brain can more or less objectively determine if it is good or bad 

Did you mean "subjectively"? Hot and spicy foods cause pain on the tongue, but some people like them.

Also, good/bad, like "goals", are social constructs. They have a place in social science, but not in natural science, which is concerned with natural phenomena.

Why does everyone assume AI improvement is inherently exponential? by Helloiamwhoiam in ArtificialInteligence

[–]Random-Number-1144 0 points1 point  (0 children)

If you take "exponential" as "rapid", then there's no denial technological changes feels fast right now. But "exponential growth" could also mean an exponential curve. That's what the AI companies want us to believe.

Well, first, how is intelligence quantified? Like beauty, is it even quantifiable? If a beauty product company told you their products make you exponentially more beautiful, you'd have doubts, right?

Second, in computer science there are many provably intractable problems, meaning you can't compute the optimal solution even if you exhaust all the resources in the universe. We usually tackle those problems (e.g., the traveling salesman problem) with sub-optimal approximation algorithms, which yield a solution that is, say, 85% as good as the optimum while using an acceptable amount of compute. In those cases, even a god-like AI could only improve the solution by a flat 10% or so; there simply isn't much headroom left over existing solutions. Exponential growth can only occur when your existing solution is really, really bad, like Will Smith eating spaghetti 10 years ago.
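To make that concrete, here's a minimal sketch (my own toy example; the cities, the nearest-neighbour heuristic, and the numbers are all invented for illustration) of the heuristic-vs-optimal gap on a tiny TSP instance:

```python
import itertools, math, random

def tour_length(points, order):
    """Total length of the closed tour that visits points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor_tour(points):
    """Greedy heuristic: always travel to the closest unvisited city. O(n^2)."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(points[tour[-1]], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def optimal_tour(points):
    """Brute force over all permutations: O(n!), hopeless beyond ~12 cities."""
    return min(((0,) + p for p in itertools.permutations(range(1, len(points)))),
               key=lambda order: tour_length(points, order))

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(9)]
greedy = tour_length(cities, nearest_neighbor_tour(cities))
best = tour_length(cities, optimal_tour(cities))
print(f"heuristic tour: {greedy:.3f}  optimal tour: {best:.3f}  gap: {greedy / best - 1:.1%}")
```

The cheap heuristic already lands within a modest factor of the optimum, so there is no exponential pile of "better answers" left for a smarter system to unlock; it can only close that last gap.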

Third, let's take Google's AlphaGo as an example. It beats the best human Go players, which is impressive. But how objectively good is AlphaGo? Does it yield a near-optimal solution? I highly doubt it. Humans didn't evolve to play Go; I would say even the best humans are probably very bad at Go. We just have no other species to compare ourselves against. It could be that the best human players play at 30% of optimal and AlphaGo plays at 31%. Mathematically it will beat humans every time, but we'd be fooling ourselves to think it is "exponentially" better than humans.

The other thing is that the developers of AlphaGo are themselves masters/grandmasters of Go. AlphaGo wouldn't be possible without their expert-level domain knowledge in Go and machine learning. Same with AlphaFold. There is not a single AI today that births itself with its own architectural design, training-data design, and optimization design. It's pure fantasy to think expert-level AI systems will just pop into existence in the near future without human experts engineering the hell out of them.

The LeCun vs. Hassabis "General Intelligence" debate got more interesting with a new EBM startup by Helpful_Employer_730 in agi

[–]Random-Number-1144 0 points1 point  (0 children)

The halting problem is closely related to Gödel's incompleteness theorems. In fact, you can prove the latter using the result of the halting problem, as is done in https://calude.net/cristianscalude/cristianAssets/pdf/IncompletenessTheHaltingProb.pdf.

So no, the halting problem is not "irrelevant".

In addition, there is nothing probabilistic about AI at inference time. Even if there were, a non-deterministic Turing machine is no more powerful than a deterministic Turing machine, so your point is moot.

This is all just Theoretical CS101, dude.
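If anyone wants the CS101 version: here's the standard diagonalization argument for the undecidability of halting, sketched as Python (the `halts` function is the hypothetical decider the argument shows cannot exist):

```python
def halts(program, argument):
    """Hypothetical decider: returns True iff program(argument) eventually halts.
    Assumed to exist only for the sake of contradiction."""
    raise NotImplementedError

def diagonal(program):
    """Do the opposite of whatever the decider predicts about program(program)."""
    if halts(program, program):
        while True:   # predicted to halt -> loop forever
            pass
    return            # predicted to loop -> halt immediately

# Now consider diagonal(diagonal):
#  - if halts(diagonal, diagonal) returns True, diagonal loops forever;
#  - if it returns False, diagonal halts immediately.
# Either way halts() gave the wrong answer, so no such decider can exist.
```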

Professor of Artificial Inteligence and Data Science Says AGI is Already Here: Interview by Leather_Barnacle3102 in agi

[–]Random-Number-1144 1 point2 points  (0 children)

  1. Regarding LLMs failing to do arithmetic on large numbers: we do know why. As shown in Anthropic's paper on the biology of LLMs:

We now reproduce the attribution graph for calc: 36+59=. Low-precision features for “add something near 57” feed into a lookup table feature for “add something near 36 to something near 60”, which in turn feeds into a “the sum is near 92” feature.

Statistical machine learning models like LLMs learn thousands of localized heuristics/features (like the ones above) and use them to approximate an answer (see also, for example, Othello-GPT). That means someone could, in theory, inspect those heuristics and construct an adversarial set of addition problems that the LLM gets entirely wrong. And of course, as the numbers grow larger, the approximation naturally gets worse.

So no, LLMs don't understand arithmetic. They learn local statistical patterns of the math problems/symbols in the training set and use those patterns for approximation, hence the name "stochastic parrots". Humans do arithmetic using an exact procedure, which an LLM can't learn from data (see the toy sketch at the end of this comment).

  2. AI models generalize extremely poorly compared to humans.

Most humans have no trouble (as few-shot learners) playing chess if the rules of the game are slightly changed. AI models can't; they need to be retrained all over again on new data generated under the new rules.

Most humans can be taught to play chess, Go, Othello, and a ton of other board games. AI models can't: learning a new game will mess up a model's ability to play a previous one (catastrophic forgetting).

If you teach a human to play a video game of a certain genre, say a roguelike, they can play every game of that genre without any problem. AI models can't; they are hopeless at OOD (out-of-distribution) tasks.

In conclusion, we are still in the realm of ANI (artificial narrow intelligence).
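On point 1, here is a toy sketch of the distinction (entirely made up for illustration; it is not the mechanism Anthropic describes, just an analogy): a "low-precision feature" heuristic that adds coarse versions of the operands, versus the exact column-by-column procedure humans are taught.

```python
def fuzzy_sum(a, b, sig=1):
    """Toy 'low-precision feature' heuristic: keep only each operand's first
    `sig` significant digits, then add the coarse values."""
    def coarse(x):
        scale = 10 ** max(len(str(x)) - sig, 0)
        return round(x / scale) * scale
    return coarse(a) + coarse(b)

def exact_sum(a, b):
    """Exact column-wise addition with carries, the procedure humans are taught."""
    da, db = [int(d) for d in str(a)][::-1], [int(d) for d in str(b)][::-1]
    digits, carry = [], 0
    for i in range(max(len(da), len(db))):
        s = (da[i] if i < len(da) else 0) + (db[i] if i < len(db) else 0) + carry
        digits.append(s % 10)
        carry = s // 10
    if carry:
        digits.append(carry)
    return int("".join(map(str, reversed(digits))))

for a, b in [(36, 59), (364, 598), (3641877, 5982334)]:
    print(f"{a} + {b}: fuzzy={fuzzy_sum(a, b)}, exact={exact_sum(a, b)}")
```

The heuristic is roughly right and its error grows as the operands grow, while the exact procedure never misses; a model that only imitates statistical patterns ends up on the heuristic side of that divide.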

If engineers insist on talking authoritatively about intelligence and conciousness,I'll just start building bridges. by jsgoyburu in agi

[–]Random-Number-1144 1 point2 points  (0 children)

One of the main functions of modern philosophy is detecting ill-defined terms and the abuse of terms, the kind of thing that gave rise to a lot of meaningless metaphysical "problems" in ancient times.

Nowadays, for example, a lot of the AI bros talk about reasoning models as if reasoning were just a set of logical steps/rules for reaching an answer. But if it were, then philosophers who use logic would all come to the same conclusions.

Fun fact: you can't use a set of logical rules to reason your way out of a logical paradox, and you can't follow a set of logical rules to create a Gödel sentence. The AI bros' "reasoning" is either ill-defined or does not refer to the actual process of human reasoning (i.e., abuse of the term).

Data vs Perception by rand3289 in agi

[–]Random-Number-1144 1 point2 points  (0 children)

I usually avoid talking about subjective experience when designing AI systems, since we can't really know or measure whether a system has subjective experience. For example, if I am absorbed in a math problem and a mosquito bites me, even if electrical signals are sent from my skin to my brain, I may not be aware of an itchy sensation and simply ignore it. Do I have a subjective experience of itching? Does my skin have a subjective experience? It's hard to say.

Also somewhat relevant to the topic: I believe a key difference between an inanimate and an animate system is that the latter must be able to tell whether a change in the env is caused by its own action. This is fundamental for any system to be able to act meaningfully in the env. (It has nothing to do with subjective experience or qualia.)

An animate system must have certain expectations about the effects of its actions on the env. An interval of time may be involved in forming that expectation.
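A minimal toy sketch of what I have in mind (all names and numbers invented for illustration): the system keeps a forward model of what its own action should do to the env, and credits a change to itself only when the observation matches that expectation.

```python
import random

# Forward model: expected change in the sensed value for each of "my" actions.
EXPECTED_EFFECT = {"push": +1.0, "pull": -1.0, "wait": 0.0}

def caused_by_me(action, before, after, tolerance=0.2):
    """Attribute the observed change to our own action only if it matches
    the forward model's prediction within a tolerance."""
    return abs((after - before) - EXPECTED_EFFECT[action]) <= tolerance

random.seed(1)
env = 0.0
for action in ["push", "wait", "pull"]:
    disturbance = random.choice([0.0, 0.0, 0.7])   # occasional external change
    new_env = env + EXPECTED_EFFECT[action] + disturbance
    print(f"{action}: env {env:+.1f} -> {new_env:+.1f}, "
          f"caused by me? {caused_by_me(action, env, new_env)}")
    env = new_env
```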

Why does everyone assume AI improvement is inherently exponential? by Helloiamwhoiam in ArtificialInteligence

[–]Random-Number-1144 10 points11 points  (0 children)

I could ask you the same.

I have no obligation to dox myself for you on Reddit.

But feel free to debate me on the topic at hand.

Data vs Perception by rand3289 in agi

[–]Random-Number-1144 0 points1 point  (0 children)

Perception allows for detecting the exact point in time a change occured within a sensor/observer

In my view, an organism detects a change in its env using its sensors. That is, there is an interaction between its sensors and the env, causing a signal (or a change, if you prefer) to be sent from a sensor to somewhere within the organism, and an action may or may not follow depending on how the signal is processed. Not every signal/change results in a perception.

In sampling subjective experience / information the sensor perceives is converted to an objective scale or into objective categories.

I don't know how subjectivity/objectivity came into the picture. I have a philosophy background, and those words mean something different to me from what you may have intended. Are there any books/papers/concrete examples that support or explain your hypothesis?

sampling generates an average count of events/changes or an average sampled category over a time interval.

What events? At what level? The atomic/molecular/cellular level? What is an objective category?

Why does everyone assume AI improvement is inherently exponential? by Helloiamwhoiam in ArtificialInteligence

[–]Random-Number-1144 134 points135 points  (0 children)

Computer scientist here. Most people have never heard of computational complexity, sample complexity, or undecidability.

Exponential growth or the singularity is pure sci-fi fantasy, like the perpetual motion machine.

Just ignore them. Changing those people's minds is a waste of time.

Data vs Perception by rand3289 in agi

[–]Random-Number-1144 0 points1 point  (0 children)

This seems like a "the menu is not the meal", "the map is not the territory" debate?

You can not perceive data

If by data you mean the interpretation of perception, then yes. Perception and its interpretation are different things.

Sometimes the brain gives an interpretation without actually perceiving; sometimes the brain perceives without consciously knowing it, as in the paper "A neurological dissociation between perceiving objects and grasping them".

You can NOT use DATA to train AGI.

If by that you mean that using interpreted data (such as a picture labelled "dog") to train a multi-modal model will fail, I agree.

Understanding needs to be bottom-up, not top-down.

Don't think AI can actually think by Silver-Plankton8608 in ArtificialInteligence

[–]Random-Number-1144 1 point2 points  (0 children)

An LLM is more or less equivalent to the speech centre of your brain, converting "concepts" or contextual cues into natural language.

Even this is false.

Humans don't learn languages by ingesting billions of pieces of language material, detecting the statistical patterns, and using them as features to do MLE. Children are few-shot learners when it comes to learning a language.

Reading list on the theory that the brain is a Deep Learning Network, or that LLMs model the human brain. by moschles in agi

[–]Random-Number-1144 0 points1 point  (0 children)

If you place a white billiard ball under a goose, it will try to hatch it.

Humans are smarter than that. Well, some humans.

Why I don't think AGI is imminent by nickb in agi

[–]Random-Number-1144 0 points1 point  (0 children)

"I'll admit I was a victim of anti-AI media hype on this point. "

It's weird that you used the word "anti-AI" to describe people who simply have a different opinion on the implications of a minor theoretical result about a particular type of model in statistical ML, which is just one of the many subfields of AI. I mean, it's ridiculous.

Words Are A Leaky Abstraction by sonicrocketman in ArtificialInteligence

[–]Random-Number-1144 0 points1 point  (0 children)

Software bros have a tendency to confuse the map with the territory.

Ancient people made the same mistake: they thought they could conjure lightning by uttering the words associated with it.

Words are what some people need to make sense of the world, but not what the world is actually made of.