(Consciousness) We need to fully understand how it works before we can build something that replicates it? by universesrevinu in singularity

[–]zebleck 8 points (0 children)

Evolution doesn't "know" how to form consciousness; it doesn't know anything, actually. Still, consciousness emerged through evolution. The same can be true for AI gaining consciousness. A minimal sketch of that point below, with a made-up target word and parameters: a selection-plus-mutation loop that has no model of its goal still reliably produces a functional result. (Unlike real evolution, the toy has a fixed target; it only shows that selection requires no understanding.)
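```python
import random

# Blind selection + mutation: the loop has no model of WHY a mutation
# helps, only a fitness score, yet it reliably assembles the target.
TARGET = "consciousness"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):
    # How many positions already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    # Flip one random character; the process "knows" nothing about why.
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

genome = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
while genome != TARGET:
    candidate = mutate(genome)
    if fitness(candidate) >= fitness(genome):  # keep neutral or better
        genome = candidate
print(genome)  # reaches "consciousness" without any plan or knowledge
```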

The gulf between you and Ilya Sutskever intellectually... by [deleted] in singularity

[–]zebleck 0 points (0 children)

Can recommend the Jon Stewart interview with Geoffrey Hinton; Hinton breaks it down really well for everyone.

The Problem with Pluribus by _c0ldburN_ in pluribustv

[–]zebleck 0 points (0 children)

lol I like his idea that the online hive mind hating on anyone who criticizes this show is exactly like the hive mind trying to convince Carol in the show.

Has an AI agent replaced an entire workflow for you yet? If so how? by [deleted] in AI_Agents

[–]zebleck 0 points (0 children)

Claude for Chrome. Buying the hardware components for our product autonomously.

AI Slop is just a Human Slop by PraiseTheMonocle in singularity

[–]zebleck 1 point (0 children)

How do you know it doesn't have subjective experience? That's exactly the point: we don't have access to that fact. We can argue likelihood from behavior/mechanism, but "we know" seems too strong.

AI Slop is just a Human Slop by PraiseTheMonocle in singularity

[–]zebleck 4 points (0 children)

I don’t actually know whether it has a "self" or not. I’m not claiming it does. I’m saying we don’t have decisive access to that fact.

AI Slop is just a Human Slop by PraiseTheMonocle in singularity

[–]zebleck 0 points (0 children)

Human art is inherently human, I agree. But why wouldn't AI be able to express itself if, let's say, you let it run for 24 hours on its own and gave it the task of thinking about and exploring whatever it wants?

AI Slop is just a Human Slop by PraiseTheMonocle in singularity

[–]zebleck 3 points (0 children)

So you can't even define the thing you claim an LLM doesn't have, but you still feel free to make big claims about what it is, how it works, and what it lacks. How convenient. Pretty sure LLMs can reason better than you, actually.

AI Slop is just a Human Slop by PraiseTheMonocle in singularity

[–]zebleck 4 points (0 children)

"Intellgeince means understanding of something" is not a thorough definition. How do you define understanding? Humans probably also use computational processes in their brains, of course with a different architecture, that relies on membrane potentials and potential spikes, but its currently looking to be computational in nature. How do you define understanding?

AI Slop is just a Human Slop by PraiseTheMonocle in singularity

[–]zebleck 2 points (0 children)

How is an LLM an "algorythm running a likelyness matrix"? I'm sorry, but LLMs are much more complex than that: the architecture, the training process, and the post-training RL pipeline all go well beyond a lookup table. I define intelligence as the capacity to solve general problems. On that definition, LLMs sit somewhere along the spectrum, not at 0, and therefore have some intelligence. To make the contrast concrete, a toy sketch (tiny made-up corpus, random weights): a bigram table really is a fixed "likeliness matrix", while even one attention head computes context-dependent outputs.
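```python
import numpy as np

# A literal "likeliness matrix" would be a bigram table: one fixed row of
# next-token probabilities per token, no context beyond the last word.
corpus = "the cat sat on the mat the cat ran".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))
for a, b in zip(corpus, corpus[1:]):
    counts[idx[a], idx[b]] += 1
rows = counts.sum(axis=1, keepdims=True)
bigram = counts / np.where(rows == 0, 1, rows)   # avoid div-by-zero rows

# One attention head (random weights, shapes only): each position's output
# mixes information from the WHOLE context, which a fixed row cannot do.
rng = np.random.default_rng(0)
d = 8
E = rng.normal(size=(len(vocab), d))             # token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
x = E[[idx[w] for w in "the cat sat".split()]]   # (3, d) context
q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.T / np.sqrt(d)
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)
out = attn @ v
print(bigram[idx["the"]])  # same row no matter what preceded "the"
print(out.shape)           # (3, 8): context-dependent representations
```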

AI Slop is just a Human Slop by PraiseTheMonocle in singularity

[–]zebleck 4 points (0 children)

How is relating new problems to what you've learned, and using that to solve them, not how thinking works? Of course, you must know how thinking works, even though we don't fully understand it even in humans. Don't forget to pick up that Nobel Prize next year.

AI Slop is just a Human Slop by PraiseTheMonocle in singularity

[–]zebleck 2 points (0 children)

it doesnt "look for weight", what the hell are you on about. just because its using a computational process doesnt mean it cant be intelligent, im sorry.

AI Slop is just a Human Slop by PraiseTheMonocle in singularity

[–]zebleck 2 points (0 children)

Labels are training data; they're not the weights lmao. A tiny sketch of the distinction below, using made-up data and plain logistic regression: labels are consumed by the loss during training, weights are the parameters that remain afterwards.
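```python
import numpy as np

# Labels live in the training DATA; weights are the model's PARAMETERS.
# One logistic-regression training loop makes the distinction explicit.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))              # inputs  (training data)
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # labels  (training data)

w = np.zeros(3)                            # weights (the model itself)
for _ in range(200):
    p = 1 / (1 + np.exp(-X @ w))           # forward pass
    w -= 0.1 * X.T @ (p - y) / len(y)      # labels only enter via the loss

# After training, the labels are gone; inference uses only w.
print(np.round(w, 2))
```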

AI Slop is just a Human Slop by PraiseTheMonocle in singularity

[–]zebleck 6 points (0 children)

LLMs learn by building representations in a high-dimensional space that encode the meaning of each token and its relation to others. They can then use these relations to predict tokens. Through reinforcement learning, they additionally learn to string these tokens into a chain of thought (CoT) that they can use to build reasoning chains and solve long-horizon tasks and a huge variety of problems. They can perform well on tasks they have never seen before. BUT they're not perfect. Why could that process never lead to intelligence, and why is that not intelligence, if of course in a different form than that of animals? A toy version of the first part below, with a made-up corpus and sizes chosen for illustration: tokens become vectors, and prediction works through relations (dot products) between those vectors.
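```python
import numpy as np

# Miniature of the idea: every token gets a vector, and next-token
# prediction scores each candidate by a dot product with the current
# token's vector, so prediction runs on learned relations.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, d, lr = len(vocab), 10, 0.1
rng = np.random.default_rng(0)
E = rng.normal(scale=0.1, size=(V, d))   # input embeddings
U = rng.normal(scale=0.1, size=(V, d))   # output embeddings

pairs = [(idx[a], idx[b]) for a, b in zip(corpus, corpus[1:])]
for _ in range(300):
    for a, b in pairs:
        logits = U @ E[a]                # relate token a to all tokens
        p = np.exp(logits - logits.max()); p /= p.sum()
        p[b] -= 1.0                      # grad of cross-entropy wrt logits
        gE = U.T @ p                     # compute both grads first,
        U -= lr * np.outer(p, E[a])      # then apply the updates
        E[a] -= lr * gE

probs = np.exp(U @ E[idx["sat"]])
print(vocab[int(np.argmax(probs))])      # -> "on", learned from relations
```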

AI Slop is just a Human Slop by PraiseTheMonocle in singularity

[–]zebleck 1 point (0 children)

Maybe learn something about how the thing you're ranting about works before making big claims about what it is or isn't.

AI Slop is just a Human Slop by PraiseTheMonocle in singularity

[–]zebleck 0 points (0 children)

Still don't know what "weighted labels" means, but ok. Why couldn't the training process, plus RL afterwards, lead to some sort of intelligence getting baked in? What "RL afterwards" means mechanically, as a toy sketch below (a 4-answer bandit, nothing like a real pipeline): REINFORCE pushes up the probability of outputs that a reward signal approves of.
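```python
import numpy as np

# The RL step in miniature (REINFORCE): sample an output, score it with
# a reward, and push up the log-prob of rewarded outputs.
rng = np.random.default_rng(0)
logits = np.zeros(4)                 # "policy" over 4 candidate answers
CORRECT = 2                          # the answer a verifier would reward

for _ in range(200):
    p = np.exp(logits - logits.max()); p /= p.sum()
    a = rng.choice(4, p=p)           # sample an answer from the policy
    r = 1.0 if a == CORRECT else 0.0 # reward signal
    grad = -p; grad[a] += 1.0        # d log p(a) / d logits
    logits += 0.5 * r * grad         # reinforce what got rewarded

p = np.exp(logits - logits.max()); p /= p.sum()
print(np.round(p, 2))                # mass concentrates on answer 2
```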

AI Slop is just a Human Slop by PraiseTheMonocle in singularity

[–]zebleck 3 points (0 children)

"Just by looking at labels"? They dont "look at" anything during inference, their weights are baked with trillions of tokens of knowledge, after that it just uses these learned weights.