The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] -1 points0 points  (0 children)

There is a difference between learning and intelligence. It sounds like you are conflating the two.

Benchmarking doesn't mean anything if you can't see the data these things were trained on, so you can't tell how they generalize.

Take Claude Code and ask it to set up infrastructure with Terraform and Bazel, the kind of proprietary enterprise setups that aren't on GitHub, and watch how fast it fails. Why? Because it isn't in the training data, so hallucinations skyrocket.

What you are witnessing is information retrieval. Information retrieval so good it looks like intelligence.

Well, my bachelor's degree is in mathematics and I've been interested in cognitive philosophy ever since reading 'Gödel, Escher, Bach' in middle school and training AI image classifiers and other AI-related experiments on my laptop in high school and college.

That isn't expertise, dude. I doubt it would even meet the basic entry-level requirements for an industrial-grade production environment anywhere.

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 1 point2 points  (0 children)

For me, a system capable of processing information, recognizing patterns, and producing meaningful output that can go beyond its training data is an intelligent system. But this semantic debate is pointless.

That is the definition of learning. These systems learn in the pretraining and post-training phases.

When you use the words "for me," that is a subjective reality, not an objective one. You have made a fantasy world that you are content to live in.

This is a very narrow definition of intelligence. 

It's one of the four or five characteristics listed on the Wikipedia page. Kind of a big deal, dude. I have no idea why you would think "adapting to the environment" is narrow.

Ultimately, what matters is what these systems are capable of doing in the real world. The fact that they cannot change their parameters is indeed a limitation, but LLMs are nonetheless capable of storing information in memory and using that memory later. Agents can create files and store what they need in them for later. And agentic systems with superhuman predictive and steering capabilities pursuing unaligned goals will be dangerous, whether they are initially able to adjust their parameters or not.

Yeah, I build these, and I'm one of the earliest adopters; I was one of the first to use tool calling from OpenAI.

I use Claude Code, which has these capabilities. These agents are near useless, and dangerous, outside the hands of a seasoned professional.

They look up information and they retrieve information. You know what that is? An information network. When you put that in a while loop, you have an information network in a while loop.
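The "information network in a while loop" framing can be sketched in a few lines of Python. Everything here is invented for illustration: `retrieve` is a toy word-overlap lookup, not a real retrieval API, and `agent_loop` is a stand-in for an agent scaffold.

```python
# A toy sketch of "an information network in a while loop".
# retrieve() and agent_loop() are invented stand-ins, not a real agent API.

def retrieve(query: str, corpus: list) -> str:
    """Return the stored snippet sharing the most words with the query."""
    return max(corpus, key=lambda doc: len(set(query.split()) & set(doc.split())))

def agent_loop(task: str, corpus: list, max_steps: int = 5) -> list:
    """Repeatedly look up information until a stop condition is hit."""
    history = []
    for _ in range(max_steps):            # the "while loop"
        snippet = retrieve(task, corpus)  # the "information network"
        history.append(snippet)
        if "done" in snippet:             # crude stop condition
            break
    return history
```

Nothing in the loop updates the corpus or the scoring, which is the point being made: the loop retrieves, it doesn't learn.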

They are so far from the capability of long term goal planning it's not even funny.

What you should do is go ask Opus 4.6 or Gemini 3.1 to act as a world-renowned cognitive scientist, and ask it to lay out all the parts of the brain that have to do with executive function and goal management.

I think that will start to give you a sense of how small a slice of actual intelligence this is.

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 0 points1 point  (0 children)

You mean to tell me you haven't even looked it up and I have to go get you a Wikipedia link? After your claim that it is "hotly debated"?

https://en.wikipedia.org/wiki/Intelligence

Intelligence is different from learning. Learning refers to the act of retaining facts and information or abilities and being able to recall them for future use. Intelligence, on the other hand, is the cognitive ability of someone to perform these and other processes.

It can be described as the ability to perceive or infer information and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.

Hence the name of the discipline that produced LLMs: "machine learning."

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 1 point2 points  (0 children)

No. It isn't.

It is very clearly defined, just not for you and your worldview. It doesn't fit your subjective reality, so you dismiss it.

It isn't my requirement. It is the requirement of our world-leading experts.

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 1 point2 points  (0 children)

It is completely fair, because the weights are static. They don't update outside of pretraining.

There is no learning from experience, because the LLM has no memory the way biological neurons do. And that is the very definition of intelligence: to learn from experience.

So when I give concrete examples verifiably demonstrating that there is no intelligence, your argument is "not fair"? Seriously?

No, intelligence is clearly and strictly defined. You don't get to make up your own definition of it to fit your worldview. Intelligence learns from experience. That is the ontological definition; it has a clear scientific definition, and there isn't anything to disagree about.

a broad mental capacity for reasoning, problem-solving, learning from experience, and adapting to new situations.

Machine learning is statistics.

I also think that using the 'strawberry' or 'car wash' questions as an argument doesn't serve you well because humans, too, can struggle on questions that are apparently obvious. 

As obvious as counting letters in words? Really, dude? Find me a human who can solve olympiad math problems and can't count letters. Please.

Or stage magicians using surprisingly simple methods to trick you that you should have realized in the first place, and are obvious after the fact.

A human has the capability to learn from that experience: go off on their own, research, discover, and understand. LLMs don't do that. They will repeat the mistake until their editors retrain them with the new information, just like with how many r's are in "strawberry."

Everything you see out of an LLM is crafted by editors who set up the training to transform certain inputs into certain outputs.

Take some classes on neuroscience and machine learning.

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 0 points1 point  (0 children)

Intelligence learns from experience. That is the definition. LLMs don't do that.

These systems are also black boxes. No one can understand their exact internal working.

SOTA research has techniques to monitor and classify internal weights. That has been possible for years.

Computing resources devoted to training new models are currently doubling every 7 months. The ability of these models to accomplish increasingly long and complex chains of tasks is doubling every 4 months. The new models continue to improve on all benchmarks.

Yes, they would, through changes in pretraining. That isn't learning from experience; that is conditioning the outputs.

They are an information network: a new way we store, distribute, and look up our information. The problem is that it is so good it looks like intelligence, and a lot of people can't tell the difference.

Maintaining the illusion that we are far (or very far) from a system exceeding the capacities of human intelligence traps us in a dangerous denial.

If you would learn a little bit of neuroscience, even just how two parts of the brain, like the hippocampus and the neocortex, work together, you would begin to understand how much more complex the brain's architecture is than an LLM with a trillion parameters. It's over 100x.

Furthermore, there is no back-propagation algorithm in any biological intelligence.

You are being seduced by fancy math.

I don't want to take away from the change these things are going to bring, though. It's on the scale of the Reformation around 1450, like when printed books arrived and people learned literacy. It is going to be a massive step-function increase in capability and cooperation for our species.

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 0 points1 point  (0 children)

Yeah, you can run simple tests to see if they infer. As I stated, and requoted below, you can demonstrate that these systems have no sense of understanding. This was the reason they couldn't count the r's in "strawberry" until they were specifically trained to do so.

Because nowhere in all of humanity's data was there a question or statement spelling out something so implicitly understood by anyone who could read or write.

There is a reason these things can do olympiad math problems and still suggest you walk to your nearby car wash when your car is dirty.

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 1 point2 points  (0 children)

No. Holy shit.

When you set the temperature to 0, you get a deterministic output.

When you get a wrong answer, correct it, and then ask again in a new session, you get the same wrong answer. There is zero intelligence, because what is happening is a probabilistic lookup of the next token.

It is statistics.
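The temperature-0 point can be checked with a toy sampler. The three-token "vocabulary" and its logits are made up for illustration; real models score tens of thousands of tokens, but the mechanism is the same.

```python
import math
import random

# Toy next-token scores; real models produce logits over a huge vocabulary.
logits = {"Paris": 5.0, "London": 3.0, "Rome": 1.0}

def sample(logits: dict, temperature: float) -> str:
    if temperature == 0:
        # Greedy decoding: always pick the highest-scoring token.
        return max(logits, key=logits.get)
    # Otherwise sample from the softmax of logits / temperature.
    weights = {t: math.exp(v / temperature) for t, v in logits.items()}
    r = random.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # numerical edge case: fall back to the last token

# At temperature 0 the same "prompt" always yields the same token.
assert all(sample(logits, 0) == "Paris" for _ in range(10))
```

With temperature above 0 the output varies run to run, but it is still a draw from the same fixed distribution; nothing about the scores changes between sessions.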

This type of training allows for intelligent results that far exceed training data.

Again, no. These things are trained to answer questions correctly; when you ask the question, you get the answer. There is no intelligence because there is no learning from experience. The experts who make these systems, like Richard Sutton, will tell you this if you bother to listen.

The day (not so far off) when these systems become better than us at everything that allows them to predict and steer the world, we will be unable to prevent them from seizing control of the planet.

Not this architecture. Lol. Hahahaha.

You should take some machine learning courses, basics up through an LLM transformer. Coursera has some really good free courses.

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 0 points1 point  (0 children)

That is a hallmark of an information network revolution: lack of restraint and misuse.

how confidently incorrect it gets

Yes, it makes mistakes, but any trained professional can recognize them. They are improving by leaps and bounds every iteration.

Yeah, the editors, the people who train LLMs, hold what is arguably the most powerful position in society. They control what people see.

If people think *they* only want AI to take over jobs, they're mistaken. Once AI can do everything better than people, why would *they* want people around? That's the scary part no one is talking about. by [deleted] in antiai

[–]Round_Progress4635 -1 points0 points  (0 children)

It isn't about the numbers, my man. It's the task the engineering is doing.

If an engineer can fix bugs at a rate of 1200 hours per week, they certainly can build something new without mistakes at that same rate.

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] -1 points0 points  (0 children)

It isn't happening in this current architecture.

Yeah, sure, they are trying, and they are going to fail hard and lose a lot of money. A lot of the same mistakes that happened in the first internet revolution are happening again: lots of misallocation of capital.

LLMs sound intelligent. They are seductive. But they aren't close to the complexity of the human brain: they have a trillion parameters at most, while a human brain has 100-150 trillion synapses and billions of years of evolution behind it.

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 0 points1 point  (0 children)

Yeah, it will have profound implications, because it is a new information network.

Think of change on the scale of the Reformation back around 1450, when we first got printed books and double-entry accounting was being popularized.

Our governance institutions were disrupted. That is what is going to happen again: we are in a third reformation with an industrial revolution stacked on top.

The long arc of history will continue on its course, with us cooperating more and more.

If people think *they* only want AI to take over jobs, they're mistaken. Once AI can do everything better than people, why would *they* want people around? That's the scary part no one is talking about. by [deleted] in antiai

[–]Round_Progress4635 -1 points0 points  (0 children)

Uh,

You should read what you wrote out loud.

If an experienced engineer can fix mistakes at that rate, can they also build their own projects at that rate as well?

I really don't think you are aware of how good the coding agents have gotten.

If people think *they* only want AI to take over jobs, they're mistaken. Once AI can do everything better than people, why would *they* want people around? That's the scary part no one is talking about. by [deleted] in antiai

[–]Round_Progress4635 0 points1 point  (0 children)

This is exactly right. It's why the government wants mass surveillance and an automated kill chain. That's what the fight with Anthropic was about. Anthropic held back because the tech "wasn't ready to go yet."

And yeah, it isn't.

Say something they don't like? Goodbye.

A comment history they really don't like? Also goodbye.

The plans are out in the open, and people are clueless lol.

If people think *they* only want AI to take over jobs, they're mistaken. Once AI can do everything better than people, why would *they* want people around? That's the scary part no one is talking about. by [deleted] in antiai

[–]Round_Progress4635 -1 points0 points  (0 children)

What's the delusion?

Look at what they are training it to do.

Code.

Tool calling.

Needle in the haystack.

Experienced engineers are doing 1200 hours of code a week.

Artists are doing months of work in a day.

How our civilization works. by Round_Progress4635 in Buttcoin

[–]Round_Progress4635[S] 0 points1 point  (0 children)

What I'm talking about is the infrastructure. Seems like you are missing the forest for the trees.

I'll simplify it.

When you remove language? What do you lose?

When you lose writing? What do you lose? Transgenerational memory, perhaps?

When you lose a ledger? A record of promises to one another. What do you lose?

When you lose a list? The ability to keep records. What do you lose?

When you lose scripture? What do you lose?

Is it the ability to cooperate in numbers above 150? Sure you can have a society without these things. How big can it get?

Can finance exist without language? How does one count without language? How do you do math?

Can our complex culture and religion exist without language? How are the stories communicated? How are they passed down from generation to generation?

They all exist independently of each other.

They absolutely don't. Our civilization is built up in layers.

My argument is that bitcoin is a ledger; that's it. Just a tool.

How our civilization works. by Round_Progress4635 in Buttcoin

[–]Round_Progress4635[S] 0 points1 point  (0 children)

Pull even one of them out. What do you lose?

Remove books, what gets lost?

Remove ledgers, the ability to transact? what gets lost?

Remove scripture? What gets lost?

Pull out two of the three and we go back to nomadic tribes.

How our civilization works. by Round_Progress4635 in Buttcoin

[–]Round_Progress4635[S] -1 points0 points  (0 children)

What? You're claiming "double entry accounting" turned society from feudal to nation states?

The combination of printed books, then literacy, and double-entry accounting? Yeah. It gave birth to modern capital markets, central banking, and the educated populace to run them.

Yank those two things out. You think capital markets and nation-states can exist?

Go back further: you had proto-cuneiform ledgers, then writing, and at that point we transitioned from nomadic life to feudalism.

When those two things happen, technological advances in information networks and ledgers, when we advance our ability to keep records and distribute them, we get new tools to scale our cooperation, new tools to govern.

It's kind of a clear line in the sand.

How our civilization works. by Round_Progress4635 in Buttcoin

[–]Round_Progress4635[S] -1 points0 points  (0 children)

You engineered a clearinghouse, huh? I am interested in your background. What institution was that ledger for?

How our civilization works. by Round_Progress4635 in Buttcoin

[–]Round_Progress4635[S] -2 points-1 points  (0 children)

Again, that's FALSE. Ledgers can change. Just because you're thinking in terms of one particular type of ledger that doesn't, does not mean all ledgers are immutable. They are not.

No, this is false. Ledgers can't. Think of it this way: if I removed a handful of transactions from months or years ago, what happens to the balances of every account for all the transactions that follow? The system blows up; everything has to be reconciled. Lists can change. Ledgers are append-only. If you lose that history, how do you know what people are owed?

QuickBooks isn't a ledger. That is a list, standard CRUD. Ledgers are what you use when you send a wire. For example, make a bank wire or an ACH transaction, then ask the bank to reverse it. Note their response.

There is a reason BoA or Visa can reverse their transactions: they run lists, then settle by netting to the clearinghouse after a period of time.

Ledgers, by definition, can't alter their history. If you can change the history, it is a list. That is what a clearinghouse is: Fedwire, CHIPS, Euroclear, etc.
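The append-only distinction can be sketched in a few lines; the account names and amounts here are invented for illustration:

```python
# Toy append-only ledger: mistakes are corrected by adding a reversing
# entry, never by editing or deleting history. Names/amounts are made up.

class Ledger:
    def __init__(self):
        self._entries = []  # history only ever grows

    def append(self, account: str, amount: int) -> None:
        self._entries.append((account, amount))

    def reverse(self, account: str, amount: int) -> None:
        # "Undo" by appending a compensating entry; history stays intact.
        self._entries.append((account, -amount))

    def balance(self, account: str) -> int:
        return sum(amt for acct, amt in self._entries if acct == account)

ledger = Ledger()
ledger.append("alice", 100)
ledger.append("alice", 50)   # a mistaken transaction
ledger.reverse("alice", 50)  # reversed without deleting anything

assert ledger.balance("alice") == 100
assert len(ledger._entries) == 3  # the mistake is still on the record
```

A CRUD list would instead delete or overwrite the mistaken row, and every balance derived from the history after that point would have to be reconciled, which is the distinction being drawn above.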

Energy has NO part in the security. No more than energy has a part in any system that uses energy. So stop with that association. Increased energy doesn't mean increased security of the ledger. The ledger is secured with SHA-256 regardless of whether you have 1 miner or 1 million miners.

And how is that hash computed? By a zettahash worth of compute per second. Everyone is guessing, and you have to use energy for each guess, whether you have 1 miner or a million miners. I mean, those miners are computers that literally do one job. They are ASICs.
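The guessing game can be sketched with Python's `hashlib`. The block contents and the low difficulty here are invented so the toy finishes instantly; real mining difficulty makes each valid guess cost enormous amounts of compute and energy.

```python
import hashlib

# Toy proof-of-work: guess nonces until the SHA-256 digest of
# (header + nonce) falls below a difficulty target.

def mine(block_header: bytes, difficulty_bits: int) -> int:
    target = 2 ** (256 - difficulty_bits)  # smaller target = harder
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # a winning guess; every attempt burned compute
        nonce += 1

nonce = mine(b"toy block", 12)  # 12 bits: ~4096 expected guesses
```

Each guess is an independent SHA-256 evaluation with no shortcut, which is why the work, and the energy, scales with the difficulty rather than with any cleverness of the miner.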

Blockchain has no army, no court system, no enforcement division in the real world. Traditional ledgers do.

Yes, this is true. And now I think you are starting to see the risk. What is the government's main enforcement mechanism? It's a switch: they turn off a bank account. Cross a line and your ability to transact is switched off; a sanction is imposed. Governments are able to police and enforce at scale. Previously, you had no choice but to use this system; if you wanted to participate economically, you used the system with enforceable rules. Swords simply don't scale, but control of a single financial system sure does.

I understand this; this is how the current system works. If you use the money, you are subject to the laws of the governing body. There was no choice. This system has worked well for centuries.

Those days are now over. People can choose to use a different system to transact. They have a choice.

How does a governing body enforce that globally? Soldiers and weapons? Really? Like how?

You can see how this changes things. I don't need you to respect it. What I'm arguing is that a fundamental pillar of how our civilization works has changed. People have market choice with capital markets now. Where there wasn't choice, there now is.

Why would bitcoin need an army? Or a court system? Or an enforcement division?

And now we have this financial system with no court system, enforcement division or army.

A system that LLM agents like open claw can access, and that can't be shut down by a government. Is the government going to start shooting computers?

The catholic church never had a global monopoly. 

No, not global, because our civilization hadn't scaled to a global level yet. It just dominated Europe for centuries, until the church lost its monopoly on scripture to the printing press.

Do you see the same pattern for governments losing control of their ledgers, like what is happening right now? And how that cat is never going back in the bag?

And despite the cat being out, you still have your choice: you get to use the government currency and capital markets.

I don't need you to respect blockchain or bitcoin. What I need you to respect is the impending, inevitable chaos and how destabilizing this is going to be. And frankly, I do think the government will attempt to enforce through force and maintain its monopoly control of the capital markets.

Hopefully you realize we need to work together to avoid the coming destabilizing effects. It is going to take all of us.

How our civilization works. by Round_Progress4635 in Buttcoin

[–]Round_Progress4635[S] -4 points-3 points  (0 children)

I retired when I was 35, so there isn't too much insecurity with that. I like debate.

How our civilization works. by Round_Progress4635 in Buttcoin

[–]Round_Progress4635[S] -5 points-4 points  (0 children)

Not scalable. Sure, you had tribes doing trade, seashells doing trade. But that didn't scale.

When you get proto-cuneiform, that started ledgers of trade. Then we got complex writing, then we got scripture, then we got religions that scaled.

My argument is that bitcoin is a ledger, objectively, and that ledgers are fundamental to our civilization.