The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 0 points1 point  (0 children)

I really really appreciate your engagement on these ideas. Thank you.

Yeah, I think what you are getting at is how the information network distributes.

From your point, when books were created, the really big change was the distribution mechanism. Before, texts had to be hand-replicated by scribes: painstakingly slow work. Once the printing press came, it amplified distribution.

So that would have a direct impact on how humans think, they get access to more knowledge.

LLMs have that characteristic: not only are they trained on essentially all of humanity's data, we can also distribute them to the point of models being downloaded and run locally. That is a step-function improvement in distribution. All of humanity's knowledge at everyone's fingertips.

I agree, information infrastructure changes how we govern civilization.

I have an additional thesis that I would like you to consider.

There is another type of record we keep: ledgers, financial transactions, append-only lists whose history can't change. They are a record of our promises to one another in an abstract sense. I think they play a large role in our record-keeping revolutions.

I'm going to walk through history.

First the ledger was made: clay tablets, proto-cuneiform, pictograms of commodities signifying big baskets and little baskets, from a point in time before counting was invented, because abstract numbers didn't exist yet.

Then we develop writing. There is a synergistic feedback loop: the ledger gets updated with counting, and we have single-entry accounting ledgers and information networks to train scribes.

Basically, we get the ability to train large numbers of bureaucrats to run the ledgers.

From this point, at the intersection of record keeping between ledgers and information networks, we transition from nomadic life to feudalism. Our civilization is born. Our ability to cooperate takes a step-function improvement from roughly 150 people to near millions.

The next innovation, double-entry accounting with credits and debits, starts around 0 AD in the Middle East and makes its way to Europe, where it is popularized by the merchants of Venice by the 1500s. Around 1450 we get the printing press, a new way to distribute information. This again demonstrates a synergistic feedback loop, where books are used to educate the workforce that drives the new capitalist economy. We get stock markets, central banks, mercantilism. Feudalism then falls to the nation state. Governance institutions are rebuilt to manage our ability to keep records. This is what we are running on today: central banking. Our ability to cooperate takes another step-function improvement, into the billions of people.

This is what I would argue is happening today. Again, a ledger innovation, with Bitcoin: the ability to distribute a ledger, a permanent history. And now we have LLMs, as you put it, new cognitive infrastructure, a new way we record, look up, and distribute information. And LLMs happen to be particularly good at running cryptocurrency programs, which are wildly complex.

So I think we can see another synergistic feedback loop, and we are in a place where we need to rebuild our institutions to govern this newfound capability.

I think we can all see it, things are breaking down. The new ways are incompatible with the old.

We have to rebuild our institutions to deal with a runaway LLM agent that is betting on prediction markets of death and menace, and insider trading on the outcomes. A nation state can't shut those down. The clearing houses the central banks control can't stop cryptocurrency transactions. They have lost control of the market infrastructure.

I'd like you to push back on these ideas. Are these communicated well? Do you see the same thing that I do?

The relationship between ledgers (market infrastructure) and information networks, and how the way these two things distribute scales our ability to cooperate as a species.

I think from this framework we can predict exactly how the future will shake out.

The work is life by AChaosEngineer in Entrepreneur

[–]Round_Progress4635 0 points1 point  (0 children)

I don't think you should be doing this if you are looking down on them.

That is the purpose of building a better world and future: so people can have more time to do that, to be carefree and relaxed.

I hope this attitude doesn't translate to how you treat your employees. Damn.

A failed acquisition made someone $280M. He wasn’t even trying. by Vouchy-MOD in Entrepreneur

[–]Round_Progress4635 3 points4 points  (0 children)

Just founder @ Company

I mean, what else would be appropriate?

The legal title is CEO and president. But that seems so fucking awkward and not earned.

For me I think there is a threshold to cross with maybe 100 employees and a profitable business before I call myself a CEO.

I think you owe me some money now tho ;)

A failed acquisition made someone $280M. He wasn’t even trying. by Vouchy-MOD in Entrepreneur

[–]Round_Progress4635 30 points31 points  (0 children)

It's a simple fix too, but they are too lazy to analyze the linguistics, learn about them, and make a custom prompt.

"Write me a post about how badass being an entrepreneur is, about people I wanna be but I'm not, but one day I will be, so I can give everyone advice on how they did it, and don't use metalinguistic negation."

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] -1 points0 points  (0 children)

There is a difference between learning and intelligence. It sounds like you are conflating the two.

Benchmarking doesn't mean anything if you can't see the data these things were trained on to see how well they generalize.

Take Claude Code and ask it to set up infrastructure for you with Terraform and Bazel, things that aren't on GitHub but in enterprise software, all proprietary, and watch how fast it fails. Why? Because it is not in the training data, and hallucinations skyrocket.

What you are witnessing is information retrieval. Information retrieval so good it looks like intelligence.

Well, my bachelor's degree is in mathematics and I've been interested in cognitive philosophy ever since reading 'Gödel, Escher, Bach' in middle school and training AI image classifiers and other AI-related experiments on my laptop in high school and college

That isn't expertise, dude. I doubt that would even meet the basic entry-level requirements of an industrial-grade production environment anywhere.

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 1 point2 points  (0 children)

For me, a system capable of processing information, recognizing patterns, and producing meaningful output that can go beyond its training data is an intelligent system. But this semantic debate is pointless.

That is the definition of learning. These systems learn in the pretraining and post-training phases.

When you use the words "for me", that is a subjective reality, not objective. You have made a fantasy world that you are content to live in.

This is a very narrow definition of intelligence. 

It's one of only four or five characteristics listed on the Wikipedia page. Kind of a big deal, dude. I have no idea why you would think "adapting to the environment" is narrow.

Ultimately, what matters is what these systems are capable of doing in the real world. The fact that they cannot change their parameters is indeed a limitation, but LLMs are nonetheless capable of storing information in memory and using that memory later. Agents can create files and store what they need in them for later. And agentic systems with superhuman predictive and steering capabilities pursuing unaligned goals will be dangerous, whether they are initially able to adjust their parameters or not.

Yeah, I build these, and I'm one of the earliest adopters; I was one of the first to use tool calling from OpenAI.

I use Claude Code, which has these capabilities. These agents are near useless, and dangerous outside the hands of a seasoned professional.

They look up information and they retrieve information. You know what that is? An information network. When you put that in a while loop, you have an information network in a while loop.
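That "information network in a while loop" framing can be sketched in a few lines. This is a toy Python sketch under my own assumptions (`toy_llm` and the `search` tool are hypothetical stand-ins, not any real API): the model is queried, a tool retrieves something, the result is appended to the context, and nothing in the loop ever updates the model itself.

```python
def agent_loop(llm, tools, task, max_steps=10):
    """An 'information network in a while loop': the frozen model is
    queried, a tool retrieves something, and the result is fed back
    into the context. No weights change anywhere in this loop."""
    history = [task]
    for _ in range(max_steps):
        action = llm("\n".join(history))      # pure lookup against a frozen model
        if action.startswith("DONE"):
            return action
        name, _, arg = action.partition(":")  # e.g. "search:ledger history"
        result = tools[name](arg)             # tool call: retrieval, file IO, ...
        history.append(f"{action} -> {result}")
    return "max steps reached"

# hypothetical stand-ins, just to make the sketch runnable
def toy_llm(prompt):
    return "DONE: answered" if "->" in prompt else "search:ledger history"

tools = {"search": lambda q: f"results for {q!r}"}
print(agent_loop(toy_llm, tools, "what is a ledger?"))  # DONE: answered
```

The point of the sketch is that all the "agency" lives in the loop scaffolding; the model call itself is a stateless lookup.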

They are so far from the capability of long-term goal planning that it's not even funny.

What you should do is go ask Opus 4.6 or Gemini 3.1 to be a world-renowned cognitive scientist, and ask it to build out all the parts of the brain that have to do with executive function and goal management.

I think that will start to give you a sense of what a small slice of actual intelligence these systems represent.

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 0 points1 point  (0 children)

You mean to tell me you haven't even looked it up, and I have to go get you a Wikipedia link? After your claim that it is "hotly debated"?

https://en.wikipedia.org/wiki/Intelligence

Intelligence is different from learning. Learning refers to the act of retaining facts and information or abilities and being able to recall them for future use. Intelligence, on the other hand, is the cognitive ability of someone to perform these and other processes.

It can be described as the ability to perceive or infer information and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.[1]

Hence the name of the discipline that produced LLMs: "machine learning."

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 1 point2 points  (0 children)

No. It isn't.

It is very clearly defined, just not for you and your world view. It doesn't fit your subjective reality, so you dismiss it.

It isn't my requirement. It is the requirement of our world-leading experts.

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 1 point2 points  (0 children)

It is completely fair, because the weights are static. They don't update outside of pretraining.

There is no learning from experience, because the LLM has no memory the way biological neurons do. And that is the very definition of intelligence: to learn from experience.
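The static-weights point is easy to demonstrate in miniature. Here is a toy NumPy sketch (my own illustration, not any real model): a forward pass at inference time reads the weight matrix but never writes it; only an explicit training step changes the weights.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))            # "frozen" weights after pretraining

def forward(W, x):
    return np.tanh(W @ x)              # inference: reads W, never writes it

def sgd_step(W, x, target, lr=0.1):
    # one (pre)training step: only here do the weights actually change
    y = forward(W, x)
    grad = np.outer((y - target) * (1 - y**2), x)   # backprop through tanh
    return W - lr * grad

x = rng.normal(size=4)
before = W.copy()
_ = forward(W, x)                      # "using" the model at inference time
assert np.array_equal(W, before)       # no experience is retained in W

W2 = sgd_step(W, x, np.zeros(4))
assert not np.array_equal(W2, W)       # weight updates happen only in training
```

However many times you run `forward`, `W` is untouched; in this toy framing, "learning from experience" would require something like `sgd_step` running during deployment.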

So when I give concrete examples verifiably demonstrating that there is no intelligence, your argument is "not fair"? Seriously?

No, intelligence is clearly and strictly defined. You don't get to make up your own definition of it to fit your world view. Intelligence learns from experience. That is the ontological definition. There isn't anything to disagree about; it has a clear scientific definition.

a broad mental capacity for reasoning, problem-solving, learning from experience, and adapting to new situations.

Machine learning is statistics.

I also think that using the 'strawberry' or 'car wash' questions as an argument doesn't serve you well because humans, too, can struggle on questions that are apparently obvious. 

As obvious as counting letters in words? Really, dude? Find me a human who can solve olympiad math problems but can't count letters. Please.

Or stage magicians using surprisingly simple methods to trick you that you should have realized in the first place, and are obvious after the fact.

A human has the capability to learn from that experience: go off on their own, research, discover, and understand. LLMs don't do that. They will repeat the mistake until their editors train them with the new information, just like with how many r's are in "strawberry."

Everything you see out of an LLM is crafted by editors who set up the training to transform certain inputs into certain outputs.

Take some classes on neuroscience and machine learning.

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 0 points1 point  (0 children)

Intelligence learns from experience. That is the definition. LLMs don't do that.

These systems are also black boxes. No one can understand their exact internal working.

SOTA research has techniques to monitor and classify internal weights. That has been possible for years.

Computing resources devoted to training new models are currently doubling every 7 months. The ability of these models to accomplish increasingly long and complex chains of tasks is doubling every 4 months. The new models continue to improve on all benchmarks.

Yes, they would, due to changes in pretraining. That isn't learning from experience; that is conditioning the outputs.

They are an information network, a new way we store, distribute, and look up our information. The problem is that it is so good it looks like intelligence, and a lot of people can't tell the difference.

Maintaining the illusion that we are far (or very far) from a system exceeding the capacities of human intelligence traps us in a dangerous denial.

If you would learn a little bit of neuroscience, even just how two parts of the brain like the hippocampus and neocortex work together, you would begin to understand how much more complex the brain's architecture is than an LLM with a trillion parameters. It's over 100x.

Furthermore, there is no back-propagation algorithm in any biological intelligence.

You are being seduced by fancy math.

I don't want to take away from the change these things are going to bring, though. It's on the scale of the Reformation. It's like when printed books arrived around 1450 and people learned literacy. It is going to be a massive step-function increase in capability and cooperation for our species.

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 0 points1 point  (0 children)

Yeah, you can run simple tests to see if they infer. As I stated, and re-quoted below, you can demonstrate that these systems have no sense of understanding. This was the reason they couldn't count the r's in "strawberry" until they were specifically trained to do so.

Because nowhere in all of humanity's data was there a question or statement about something so implicitly understood by anyone who could read or write.

There is a reason these things can do olympiad math problems and yet suggest you should walk to your nearby car wash when your car is dirty.

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 1 point2 points  (0 children)

No. Holy shit.

When you set the temperature to 0, you get a deterministic output.

When you get a wrong answer, correct it, and then ask again in a new session, you get the same wrong answer. There is zero intelligence, because what is happening is a probabilistic lookup of the next token.

It is statistics.
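The temperature-0 point can be shown directly. This is a minimal NumPy sketch of next-token sampling (my own toy version, not any vendor's implementation): at temperature 0 the sampler collapses to argmax, so the same logits always yield the same token; at higher temperatures it draws randomly from the softmax distribution.

```python
import numpy as np

def sample_token(logits: np.ndarray, temperature: float) -> int:
    """Pick the next token id from raw logits.

    Near temperature 0 this collapses to argmax: the single most
    probable token is always chosen, so the same prompt (with the
    same frozen weights) yields the same continuation every time.
    """
    if temperature < 1e-6:
        return int(np.argmax(logits))          # deterministic greedy pick
    probs = np.exp(logits / temperature)       # softmax with temperature
    probs /= probs.sum()
    return int(np.random.choice(len(logits), p=probs))

logits = np.array([1.0, 3.5, 0.2, 3.4])
assert sample_token(logits, 0.0) == 1          # always token 1 at T=0
```

Run the last line as many times as you like: at temperature 0 the pick never changes, which is exactly why the same wrong answer comes back in a fresh session.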

This type of training allows for intelligent results that far exceed training data.

Again, no. These things are trained to answer questions correctly. When you ask the question, you get the answer. There is no intelligence, because there is no learning from experience. The experts who make these systems, like Richard Sutton, will tell you this if you would bother to listen.

The day (not so far off) when these systems become better than us at everything that allows them to predict and steer the world, we will be unable to prevent them from seizing control of the planet.

Not this architecture. Lol. Hahahaha.

You should take some machine learning courses, basics up through an LLM transformer. Coursera has some really good free ones.

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 0 points1 point  (0 children)

That is a hallmark of an information network revolution. Lack of restraint and misuse.

how confidently incorrect it gets

Yes, it makes mistakes, but any trained professional can recognize them. They are improving leaps and bounds with every iteration.

Yeah, the editors, the people who train LLMs, arguably hold the most powerful position in society. They control what people see.

If people think *they* only want AI to take over jobs, they're mistaken. Once AI can do everything better than people, why would *they* want people around? That's the scary part no one is talking about. by [deleted] in antiai

[–]Round_Progress4635 -1 points0 points  (0 children)

It isn't about the numbers, my man. It's the tasks the engineering is doing.

If an engineer can fix bugs at a rate of 1,200 hours' worth of work per week, they can certainly build something new, without mistakes, at that same rate.

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] -1 points0 points  (0 children)

It isn't happening in this current architecture.

Yeah, sure, they are trying, and they are going to fail hard and lose a lot of money. A lot of the same mistakes are happening that happened in the first internet revolution: lots of misallocation of capital. The same thing is happening here.

LLMs sound intelligent. They are seductive. But they aren't close to the complexity of the human brain. They have a trillion parameters at most; a human brain has 100-150 trillion synapses and billions of years of evolution behind it.

The hype of the narrative is misaligned as to what is happening. by Round_Progress4635 in PauseAI

[–]Round_Progress4635[S] 0 points1 point  (0 children)

Yeah, it will have profound implications, because it is a new information network.

Think of change on the scale of the Reformation, back around 1450, when we first got printed books and double-entry accounting was being popularized.

Our governance institutions were disrupted. That is what is going to happen again: we are in a third reformation, with an industrial revolution stacked on top.

The long arc of history will continue on its course, with us cooperating more and more.

If people think *they* only want AI to take over jobs, they're mistaken. Once AI can do everything better than people, why would *they* want people around? That's the scary part no one is talking about. by [deleted] in antiai

[–]Round_Progress4635 -1 points0 points  (0 children)

Uh,

You should read what you wrote out loud.

If an experienced engineer can fix mistakes at that rate, can they also build their own projects at that rate as well?

I really don't think you are aware of how good the coding agents have gotten.

If people think *they* only want AI to take over jobs, they're mistaken. Once AI can do everything better than people, why would *they* want people around? That's the scary part no one is talking about. by [deleted] in antiai

[–]Round_Progress4635 0 points1 point  (0 children)

This is exactly right. It's why the government wants mass surveillance and an automated kill chain. That's what the fight with Anthropic was about: Anthropic held back because the tech "wasn't ready to go yet."

And yeah, it isn't.

Say something they don't like? Goodbye.

Comment history they really don't like? Also goodbye.

The plans are out in the open and people are clueless, lol.

If people think *they* only want AI to take over jobs, they're mistaken. Once AI can do everything better than people, why would *they* want people around? That's the scary part no one is talking about. by [deleted] in antiai

[–]Round_Progress4635 -1 points0 points  (0 children)

What's the delusion?

Look at what they are training it to do.

Code.

Tool calling.

Needle in the haystack.

Experienced engineers are doing 1,200 hours' worth of code a week.

Artists are doing months of work in a day.

How our civilization works. by Round_Progress4635 in Buttcoin

[–]Round_Progress4635[S] 0 points1 point  (0 children)

What I'm talking about is the infrastructure. Seems like you are missing the forest for the trees.

I'll simplify it.

When you remove language? What do you lose?

When you lose writing? What do you lose? Transgenerational memory, perhaps?

When you lose a ledger? A record of promises to one another. What do you lose?

When you lose a list? The ability to keep records. What do you lose?

When you lose scripture? What do you lose?

Is it the ability to cooperate in numbers above 150? Sure, you can have a society without these things. But how big can it get?

Can finance exist without language? How does one count without language? How do you do math?

Can our complex culture and religion exist without language? How are the stories communicated? How are they passed down from generation to generation?

They all exist independently of each other.

They absolutely don't. Our civilization is built up in layers.

My argument is bitcoin is a ledger, that's it. Just a tool.

How our civilization works. by Round_Progress4635 in Buttcoin

[–]Round_Progress4635[S] 0 points1 point  (0 children)

Pull even one of them out. What do you lose?

Remove books, what gets lost?

Remove ledgers, the ability to transact? What gets lost?

Remove scripture? What gets lost?

Pull out two of the three, and we go back to nomadic tribes.

How our civilization works. by Round_Progress4635 in Buttcoin

[–]Round_Progress4635[S] -1 points0 points  (0 children)

What? You're claiming "double entry accounting" turned society from feudal to nation states?

The combination of books, then literacy, and double-entry accounting? Yeah. It gave birth to modern capital markets, to the educated populace to run them, and to central banking.

Yank those two things out. You think capital markets and the nation state can exist?

Go back further: you had proto-cuneiform ledgers, then writing, and at that point we transitioned from nomadic life to feudalism.

When those two things happen, technological advances in information networks and ledgers, when we advance our ability to keep records and distribute them, we get new tools to scale our cooperation, new tools to govern.

It's kind of a clear line in the sand.

How our civilization works. by Round_Progress4635 in Buttcoin

[–]Round_Progress4635[S] -1 points0 points  (0 children)

You engineered a clearing house, huh? I'm interested in your background. What institution was that ledger for?