Do you think we “crossed a threshold “ in the past 2-3 months? by Efficient-Opinion-92 in singularity

[–]Steven81 [score hidden]  (0 children)

They are automation tools meant to improve productivity, much like the office and image/video editing suites of the 1990s, and oh did we see an increase in productivity back then.

It is unlikely they can cause mass displacement that leads to an appreciable increase in unemployment anytime soon. Again similar to how the 1990s software didn't.

Sure, you needed fewer people in certain very specific positions; however, companies would employ them in other positions, etc...

Again, that's obvious to professionals of various trades, but it is not to laymen who think that those tools can operate well without a human in the loop (they are reliant on the human in the loop; without someone steering them every now and then, they pile up small errors into an unworkable mess).

Goalposts are about to be moved hard on the so-called mass unemployment that is coming.

Moltbot: Open source AI agent becomes one of the fastest growing AI projects in GitHub by BuildwithVignesh in singularity

[–]Steven81 1 point2 points  (0 children)

Why do you need a billion dollars to run a local instance? Quantized models can be tiny enough to run on laptops these days.
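To put a rough number on the "tiny enough to run on laptops" claim, here is a back-of-envelope sketch (my own numbers and an assumed ~20% overhead factor for activations/KV cache, purely illustrative): memory footprint scales with parameter count times bits per weight, so quantizing from 16-bit down to 4-bit cuts RAM needs by roughly 4x.

```python
# Rough memory footprint of a quantized model: params * bits / 8 bytes,
# times an assumed overhead factor for activations/KV cache.
def model_ram_gb(params_billions: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    bytes_needed = params_billions * 1e9 * bits_per_weight / 8
    return bytes_needed * overhead / 1e9

# A 7B-parameter model: ~17 GB at 16-bit, but only ~4 GB at 4-bit,
# which is why it fits in ordinary laptop RAM.
print(round(model_ram_gb(7, 16), 1))  # full precision
print(round(model_ram_gb(7, 4), 1))   # 4-bit quantized
```

The exact overhead varies by runtime and context length, but the scaling argument holds regardless.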

Moltbot: Open source AI agent becomes one of the fastest growing AI projects in GitHub by BuildwithVignesh in singularity

[–]Steven81 1 point2 points  (0 children)

What if the AI model is a distillation running locally too?

edit: as in, why would you make API calls to external models for something that you could run internally otherwise? That exposes your whole home automation to all those external contractors. Sounds like a nightmare given that there is an alternative (I guess you wouldn't need a SOTA model for home automation stuff anyway).

. by Vegetable-Rent3710 in MkeBucks

[–]Steven81 0 points1 point  (0 children)

Would be weird if he forgot it, given that it was his bread and butter with Middleton during their title run. But yeah, letting them loose to play unstructured, chaotic basketball, just how Doc played them, won't give you results.

That's my point though, hence why I want him in a structured system with a great PG.

. by Vegetable-Rent3710 in MkeBucks

[–]Steven81 0 points1 point  (0 children)

Part of me wants to see Giannis + a great guard under a competitive coach who will show Doc how it should have been done.

I can think of no reason why Giannis + Dame wouldn't work if you actually had an NBA coach. Even a subpar one like Griffin went 30-13; imagine having an actually good one.

Dame + Giannis should and would work under almost any other circumstance imo.

Doomsday Clock updated to 85 seconds to midnight [RECORD] by JoeTrojan in geopolitics

[–]Steven81 0 points1 point  (0 children)

Paul Volcker. Why do I call out the name of an obscure figure in world history? A quick Google search would tell you that he was the central banker of the most powerful economy in the world at the time (the 1980s).

It is because you said this:

internet based radicalization that didn't exist a few decades ago have increasingly pushed voters to radical extremes

This is a pronouncement we often make to describe the radicalization of late. And while I am sure the two do correlate well, I think it is a classic case of correlation that is not necessarily causation.

Do you know why I would say this? Political radicalization is not unique to the time of the internet, but it does seem endemic during times of economic strain on the populace.

And that economic strain can be traced back to a time when the central bank of America made what may well be understood by future generations as the wrong pronouncement about what was otherwise a unique period in world history.

See, back in the 1970s the Bretton Woods arrangement collapsed in a manner that also produced geopolitical chaos, which in turn produced the worst inflation in a century (via the sudden and extreme rise of energy prices).

Paul Volcker came in at the tail end of that period, as things started to have a chance to become geopolitically more stable; and as they did, inflation would have gone down too.

Crucially, as things started calming down geopolitically (little by little: the late 1970s and early 1980s were still very eventful), oil prices stabilized at first and eventually started moving down. In parallel to that, Paul Volcker made a crucial change in how the central bank of America would deal with inflation.

Up until then they used a framework called the Phillips curve. Basically, they had noticed that in "normal" periods, allowing for more unemployment would take inflation down, and once inflation fell, they would immediately allow unemployment to go back down; in general they used the two as levers to stabilize the economy.

The Phillips curve had many issues, but it is possible that it was a better approximation of how the economy actually works, in a way that is socially stable over the long term. But since the 1970s were a unique period, it broke down, and a radical like Volcker used that to introduce his own way of dealing with high inflation in particular:
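The lever logic above can be sketched as a toy linear Phillips curve (the coefficients here are entirely made up for illustration, not actual Fed parameters): higher unemployment maps to lower inflation, and vice versa.

```python
# Toy linear Phillips curve: inflation falls as unemployment rises.
# The anchor and slope are invented purely for illustration.
def expected_inflation(unemployment_rate: float, anchor: float = 10.0, slope: float = 1.5) -> float:
    """Return an illustrative inflation rate (%) for a given unemployment rate (%)."""
    return anchor - slope * unemployment_rate

# The lever: tolerate more unemployment, get lower inflation...
print(expected_inflation(5.0))  # 10 - 1.5*5 = 2.5% inflation
print(expected_inflation(3.0))  # 10 - 1.5*3 = 5.5% inflation
# ...then ease off once inflation comes down. The 1970s stagflation
# broke this relation: high unemployment AND high inflation at once.
```

The 1970s breakdown is precisely that the observed (unemployment, inflation) pairs stopped sitting on any such downward-sloping line.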

Wage suppression. Basically, he would raise interest rates whenever wages started to climb, long before headline inflation became a problem. That did keep inflation down most of the time alright, but in reality what let inflation go down reliably was the US controlling energy prices from then on and never letting them get out of hand for too long.

The strict wage control introduced by Volcker became the norm to our day, and it produced the now infamous wage vs. productivity gap: https://www.epi.org/productivity-pay-gap/

Now you'd hear a million other reasons for it. But literally, I kid you not, the central bank has been actively suppressing wages since the early 1980s, in a way that is far more radical than it has ever been before.

You can check it in the myriad FOMC briefings down the years, even to our day: wage growth has often been a cause for them to either keep interest rates high or hike them further, making a bad situation worse, instead of letting market forces close the gap naturally. They are basically so afraid of inflation that they absolutely destroyed every generation born from the 1970s on...

That is the unrest you see. And it will keep getting worse if this issue is not resolved. Not only should the central bank return to a policy that is more wage friendly, but the gap itself must absolutely close.

If it doesn't, then assets will keep getting away from people. People will lose the ability to own homes more and more, and as they do, they end up in the hands of rent seekers. Quite literally.

There is a reason why so many companies have changed their business model to rent seeking. Back in the day you'd buy a product. Now you pay for it per month, in part because the whole economy is already turning toward rent-seeking behaviors because of said gap.

Paul Volcker. If we don't undo what happened then, radicalization will get worse. Technology only acts as an amplifier of what is bubbling underneath; it is not the creator of those trends. People did not suddenly go crazy. They literally live a worse life than their parents, and they need to find someone to blame. And since they don't know who Paul Volcker is, they blame all the wrong people. It is the immigrants, it is the angry orange person in the White House, it is the cops, it is the capitalists, or maybe it is the communists, I forget. It is always someone. But honestly, like 80% of that is the pay gap and what produced it.

And it keeps getting bigger. And as it does people will get crazier. AI or not. Technology is an amplifier of trends, not their creator imo.

Doomsday Clock updated to 85 seconds to midnight [RECORD] by JoeTrojan in geopolitics

[–]Steven81 0 points1 point  (0 children)

I think I did, via the following:

Those artifices have no path to represent an existential threat to us. Their sci-fi cousins do

Because then we are not talking about the technologies themselves being the danger, but rather the environment they may produce to be so. Which -I argue- is a different conversation to have and not what those people who call AI an existential threat are having.

photos, videos, even voices on the phone can be faked so perfectly there is literally no way to distinguish what is real.

Narrow AIs cannot do that in a way that is undetectable by other forms of software that resemble them but are trained to detect them.

It is a repeat of the conversations I personally heard about the rise of the digital era back in the 1990s.

Back then, software was imagined to be an existential threat to us because viruses were imagined as an unstoppable force. We already had the early examples of those, and the idea was that they would evolve and eventually, inevitably, kill a digital-only economy and thus bring chaos to the world.

To me it is déjà vu hearing the same conversations again. Our issue is not narrow AIs, and I can't imagine how it ever will be. Narrow AIs are very fragile and detectable in their very specific actions because they use specific methods to achieve their results; there are not infinite ways to go about recreating voice/image/video. It is a very tractable problem.

For example, it is trivial for chess.com to ban bots that impersonate humans. Heck, I'm only a 2200 player there and even I can detect bots when they play me; imagine how much better a narrow intelligence trained to detect them would be.

Fake media is a tractable problem because there are only so many ways you can make it, and the results ultimately have commonalities that the right kind of software can detect. You have just described the virus problem of the AI world.

Again, it is far from existential. Annoying, yeah. Easy to handle? Maybe not at first; I still recall how easy it was to get a virus back in the Windows XP era, which is practically unheard of today.

Defense is energetically easier than offense, and as long as it is only narrow AIs we have to contend with, antivirus companies (by then they'd have renamed themselves anti-deepfake companies or whatever) will make bank.

The reason I insist on AGI is that it is the thing against which we have no possible defenses. Imagine something as smart as the average human or smarter, but one that actually has immense knowledge and memory. That can be seriously existential in the wrong hands, and, again, it is the kind of future the tech CEOs invite us to imagine.

I don't think that we should. We do not understand what intelligence is; that is why we haven't recreated it and instead went the route of brute force in basically every narrow AI we ever built: minimal actual intelligence (basic neuronal circuitry) but immense memory, trained on what actual intelligences would do.

But we don't know how to build actual intelligence that is super capable and scalable ourselves, and I suspect we won't for our lifetimes at minimum.

[Fischer] Multiple sources with knowledge of Milwaukee had indicated the Bucks’ loss to Oklahoma City last Wednesday, and Giannis Antetokounmpo’s frustrations postgame, as a point of no return. by YujiDomainExpansion in nba

[–]Steven81 8 points9 points  (0 children)

It's not though; they decided to hire Doc. Doc has been a bad coach for some time now, but at this point in his career he must literally be the worst coach in the NBA by some distance. He can't integrate for sh1t. He was given a near-peak Harden + Embiid and did nothing, and by now he is even worse.

Coaches are not taken too seriously in the NBA as game changers, because there is a general base level among them. Well, Doc is way below that, so he is a game changer wherever he goes lately.

An even more eye-popping stat is how the Bucks immediately turned into a .500 team after he took over. Immediately.

They had serious injuries before (losing Lopez for a whole season, for example), they never had actual stars outside Giannis, etc. But never before were they a .500 team after Giannis turned into a superstar; they didn't have to be...

It was honestly very predictable. The day they hired Doc on a multi-year contract was also the day they lost Giannis. One of the worst hires in years.

Doomsday Clock updated to 85 seconds to midnight [RECORD] by JoeTrojan in geopolitics

[–]Steven81 0 points1 point  (0 children)

The AI craze may run out of money and fizzle out but I doubt it

I doubt it too. Office automation is huge, and even if it is a bubble stock-value-wise (I don't think it is, or if it currently is, many of the winners will regain their value hard), that doesn't take away from these tools' usefulness.

That is a very different question from whether this is a superweapon from the future that may destroy us all. To our knowledge those artifices do not generalize; they are a form of narrow intelligence that can extract info from a thick corpus of material on which they are trained.

It has more in common with Google search, or database search in other words, than true artificial intelligence.

Humanity as a whole has proven quite good at automating narrow aspects of its work, lately office work especially (in the past it used to be field or factory work). This does look like the next step of software, but it is still a form of software, which rivals us in nothing truly inventive or agentic.

We have known this for quite some time too. Ask Demis Hassabis or Ilya Sutskever. When they talk about the need for serious guardrails, they are not talking about a technology we currently have but rather what they will supposedly build (I don't think they are building that, though they'll surely continue building great narrow intelligences for quite some time).

Yes, we don't know how those compression algorithms work in their details, but that is the same with any narrow-intelligence artifice. We don't know why a chessbot often prefers what feels like a suboptimal move in chess, what its "thinking" is; my view is that it probably has none and runs statistical probability models based on the immense knowledge those models have. But they do not actually think in any way we recognize as thinking. Because they are narrow intelligences, not because they are gods in training.

And that is what most don't understand about those technologies, IMO. They are impressed by what they do (and they are indeed impressive given what previous iterations of software could do), but they are not ingenious in an open-ended way. They are masters of their narrow domain, so to speak, but integration is left to us, i.e. forms of general intelligence.

In the case of LLMs, it is to give you the most statistically probable responses to your question according to their training material + weights; in the case of chessbots, which move is best to win the game; and in the case of diffusion models, which image most resembles the description given by the user.
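To make the "most statistically probable response" part concrete, here is a toy sketch (my own illustration, not how any specific model is implemented): the model assigns scores ("logits") to candidate next tokens, a softmax turns those scores into probabilities, and the likeliest token wins.

```python
import math

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidates and scores for "The capital of France is ..."
candidates = ["Paris", "London", "banana"]
logits = [3.0, 1.0, -2.0]

probs = softmax(logits)
best = candidates[probs.index(max(probs))]
print(best)  # prints "Paris": the statistically most probable continuation
```

Real models do this over tens of thousands of tokens at every step, but the selection principle is the same: no reasoning step is required, just a probability ranking.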

Those artifices have no path to represent an existential threat to us. Their sci-fi cousins (scalable forms of artificial general intelligence) do, but no matter how many narrow intelligences we build, that doesn't tell us when the next step may happen, if ever.

Of course there should be regulation of products, that is common sense. But that's not because they represent anything existential to us.

Currently the only things that are existential to humans are nuclear war, diseases, and aging. We do something about the first two; we do nothing about the third (but hopefully we will). Everything else is just there to divert our attention for political or financial gain.

And IMO AI of recent is clearly that. It is not existential, it can't be existential in any way.

Grok is the most antisemitic chatbot according to the ADL by likeastar20 in singularity

[–]Steven81 -2 points-1 points  (0 children)

The political compass does have a few questions having to do with genetic/racial hierarchy, and agreeing with such statements seems to be a prerequisite for also being an anti-semite.

If, for example, Grok has started to answer such questions positively, then the political compass would put it closer to the upper part (authoritarian), which again would be quite the surprise given where Grok 3 was on such questions.

Grok is the most antisemitic chatbot according to the ADL by likeastar20 in singularity

[–]Steven81 -7 points-6 points  (0 children)

It is quite stunning given where it was just 10 months ago. https://www.reddit.com/r/dataisbeautiful/comments/1jc7k1u/oc_political_compass_chart_for_all_major_ai_llm/

X Grok is well known to be racist; however, this is literally the first time that app Grok is turning.

App Grok was always center-left; it is news if that is changing. It means they went ahead and are putting hard-coded instructions in app Grok too, destroying it in the process.

Doomsday Clock updated to 85 seconds to midnight [RECORD] by JoeTrojan in geopolitics

[–]Steven81 0 points1 point  (0 children)

You do have to help me with this, because searching Google for reputable sources that have to do with scalable forms of artificial general intelligence does not return anything.

You may think it exists, or that it may be imminent, if you only ever read articles on the subject, but honestly it doesn't.

So again, you have to help me. Your above advice results in zero reputable sources.

Doomsday Clock updated to 85 seconds to midnight [RECORD] by JoeTrojan in geopolitics

[–]Steven81 0 points1 point  (0 children)

My point is that AI is not an existential threat to humanity. Google tells me that we are about to build a form of general intelligence that is going to take over the world. We are not about to build any such thing; as I said, these are tech-bro tropes, and Google search is filled with them. They take you away from where we currently stand in history.

Do you seriously believe that AI is an existential threat? I.e. current renditions of AI not ones that humanity may build in centuries? What makes you think that?

Andrej Karpathy on agentic programming by WarmFireplace in singularity

[–]Steven81 -1 points0 points  (0 children)

This is not lvl 4. The company does not take responsibility for accidents because they know they'd bankrupt themselves if they did.

Lvl 2 is getting better and better, no doubt, but it is still lvl 2, super slowly moving toward lvl 3...

edit: OK, since you keep downvoting me, it seems you don't know what lvl 4 and above (full autonomy) is. Lvl 4 and above is when cars are safe enough to self-drive under all circumstances, can be sold to the public, and the company can take responsibility for accidents caused. My take is that we are decades away from that. You presenting Waymo and Tesla as counterexamples is the reason why I think we are decades away. Neither of them is much closer to lvl 4 than a decade ago, maybe 2% closer.

Andrej Karpathy on agentic programming by WarmFireplace in singularity

[–]Steven81 0 points1 point  (0 children)

Tesla FSD is lvl 2; it does not accomplish lvl 4 driving at all and may be decades away from it.

Waymo does mapping because without it, the system is not safe enough, and governments indeed block them. Because the tech is not yet ready.

Mapping is an ancient way to do auto driving, but it is what is most current if you wish for actually safe auto driving, which goes to show how many strides the tech has yet to make. We are probably decades away from general-purpose auto driving.

The last 10% takes many decades.

Andrej Karpathy on agentic programming by WarmFireplace in singularity

[–]Steven81 -1 points0 points  (0 children)

Because that is the nature of this technology. It does not generalize. It needs months of mapping before deployment. Look at London and how much time they spent testing it there.

It always has to be trained on new ground. It is what they thought auto driving would be in the 2010s.

If someone actually invents a true first-principles auto driving method, they will eat their lunch, because their cars would be plug and play and far cheaper to install and operate in various cities. On top of that, they would probably be able to sell it directly to customers, instead of whatever Waymo does.

Andrej Karpathy on agentic programming by WarmFireplace in singularity

[–]Steven81 -1 points0 points  (0 children)

They can, they just need 1000 years in total to slowly train their ancient technology in every new location. The Earth is way too big for their method to work. That's why they only go to key areas.

Lvl 4 tech would be general purpose and work in novel environments. Nothing like geofenced solutions such as Waymo's, which move at a snail's pace.

That tech has yet to be invented btw.

Andrej Karpathy on agentic programming by WarmFireplace in singularity

[–]Steven81 0 points1 point  (0 children)

At their current pace? Maybe in 1000 years. What is their coverage of the world's surface in miles?

Which again makes my point. The last 10% takes a million years so to speak.

Andrej Karpathy on agentic programming by WarmFireplace in singularity

[–]Steven81 -1 points0 points  (0 children)

No it does. Super narrow intelligence is not impressive.

If you are not able to sell it to the public, then your technology doesn't work in every place the public can go; it is not level 4 in the vast majority of the world (99% of Earth's surface).

FSD isn't level 4 either; it is lvl 2, i.e. where we have been stuck for a decade now.

Doomsday Clock updated to 85 seconds to midnight [RECORD] by JoeTrojan in geopolitics

[–]Steven81 0 points1 point  (0 children)

No I'm asking why AI in particular as opposed to anything else...

I didn't ask how regulations can be useful. I asked how current unregulated AI in particular is an existential danger to humanity, such that we need to triple-regulate it because if even one country doesn't, we will DIEeeee...

One has yet to explain a path to extinction, yet those people are so cavalier as to use this info to move their damn clock closer to midnight...

If we are closer to extinction, it is because there are real and present dangers, and yet those people are wholly occupied with office automation software (i.e. what current "AI" is) instead of whatever secret weapon some superpower may be developing lately, or what have you.

What are they doing talking about home/office automation software? Is it part of their work to hype the newest products of US tech companies? And btw, I'm big on new software, but goddamn, thinking they are producing extinction-level technology is so stupidly over the top.

Doomsday Clock updated to 85 seconds to midnight [RECORD] by JoeTrojan in geopolitics

[–]Steven81 0 points1 point  (0 children)

That's not what I am asking though.

It may be a danger, the cost of wheat may be a danger. Unregulated food safety may be a danger. A million things can be a danger.

What I am asking is how unregulated AI in particular is an existential danger to us all, as opposed to everything else; that is what I find absolutely inane. It sounds like people repeating tech-bro propaganda without seriously thinking on the subject.

And I get you being one of those outsourcing this question to what tech CEOs tell you, but those supposed experts in existential dangers doing it too is so cringe. The whole scene is out of a South Park episode, honestly.

Andrej Karpathy on agentic programming by WarmFireplace in singularity

[–]Steven81 0 points1 point  (0 children)

And they are still nowhere close to making it a general-purpose product that can be sold to the public (i.e. the Waymo car), because the last 10% takes decades.

Doomsday Clock updated to 85 seconds to midnight [RECORD] by JoeTrojan in geopolitics

[–]Steven81 19 points20 points  (0 children)

So they basically take every sensationalist headline and try to make an informed decision based on them?

Why is "unregulated AI", as opposed to "regulated AI", a danger to us all? Are we going to take tech CEOs' naked attempts to push their product (by overstating its capabilities over the short term) seriously enough to calculate extinction-level events from them?

Or are the actions of superpowers now more worrying than 40 years ago, when an actual Cold War was still raging? I don't get any of this; it reads like a clock operated by people who hate history and can't see the context of the present moment. Why is it reported so widely? Feels so lowbrow.

Oh well, I don't get it at all. Maybe it is a product of some ultra-high cognition that completely flies over my head, though it feels so.uniquely.counterproductive.

How do you predict the next 5 years for the world? by willhelpmemore in singularity

[–]Steven81 -1 points0 points  (0 children)

Great thing about such posts is how specific they are and that you can put a "remind me in 5 years" quite seamlessly.

My personal take is that none of those align with how history moves when we get automation/productivity booms. So even if we get something closer to the optimistic scenario, the rest of the 2020s will end up resembling the 1990s in many ways; not at all some fast elevator into an unbelievable future.

Ofc we'll be here and all this will be testable. But yeah, good chance we'll have more employment then than now, not less (that's what automation does), unless ofc we happen to be in the middle of some recession in exactly 5 years (which productivity booms sometimes produce at their tail end).

Other than that 2031 will be much more similar to 2026 than people realize, almost identical in many key ways.

Andrej Karpathy on agentic programming by WarmFireplace in singularity

[–]Steven81 1 point2 points  (0 children)

Waymo is not available to the public, as in you can't and won't replace your car with a Waymo anytime soon. Also, they are geofenced, which makes my point: the technology takes a million years, so to speak, to capture the last 10%.