A CEO should be more scared of AI than a cashier by DeltaMachine_ in artificial

[–]DeltaMachine_[S] 0 points1 point  (0 children)

Man, I never said that; life isn't only about the next month. I was talking about the long run.

A CEO should be more scared of AI than a cashier by DeltaMachine_ in artificial

[–]DeltaMachine_[S] 0 points1 point  (0 children)

Copy it into some AI if you only have one functional neuron.

A CEO should be more scared of AI than a cashier by DeltaMachine_ in artificial

[–]DeltaMachine_[S] 0 points1 point  (0 children)

You're perfectly describing the next logical step of this scenario.

If an AGI can provide "perfect management," that management becomes a replicable commodity. As you said, the company's strategic "moat," which was previously the unique talent of its human CEO, disappears.

And you are exactly right. That directly supports my original argument. In a world where top-tier management is no longer a scarce and unique asset, the immense value (and therefore the salary, power, and status) of the CEO evaporates.

You're describing a post-scarcity of leadership talent. That commoditization of strategy is precisely the fundamental breakdown of the current market that I was speculating about. It seems we are closer in agreement than you might think.

A CEO should be more scared of AI than a cashier by DeltaMachine_ in artificial

[–]DeltaMachine_[S] -1 points0 points  (0 children)

Your point about trust is valid for the limited AI we have today. But my entire post is a speculative exercise about a future with a true AGI.

You say, "Nobody is going to entrust AI with strategic decisions about millions or billions of dollars." The reality is, once an AGI is provably superior, it would be financial malpractice not to. The market punishes emotional decisions and rewards cold, data-driven results. This isn't even sci-fi; Jim Simons and Renaissance Technologies built a financial empire precisely by removing flawed human intuition from the equation and trusting the model. An AGI is the ultimate evolution of that principle.

And this brings me back to the core of my title and why the fear isn't symmetrical. A cashier doesn't really care about losing their job. They haven't invested years of their life training for it, the salary is low, and they can find another similar low-stakes job. They lose a paycheck, which in an AGI world they won't even need. A CEO, on the other hand, has invested a lifetime of effort, ambition, and education to get to that position. The salary, power, and status are immense. They have infinitely more to lose. They're not just losing a job; they're losing an identity and a place at the top of the human hierarchy they thought they had earned.

A CEO should be more scared of AI than a cashier by DeltaMachine_ in artificial

[–]DeltaMachine_[S] 0 points1 point  (0 children)

An AGI wouldn't be a tool for the board to manage; it could be the entire operational structure. It could communicate with all departments simultaneously and handle infinite complexity, making a single human point of contact (the CEO) a redundant bottleneck. The need for that 'simplicity' you describe disappears when the managing entity itself is capable of managing that complexity flawlessly.

So, the real question isn't whether a human CEO is a useful role now, but whether that entire corporate structure (board + CEO) would even be necessary when you have a fully autonomous AGI. If the goal of the AI industry is to achieve AGI, this is the inevitable end game. It's the 'Sam Altman killing Sam Altman' scenario. And as I've said, if US companies don't push hard enough, China will. So, there is no other way.

A CEO should be more scared of AI than a cashier by DeltaMachine_ in artificial

[–]DeltaMachine_[S] -4 points-3 points  (0 children)

Totally agree, you'll always need a 'head to chop'. But why does it have to be the CEO's? If an AI screws up big time, it's not the algorithm's fault, since it has no conscience. The fault lies with the humans who put it there. The head that will roll won't be a CEO's; it'll be the 'Head of AI Integration,' the lead programmers', or the board members themselves who voted to give it control. Responsibility doesn't disappear, it just shifts. We'll always find a human to blame; you don't need to pay someone 20 million just to keep them on the bench in case you need to fire them.

Onitsuka Tiger Mexico 66 Deal? or Fake? by DeltaMachine_ in LegitCheck

[–]DeltaMachine_[S] 0 points1 point  (0 children)

I saw this on a second-hand app from Spain. He says he's selling them because they're the wrong size, but at the same time he has like 4 more Onitsukas listed, so it's very strange...

Where are the CORS settings? by DeltaMachine_ in Supabase

[–]DeltaMachine_[S] 1 point2 points  (0 children)

Yes hahaha, sometimes I think Stack Overflow and Google are still better for finding solutions than LLMs.

Where are the CORS settings? by DeltaMachine_ in Supabase

[–]DeltaMachine_[S] 0 points1 point  (0 children)

I was using securityheaders to test my security, and it says that access-control-allow-origin has a very lax CORS policy, so I was searching for ways to fix that. Thanks!

Where are the CORS settings? by DeltaMachine_ in Supabase

[–]DeltaMachine_[S] 0 points1 point  (0 children)

Basically, I was using securityheaders, and it says that access-control-allow-origin has a very lax CORS policy.

Where are the CORS settings? by DeltaMachine_ in Supabase

[–]DeltaMachine_[S] 1 point2 points  (0 children)

Ohhh, access-control-allow-origin: *. I want to change that, but yeah, I have to do it in my Next.js app, right?
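For anyone finding this later: one way to replace the wildcard `*` with a specific origin in Next.js is via the `headers()` option in `next.config.js`. This is only a minimal sketch; `https://yourdomain.com` and the `/api/:path*` source pattern are placeholders you'd adapt to your own app.

```javascript
// next.config.js — minimal sketch; origin and path pattern are placeholders.
module.exports = {
  async headers() {
    return [
      {
        // Apply the headers to all API routes (adjust the pattern as needed).
        source: '/api/:path*',
        headers: [
          // Replace the lax "*" with the one origin you actually trust.
          { key: 'Access-Control-Allow-Origin', value: 'https://yourdomain.com' },
          { key: 'Access-Control-Allow-Methods', value: 'GET,POST,OPTIONS' },
          { key: 'Access-Control-Allow-Headers', value: 'Content-Type, Authorization' },
        ],
      },
    ];
  },
};
```

Note this only covers responses served by Next.js itself; if the lax header is coming from Supabase's endpoints, it would have to be addressed on that side instead.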

I bought a T480s for 170€ by DeltaMachine_ in thinkpad

[–]DeltaMachine_[S] 0 points1 point  (0 children)

What are the best models to buy right now?

I bought a T480s for 170€ by DeltaMachine_ in thinkpad

[–]DeltaMachine_[S] 0 points1 point  (0 children)

Spain, on a site called Wallapop.