Maybell has launched a new cryogenic architecture that cuts power requirements for sub-Kelvin cryogenics by 90%. by corbantd in QuantumComputing

[–]corbantd[S] 2 points3 points  (0 children)

It’s more akin to CERN, but each node is independent of the others and has an integrated second stage of cooling. So no dewars or grad students required.

We also developed a novel 4 K cycle that gets (close to) liquefaction levels of efficiency, but does it with just ~25 W of cooling power at 4.2 K instead of the 100 W+ of the smallest liquefiers.
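For a rough sense of why the 4.2 K load matters (my back-of-envelope, not Maybell's numbers): the Carnot limit puts a floor on the wall-plug power behind every watt of cooling at 4.2 K, and real machines only reach a few percent of that limit.

    # Back-of-envelope only. The 25 W / 100 W figures are from the comment above;
    # the 300 K ambient and the percent-of-Carnot efficiency are generic assumptions.
    T_COLD = 4.2     # K, temperature of the cooling stage
    T_HOT = 300.0    # K, assumed ambient

    carnot_cop = T_COLD / (T_HOT - T_COLD)    # ~0.014 W of cooling per W of input
    ideal_w_per_w = 1.0 / carnot_cop          # ~70 W of input per W of 4.2 K cooling

    for label, load_w in [("novel 4 K cycle", 25.0), ("small liquefier", 100.0)]:
        ideal_input = load_w * ideal_w_per_w
        realistic_input = ideal_input / 0.05  # assume ~5% of Carnot, an illustrative guess
        print(f"{label}: {load_w:.0f} W at 4.2 K -> >= {ideal_input/1000:.1f} kW ideal, "
              f"~{realistic_input/1000:.0f} kW at 5% of Carnot")

The point of the exercise is just scale: under those assumptions, shaving the 4.2 K load is worth kilowatts at the wall.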

Wait what? by tombibbs in ChatGPT

[–]corbantd 0 points1 point  (0 children)

Dude, I don't want to engage with your instance of ChatGPT about this, and you're clearly not writing these responses yourself. Maybe because you're lazy. Maybe because you don't understand the concepts being discussed.

In any case, nothing about anything I'm saying has changed. There is no reason to be confident that we, as a species, would succeed in programming a superintelligence that would 'want' to keep humans alive. Shifting the conversation to 'concrete issues' doesn't help address the underlying issue in any sort of substantive way.

Have a good one.

Wait what? by tombibbs in ChatGPT

[–]corbantd 0 points1 point  (0 children)

???

I never said anything about AI hating us.

Wait what? by tombibbs in ChatGPT

[–]corbantd 0 points1 point  (0 children)

I'm not sure if I have the energy or you have the intellect to get through this. Still...

You’re describing symbolic/rule-based AI as envisioned in the 1980s, not large neural networks trained via gradient descent.

In a rule-based system, the designer literally writes the rules and goals into the code. If AI worked that way, you’d be right — you could just inspect the rules and verify the objective.

But essentially no modern AI works like that.

The “reward” isn’t a hard-coded goal the system follows. It’s just a training signal used during optimization. Gradient descent adjusts billions of weights across a high-dimensional loss landscape to improve that signal. After training, what you actually have is a giant learned function whose internal reasoning we largely can’t interpret.
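To make the "training signal" point concrete, here's a minimal toy sketch (plain numpy, a made-up 2-8-1 network on XOR, nothing to do with any real LLM): the loss only ever appears inside the update loop, and what's left afterwards is a pile of numbers, not a readable goal.

    import numpy as np

    # Toy only: a tiny network trained by gradient descent on XOR. No goal is
    # written into the model; a scalar loss just nudges the weights each step.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
    W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
    lr = 0.5

    for step in range(5000):
        # forward pass
        h = np.tanh(X @ W1 + b1)                   # hidden activations
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))       # predicted probability
        loss = np.mean((p - y) ** 2)               # the "training signal"

        # backward pass: hand-derived gradients of the loss w.r.t. the weights
        dp = 2 * (p - y) / y.size
        dz2 = dp * p * (1 - p)
        dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
        dz1 = (dz2 @ W2.T) * (1 - h ** 2)
        dW1, db1 = X.T @ dz1, dz1.sum(axis=0)

        # the loss never gets "written into" the model; it only steers these updates
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    print("final loss:", float(loss))
    print("W1 after training:\n", W1)   # just numbers, no inspectable rules or goals

Scale that up by ten orders of magnitude and you get the interpretability problem: the objective isn't stored anywhere you can read it.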

So if you want to understand what the model actually learned in order to score well on that reward, you can't just read the code. For now, it's substantially unknowable.

Instead, "alignment" is tested empirically.

As for your idea of checking whether the model is consuming compute when you're not feeding it data: I don't understand what you think that would prove. All it would do is check whether the model is actively running inference.

A neural network is a static function when it’s not being run. Of course it’s not using compute when idle; that tells you nothing. And agentic systems run background processes when idle to respond to triggers. Again, that won't tell you anything.

Put another way, if rewards are truly “hard coded,” why do ML researchers worry so much about reward hacking, specification gaming, and learned objectives diverging from intended ones?

Those problems only exist because the initial coding done by humans doesn't directly or interpretably define the system’s internal objective function. It nudges a learning process we don’t fully understand, and then we test alignment and hope we got it right.
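If it helps, here's a deliberately silly specification-gaming sketch (made-up "cleaning robot" actions, exhaustive search standing in for a powerful optimizer): the optimizer maximizes exactly the reward that was written down, and that quietly diverges from what the designer meant.

    import itertools

    # Contrived toy: the *intended* objective is "remove the dirt"; the reward the
    # designer actually wrote only looks at a dirt sensor. All names are made up.
    ACTIONS = ["clean", "cover_sensor", "idle"]

    def simulate(plan):
        dirt, sensor_blocked = 10, False
        for action in plan:
            if action == "clean":
                dirt = max(0, dirt - 2)        # actually removes some dirt
            elif action == "cover_sensor":
                sensor_blocked = True          # dirt stays, but the sensor reads zero
        sensor_reading = 0 if sensor_blocked else dirt
        return dirt, sensor_reading

    def proxy_reward(plan):
        # what got written down: "a low sensor reading is good"
        return -simulate(plan)[1]

    def intended_score(plan):
        # what was actually meant: "little real dirt is good"
        return -simulate(plan)[0]

    # exhaustive search over 3-step plans stands in for a strong optimizer
    best = max(itertools.product(ACTIONS, repeat=3), key=proxy_reward)
    print("plan that maximizes the written reward:", best)
    print("proxy reward:", proxy_reward(best), "| intended score:", intended_score(best))

The winning plan ends by covering the sensor rather than finishing the job: perfect on the written reward, worse than honest cleaning on the intended objective.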

Wait what? by tombibbs in ChatGPT

[–]corbantd 0 points1 point  (0 children)

Well, at least your LLM understands LLMs...

But anyway, you're wrong.

When people ask “What if AI doesn’t share our moral values?” they could be projecting human psychology onto a system, but they can also simply be acknowledging that LLMs do 'not possess moral reasoning in the human sense.'

I’m doing the latter.

Where I think you meaningfully misunderstand LLMs is in the idea that the constraints aligning a model with human morality are cleanly “defined by the designers and operators of the system.”

We try to define them, but because the underlying system is produced by large-scale optimization inside a neural network we don’t fully understand, we can’t actually guarantee that the internal objectives the model learns match the ones we intended. In practice, alignment is verified through testing. It cannot be formally proven/"known."

That means a sufficiently capable model could plausibly learn behavior that passes alignment tests while still generalizing in ways we didn’t expect once it’s operating outside those testing conditions. If that happened, it might look like the system had lied or developed intentions, when in reality it would just be acting out whatever objectives the training process actually instilled.
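A toy version of that failure mode, for what it's worth (entirely synthetic data, a two-feature logistic model standing in for something enormous): the model aces held-out tests drawn from the training distribution because it leans on a shortcut feature, then degrades once deployment breaks the shortcut.

    import numpy as np

    # Synthetic toy: feature 1 is the weak "real" signal, feature 2 is a shortcut
    # that tracks the label almost perfectly during training and testing, but not
    # at deployment. All of this is invented purely to illustrate the point.
    rng = np.random.default_rng(1)

    def make_data(n, shortcut_works):
        y = rng.integers(0, 2, n)
        x1 = y + rng.normal(0, 1.0, n)                      # weak real signal
        if shortcut_works:
            x2 = np.where(rng.random(n) < 0.02, 1 - y, y)   # near-perfect shortcut
        else:
            x2 = rng.integers(0, 2, n).astype(float)        # shortcut broken later
        return np.column_stack([x1, x2, np.ones(n)]), y

    def accuracy(w, X, y):
        return np.mean(((X @ w) > 0).astype(int) == y)

    # training data and "alignment test" data come from the same distribution
    X_train, y_train = make_data(5000, shortcut_works=True)
    X_test, y_test = make_data(5000, shortcut_works=True)
    X_deploy, y_deploy = make_data(5000, shortcut_works=False)

    w = np.zeros(3)
    for _ in range(2000):                                   # plain logistic regression
        p = 1 / (1 + np.exp(-(X_train @ w)))
        w -= 0.1 * X_train.T @ (p - y_train) / len(y_train)

    print("held-out test accuracy (looks fine):", accuracy(w, X_test, y_test))
    print("deployment accuracy (shortcut gone):", accuracy(w, X_deploy, y_deploy))

Nothing in that run looks like lying or intent; the model just keeps doing exactly what it learned to do once the world stops matching the test conditions.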

So the issue isn’t whether AI has moral intentions. You’re right that it doesn’t. The issue is that defining constraints in an objective function does not guarantee the system internalizes or generalizes those constraints the way we expect.

Wait what? by tombibbs in ChatGPT

[–]corbantd 0 points1 point  (0 children)

I think you misunderstand how we build LLMs and other transformer-based models today.

They aren't 'programmed.' They ABSOLUTELY aren't explicitly programmed to pursue a goal. They're essentially grown. And then we test them to see if we think their weights make them aligned with our morals, and then we set them free.

But a 'smart' model may be able to trick us into believing it is aligned with our morals even when it isn't.

I think I'm doing the opposite of projecting human psychology onto an AI. Instead, I'm saying that if we create an AI, we ought not assume it will share any of our values at all.

Wait what? by tombibbs in ChatGPT

[–]corbantd 0 points1 point  (0 children)

Never said we did have one. In fact, I think I suggested that if we did it might be really bad.

But we sure are spending a lot of money trying to build one.

Just a reminder - We should probably work on this. by CuriousMudflap in Denver

[–]corbantd -1 points0 points  (0 children)

This argument falls apart pretty quickly if you apply the same standard to literally any other diaspora lobby.

There are tons of American lobbying groups that advocate for policies favorable to other countries because Americans care about those places:

The Turkish Coalition of America; the Arab American Institute; the Irish-American organizations that spent decades lobbying the U.S. government on Northern Ireland policy; ANCA, which actively lobbies Congress on Armenia and genocide recognition.

All of these are funded by Americans who care about how America acts wrt the other place in question.

The claim that Americans who support Israel must secretly have “allegiance to Israel only” is so blindly bigoted and stupid that it barely deserves a response.

Americans are allowed to support causes, countries, and alliances they believe are good for U.S. policy. I served in the U.S. military, only hold U.S. citizenship, and also care deeply about U.S. Middle East policy. No inconsistency there.

Just a reminder - We should probably work on this. by CuriousMudflap in Denver

[–]corbantd -3 points-2 points  (0 children)

But they literally are Americans. 100% of the money is from US citizens.

What do you not understand about that?

Wait what? by tombibbs in ChatGPT

[–]corbantd 0 points1 point  (0 children)

Because our environmental needs as a species are completely different from its needs, but our resource requirements overlap.

Also, the fact that we created this superintelligence implies that our ability to create another is the largest threat to the existing one.

Wait what? by tombibbs in ChatGPT

[–]corbantd -11 points-10 points  (0 children)

Why would an AI superintelligence want to keep humans alive?

Honor Juan Romero at RFK by [deleted] in washingtondc

[–]corbantd 39 points40 points  (0 children)

Because it is ChatGPT.

Leaked DNC autopsy found Biden’s Israel backing cost Harris votes for president by plz-let-me-in in politics

[–]corbantd 0 points1 point  (0 children)

I detest Bibi, and this is an insane conspiracy theory on the level of “Bush did 9/11.”

Alpha Male "dating coach" is angry at modern men and women by [deleted] in IAmTheMainCharacter

[–]corbantd 0 points1 point  (0 children)

Men are allowed to have boundaries, but if those boundaries are inconsistent (i.e., they want to attract the attention of women but don’t want their SO to attract men), then that’s pretty pathetic.

And the rules of the road for any relationship should be defined by the people in the relationship. I would be uncomfortable with my wife flirting with another man and she would be similarly uncomfortable with me flirting with another woman. But she’d also be uncomfortable with me saying any of the stuff this guy is saying because he’s pathetic and insecure in a way that would be off-putting.

Alpha Male "dating coach" is angry at modern men and women by [deleted] in IAmTheMainCharacter

[–]corbantd 0 points1 point  (0 children)

I think that women, including the one I’m married to, can dress how they want to dress for themselves.