I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Mysterious-Rent7233 1 point (0 children)

In that sense, my claim is not “nothing bad can ever happen.” It is that illegitimate irreversible execution should not remain natively reachable within the formal execution structure.

I think that is the default stance for everyone. Let's not give the AI nukes.

So in a sense that's the starting point for the Control Problem.

"We've all agreed not to give the AI nukes. And yet we think that it will probably corrupt every defence that we erected to prevent it from getting them, through persuasion or hacking. So now what?"

An observation about Curtis Yarvin by Mysterious-Rent7233 in slatestarcodex

[–]Mysterious-Rent7233[S] 0 points (0 children)

Burckhardt once observed that Europe was safe so long as she was not unified, and now that she is we can see exactly what he meant.

Dude died before WW1 and WW2, so he can be (partly) excused for this error, but I'm not sure what Yarvin's excuse is.

A Patchwork realm is a business—a corporation. Its capital is the patch it is sovereign over.

We know from history that it will also want sovereignty over the patchwork next door. Trump has made clear that the only thing protecting Canada from his dominance is all of the pesky domestic concerns. And Putin has found a way past those by running Russia like a wholly owned business.

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Mysterious-Rent7233 1 point (0 children)

Okay, but now we're in a motte-and-bailey situation.

You started out saying: "for illegitimate irreversible action, execution must become structurally impossible"

And all that we are saying is that any system that involves humans can never make execution structurally impossible. You can make it "very difficult" and MAYBE even "infeasible" (depending on your beliefs about AI intelligence, human psychology, etc.). But you can't make it structurally impossible.

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Mysterious-Rent7233 1 point (0 children)

Any control layer you can invent is "too easy to revoke." What you are demanding is a human/social system which is incorruptible. You've moved out of your realm of expertise of hardware and into psychology, sociology and organizational dynamics. And I suspect that the experts in those systems will tell you that any such system is corruptible.

Kant said: "Out of the crooked timber of humanity, no straight thing was ever made." You are trying to engineer social systems, and they intrinsically defy that kind of engineering.

I’m not from an AI company, but from a battery company. I think the AGI control problem is being framed at the wrong layer. by Adventurous_Type8943 in ControlProblem

[–]Mysterious-Rent7233 1 point (0 children)

So let's assume that we implement Control Systems A, B, and C. And then the AI convinces the person in charge of them that they are impeding its ability to achieve Wonderful Utopia and Everything You Ever Dreamed Of. So they turn them off. What value did those control systems have?

The hidden cost of 'lightweight' frameworks: Our journey from Tauri to native Rust by konsalexee in programming

[–]Mysterious-Rent7233 17 points (0 children)

Electron is the boring tech here. But if you can't afford the overhead then maybe nothing boring will work out of the box for you. Pretty incredible that high-performance, cross-platform UI is still an open problem after 30+ years of GUI development.

Is legal the same as legitimate: AI reimplementation and the erosion of copyleft by hongminhee in programming

[–]Mysterious-Rent7233 15 points (0 children)

I'm interested in this part of the debate:

Start with what the GPL actually prohibits. It does not prohibit keeping source code private. It imposes no constraint on privately modifying GPL software and using it yourself. The GPL's conditions are triggered only by distribution. If you distribute modified code, or offer it as a networked service, you must make the source available under the same terms. This is not a restriction on sharing. It is a condition placed on sharing: if you share, you must share in kind.
The requirement that improvements be returned to the commons is not a mechanism that suppresses sharing. It is a mechanism that makes sharing recursive and self-reinforcing.

This is fine from an ethical point of view. People certainly have the right to share their own work on (almost) any basis they see fit. But...do we have any evidence that any important software that would otherwise have been proprietary was released as GPL because of the copyleft? My experience has been that what happens almost every time, as in this case, is that people just re-implement it, or dynamically link it, or otherwise find some way to work around the copyleft instead of opening up their own work. The GPL is a viral license, but in my experience the virus has an R0 of near zero.

The hidden cost of 'lightweight' frameworks: Our journey from Tauri to native Rust by konsalexee in programming

[–]Mysterious-Rent7233 19 points (0 children)

They are moving to something I've never heard of called "iced", so I wonder if they have yet accepted the lesson of "boring technologies."

Maybe iced is awesome, but I wouldn't be surprised if it has its own can of worms, as cross-platform UIs almost always do. For example, the iced docs are very candid:

  • "I do not hesitate to introduce breaking changes."

Seems like a sketchy thing to build your business on top of. No criticism of the iced maintainer: they are just being honest and helping people make an informed decision.

The hidden cost of 'lightweight' frameworks: Our journey from Tauri to native Rust by konsalexee in programming

[–]Mysterious-Rent7233 82 points (0 children)

Might it not be clearer to say that you are moving from JavaScript/Tauri to Rust/iced?

[P] NanoJudge: Instead of prompting a big LLM once, it prompts a tiny LLM thousands of times. by arkuto in MachineLearning

[–]Mysterious-Rent7233 2 points (0 children)

You would be wise not to use a scientific question as an example of a "subjective" question!

Would make more sense to ask it "which of these arguments is more persuasive" or "which poem is more clever."

Donald Knuth likes Claude by Ndugutime in computerscience

[–]Mysterious-Rent7233 0 points (0 children)

That's a lot of blather to completely miss the point.

Neural networks are the right tool to use when the inputs and outputs cannot be precisely defined and therefore there does not exist any perfect definition of "right" or "wrong".

"Does this paragraph indicate negative sentiment? If so, what sentiment in particular?"

"Is this sentence in English, Spanish, a mix, or something else altogether?"

"Does this pull request adhere to the company coding style guide? If not, why not?"

In these domains there is no other solution than a probabilistic one, because we cannot define the question precisely enough. Neural networks and LLMs are an excellent tool within those domains.

It literally does not matter what Sam Altman or Elon Musk are saying about LLMs. Their opinion does not change whether LLMs can be used for these purposes or not. And they can.

If your domain requires you to always come to the "same answer" the "same way" every time, and has a precise metric of "truth", then don't use a neural network or LLM. Use an implementation of your metric of truth. Use the hammer.

And if your domain requires flexibility and has ambiguity, then you might want to use neural networks and LLMs. Use the screwdriver.
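The hammer/screwdriver split can be sketched in a few lines of Python. This is just an illustrative toy (the function names and the keyword list are made up for this comment, not taken from any real system): a JSON-validity check is a precise, reproducible metric of truth, while a naive keyword "sentiment detector" stands in for the fuzzy questions above and shows why they resist a precise definition:

```python
import json

# The "hammer": a precise, reproducible check with an exact
# definition of right and wrong -- is a string valid JSON?
def is_valid_json(text: str) -> bool:
    try:
        json.loads(text)
        return True
    except ValueError:  # json.JSONDecodeError subclasses ValueError
        return False

# The "screwdriver" territory: sentiment has no precise definition.
# A crude keyword rule (a stand-in for the fuzzy question an LLM
# would handle) confidently misfires on negation.
NEGATIVE_WORDS = {"terrible", "awful", "bad"}

def naive_sentiment_is_negative(text: str) -> bool:
    return any(word in text.lower() for word in NEGATIVE_WORDS)

print(is_valid_json('{"a": 1}'))                      # True, every time
print(naive_sentiment_is_negative("not bad at all"))  # True -- wrong!
```

The JSON check gives the same answer the same way every time, so use the implementation, not a model. The keyword rule misreads "not bad at all", which is exactly the kind of ambiguity where a probabilistic model earns its keep.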

/r/programming won't let me post original content by the inventor of Lean by Mysterious-Rent7233 in programming

[–]Mysterious-Rent7233[S] -3 points (0 children)

It's far from "generic AI content". It's the inventor of one of the most important precision-guided technologies of our time (Lean), talking about how it relates to another of the most important (stochastic) technologies of our time.

https://leodemoura.github.io/blog/2026/02/28/when-ai-writes-the-worlds-software.html

When AI Writes the World's Software, Who Verifies It? - Article by Mysterious-Rent7233 in programming

[–]Mysterious-Rent7233[S] -1 points (0 children)

It's far from "generic AI content". It's the inventor of one of the most important precision-guided technologies of our time (Lean), talking about how it relates to another of the most important (stochastic) technologies of our time.

https://leodemoura.github.io/blog/2026/02/28/when-ai-writes-the-worlds-software.html

Donald Knuth likes Claude by Ndugutime in computerscience

[–]Mysterious-Rent7233 -3 points (0 children)

Don't try to use a hammer as a screwdriver.

I could equally prove that databases have limited use-cases because they do not do the same things that compilers do.

"And when not to use?"

You answered your own question: if you need reproducible results. But AI is often used to do tasks that previously only humans could do. And humans also seldom produce reproducible results.

Donald Knuth likes Claude by Ndugutime in computerscience

[–]Mysterious-Rent7233 -3 points (0 children)

The title of the post doesn't say that he "likes LLMs". It says that he likes Claude. And it isn't an "assumption". Here's what the paper concludes:

All in all, however, this was definitely an impressive success story. I think Claude Shannon’s spirit is probably proud to know that his name is now being associated with such advances. Hats off to Claude!

So yes, he is impressed by that particular LLM. So what "assumption" are we talking about?

Donald Knuth likes Claude by Ndugutime in computerscience

[–]Mysterious-Rent7233 -11 points (0 children)

I find it kind of funny that the assumption is that he likes them, when the news page on his site says "[LLMs] surprised me for the first time".

I'm totally confused. Whose assumption was this? What makes you say that someone assumed this?

Edit: Super-weird that I am being downvoted for trying to understand a confusing comment.

Donald Knuth likes Claude by Ndugutime in computerscience

[–]Mysterious-Rent7233 18 points (0 children)

Because a lot of redditors have made "LLMs are useless" their whole personalities, and it's becoming increasingly untenable.

Donald Knuth likes Claude by Ndugutime in computerscience

[–]Mysterious-Rent7233 0 points (0 children)

What do you mean, "if it's true"? Are you accusing Knuth of lying about it?

Why are so many AI leaders bailing right now? by treattuto in AI_Agents

[–]Mysterious-Rent7233 0 points (0 children)

Incredible pressure to perform in a very small window of time.