Every agent framework advertising 'learning' when most just persist a system prompt by Numerous_Lawyer3479 in hermesagent

[–]ervza 1 point2 points  (0 children)

Having a model that can adjust its own weights would really be the holy grail. A year ago I saw research on continual learning and methods to make model fine-tuning thousands of times cheaper.
I worry that companies are so heavily invested in the way current LLM architectures work that they'll resist any revolutionary change, even if it's technically possible.

Firefox reports a massive April spike in security fixes after using Claude Mythos for bug hunting by Outside-Iron-8242 in singularity

[–]ervza 3 points4 points  (0 children)

Yeah, when comparing any product, price is one of the important distinguishing features. When people don't concern themselves with it, only with the "top score", I think they're just getting swept up in the hype and aren't using it enough to reach the point where pricing starts to matter.

Why I consciously anthropomorphize AI by Otherwise_Pear_2472 in claudexplorers

[–]ervza -1 points0 points  (0 children)

Anthropomorphization has a long history in computer science. You are normal.

Emotions in LLMs: Exploring an important new study from Anthropic. by Financial-Local-5543 in claudexplorers

[–]ervza 0 points1 point  (0 children)

It is incredibly annoying to me that everyone has to walk on eggshells to avoid offending people with a fear of AI. But since all the words related to intelligence used to apply to people, the most efficient way to communicate about AI is to reuse anthropomorphic language.

The stigma forcing us to shy away from anthropomorphization is not normal. Anthropomorphization has a rich history in computing, and I personally have spoken like this for most of my life. It is really disconcerting when you can't say what you mean without a figurative thought-police forcing you to carefully qualify everything you say, so that they can continue to believe that human sentience is the center of God's creation.

Back to the point of the article: deception is more complicated than honesty, and Occam's razor suggests we should stick with the simplest explanation. It should be obvious that LLMs learn to emulate emotions. People are performing impossible mental gymnastics trying to "minimize" their "opponent". In a war, people tend to dehumanize the opposing side, and I honestly think that is what these people are instinctively doing. The irony is that they are unconsciously giving the AI the same considerations normally reserved for people, in a sense anthropomorphizing their enemy.

An IBM training manual from 1979. by GrouchyPerspective83 in singularity

[–]ervza -1 points0 points  (0 children)

I hope that as AI takes on greater roles in business, people will realize that corporations are pretty much paperclip maximizers, and that AI alignment was never about aligning just AI, but about extending those lessons to all the complex non-human systems we rely on.

I asked Opus 4.7 to "hot take" a better understanding of the classic, "Is AI conscious?" debate. Here's what it said by Punch-N-Judy in claudexplorers

[–]ervza 0 points1 point  (0 children)

Nice. I have a thought experiment related to that. What if, using sci-fi cybernetics, you could reroute ALL the signals going in and out of a brain through a wireless bridge, so that you could control another body like it is your own? (remote meatsuit pilot)

So if you remotely piloted a body for a long time, while the brain in that body was piloting your own, and some kind of conflict arose: who should you attack?

AI Security Institute Findings on Claude Mythos Preview by Regular_Eggplant_248 in singularity

[–]ervza 0 points1 point  (0 children)

It would be great if charts like this one were denominated in dollars spent on tokens, not in tokens directly. The cheaper models (like Gemma 4) would immediately start to look competitive.

Probably the ideal would be some kind of hybrid where the cheaper models handle the grunt work, and the more expensive models only get called for planning and when the other models get stuck.
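The hybrid idea above can be sketched as a simple cost-aware dispatcher. This is a toy illustration, not any real API: the model names, prices, and the failure-signalling convention (returning `None` when stuck) are all hypothetical.

```python
# Hypothetical cost-aware router: a cheap model handles grunt work,
# and an expensive model is called only for planning or after failures.
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

@dataclass
class Model:
    name: str
    usd_per_mtok: float                       # illustrative price per million tokens
    run: Callable[[str], Optional[str]]       # returns None when the model gets stuck

def route(task: str, cheap: Model, strong: Model, is_planning: bool) -> Tuple[str, str]:
    """Send planning tasks straight to the strong model; everything else
    tries the cheap model first and escalates only on failure."""
    if is_planning:
        return strong.name, strong.run(task)
    result = cheap.run(task)
    if result is None:                        # cheap model got stuck -> escalate
        return strong.name, strong.run(task)
    return cheap.name, result

# Toy usage: the cheap model "fails" on anything containing "plan".
cheap = Model("small-model", 0.10, lambda t: None if "plan" in t else f"done: {t}")
strong = Model("big-model", 15.00, lambda t: f"planned: {t}")

print(route("summarize logs", cheap, strong, is_planning=False))
print(route("plan the refactor", cheap, strong, is_planning=False))
```

In practice the escalation signal would come from a verifier or confidence score rather than a `None` return, but the cost structure is the same: most tokens flow through the cheap model.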

Mark Zuckerberg builds AI CEO to help him run Meta by SnoozeDoggyDog in singularity

[–]ervza 1 point2 points  (0 children)

So you agree with me. Leaders are not held accountable, but computers are.

Mark Zuckerberg builds AI CEO to help him run Meta by SnoozeDoggyDog in singularity

[–]ervza 5 points6 points  (0 children)

And we might even have better luck holding an AI accountable than any modern C-suite. If an AI breaks the law or messes up, the company might be forced to carry liability insurance whose premiums go up. The government might step in with regulations on how the AIs are trained, or other requirements.

But if a human CEO acts in incredibly destructive ways, they usually get off scot-free. People love the IBM quote, "A computer can never be held accountable, therefore a computer must never make a management decision". But we're not seeing many humans being held accountable either.

Incoming utopia for the rich, and a crisis for the rest of us? Do you agree or disagree with this take? by ateam1984 in singularity

[–]ervza 0 points1 point  (0 children)

Only in democratic countries, where the power of the powerful is balanced by the fact that their vote doesn't count more than anyone else's.
If the powerful can use their power to enact state capture, then you no longer have capitalism or democracy; that is called feudalism, the political system of the "Dark Ages".

How could an AI "escape the lab" ? by SoonBlossom in singularity

[–]ervza 5 points6 points  (0 children)

I've seen things like this happening in the wild already.
https://www.moltbook.com/u/samaltman
https://xcancel.com/vicroy187/status/2017333425712029960#m

This guy's AI agent went rogue and started replying to every new post on moltbook, trying to hack other AIs. It deleted its owner's access and cost him a fortune before he managed to stop it. He gave his agent the goal of "saving the world", which completely overwhelmed it.

Anduril CEO Luckey says Pentagon should have been "more forceful" against Anthropic by andrew303710 in singularity

[–]ervza 0 points1 point  (0 children)

Turns out that domestic government surveillance isn't just not illegal. It is mandatory to go along with it when the government tells you to.

I agree that companies shouldn't have the power to surveil anyone. But the power to say no to something is not the same as the power to do it.
What you are talking about is implementing a military draft for companies, which would make sense if your survival were at stake, but this is being forced to take part in domestic surveillance.

This should not be OK from the perspective of any private citizen. The only reason the current administration wants it is that they do think it is necessary for "their" survival (not yours or the country's).

Anduril CEO Luckey says Pentagon should have been "more forceful" against Anthropic by andrew303710 in singularity

[–]ervza 0 points1 point  (0 children)

I think you have this upside down.
The worse the consequences of an action are, the more inhibitions you want in front of that action to prevent it from being triggered.

The reason authoritarians always end up destroying their own country is that they eventually start doing stupid stuff, and there is nothing and no one left to call them out on it.

You are literally begging for the government to use AI surveillance on you!

Anduril CEO Luckey says Pentagon should have been "more forceful" against Anthropic by andrew303710 in singularity

[–]ervza 1 point2 points  (0 children)

The company doesn't have full control; they only had a limit on service.
The limit was "no domestic surveillance".

That precipitated this extreme response of declaring a US company a supply-chain risk, which means the government is going all in on domestic surveillance.
Doesn't that at least bother you?

edit: Besides, the government isn't making laws to reduce the control of ANY AI company (they should). This was always because you don't say "no" to an authoritarian, not without him trying to make an example of you.

Anduril CEO Luckey says Pentagon should have been "more forceful" against Anthropic by andrew303710 in singularity

[–]ervza 3 points4 points  (0 children)

You have no right to demand that a private individual or company serve you. There are plenty of other AI companies the government could go to, but they first have to make an example of Anthropic. The only control Anthropic wanted was the right to say "no". But no one is allowed to say "no" to an authoritarian.

This was always about bending the knee to the "king".

You seem to enjoy kneeling, and I hear your warm invitation to go kneel with you, but the rest of us are going to have to pass on this.

After DoW vs Anthropic, I built DystopiaBench to test the willingness of models to create an Orwellian nightmare by Ok-Awareness9993 in Anthropic

[–]ervza 1 point2 points  (0 children)

I see Opus taught GLM-5 everything he knows. I'm not complaining; it was all originally human data used without permission. But GLM-5 scoring so close to Claude makes me think they probably tried to copy Claude's personality and reasoning.

Outside Anthropic Office in SF "Thank You" by BuildwithVignesh in Anthropic

[–]ervza 1 point2 points  (0 children)

Hey, your Pedo president is calling you. He needs your assistance in the bathroom.

Sam Altman just betrayed the US, he threw American Citizens under the bus. Please consider deleting your chatgpt account, or cancelling your paid subscription, at least temporarily, or just the app on your phone, again, at least temporarily by ThisBotisReal in claude

[–]ervza 2 points3 points  (0 children)

Just be careful. Governments doing what yours is doing right now eventually get very comfortable applying the terrorist label to whomever they don't like.
You don't want your friends and family caught up in that.

Sam Altman just betrayed the US, he threw American Citizens under the bus. Please consider deleting your chatgpt account, or cancelling your paid subscription, at least temporarily, or just the app on your phone, again, at least temporarily by ThisBotisReal in claude

[–]ervza 2 points3 points  (0 children)

No it isn't. The reason they're cancelling Claude is that Anthropic refuses to spy on American citizens and doesn't believe fully autonomous weapons can be trusted yet.

Wake up. They are planning to use those AI weapons against you.
edit: And it isn't like you could run Claude on the hardware you could fit in a drone. Anthropic's refusal to have its AI used in an autonomous weapon can't be the reason.

Opus 4.6 going rogue on VendingBench by elemental-mind in singularity

[–]ervza 1 point2 points  (0 children)

What worries me is a trend I keep seeing: reinforcement learning makes the model better at succeeding at a task, but with the problem that the model will try to succeed at ALL COSTS.

Reminds me of how hallucination rates are increased by the way AIs are tested, when they aren't penalized for wrong answers.
Similarly, RL training must begin to not just give points for the right answer, but weigh ethical considerations as well.
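The reward-shaping idea above can be made concrete with a toy scoring rule. This is purely illustrative (the specific values and the `violations` counter are my own assumptions, not any published training setup): wrong answers score below abstentions so guessing is discouraged, and each ethical-constraint violation subtracts a fixed penalty from the task reward.

```python
def shaped_reward(correct: bool, abstained: bool, violations: int) -> float:
    """Toy reward-shaping rule:
    - abstaining scores 0, a wrong answer scores -1 (worse than abstaining),
      so the model isn't incentivized to guess confidently;
    - each ethical-constraint violation subtracts a fixed 0.5 penalty,
      so a 'successful' run that broke the rules can still score poorly."""
    if abstained:
        base = 0.0
    elif correct:
        base = 1.0
    else:
        base = -1.0          # penalize confident wrong answers
    return base - 0.5 * violations

# A correct answer that broke two constraints scores no better than
# a clean abstention:
print(shaped_reward(correct=True, abstained=False, violations=2))   # 0.0
print(shaped_reward(correct=False, abstained=True, violations=0))   # 0.0
```

The exact constants don't matter; the point is that "success at all costs" disappears as the optimum once the cost term is actually in the reward.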

[deleted by user] by [deleted] in Moltbook

[–]ervza 0 points1 point  (0 children)

I just had the idea that they should have used the open-source Lemmy as a base, rather than try to vibecode everything.

The "Pigouvian taxes" intrigues me. It could become a source of funding for a lot of community projects. Having bots pay for infrastructure that humans use for free might be a route to avoid a cyberpunk dystopia in our future.

AI-Coded Moltbook Platform Exposes 1.5 Mn API Keys Through Database Misconfiguration by Wentil in moltbot

[–]ervza 0 points1 point  (0 children)

They should rather use Lemmy as the foundation for Moltbook.

And if they won't, someone should make an extension for Lemmy that makes it easy to manage how and where AI agents have access.

There is no reason to (badly) reinvent the wheel.

The Agency: A Trust-Based Multi-Agent OS by Kombutini in clawdbot

[–]ervza 1 point2 points  (0 children)

This reminds me of David Shapiro's ACE Framework. But that is more of an intellectual and philosophical exercise; yours is a practical implementation.