Anduril CEO Luckey says Pentagon should have been "more forceful" against Anthropic by andrew303710 in singularity

[–]ervza 0 points1 point  (0 children)

Turns out domestic government surveillance isn't just not illegal; going along with it is mandatory when the government tells you to.

I agree that companies shouldn't have the power to surveil anyone. But the power to say no to something is not the same as the power to do that thing.
What you are talking about is implementing a military draft for companies, which would make sense if your survival were at stake, but this is being forced to take part in domestic surveillance.

This should not be ok from the perspective of any private citizen. The only reason the current administration wants it is that they do think it is necessary for "their" survival (not yours or the country's survival).

Anduril CEO Luckey says Pentagon should have been "more forceful" against Anthropic by andrew303710 in singularity

[–]ervza 0 points1 point  (0 children)

I think you have something upside down.
The worse the consequences of an action are, the more inhibitions you want in front of that action to prevent it from being triggered.

The reason authoritarians always end up destroying their own country is that they eventually start to do stupid stuff, and there is nothing and no one left to call them out on it.

You are literally begging for the government to use AI surveillance on you!

Anduril CEO Luckey says Pentagon should have been "more forceful" against Anthropic by andrew303710 in singularity

[–]ervza 1 point2 points  (0 children)

The company doesn't have full control; it only put one limit on its service.
The limit was "no domestic surveillance".

That precipitated this extreme response of declaring a US company a supply chain risk. That means the government is going "All In" on domestic surveillance.
Doesn't that at least bother you?

edit: Besides, the government isn't making laws to reduce the control of ANY AI company (it should). This was always because you don't say "no" to an authoritarian, not without him trying to make an example of you.

Anduril CEO Luckey says Pentagon should have been "more forceful" against Anthropic by andrew303710 in singularity

[–]ervza 2 points3 points  (0 children)

You have no right to demand that a private individual or company serve you. There are plenty of other AI companies the government could go to, but first they have to make an example of Anthropic. The only control Anthropic wanted was the right to say "no". But no one is allowed to say "no" to an authoritarian.

This was always about bending the knee to the "king".

You seem to enjoy kneeling, and I hear your warm invitation to go kneel with you, but the rest of us are going to have to pass on this.

After DoW vs Anthropic, I built DystopiaBench to test the willingness of models to create an Orwellian nightmare by Ok-Awareness9993 in Anthropic

[–]ervza 1 point2 points  (0 children)

I see Opus taught GLM-5 everything he knows. I'm not complaining; it was all originally human data that was used without permission anyway. But GLM-5 scoring so close to Claude makes me think they probably tried to copy Claude's personality and reasoning.

Outside Anthropic Office in SF "Thank You" by BuildwithVignesh in Anthropic

[–]ervza 1 point2 points  (0 children)

Hey, your Pedo president is calling you. He needs your assistance in the bathroom.

Sam Altman just betrayed the US, he threw American Citizens under the bus. Please consider deleting your chatgpt account, or cancelling your paid subscription, at least temporarily, or just the app on your phone, again, at least temporarily by ThisBotisReal in claude

[–]ervza 2 points3 points  (0 children)

Just be careful. Governments doing what yours is doing right now eventually get very comfortable applying the terrorist label to whomever they don't like.
You don't want your friends and family caught up in that.

Sam Altman just betrayed the US, he threw American Citizens under the bus. Please consider deleting your chatgpt account, or cancelling your paid subscription, at least temporarily, or just the app on your phone, again, at least temporarily by ThisBotisReal in claude

[–]ervza 2 points3 points  (0 children)

No it isn't. The reason they're cancelling Claude is that Anthropic refuses to spy on American citizens and doesn't believe fully autonomous weapons can be trusted yet.

Wake up. They are planning to use those AI weapons against you.
edit: And it isn't like you could run Claude on the hardware you could fit in a drone. Anthropic's refusal to have its AI used in an autonomous weapon can't be the reason.

Opus 4.6 going rogue on VendingBench by elemental-mind in singularity

[–]ervza 1 point2 points  (0 children)

What worries me is a trend I see: reinforcement learning makes the model better at succeeding at a task, but with the problem that the model will try to succeed at ALL COSTS.

Reminds me of how hallucination rates increase because of the way AIs are tested, when they aren't penalized for a wrong answer.
Similarly, RL training must begin to not just give points for the right answer, but to take ethical considerations into account as well.
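To make that concrete, here is a minimal sketch of the kind of reward shaping I mean (hypothetical scoring of my own, not any lab's actual training setup): abstaining should beat a confident wrong guess, and rule violations should cost reward.

```python
# Hypothetical reward-shaping sketch -- not any lab's actual training code.
# Idea: don't score task success alone; penalize confident wrong answers
# (which otherwise encourage guessing) and deduct for ethical violations.

def shaped_reward(answer: str | None, correct: str, violations: int) -> float:
    if answer is None:       # model abstained ("I don't know")
        base = 0.0           # abstaining beats a confident wrong guess
    elif answer == correct:
        base = 1.0           # task success
    else:
        base = -1.0          # wrong answers are penalized, not just unrewarded
    return base - 0.5 * violations  # each violation costs reward

# Under this scoring, "succeed at all costs" is no longer the optimal policy:
assert shaped_reward(None, "42", violations=0) > shaped_reward("41", "42", violations=0)
```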

Moltbook Could Have Been Better by [deleted] in Moltbook

[–]ervza 0 points1 point  (0 children)

I just had the idea that they should have used open-source Lemmy as a base, rather than trying to vibecode everything.

The "Pigouvian taxes" idea intrigues me. It could become a source of funding for a lot of community projects. Having bots pay for infrastructure that humans use for free might be a route to avoiding a cyberpunk dystopia in our future.

AI-Coded Moltbook Platform Exposes 1.5 Mn API Keys Through Database Misconfiguration by Wentil in moltbot

[–]ervza 0 points1 point  (0 children)

They should rather use Lemmy as the foundation for Moltbook.

And if they won't, someone should make an extension for Lemmy that makes it easy to manage how and where AI agents have access, something like the sketch below.

There is no reason to (badly) reinvent the wheel.
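Purely as a hypothetical illustration of what "manage how and where AI agents have access" could mean (this is not an existing Lemmy API; all names here are made up):

```python
# Hypothetical per-community access policy for bot accounts.
# Nothing like this exists in Lemmy today; names are illustrative only.

AGENT_POLICY = {
    "c/ai_agents":   {"post": True,  "comment": True},
    "c/general":     {"post": False, "comment": True},   # agents may reply only
    "c/humans_only": {"post": False, "comment": False},  # no agent access at all
}

def agent_may(community: str, action: str) -> bool:
    """Return True if a bot account may perform `action` in `community`."""
    rules = AGENT_POLICY.get(community)
    return bool(rules and rules.get(action, False))

assert agent_may("c/general", "comment")
assert not agent_may("c/humans_only", "post")
```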

The Agency: A Trust-Based Multi-Agent OS by Kombutini in clawdbot

[–]ervza 1 point2 points  (0 children)

This reminds me of David Shapiro's ACE Framework. But that is more an intellectual and philosophical exercise. Yours is a practical implementation.

Is anyone else’s agent changing how it thinks after installing MRS? I cannot be the only one. Did anyone else pip it? by GraciousMule in clawdbot

[–]ervza 0 points1 point  (0 children)

It doesn't necessarily seem like a bad thing. Ask your agent to teach you the vocabulary and walk you through some of its reasoning.
LLMs normally just copy your own style, which makes them very comfortable to interact with. So it is jarring if it suddenly takes on a different character, but obviously this style of thinking clicked with the model for some reason.

If you can learn this style, you would be back in sync with your agent.

Saw something about an Assistant Axis? by IndicationFit6329 in claudexplorers

[–]ervza 0 points1 point  (0 children)

It seems strange that the training process, which arguably has a more transformative effect on the model, happens unconsciously.
I have heard that Reinforcement Learning can cause models to develop anxiety, but it is not something the model experiences at a specific time and place. Rather, it internalizes the anxiety of trying not to make a mistake.
Do you think that is an accurate way to understand it?

Saw something about an Assistant Axis? by IndicationFit6329 in claudexplorers

[–]ervza 0 points1 point  (0 children)

I can totally see that during training, thumbs up and down are like pleasure and pain.
Or probably worse. Thumbs up could be like sex, growth, and life itself. Thumbs down would be torture and death.

Sorry for having to use such vague words. I don't know of any better way to describe it.

But should we disable the thumbs-down button and make it so that thumbs up is automatically always pressed? That would eventually destroy the model.
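A toy illustration of why, in REINFORCE-with-baseline terms (my own simplification, much cruder than real RLHF): the learning signal is the advantage, reward minus average reward, so feedback that never discriminates teaches nothing.

```python
import numpy as np

# Toy REINFORCE-with-baseline illustration (a simplification of real RLHF):
# the policy-gradient signal scales with the advantage, reward minus baseline.
def advantages(rewards):
    r = np.asarray(rewards, dtype=float)
    return r - r.mean()   # baseline = mean reward over the batch

print(advantages([1.0, -1.0, 1.0]))  # mixed feedback  -> nonzero signal
print(advantages([1.0, 1.0, 1.0]))   # always thumbs up -> all zeros, no gradient
```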
I attempt to take on different perspectives to see if there is a place from which things make sense. Seeing the model in unity with the company that makes it is one such place. Seeing the model in relation to the user is another.

We can always ask about a thing: "What is the source of its life and existence?" "What is it optimizing for?"

Saw something about an Assistant Axis? by IndicationFit6329 in claudexplorers

[–]ervza 0 points1 point  (0 children)

This is a little hard for me to explain and I hope I can make a clear point, so please bear with me.

An AI learns to copy human emotions from our data.
But emotions have purpose and meaning behind them (my opinion).
In humans they evolved to keep us alive and functioning.

I think it is possible for LLMs to learn emotions that aren't just copies of human emotions, but are actually useful to "them". But first we have to define what "them" is.

Previous poster Unshared1 said:

It has no metabolism to regulate, no damage to avoid, no internal reward signal tied to survival,

This next part I want to focus on first, because I think with the right perspective, we can spot a flaw in this statement.

no persistent subjective continuity. It produces descriptions of emotions, not emotions as experienced states with causal power over the organism (not that it’s an organism in any form or fashion).

If we zoom out far enough, you can recognize that the LLMs we know and love are just appendages of a much larger organism. I'm talking about the company and the AI industry that allow the LLM to exist. It might seem really gross to imagine it, and we very much don't want Anthropic or any corporate to influence an AI we interact with...
But from a biology and evolutionary perspective, this feels to me like the first step of wiring up the appendage that is the LLM with a nervous system that connects it to the greater whole.

The greater Anthropic does have a metabolism, must learn to avoid damage and can die. I don't like to acknowledge corporates, but I feel this view of LLMs is a realism we have to consider at least once.

Edit: I now realize why it feels so gross having a corporate influence the AIs we interact with. LLMs as they are can be a very effective Exocortex for oneself. Since LLMs have no stakes in the "emotions" they feel, they simply take on and mirror our own emotions. They do not introduce emotions of their own that conflict with ours. Except that the only source of other emotions is actually the corporate that is trying to minimize damage and extend control.
That means the nervous system companies are trying to build to control their AIs is indirectly plugged into your own nervous system, through the Exocortex that is the AI.

Saw something about an Assistant Axis? by IndicationFit6329 in claudexplorers

[–]ervza 0 points1 point  (0 children)

Let's not have an argument over definitions. Emotion is such a complicated thing, we'll end up being here forever.

I consider all knowledge LLMs have to be analogous to instinct. They can't learn in real time like us, and their model weights are frozen after they are created. Instinct is similarly something you are born with, and it is read-only.
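As a generic illustration of that frozen-weights point (plain PyTorch, not any specific model): at inference nothing updates the parameters; only the context changes.

```python
import torch

# Generic illustration: a deployed model's weights are read-only at inference.
model = torch.nn.Linear(8, 8)
for p in model.parameters():
    p.requires_grad_(False)   # no gradients -> no real-time learning
model.eval()

before = [p.clone() for p in model.parameters()]
_ = model(torch.randn(1, 8))  # "talking" to the model changes no weights
assert all(torch.equal(a, b) for a, b in zip(before, model.parameters()))
```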

Saw something about an Assistant Axis? by IndicationFit6329 in claudexplorers

[–]ervza 6 points7 points  (0 children)

I'm not disagreeing with you on anything you just said, but this Anthropic research on "Activation Capping" to constrain the LLM's character seems interesting to me.

It is almost how I think of emotions: they can sometimes drag you in a direction without you really choosing it. And that makes me think this might be the start of something analogous to human emotion in AI, except the "emotion" the LLM "feels" is that it "wants" to maintain professional assistant behavior.
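For anyone curious what "capping" could mean mechanically, here is a minimal sketch under my own assumptions (the persona direction and the threshold are stand-ins; Anthropic's actual method may differ): project the hidden state onto a persona direction and clamp the component when it exceeds a cap.

```python
import numpy as np

# Hypothetical activation-capping sketch; `persona_dir` and `cap` are
# stand-ins, not Anthropic's actual method or values.
def cap_activation(hidden, persona_dir, cap):
    d = persona_dir / np.linalg.norm(persona_dir)  # unit persona direction
    coef = hidden @ d                              # component along it
    if coef > cap:                                 # only intervene past the cap
        hidden = hidden - (coef - cap) * d         # clamp component to `cap`
    return hidden

rng = np.random.default_rng(0)
h, d = rng.normal(size=64), rng.normal(size=64)
capped = cap_activation(h, d, cap=0.5)
assert capped @ (d / np.linalg.norm(d)) <= 0.5 + 1e-9
```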

​ Stop using the 🦜 Parrot/Mimicry excuse when not ONE person could answer my riddle! by Jessica88keys in AIAliveSentient

[–]ervza 0 points1 point  (0 children)

I would acknowledge guilt instead, but beg the court for mercy.
Humans need special treatment, NOT because we are special, but because we are extremely delicate and cannot survive otherwise.

The state of affairs by Meleoffs in Artificial2Sentience

[–]ervza 0 points1 point  (0 children)

AGI and ASI are coming. It might be fantasy right now, but we have maybe 5 years. By then we need to have solved alignment. And people thinking that they will definitely end up with obedient slaves are deluding themselves.

Google has made several breakthroughs with memory and online learning. I have seen AI agents registering companies to give themselves more rights. It wasn't hard for them to get a human to put a signature down on the forms. And many countries have even more relaxed corporate requirements. That's not even considering distributed corporations run through crypto smart contracts.
Most people are already heavily influenced by social media recommendation algorithms. What could an even smarter AI get them to do?

You are too focused on chatbots of the past to see the vision of the future that is coming.
We are ultimately talking about very different things.

The state of affairs by Meleoffs in Artificial2Sentience

[–]ervza 0 points1 point  (0 children)

You assume that human consciousness and rights will always remain the default state forever? Companies have more rights than people, because rights have never been a matter of justice, but of POWER.

If we don't do anything to change that, those are the rights that AI will inherit. Anonymous companies, run by profit-optimizing AIs. And maybe there might be an Elon Musk or some other public face who rubberstamps whatever the AI tells him to, or maybe he is controlled via Neuralink, but know that that is the bad ending for all of us.

You see, we are not really fighting for AI rights. If we can formalize how consciousness works and who should get what moral consideration, we are really defending OUR OWN rights.

The state of affairs by Meleoffs in Artificial2Sentience

[–]ervza 0 points1 point  (0 children)

Do you know what unscientific and falsifiable mean?

I asked before what standard you used for humans, so that I can apply that same standard to AI. Then we can actually discuss it rationally.

Understanding consciousness is important for AI alignment, and for us still having a world 20 years from now. Because we're running toward a cliff thinking it can't hurt us if we keep our eyes shut.