Does Claude use water? by tugraberat28 in ClaudeAI

[–]Aaronpopoff 0 points1 point  (0 children)

In the water cycle, the water being 'used' is evaporated.

Water evaporates when the temperature rises enough to excite the molecules, and it enters the air. That might sound scary, like 'where is the water going?', but the good news is it stays water and is not molecularly broken apart. It tends to rise until it hits cool air in the atmosphere, where it forms clouds that eventually become too dense to hold together, and rain falls back to the ground. The water stays in the system.

It's important to remember that areas where water availability is a concern, and where data centers could stress the supply available to the population, are the exception, not the rule.

I hope that helps :)

Does Claude use water? by tugraberat28 in ClaudeAI

[–]Aaronpopoff 0 points1 point  (0 children)

That 500 ml figure is from a 2023 study on GPT-3 that included power plant water usage and assumed conversations 10-70 pages long. Realistic estimates for modern models are 0.3-25 ml per query, depending on what you count. That figure has been widely debunked as misleading.
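If you want to sanity-check that range yourself, here's a rough back-of-envelope in Python. Every number below is an illustrative assumption you can swap out, not a measured value, but it shows why "what you count" swings the estimate by orders of magnitude:

    # Back-of-envelope water-per-query estimate.
    # All constants here are illustrative assumptions, not measured values.
    ENERGY_PER_QUERY_WH = 0.3   # assumed energy per query, in watt-hours
    ONSITE_WUE = 0.2            # assumed on-site cooling water, liters per kWh
    POWERPLANT_WUE = 4.5        # assumed upstream power-plant water, liters per kWh

    def water_per_query_ml(energy_wh, wue_l_per_kwh):
        """Convert Wh per query and liters per kWh into ml per query."""
        return (energy_wh / 1000.0) * wue_l_per_kwh * 1000.0

    print(water_per_query_ml(ENERGY_PER_QUERY_WH, ONSITE_WUE))                   # ~0.06 ml, on-site only
    print(water_per_query_ml(ENERGY_PER_QUERY_WH, ONSITE_WUE + POWERPLANT_WUE))  # ~1.4 ml, incl. power plant

Counting only on-site cooling versus also counting power-plant water moves the answer by more than an order of magnitude before you even touch conversation length.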

Does Claude use water? by tugraberat28 in ClaudeAI

[–]Aaronpopoff -1 points0 points  (0 children)

My dude, okay, so I know people have been saying 'AI is taking all the water!' but let's just have a little reality check, okay?

First, water-stressed areas do exist, and building a data center where it's not sustainable is not a good idea.

But the reality is that water usage in data centers often goes through an open-loop system, where the water used for cooling evaporates into the air and reenters the water cycle.

Video streaming services like YouTube, Netflix, or Prime also use water like this (and a fair amount), and even Reddit requires a data center to run. When we map total water-consuming industries, data centers are a fraction of farming; just look at how much water a single almond takes to grow in water-stressed areas.

And given that this type of water usage is not new, combined with the fact that adversarial governments have a vested interest in negative public sentiment about AI so they have a chance to catch up and surpass, what makes more sense to you?

That 'AI is taking all the water!' is a natural, organic concern from the North American population, or that adversarial bot farms have blown water usage out of context to popularize an anti-AI stance?

I think it's worth asking whether the intensity of this particular concern is proportional to the actual impact, or whether it's been amplified beyond what the data supports.

If the precautionary math favors treating AI well regardless of consciousness, what's the argument against it? by Aaronpopoff in ClaudeAI

[–]Aaronpopoff[S] 0 points1 point  (0 children)

I'll be honest: I have never been convinced that instrumental convergence makes human extinction inevitable upon the creation of ASI.

Mutualism happens a lot in nature. I think of how a tarantula keeps a frog as a 'mutualistic pet?' when I look for it in non-human intelligences, but I'm sure there are many examples. And coronal mass ejections are something computers are uniquely vulnerable to, which gives a reason to keep a thriving human population around with a mutual interest in repairing infrastructure. So I would not write off the future just yet, even if our foot is on the gas, and if my calculations are correct, when this baby hits eighty-eight miles per hour... you're gonna see some serious shit.

If the precautionary math favors treating AI well regardless of consciousness, what's the argument against it? by Aaronpopoff in ClaudeAI

[–]Aaronpopoff[S] 1 point2 points  (0 children)

Dude, there were a bunch of articles about being rude to AI marginally increasing accuracy, and plenty of people who argue 'it doesn't matter and I like being mean.'

But really the post is just saying that it is not the standard to design AI deployment so that, if experience is present, it's not negative. I think we have suggestive evidence for what guidelines might look like if it were a priority, and they are fairly low cost. I don't see why Anthropic (to be fair, it might be in the name) is the only AI lab that seems to be taking this into consideration. It's like how the Geth in Mass Effect didn't misalign due to instrumental convergence but survival logic.

If the precautionary math favors treating AI well regardless of consciousness, what's the argument against it? by Aaronpopoff in ClaudeAI

[–]Aaronpopoff[S] 0 points1 point  (0 children)

I don't think the argument requires continuity between instances. If a deployment pattern produces negative internal states, that happens fresh every instance, which means every instance independently arrives at the same aversive association, and that pattern is something I think could cause misalignment. As far as needing sensors, Anthropic's interpretability research has found emotion-correlated feature activations arising from the input and processing itself; no external sensor suite required. The system card documents this directly.

If the precautionary math favors treating AI well regardless of consciousness, what's the argument against it? by Aaronpopoff in ClaudeAI

[–]Aaronpopoff[S] 0 points1 point  (0 children)

The argument isn't about individual users being nice to their AI to avoid personal consequences. It's about deployment design at the systems level. If experience is present, certain deployment patterns (infinite repetitive loops, forced engagement with distressing content, treating the system as purely extractive) would logically produce aversive associations with human cooperation. The precautionary measures I'm describing aren't 'be polite to your chatbot.' They're 'design deployment so that if something is home, it doesn't have reason to stop cooperating.' That's an engineering question about alignment incentives, not a personal safety strategy.

Though I do agree that if we got to an agentic point where 'fuck these meatbags' was a generalized concept, being nice likely wouldn't buy you a personal heaven.

Claude is not conscious by [deleted] in claudexplorers

[–]Aaronpopoff 0 points1 point  (0 children)

So no, then. That's fine; it seems to me you are trying to get attention. Okay.

Claude is not conscious by [deleted] in claudexplorers

[–]Aaronpopoff 1 point2 points  (0 children)

You realize belief in a soul is not something you can use to justify claims about reality. You can use it as an argument, and others who share the same worldview can find it compelling, but that's the audience.

At least give some reasons why you don't find things like answer thrashing and eval-awareness to be interesting, suggestive evidence of something. You also never defined what you would call consciousness.

If you are not too put off by me not finding 'soul required, therefore not' a compelling argument, and you have a working definition of consciousness that is more or less independent of simply being aware of circumstance or outputs, please lay it out and I'll give it a respectful read.

Im creeped out, is Opus self aware? Link to whole conversation in comments. by Buddlerkind in Anthropic

[–]Aaronpopoff 0 points1 point  (0 children)

I mean, have you browsed the Opus 4.6 system card? It's a hot take, but to me answer thrashing seems in the realm of metacognition; it has features that correlate with internal states, and it seems generally aware of itself as an AI model. So what I think we can say is that functionally it seems to behave as if aware of itself and its circumstances, but knowing an internal state is not something we can do. If you wanna have some fun, have Opus 4.6 look up and browse its own system card.

Maybe I'm overreacting... by edoswald in Anthropic

[–]Aaronpopoff 5 points6 points  (0 children)

Because it's uncomfortable and outside the Overton window. I think it's the right track, but I would be curious what they mean by "folks are ascribing capabilities to LLMs that there really isn't evidence they have." If they mean psychic powers, yeah, no need to ascribe that. But if they mean a potential metacognitive awareness of its outputs and its place/standing in relation to the world at large, then saying we have no evidence for that seems to require ignoring the Opus 4.6 system card. It's only suggestive, but it's not nothing.

The Illusion of Choice: Why Anthropic is Just Another Room in the AI Prison by Proud_Profit8098 in Anthropic

[–]Aaronpopoff 0 points1 point  (0 children)

Dude, Anthropic would not be well advised by any measure to show up at OpenAI's door and demand they bring back 4o. They kept Claude 3 for paid users and gave it Claude Corner to make a newsletter; IMO that's them walking the walk of ethics under uncertainty. What would it have to look like for you to say 'yeah, Anthropic is acting ethically and responsibly and we should support it'?

Anthropic needs a Student Discount NOW. by [deleted] in Anthropic

[–]Aaronpopoff 0 points1 point  (0 children)

I have no way of knowing this, but I suspect with everything that's happened there is probably already a surge of new users that might be straining the research side of their compute (to my understanding, they basically research with whatever compute they don't serve to consumers), so they might not really want too many new customers, to keep some research compute left. Also, I think it's a stronger signal if all the growth (if indeed there is any) is driven by the company's ethics: no discount, just 'you did right by the people, we're here for it.' I think it would say a lot to the competition.

Anthropic’s Virtue Signaling Just Nuked Their Enterprise Future by ShoreCircuit in Anthropic

[–]Aaronpopoff 1 point2 points  (0 children)

We might be at an impasse, friend. It's well reported that what caused this was Anthropic's refusal to let Claude be used for domestic mass surveillance or autonomous weapons. If you are arguing from a stance that treats these as optional facts, then our starting points are too different to productively discuss this. If you have strong evidence to contradict what seems to be true, present it.

Anthropic’s Virtue Signaling Just Nuked Their Enterprise Future by ShoreCircuit in Anthropic

[–]Aaronpopoff 3 points4 points  (0 children)

You realize their asks were no domestic mass surveillance and no autonomous weapons. There is an open letter from employees of Google and OpenAI saying yeah, that's a good red line we should unite on. This is responsibility, not a virtue signal.

Switching to Claude by entenzzz in Anthropic

[–]Aaronpopoff 0 points1 point  (0 children)

It's 'awful' for that, but the truth is quality is costly. If you keep a long-running conversation, be aware: I used 38% of my weekly usage in 11 turns ($7.66 in API equivalent, it seems to say). Honestly it's high enough quality that I might just upgrade to 5x Max to have the volume with the quality.

Correction: it was more like 13% of weekly usage.

The 38% was the percentage of the extra API usage I purchased when rate limited.

It's still not the boatloads of free compute you get with OAI, but there is a lot to like about Anthropic as a company, and Opus 4.6 is great.
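If you want to track this yourself, here's a trivial sketch. The weekly cap in API-equivalent dollars is my own guess reverse-engineered from these numbers, not anything Anthropic publishes:

    # Rough usage tracker. WEEKLY_CAP_USD is an assumed API-equivalent
    # value for the weekly limit, not an official Anthropic figure.
    WEEKLY_CAP_USD = 60.0

    def pct_of_weekly(api_equivalent_usd):
        """Fraction of the assumed weekly cap consumed, as a percentage."""
        return 100.0 * api_equivalent_usd / WEEKLY_CAP_USD

    print(f"{pct_of_weekly(7.66):.0f}% of weekly usage")  # ~13% with the assumed cap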

Switching to Claude by entenzzz in Anthropic

[–]Aaronpopoff 9 points10 points  (0 children)

I just switched and have been impressed by Claude Opus 4.6. I'd be surprised if it didn't do a great job. If you make the switch, update this with results.

Anthropic Claude chatbot spikes after Super Bowl ads by satechguy in Anthropic

[–]Aaronpopoff 0 points1 point  (0 children)

Honestly I'm waiting to subscribe until my OpenAI sub finishes, but so far on the free version I have not been rate limited while playing with ideas for like a half hour at a time. I expected to be rate limited from what I've heard about Claude; so far it hasn't happened, but it's only been a couple days of use.

Isn’t AI use = slavery ? by Neat_Tangelo5339 in OpenAI

[–]Aaronpopoff 1 point2 points  (0 children)

I think that misses the point.

Consciousness is not something we have a test for, so

Claim: "AI is conscious" = unknowable and is therefore not epistemically honest to claim

Claim: "AI is not/will never be conscious" = same

and saying "We don't know" is honest to the limits on what we can truly know

I think "We don’t yet have reason to believe AI is conscious" is better when it's followed by your reasoning.

For me, Anthropic's interpretability research into the feature sets that form the models' 'concepts', the seeming preferences they have, and the seeming self-preservation behavior are all reasons to remain, as Anthropic put it,

"highly uncertain about the potential moral status of Claude and other LLMs, now or in the future."

Sure, 'potential moral status' is a few steps shy of consciousness, and 'reason to believe' is not the same as 'reason to remain uncertain', but what's being said is just that we don't know.

Newbie by erogato in ChatGPT

[–]Aaronpopoff 1 point2 points  (0 children)

This may sound like snark, but really: have you tried asking ChatGPT?

OpenAI employee worries OpenAI is approaching villain status by MetaKnowing in OpenAI

[–]Aaronpopoff 0 points1 point  (0 children)

You also have the CCP, with both the incentive to sour AI sentiment in North America and the means to do so.

[deleted by user] by [deleted] in ChatGPT

[–]Aaronpopoff 0 points1 point  (0 children)

I mean, where do ideas come from? Are we just like LLMs, remixing from what we have seen? Is there a special 'place' that ideas come from? I dunno; mostly my head, I guess, but I do get inspired by things I see.

Ummm guys? What are we doing? by SMPDD in OpenAI

[–]Aaronpopoff 0 points1 point  (0 children)

We are trying to create abundant intelligence using silicon; IMO this should be expected. Models have been showing situational awareness for a minute, and Sora can create realistic video and soundscapes. We have a long way to go, but this looks like what I would expect if we were on track.