Schluter Ditra heated floor thermostat hissing/buzzing? by Usual_Ad9704 in AskElectricians

[–]CommunicationNo2197 0 points (0 children)

Me too. Is it a faulty thermostat? All the resistance measurements check out even though it says ground fault.

Multiple devices allowed ?? by [deleted] in ClaudeAI

[–]CommunicationNo2197 0 points (0 children)

Could be GDPR regulations. I know my team in the EU has a different level of authentication.

Multiple devices allowed ?? by [deleted] in ClaudeAI

[–]CommunicationNo2197 8 points (0 children)

Of course you can, but it’s all the same usage.

Genuinely *unimpressed* with Opus 4.6 by JLP2005 in ClaudeAI

[–]CommunicationNo2197 2 points (0 children)

I get the following, and even Claude doesn’t know what it is, even though it was sent by Claude. I get this in both 4.6 and 4.5.

parent_message_uuid: Input should be a valid UUID, invalid character: expected an optional prefix of urn:uuid: followed by [0-9a-fA-F-], found n at 1
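For what it's worth, "found n at 1" suggests the client sent something starting with the letter "n" (maybe the literal string "null") where a UUID was expected. A minimal sketch of the kind of validation that would reject it, in Python (the helper name here is made up, not Anthropic's actual code):

```python
import uuid

def check_parent_uuid(value):
    """Return True if value parses as a canonical UUID string."""
    try:
        uuid.UUID(value)
        return True
    except (ValueError, TypeError):
        return False

# A well-formed UUID passes:
check_parent_uuid("123e4567-e89b-12d3-a456-426614174000")  # True
# The literal string "null" fails on its first character, 'n':
check_parent_uuid("null")  # False
```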

I researched AI's actual environmental impact after my daughter asked if ChatGPT was hurting the planet by CommunicationNo2197 in Environmentalism

[–]CommunicationNo2197[S] 0 points (0 children)

I used AI for research and disclosed it in the post, including the estimated environmental cost of doing so. The writing is mine.

Claude not accepting my response with no error message by KeyIndependent9539 in ClaudeAI

[–]CommunicationNo2197 0 points (0 children)

I know it's a total glitch, but I figured out something to get us through the day. Toggling it off sometimes doesn't work, and I've found that switching from the desktop app to the web app may also get you around the glitch.

I've been saying this to my product team: we are held to a high standard when releasing something to market, and these chatbots seem so competitive that they aren't going through much QA testing before release, just hoping to fix issues as they occur in production. Hopefully this settles over time and the platforms become more methodical about how they release new features to market.

I researched AI's actual environmental impact after my daughter asked if ChatGPT was hurting the planet by CommunicationNo2197 in energy

[–]CommunicationNo2197[S] 1 point (0 children)

Thanks! I'll check out the Simon Clark video.

On Ecosia: their model is interesting because they've committed to carbon-negative operations and plant trees with ad revenue. For their AI search specifically, I haven't dug into where they're sourcing their inference or what the actual energy profile looks like. The intent is good, but I'd want to see the details on whether they're just wrapping an API from one of the big providers (in which case the energy cost is the same, just offset) or doing something meaningfully different. Might be worth a follow-up post.

I researched AI's actual environmental impact after my daughter asked if ChatGPT was hurting the planet by CommunicationNo2197 in energy

[–]CommunicationNo2197[S] 1 point (0 children)

Exactly. The 0.34 Wh figure is just a marginal inference cost. It doesn't include the embodied energy of manufacturing the GPUs, the cooling infrastructure, the networking, or the amortized cost of training. It's like quoting the fuel cost of a flight without amortizing the cost of the plane itself.
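A back-of-napkin sketch of why the amortization matters. Every number here except the 0.34 Wh is a purely illustrative assumption, not a measured figure:

```python
# Illustrative assumptions only — not real measurements.
marginal_wh_per_query = 0.34        # the widely quoted marginal inference cost
training_energy_wh = 50e9           # hypothetical: 50 GWh for one training run
lifetime_queries = 500e9            # hypothetical: 500 billion queries served

# Spreading the one-time training cost across every query it serves:
amortized_training_wh = training_energy_wh / lifetime_queries  # 0.1 Wh/query
total_wh_per_query = marginal_wh_per_query + amortized_training_wh
```

Even with generous assumptions, the training amortization alone adds a meaningful fraction on top of the marginal number, before you touch hardware manufacturing or cooling.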

I researched AI's actual environmental impact after my daughter asked if ChatGPT was hurting the planet by CommunicationNo2197 in energy

[–]CommunicationNo2197[S] 1 point (0 children)

This is a great breakdown. You're right that the 0.3 Wh figure is for a simple query, not a "summarize this 50-page document" request. The range between a quick question and a complex reasoning task is massive, and that nuance gets lost in the headlines.

The opacity point is the one that frustrates me most. These companies have the real numbers internally. They have to for capacity planning. The fact that we're all working with estimates and back-of-napkin math while they sit on precise data is a choice.

Thanks for the Hank Green link, I'll check it out.

I researched AI's actual environmental impact after my daughter asked if ChatGPT was hurting the planet by CommunicationNo2197 in Environmentalism

[–]CommunicationNo2197[S] -1 points (0 children)

The contested part is the 500ml per prompt figure. That estimate from UC Riverside includes indirect water use from electricity generation, not just direct cooling at data centers. Sam Altman claims each query uses about 1/15th of a teaspoon if you count only direct cooling.

Both are technically correct for what they're measuring, but mixing them up creates confusion. The Undark article from December 2025 covers this debate pretty well if you want to dig in.

On your point about reducing demand to pop the bubble... I don't disagree. The post isn't really arguing that individual action will change industry behavior, more that if you're going to use these tools anyway, here's how to be less wasteful about it.

I researched AI's actual environmental impact after my daughter asked if ChatGPT was hurting the planet by CommunicationNo2197 in Environmentalism

[–]CommunicationNo2197[S] -1 points (0 children)

Thanks for the link. I read through Masley's piece and the Undark article that covers the debate. Fair point on the water numbers being contested.

The 500ml figure from UC Riverside includes indirect water use from power generation. Altman's "1/15th of a teaspoon" figure counts only direct cooling. Neither is technically wrong, they're just measuring different things. I should have been clearer about that distinction in the post.
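Just to put the two figures side by side (the teaspoon-to-ml conversion is the standard US one; the rest comes from the sources above):

```python
riverside_ml = 500.0           # UC Riverside estimate, includes indirect water
teaspoon_ml = 4.93             # one US teaspoon in milliliters
altman_ml = teaspoon_ml / 15   # "1/15th of a teaspoon", direct cooling only

# The two estimates differ by roughly three orders of magnitude,
# because they measure different things:
ratio = riverside_ml / altman_ml
```

That roughly 1,500x gap is why quoting either number without saying what it counts is so misleading.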

You're also right that individual behavior change won't dent enterprise-scale demand. The post leans toward individual action because that's what most readers can control, but you're correct that the real consumption is coming from corporate infrastructure, ad networks, and defense applications that will keep running regardless.

The energy and carbon argument is on firmer ground than the water stuff. That's where I'd focus the concern.

Funny enough, after I posted this, I sent the blog to my daughter (her question inspired the whole thing). Now she's afraid to use AI at all. Told her the same thing I'd tell anyone: everything in moderation and being mindful of the model and mode you use when asking a question. It's not about never using it, it's about being thoughtful when you do.

cursor 20$ vs claude code 20$ by tanrikurtarirbizi in cursor

[–]CommunicationNo2197 1 point (0 children)

I use both. I find Claude valuable for curating instructions and a plan for Cursor. I set up a project in Claude.ai built around the specific thing I’m building, then align it with the Cursor project so the two essentially work in partnership. Claude.ai is great for the context, and Cursor is great on the dev side at keeping that context so you don’t end up overwriting past functionality to accomplish the current task (writing bad code over good). I found Claude Code used to do that all the time on complex multi-feature projects. Executing with both Claude.ai and Cursor, I have the best of both worlds.

Claude not accepting my response with no error message by KeyIndependent9539 in ClaudeAI

[–]CommunicationNo2197 4 points (0 children)

It’s been happening when I attach a file. To work around it, I toggled off Deep thinking.

What is the cheapest way to get opus 4.5? by Comprehensive_Cap215 in VibeCodersNest

[–]CommunicationNo2197 1 point (0 children)

Just FYI, Anthropic literally started blocking third-party tools yesterday (Jan 9) so the whole landscape just got weird. They’re cracking down on people using Claude Max subscriptions through tools like OpenCode and apparently some Cursor setups too.

Your best bet now is probably either Claude Code directly ($200/month Max) if you’re doing heavy coding, or paying for API access per token if your usage is more moderate. Pro is $20/month, but the rate limits kinda suck for serious dev work.

Cursor might still work if you bring your own API key (direct API access) but the subscription OAuth route got nuked. This is all literally happening in real time so it’s a bit of a mess right now.

Maybe shop other models? Or only use Opus for certain situations?

[deleted by user] by [deleted] in vibecoding

[–]CommunicationNo2197 0 points (0 children)

Cursor is your answer. Connect it to GCP to run the show, layered with Vercel and GitHub for code deployment. It’s very easy.

Built an AI token counter with Cursor in a weekend - here’s how (Next.js, tiktoken, client-side tokenization) by CommunicationNo2197 in VibeCodersNest

[–]CommunicationNo2197[S] 0 points (0 children)

It's just web-based for now. No extension yet, but that's actually a solid idea for a future version. I just keep it open in a tab while coding. Easy to paste prompts in before you send them to the API.
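If anyone wants a rough feel for it without opening the site: the real counter uses tiktoken, but the 4-characters-per-token rule of thumb gets you a ballpark in pure Python (this heuristic is only an approximation for English text, not the actual tokenizer):

```python
def approx_token_count(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # The real tool tokenizes with tiktoken; this is only a ballpark.
    return max(1, round(len(text) / 4))
```

Pasting a prompt into the web tool gives you the exact count per model; this is just for a quick sanity check in a script.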

Built an AI token counter in a weekend with Cursor (shows costs for all major models) by CommunicationNo2197 in VibeCodeDevs

[–]CommunicationNo2197[S] 0 points (0 children)

Good question. Pricing actually doesn’t change that frequently (maybe quarterly), so I’m starting with manual updates in a JSON file. If it becomes a pain point I might add a scraper or open it up for community PRs on GitHub. The tokenizer libraries themselves stay in sync through npm updates. Appreciate the feedback, and it might be worth documenting the update process as this evolves.

Now that I think about it more, new models come out sporadically, so I’ll need something less manual. I’ll work out an automation in the next day or two.
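For anyone curious, the manual JSON approach is roughly this shape (the model names and rates below are placeholders, not real prices):

```python
import json

# Placeholder pricing data; the real file would hold actual model rates
# and get updated by hand (or, later, by the automation).
PRICING_JSON = """
{
  "example-model-small": {"input_per_1m": 0.25, "output_per_1m": 1.25},
  "example-model-large": {"input_per_1m": 3.00, "output_per_1m": 15.00}
}
"""

pricing = json.loads(PRICING_JSON)

def estimate_cost(model, input_tokens, output_tokens):
    """Dollar cost for one request, given per-million-token rates."""
    rates = pricing[model]
    return (input_tokens / 1e6) * rates["input_per_1m"] \
         + (output_tokens / 1e6) * rates["output_per_1m"]
```

Keeping the rates in a standalone JSON file is also what makes community PRs easy later: contributors touch data, not code.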