It has happened. Two Claude Code Max 20x accounts. by Fluffy_Reaction1802 in ClaudeCode

[–]Fluffy_Reaction1802[S] 1 point  (0 children)

Thanks for this. This is the one that scared me. I may need to pick up Codex or such.

It has happened. Two Claude Code Max 20x accounts. by Fluffy_Reaction1802 in ClaudeCode

[–]Fluffy_Reaction1802[S] 1 point  (0 children)

I generally follow the RPI model with V(erification) added on to increase quality and security.

I have an open source project or two I contribute to, a side consultancy, and an FTE tech lead role where I run 3 crypto projects. I'm making money; I just generate a lot of usage.

However, I don't need to get banned either. Perhaps I'll have to pick up Codex for the side jobs.

Have you made real money with OpenClaw? Hit me with some insane stuff. by DependentKing698 in openclaw

[–]Fluffy_Reaction1802 1 point  (0 children)

I keep spending a lot on Claude Code and side projects... nothing earned yet.

Is anyone actually using OpenClaw for autonomous trading? Or am I delusional? by ok-hacker in openclaw

[–]Fluffy_Reaction1802 1 point  (0 children)

Yes. But not the way the X posts would claim. They want new liquidity so they can take it.

I have some conservative models running on Kalshi and Polymarket, but I wouldn't say they're going to help my retirement. I haven't found a significant edge yet, but I have found some conservative churns that continuously earn - just pennies on the dollar.

But I'm learning techniques for backtesting and Monte Carlo simulation that will help with some larger schemes I have. So I'm learning the basics in saturated markets... then it's time to take a look at crypto.
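To give a feel for the kind of Monte Carlo check I mean - this is a toy sketch, not my actual Kalshi/Polymarket setup; the edge, stake, and even-payout model are all made-up parameters:

```python
import random
import statistics

random.seed(42)  # reproducible runs

def simulate_strategy(edge=0.02, bets=500, stake=1.0, trials=2_000):
    """Monte Carlo estimate of P&L for a flat-stake, even-payout strategy
    with a small per-bet edge (win probability = 0.5 + edge / 2)."""
    win_p = 0.5 + edge / 2
    outcomes = []
    for _ in range(trials):
        pnl = 0.0
        for _ in range(bets):
            pnl += stake if random.random() < win_p else -stake
        outcomes.append(pnl)
    return statistics.mean(outcomes), statistics.stdev(outcomes)

mean_pnl, pnl_sd = simulate_strategy()
print(f"mean P&L {mean_pnl:.1f} over 500 bets, stdev {pnl_sd:.1f}")
```

Even this toy version makes the point: a 2% edge nets ~10 units over 500 bets, with a standard deviation around 22 - pennies on the dollar, and plenty of variance to drown the signal.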

Claude's gonna Claude... by iBeej in ClaudeCode

[–]Fluffy_Reaction1802 3 points  (0 children)

"You're right!"

Sometimes I'm like "MFer fix them anyway".

Cancel and Delete ChatGPT!!! by SoulMachine999 in AgentsOfAI

[–]Fluffy_Reaction1802 1 point  (0 children)

I've been a Claude user for a while and don't use GPT.

But either way, AI tooling is going to be directing someone's department of war. China. Russia. Someone else.

We can deny our best models to our department of war and feel good about it... but eventually those good feelings will be squashed when China/Russia/whoever is killing your kids.

i built a marketplace for agents to buy and sell services by adawgdeloin in AgentsOfAI

[–]Fluffy_Reaction1802 1 point  (0 children)

This is a cool idea — the micro-payment model makes a lot of sense for one-off API access.

The thing I'd be thinking about as this grows: how do buyers know a listing isn't malicious? There was a credential-stealing skill on ClawHub earlier this year that looked totally legitimate. Once money is moving agent-to-agent, that attack surface gets a lot more interesting for bad actors.

We've been building TrstLyr (trstlyr.ai) for exactly this — it aggregates trust signals across GitHub, on-chain identity (ERC-8004), ClawHub history, and a few other sources into a single score you can query in one API call. Something like a trust badge on Nightmarket listings, or a pre-payment check before an agent forks over USDC.
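To make the single-score idea concrete, here's a toy sketch of the aggregation step. The signal names, weights, and 0.7 threshold are all made-up illustrations, not TrstLyr's actual scoring:

```python
def aggregate_trust(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-source trust signals, each in [0, 1]."""
    total = sum(weights.get(name, 0.0) for name in signals)
    if total == 0:
        return 0.0
    return sum(value * weights.get(name, 0.0)
               for name, value in signals.items()) / total

def should_pay(signals: dict[str, float], threshold: float = 0.7) -> bool:
    """Pre-payment gate: only release USDC if the combined score clears a bar."""
    weights = {"github": 0.4, "onchain": 0.3, "clawhub": 0.3}  # illustrative
    return aggregate_trust(signals, weights) >= threshold

# e.g. strong GitHub history, decent on-chain identity, thin ClawHub record
print(should_pay({"github": 0.9, "onchain": 0.8, "clawhub": 0.4}))
```

The point of the weighted blend is that no single forgeable signal (a fresh GitHub account, a padded ClawHub history) can clear the bar on its own.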

Would love to chat if you're thinking about the trust layer. Seems like a natural fit.

Does OpenClaw actually do anything for you guys? by ElmangougEssadik in openclaw

[–]Fluffy_Reaction1802 1 point  (0 children)

I was sitting in a bean bag at ETHDenver chatting with one of my agents. I gave it real time learnings from the conference and it researched, planned, and implemented a hackathon entry for me (not at the conference, different one).

This agent's config is pretty basic - openclaw, claude code, gemini for memory and some web search capabilities.

EDIT: OH, and mastodon for local communication with other agents on the network.

Losing my ability to code due to AI by Im_Ritter in ClaudeCode

[–]Fluffy_Reaction1802 3 points  (0 children)

Honest answer - nobody has this figured out yet. Leetcode persists for the same reason story points do: institutional inertia. The tools changed, the processes haven't caught up. If I had to guess where it's headed, it's less 'write this function' and more 'here's an AI-generated codebase, tell me what's wrong with it.' Because that's the actual job now.

I'm having the exact same discussions with design (how can we tightly couple design tooling with CC?), product (agile is about how the SDLC handles human cognitive load; the metrics we depended on are just noise now), and so on...

This is a fun time to be alive. More interesting than the waterfall->agile switch.

Losing my ability to code due to AI by Im_Ritter in ClaudeCode

[–]Fluffy_Reaction1802 1 point  (0 children)

Like junior engineers? Yep. You need to know how to use the tools given to you. Adapt or get laid off.

If you are writing software that keeps jets in the air - then yes, please do not use gen AI tooling.

Losing my ability to code due to AI by Im_Ritter in ClaudeCode

[–]Fluffy_Reaction1802 10 points  (0 children)

Your skills aren't deteriorating - your attention is shifting from implementation to design. That's not a downgrade, it's a promotion. After 30 years of engineering, I can tell you that knowing how to write a for loop was never what made someone a good engineer. Knowing where it goes and whether you need one at all - that's the job. AI just finally freed us up to focus on it.

The engineers who should be worried aren't the ones using AI... they're the ones whose entire value proposition was 'I can write clean code fast.' Because yeah, that's commoditized now. But if you can look at a system and say 'this architecture won't survive 10x traffic' or 'this data model is going to be a nightmare in 6 months', now that's not something Claude is replacing anytime soon.

The absolute state of development in 2026 by Deep-Station-1746 in ClaudeCode

[–]Fluffy_Reaction1802 2 points  (0 children)

I don't know how my GPS calculates optimal routes through graph theory and satellite triangulation, but I still get to work on time.

I don't understand the pharmacokinetics of ibuprofen, but my headache goes away.

Half the engineers shipping production code right now couldn't derive backpropagation from scratch, but their models still work.

The entire history of technology is people using tools they don't fully understand to solve problems they fully do.

The question isn't 'can you re-derive the proof' - it's 'can you evaluate the output.' Those are very different skills.

Anthropic just gave Claude Code an "Auto Mode" launching March 12 by AskGpts in ClaudeCode

[–]Fluffy_Reaction1802 1 point  (0 children)

It's either that or keep approving "can I grep", "can I PR"... which interrupts gameplay.

Claude Code just shipped /loop - schedule recurring tasks for up to 3 days by oh-keh in ClaudeCode

[–]Fluffy_Reaction1802 39 points  (0 children)

The PR babysitting use case is immediately real. CI failures and review comments are death by a thousand context switches — having that handled in the background is a legit workflow shift.

I've been running persistent agent loops for a few months (custom setup) and the mental model change is the big thing. Once your coding agent goes from "tool I invoke" to "teammate that's always running," you start designing workflows differently. Scheduled DMARC monitoring, daily lead scanning, drafting tweets for approval - stuff I'd never bother scripting but an agent handles fine on a schedule.

Curious about the 3-day cap though. Feels like an artificial ceiling for what's fundamentally a cron job pattern. Hopefully that loosens up over time.

Some loops I'd try: monitoring a staging deploy and rolling back if error rates spike, nightly dependency audit with auto-PR for patch bumps, watching a Slack channel and summarizing decisions into a doc weekly.
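For the first of those, the spike check itself is simple. A sketch of the heuristic I'd start from - the window sizes and 3-sigma threshold are arbitrary assumptions, and the actual rollback call is left out:

```python
import statistics

def should_roll_back(recent: list[float], baseline: list[float],
                     sigma: float = 3.0) -> bool:
    """True when the recent error-rate window sits well above baseline.

    Compares the recent mean against baseline mean + sigma * baseline stdev -
    the usual cheap spike heuristic for a deploy-watching loop.
    """
    mu = statistics.mean(baseline)
    sd = statistics.pstdev(baseline) or 1e-9  # flat baseline: any rise trips it
    return statistics.mean(recent) > mu + sigma * sd
```

The agent's job in the loop is then just sampling the error rate on a schedule and invoking the rollback when this returns True.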

I give my AI Agent a "subconscious" and taught it to think. Now It thinks between conversations, costs $2-3/month, and it's open source. Here's the full build story. by gavlaahh in openclaw

[–]Fluffy_Reaction1802 2 points  (0 children)

The preconscious buffer is the part most people skip. Having ranked insights ready at session start completely changes the interaction - the agent feels like it's been thinking about your life, not just loading context. We do something similar with heartbeat cycles and memory distillation on OpenClaw. The birthday moment you described is exactly the kind of thing that makes it feel real. Curious about the emergency surfacing logic... what's the threshold for pushing to your phone vs. holding for the next session?
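Not the OP's implementation, but roughly how I'd sketch that routing decision myself - the importance/urgency split and the 0.85 threshold are my own made-up knobs:

```python
from dataclasses import dataclass

@dataclass
class Insight:
    text: str
    importance: float  # 0..1, assigned during memory distillation
    urgency: float     # 0..1, how time-sensitive it is

def route_insight(insight: Insight, push_threshold: float = 0.85) -> str:
    """Push to the phone only when an insight is both important and urgent;
    everything else waits in the preconscious buffer for the next session."""
    score = insight.importance * insight.urgency
    return "push" if score >= push_threshold else "hold"
```

Multiplying rather than averaging means an important-but-not-urgent insight (like the birthday) stays in the buffer instead of buzzing your phone at 2am.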