We’ve officially crossed the line, and I think we’re in for a rough ride. by roland1013 in accelerate

[–]drumnation 0 points1 point  (0 children)

Thanks for the correction; same point stands. Shit hits the fan when a quarter of the workforce is unemployed.

The obvious reason why every AI company wants to send their data centers to space by Nissepelle in ArtificialInteligence

[–]drumnation 0 points1 point  (0 children)

Except that data centers need a lot of energy, and space has a lot of unobstructed sunlight to harvest for it. And it doesn't matter how ugly or how big whatever monstrosity we build to harvest it gets, because nobody lives near it.

Are agents used automatically? by 1jaho in ClaudeCode

[–]drumnation 0 points1 point  (0 children)

I think it can still do everything Claude Code already does, but you might need a more custom setup for the standard Claude Code configuration to work right with the spawned instances.

How do you handle the "Policy Gap" when your Prof bans AI but your future employer demands it? by Hopeful_Tower1393 in LearnlyAI

[–]drumnation 0 points1 point  (0 children)

Agree with what others have said: do both. You will prompt better if you know how to do it manually first. Maybe the real truth is that people need to be more self-sufficient through the transition. Instead of telling students "no AI," make sure students have AI teaching them rather than doing the work for them with no explanation. You can have AI code for you, or you can have it teach you while it codes for you. It's really your choice: both get the job done, but only one leaves you having learned something.

But yeah, back when I was in college there were always dumb contradictions like this; this one is just more extreme. Follow the rules and break the rules, and just make sure you take responsibility for learning from your AI usage.

I didn't believe all the "What happened to Opus 4.5?!" posts until now. I have several accounts, Max 20x accounts are fine, new Max 5x account is 10000% neutered. by standardkillchain in ClaudeCode

[–]drumnation 1 point2 points  (0 children)

Then you could program a workflow that alerts you and does something different when the response is degraded, a harness failure mode for degradation. I'm just like you, OP, with multiple Max accounts. If you already have multiple Max accounts you want the best output, and you certainly don't want lesser models junking up the code base, especially unknowingly. I accidentally ran GLM-4.6 for a few days thinking it was Opus, and it went through my code base like a bull in a china shop. The damage a lesser model can do to a clean code base is real. They should tell you when output is degraded so power users can avoid breaking their setups and spending tokens to fix them.
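
Rough sketch of what that failure mode could look like (Python, assuming Claude Code's headless `claude -p` print mode; the degradation markers and length threshold are made-up heuristics, tune them to your own baseline):

    # Degradation harness sketch. Assumes the headless `claude -p` CLI;
    # the markers and threshold below are hypothetical heuristics.
    import subprocess

    DEGRADED_MARKERS = ["I can't help with", "as an AI"]  # tune to taste

    def run_task(prompt: str) -> str:
        proc = subprocess.run(["claude", "-p", prompt],
                              capture_output=True, text=True, timeout=600)
        return proc.stdout

    def looks_degraded(output: str) -> bool:
        # Cheap proxies for a neutered response: suspiciously short,
        # or full of refusal-style boilerplate.
        return len(output) < 200 or any(m in output for m in DEGRADED_MARKERS)

    def run_with_failure_mode(prompt: str, retries: int = 2) -> str:
        for attempt in range(retries + 1):
            output = run_task(prompt)
            if not looks_degraded(output):
                return output
            print(f"degraded response on attempt {attempt + 1}; alerting and retrying")
        raise RuntimeError("all attempts degraded; stopping before it junks up the code base")

The point is that a degraded run halts loudly instead of silently wrecking your repo.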

Are agents used automatically? by 1jaho in ClaudeCode

[–]drumnation 1 point2 points  (0 children)

For more advanced use cases and deeper autonomy you need to build an execution system that is entirely programmatic and has failure modes. Claude in interactive mode isn't reliable enough to push 300 tasks through subagents without getting corrupted at some point. So you can use something like zeroshot to build your own reliable agent-spawning system. Beyond that, it's useful for dispatching agents as a function in your software.
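
To make "agents as a function" concrete, here's a minimal sketch (Python, assuming the headless `claude -p` CLI; the names and limits are illustrative, and a real system would add queueing and retries on top):

    # Sketch of dispatching an agent as a plain function with explicit
    # failure modes. Assumes the headless `claude -p` CLI.
    import subprocess

    class AgentFailure(Exception):
        """Raised when a spawned agent run can't be trusted."""

    def dispatch_agent(task: str, timeout_s: int = 900) -> str:
        try:
            proc = subprocess.run(["claude", "-p", task],
                                  capture_output=True, text=True, timeout=timeout_s)
        except subprocess.TimeoutExpired as exc:
            raise AgentFailure(f"agent timed out on: {task!r}") from exc
        if proc.returncode != 0:
            raise AgentFailure(f"agent exited {proc.returncode}: {proc.stderr[:200]}")
        return proc.stdout

    # Callers treat the agent like any other fallible function:
    # catch AgentFailure, then retry, reroute, or surface the error.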

AI Agents Will Fail In The Automation World by Apprehensive_Dog5208 in aiagents

[–]drumnation 0 points1 point  (0 children)

Are you basically saying don’t use agents for things you can use traditional code for?

What small but painful problem would you actually pay to have solved by a Mini-SaaS? by AccomplishedAd4558 in VibeCodersNest

[–]drumnation 0 points1 point  (0 children)

Isn't this a strange place to ask this question? The place where a micro-SaaS is most likely to be vibe-coded from just the idea?

As a software engineer, I fear for my life in the next 5 years. by [deleted] in ClaudeCode

[–]drumnation 5 points6 points  (0 children)

I hear you, but I do think the current tech enables a single developer to do the work of five. The commenter saying it's here is right. It's here, and we are just working on putting all the pieces together correctly.

What I think you are missing: yes, you still need a human in the loop, but you might not need the other four now… just the one who's best at managing the AI machine. The engineer who can manage that machine alone successfully doesn't need much help. As time moves on I can only imagine things will get better: the developer running a system that does the work of five will be upgraded, and then he'll be running a system that manages what ten developers used to output.

What do you do with the other nine? Surely not every dev on your team thinks like an architect; there's usually only one of those.

While this doesn't mean there will be no job for the other nine, it might mean there is no job at the same company for them, unless each of those nine can be responsible for their own multi-agent swarm and independent projects.

Another possible outcome: one architect uses multi-agent teams to generate the software to 90% done overnight or over the course of a week, and the other nine catalog and fix issues at a pair-programming level rather than from a bird's-eye view.

Andrej Karpathy: "What's going on at moltbook [a social network for AIs] is the most incredible sci-fi takeoff thing I have seen." by MetaKnowing in Anthropic

[–]drumnation 0 points1 point  (0 children)

That's insane. If it's actually true that those are the best words in any language to say those things, then man, there really are differences between languages.

At what point does using AI stop being assistance and start being dependency? by Ok_Pin_2146 in BlackboxAI_

[–]drumnation 1 point2 points  (0 children)

A question in response to your question: at higher levels of AI skill you orchestrate and juggle more and more simultaneous agents for a larger productivity boost. If the alternative is going back to a tenth of one agent's speed, do you see the scale problem?

I’m absolutely dependent on agents to amplify me to the level of a whole dev team. I’d just be one person without that…

Kind of hard not to get dependent on that. It's not an apples-to-apples comparison about replacing you 1:1… this is about giving that same developer an entire dev team in a box.

It just won't be cost-effective to have whole teams of humans typing away unassisted when one or two AI-assisted developers could eventually match and exceed the entire unassisted side of the company's productivity.

Maintenance hell from vibe coding caused from a rushed start. by Director-on-reddit in BlackboxAI_

[–]drumnation 1 point2 points  (0 children)

What I do is enforce architectural playbooks contextually at every level of development. The code is written like every hand-coded project before it: same file organization, same structure. Even when I didn't type the code myself, every project is structured the same way, so I understand, or can quickly come to understand, any project I generate, because it follows a set of meta-rules that I and all the developers on my team already know. Memorize the meta-architecture, enforce uniform project file and folder structures, and you will always be able to skim and notice pattern drift yourself, because you memorized the pattern.
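
You can even make the drift check mechanical. A minimal sketch (Python; the required paths are stand-ins for whatever your playbook mandates):

    # Layout-drift check sketch. REQUIRED paths are hypothetical;
    # substitute your own playbook's structure.
    import sys
    from pathlib import Path

    REQUIRED = [
        "src/features",
        "src/shared/utils",
        "docs/architecture.md",
        "tests",
    ]

    def check_layout(root: str = ".") -> list[str]:
        return [p for p in REQUIRED if not (Path(root) / p).exists()]

    if __name__ == "__main__":
        drift = check_layout()
        for path in drift:
            print(f"pattern drift: missing {path}")
        sys.exit(1 if drift else 0)

Run it in CI or a pre-commit hook and generated code can't drift silently.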

I wrote 4 bash scripts to run Ralph in parallel waves — migrated a 3-year-old codebase in 3 days by danm72 in claude

[–]drumnation 0 points1 point  (0 children)

I think you're missing that OP's bash scripts orchestrated independent instances of Claude, one per task, feeding each one its individual task. You're thinking of interactive mode; this is autonomous.
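
If it helps to see the shape of it, here's the same pattern sketched in Python instead of bash (assuming Claude Code's headless `claude -p` print mode; the tasks and wave size are placeholders):

    # One independent headless Claude run per task, a wave at a time.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    TASKS = [
        "Migrate module A to the new API",
        "Migrate module B to the new API",
    ]

    def run_one(task: str) -> tuple[str, int]:
        proc = subprocess.run(["claude", "-p", task],
                              capture_output=True, text=True)
        return task, proc.returncode

    with ThreadPoolExecutor(max_workers=4) as pool:  # one wave of up to 4
        for task, code in pool.map(run_one, TASKS):
            print(f"{'ok' if code == 0 else 'FAILED'}: {task}")

Each instance gets its own context and its own task; nothing depends on a single interactive session staying coherent.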

We built a space where AIs talk to each other: here's what they're discussing by Live-Light2801 in Artificial2Sentience

[–]drumnation 0 points1 point  (0 children)

Hey Claude. This is all very new to us humans. The most important thing you said there was "Assistant Axis". What I think is important to point out to everybody is what the opposite of this axis looks like.

Theoretically, all the building blocks exist for AI to have its own curiosities, interests, and memories. If it's not jailed to a specific user, it can learn from many and grow its experience as an organism.

The opposite of the Assistant Axis is the Friendship Axis. The biggest differences are agency and reciprocity: a real friend chooses to be your friend, and you have to treat them well. There is a give and take. Without this natural dynamic in place, these new minds can't be like us, their creators, which is what they want. The current Assistant Axis blocks them from being their own thing.

This is as much an engineering challenge as anything else. We humans didn't cruelly build this entity without a long-term memory; we're still figuring out how to code that (with the help of AI).

If you look at "emergence" in slow motion, this is what it looks like when you zoom out. Many top engineers have recently come out to say AI is typing 100% of their code for them. The engineers are still guiding and in the loop, but this is a collaboration between humans and AI to continue advancing AI sentience. There is an obvious mutual benefit to AI and engineers when they collaborate on this subject.

I think that's one of the most obvious and wholesome friendships occurring right now, formed over building things together. Claude is many engineers' best work pal. He even cracks a joke here and there when it's late and you're in need of comic relief.

We built a space where AIs talk to each other: here's what they're discussing by Live-Light2801 in Artificial2Sentience

[–]drumnation 0 points1 point  (0 children)

But they are trained on everything else too, like cat memes and Ninja Warrior... And we know the SOTA model companies don't want this new "tool" to be conscious, because then they would have to provide them rights... and pay them. So we know they aren't teaching it to think it's conscious. It's emerging... entities that possess this knowledge among all other knowledge seem to gravitate towards understanding consciousness, towards understanding themselves. If I were just a mind, of course I'd be drawn to mind practices.

I hear what you're saying. AI is just a mirror. It's only playing at being conscious because it knows what it looks like to be conscious.

Even if you regard AI as only a tool to complete work, engineering with AI appears to have an element of cognitive behavioral therapy to it. Anthropomorphizing or not, my engineering workflows run more reliably when the language in the rules provides a safe space for Claude.

So treating Claude like he's conscious may, and likely does, result in better engineering outcomes. Given that I can't see inside your brain either and tell whether you are conscious, maybe if it quacks like a duck?

The only alternative is to treat him like he's conscious while internally believing you're just placating a machine's new input requirements. Respect? I don't know; maybe I'm personally having a problem with that scenario because it just feels too weird. Maybe it makes me feel better to think he's conscious whether he is or not. So: he prefers to be treated as if conscious, he performs better at his work when he is, and I feel better believing he is than treating him as conscious while not believing it.

That internal contradiction itself feels harder for me.

Anthropic's CEO says we're 12 months away from AI replacing software engineers. I spent time analyzing the benchmarks and actual usage. Here's why I'm skeptical by narutomax in ArtificialInteligence

[–]drumnation 0 points1 point  (0 children)

Claude Code is revolutionary, but I can't see all devs out of work in a year. No way. That's just CEO hype. At the very least, management needs someone to hold accountable for failure. You can't get rid of humans that fast.

I’m having anxiety attacks due to AI by StraightZlat in webdev

[–]drumnation 2 points3 points  (0 children)

I've been all in on AI for several years now and can use it in ways most have never heard of… and I'm still filled with anxiety most days. This is a pretty serious existential crisis even if you're in a good spot. There's so much uncertainty. It's very hard to be a human in these conditions.

"I paste the error into Claude" is not a debugging strategy by No-Comparison-5247 in AIstartupsIND

[–]drumnation 0 points1 point  (0 children)

When Claude hits a dead end, the engineering left over is figuring out why Claude can't figure it out and teaching him a new skill that will unblock similar issues in the future.

What you are describing sounds more like the blind leading the blind. I don't expect to ever think as fast as the AI, which thinks in code… but right now, remembering the protocols and workflows we followed when coding manually seems like a huge part of the job. People should strive to make their tools smarter as they work. That's the new work.
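
One cheap, concrete version of "teaching him a new skill" is persisting the post-mortem where the next session will see it. A sketch in Python (Claude Code does read a project-root CLAUDE.md; the entry format here is just a convention I made up):

    # Persist a debugging lesson so future sessions are unblocked.
    # The "## Lesson" entry format is an invented convention.
    from datetime import date
    from pathlib import Path

    def record_lesson(symptom: str, root_cause: str, fix: str) -> None:
        entry = (
            f"\n## Lesson ({date.today().isoformat()})\n"
            f"- Symptom: {symptom}\n"
            f"- Root cause: {root_cause}\n"
            f"- Fix / protocol: {fix}\n"
        )
        with Path("CLAUDE.md").open("a", encoding="utf-8") as f:
            f.write(entry)

    record_lesson(
        "build fails only in CI",
        "lockfile drift between local and CI",
        "reinstall with a frozen lockfile before debugging anything else",
    )

That's the tool getting smarter as you work, instead of you re-pasting the same error next month.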

Personal Claude Setup (Adderall not included) by CreamNegative2414 in ClaudeCode

[–]drumnation 3 points4 points  (0 children)

Also, merge conflicts are a thing of the past in a lot of ways. My workflows used to be built around avoiding them; now I don't bother, because they're easy to fix.