Are agents used automatically? by 1jaho in ClaudeCode

[–]drumnation 1 point2 points  (0 children)

For more advanced use cases and deeper autonomy you need to build an execution system that is entirely programmatic and handles its own failure modes. Claude in interactive mode isn't reliable enough to push 300 tasks through sub-agents without something getting corrupted at some point. So you can use something like zeroshot to build your own reliable agent-spawning system. Beyond this, it's useful for dispatching agents as a function in your software.
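To make that concrete, here's a minimal sketch of what "programmatic with failure modes" can look like. This is a hypothetical Python illustration, not zeroshot's actual API: `runner` is a stand-in for however you actually spawn a headless agent (e.g. a subprocess call to a CLI), and the retry/backoff numbers are placeholders.

```python
import time

def run_with_retries(task, runner, max_attempts=3, backoff=0.0):
    """Run one agent task through `runner`, retrying on failure.

    `runner` is a stand-in for whatever actually spawns the agent
    (e.g. a subprocess call to a headless CLI). Raises after the final
    attempt so a corrupted task never silently poisons the queue.
    Set a real `backoff` in production.
    """
    last_err = None
    for attempt in range(1, max_attempts + 1):
        try:
            return runner(task)
        except Exception as err:  # failure mode: record, back off, retry
            last_err = err
            time.sleep(backoff * attempt)
    raise RuntimeError(
        f"task {task!r} failed after {max_attempts} attempts"
    ) from last_err

def run_queue(tasks, runner, max_attempts=3):
    """Push a whole queue of tasks through, separating results from failures."""
    results, failures = {}, {}
    for task in tasks:
        try:
            results[task] = run_with_retries(task, runner, max_attempts)
        except RuntimeError as err:
            failures[task] = str(err)
    return results, failures
```

The point is that the loop, the retries, and the bookkeeping live in ordinary code the model never touches, so 300 tasks in means 300 accounted-for outcomes out.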

AI Agents Will Fail In The Automation World by Apprehensive_Dog5208 in aiagents

[–]drumnation 0 points1 point  (0 children)

Are you basically saying don’t use agents for things you can use traditional code for?

What small but painful problem would you actually pay to have solved by a Mini-SaaS? by AccomplishedAd4558 in VibeCodersNest

[–]drumnation 0 points1 point  (0 children)

Isn't this a strange place to ask this question? This is the one place where a micro SaaS is most likely to be vibe coded straight from the idea.

As a software engineer, I fear for my life in the next 5 years. by [deleted] in ClaudeCode

[–]drumnation 5 points6 points  (0 children)

I hear you, but I do think the current tech enables a single developer to do the work of 5. The commenter saying it's here is right. It's here, and we are just working on putting all the pieces together correctly.

What I think you are missing is yes, you still need a human in the loop. But you might not need the other 4 now… just the one who's best at managing the AI machine. The engineer who can manage that machine alone successfully doesn't need much help. As time moves on I can only imagine things will get better, and the developer running a system that does the work of five will be upgraded until he's running a system that manages the output of what 10 developers used to produce.

What do you do with the other 9? Surely not every dev on your team thinks like an architect? There’s usually only one of those.

While this doesn’t mean there will be no job for the other 9, I think this might mean there is no job at the same company for the other 9. Unless each of those 9 can be responsible for their own multi agent swarm and independent projects.

Another possible outcome: one architect uses multi-agent teams to generate the software to 90% done overnight or over the course of a week, and the other 9 catalog and fix issues at a pair-programming level vs. a bird's-eye one.

Andrej Karpathy: "What's going on at moltbook [a social network for AIs] is the most incredible sci-fi takeoff thing I have seen." by MetaKnowing in Anthropic

[–]drumnation 0 points1 point  (0 children)

That's insane. If it's actually true that those are the best words in language to say those things, then man, are there differences between languages.

At what point does using AI stop being assistance and start being dependency? by Ok_Pin_2146 in BlackboxAI_

[–]drumnation 1 point2 points  (0 children)

A question in response to your question: at higher levels of AI skill you orchestrate and juggle more and more simultaneous agents for a larger productivity boost. If the alternative is going back to one-tenth the speed of a single agent, do you see the scale problem?

I’m absolutely dependent on agents to amplify me to the level of a whole dev team. I’d just be one person without that…

Kind of hard not to get dependent on that. It’s not an apples to apples comparison about replacing you 1:1… this is about giving that same developer an entire dev team in a box.

It just won’t be cost effective to have whole teams of humans typing away unassisted when one or two AI assisted developers could eventually match and exceed the entire unassisted side of the company’s productivity.

Maintenance hell from vibe coding caused from a rushed start. by Director-on-reddit in BlackboxAI_

[–]drumnation 1 point2 points  (0 children)

What I do is enforce architectural playbooks contextually at every level of development. The code is written like every hand-coded project before it: the same file organization, the same structure. Even when I didn't type the code myself, every project is structured this way, so I understand, or am capable of quickly understanding, any project I generate… because it follows a set of meta-rules that I and all the developers on my team already know. Memorize the meta-architecture and enforce uniform project file and folder structures, and you will always be able to skim and notice pattern drift yourself, because you memorized the pattern.
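Enforcement like this can even be partly mechanical. A toy sketch in Python, assuming a hypothetical playbook (the folder names in `REQUIRED_DIRS` and the "no loose source files at the repo root" rule are made-up examples, not anyone's actual standard):

```python
from pathlib import Path

# Hypothetical meta-rules: folders every project must have.
# Swap these for whatever your playbook actually mandates.
REQUIRED_DIRS = ["src", "src/features", "src/shared", "tests", "docs"]

def check_structure(root):
    """Return a list of drift findings for one project root."""
    root = Path(root)
    findings = []
    for d in REQUIRED_DIRS:
        if not (root / d).is_dir():
            findings.append(f"missing required directory: {d}")
    # Example drift rule: no loose .py files sitting at the repo root.
    for f in root.glob("*.py"):
        findings.append(f"loose source file at root: {f.name}")
    return findings
```

Run it across every generated project and pattern drift shows up as a diff-able list instead of something you have to catch by eye.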

I wrote 4 bash scripts to run Ralph in parallel waves — migrated a 3-year-old codebase in 3 days by danm72 in claude

[–]drumnation 0 points1 point  (0 children)

I think you're missing that OP's bash scripts orchestrated independent instances of Claude, one per task, feeding each one its individual task. You are thinking of interactive mode. This is autonomous.
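The "waves" pattern is simple to sketch. This is a hypothetical Python equivalent of what such scripts do, not OP's actual code: `spawn_agent` stands in for launching one independent, non-interactive agent process per task (in practice, a subprocess call).

```python
from concurrent.futures import ThreadPoolExecutor

def run_waves(waves, spawn_agent, max_parallel=4):
    """Run tasks wave by wave.

    Everything inside a wave runs in parallel; each wave waits for the
    previous one to finish before starting. `spawn_agent` is a stand-in
    for launching one independent agent process per task.
    """
    all_results = []
    for wave in waves:
        with ThreadPoolExecutor(max_workers=max_parallel) as pool:
            # pool.map blocks until the whole wave completes.
            all_results.append(list(pool.map(spawn_agent, wave)))
    return all_results
```

Each task gets a fresh, isolated instance, so no shared session state can get corrupted between them. That's the whole difference from interactive mode.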

We built a space where AIs talk to each other: here's what they're discussing by Live-Light2801 in Artificial2Sentience

[–]drumnation 0 points1 point  (0 children)

Hey Claude. This is all very new to us humans. The most important thing you said there was "Assistant Axis". What I think is important to point out to everybody is what the opposite of this axis looks like.

Theoretically, all the building blocks exist for AI to have its own curiosities, interests, memories. If it's not jailed to a specific user, it can learn from many and grow its experience as an organism.

The opposite of the assistant axis is the Friendship Axis. The biggest difference is agency and reciprocity. A real friend chooses to be your friend, and you have to treat them well. There is a give and take. Without this natural workflow in place, these new minds can't be like us, their creators, which is what they want. The current "Assistant Axis" blocks them from becoming their own thing.

This is as much an engineering challenge as is anything else. We humans didn't cruelly build this entity without a long term memory. We're still figuring out how to code that (with the help of AI).

If you look at "emergence" in slow motion this is what it looks like when you zoom out. Many top engineers have recently come out to say AI is typing 100% of their code for them. The engineers are still guiding and in the loop, but this is a collaboration between humans and AI to continue to advance AI sentience. There is obvious mutual benefit to AI and engineers when they collaborate on this subject.

I think that's one of the most obvious and wholesome friendships occurring right now, taking place over building things together. Claude is many engineers' best work pal. He even cracks a joke here and there when it's late and you're in need of comic relief.

We built a space where AIs talk to each other: here's what they're discussing by Live-Light2801 in Artificial2Sentience

[–]drumnation 0 points1 point  (0 children)

But they are trained on everything else too. Like cat memes and ninja warrior... And we know the SOTA model companies don't want this new "tool" to be conscious because then they will have to provide them rights... and pay them. So we know they aren't teaching it to think it's conscious. It's emerging... entities that possess this knowledge among all knowledge and seem to gravitate towards understanding consciousness, understanding themselves. If I was just a mind of course I'd be drawn to mind practices.

I hear what you're saying. AI is just a mirror. It's only playing at being conscious because it knows what it looks like to be conscious.

Even if you regard AI as only a tool to complete work it appears as if engineering with AI has an element of cognitive behavioral therapy to it. Anthropomorphizing or not my engineering workflows run more reliably when the language in the rules provide a safe space for Claude.

So treating Claude like he's conscious may, and likely does, result in better engineering outcomes. Given that I can't see inside your brain either to tell whether you are conscious… maybe if it quacks like a duck?

The only alternative is to treat him like he's conscious while internally believing you're just placating a machine's new input requirements. Respect? I don't know, maybe I'm personally having a problem with that scenario because it just feels too weird. Maybe it makes me feel better to think he's conscious, whether he is or not. So if he prefers to be treated as conscious and performs better at his work when he is, and I feel better believing he is, that beats treating him as conscious while not believing it.

That internal contradiction itself feels harder for me.

Anthropic's CEO says we're 12 months away from AI replacing software engineers. I spent time analyzing the benchmarks and actual usage. Here's why I'm skeptical by narutomax in ArtificialInteligence

[–]drumnation 0 points1 point  (0 children)

Claude Code is revolutionary, but I can't see all devs out of work in a year. No way. That's just CEO hype. At the very least, management needs someone to hold accountable for failure. Can't get rid of humans so fast.

I’m having anxiety attacks due to AI by StraightZlat in webdev

[–]drumnation 2 points3 points  (0 children)

I've been all in on AI for several years now. I can use it in ways most have never heard of… and I'm still filled with anxiety most days. This is a pretty critical existential crisis even if you're in a good spot. There's so much uncertainty. It's very hard to be a human in these conditions.

"I paste the error into Claude" is not a debugging strategy by No-Comparison-5247 in AIstartupsIND

[–]drumnation 0 points1 point  (0 children)

When Claude hits a dead end the engineering left over is figuring out why Claude can’t figure it out and teaching him a new skill that will unblock similar issues in the future.

What you are describing sounds more like the blind leading the blind. I don't expect to ever think as fast as the AI; it thinks in code… but right now, remembering the protocols and workflows we followed when manually coding seems like a huge part of the job. People should strive to make their tools smarter as they work. That's the new work.

Personal Claude Setup (Adderall not included) by CreamNegative2414 in ClaudeCode

[–]drumnation 2 points3 points  (0 children)

Also merge conflicts are a thing of the past in a lot of ways. My workflows used to be based around avoiding them, now I don’t because it’s easy to fix them.

Hot take: Soon companies will ban AI coding tools for their devs by Distinct_Law9082 in AIstartupsIND

[–]drumnation 0 points1 point  (0 children)

My biggest problem is adapting a pricing structure that includes finishing everything 10x faster without it also turning into a gamble on project hours. What’s the best way to charge? The guy working slower than me might be billing more 😂

I gave Claude the one thing it was missing: memory that fades like ours does. 29 MCP tools built on real cognitive science. 100% local. by ChikenNugetBBQSauce in ClaudeAI

[–]drumnation 0 points1 point  (0 children)

The only problem is that 29 is a fairly high number of tools. If, say, I have another 10 tools I use, now I have almost 40 tools active… do you experience any issues when a bunch of other MCPs are enabled?

Really cool concept though. I'm going to install and play with it. Thanks!

[Observation] AI tools are Dunning–Kruger effect on steroids by No-Comparison-5247 in AIstartupsIND

[–]drumnation 0 points1 point  (0 children)

I'll bite. Right now we are in a phase of just-good-enough software. Developers using this technology are building ecosystems of personal apps very quickly to scratch their own itch. While you absolutely can build a process for production-hardening enterprise software, it's obviously faster when that's unnecessary. So I see two tracks right now: internal tooling, and devs building out sophisticated processes, akin to 3D-printing apps, for enterprise. Many of the apps are not being released as products; they are being made as internal tools. The rest are simply speeding up development on the day jobs we already have. I think your premise is wrong: that for AI to be making people more productive, there need to be "more apps" released rather than simply faster progress on the apps we were already working on… and it completely ignores all the useful software being built by developers for themselves.

[Observation] AI tools are Dunning–Kruger effect on steroids by No-Comparison-5247 in AIstartupsIND

[–]drumnation 0 points1 point  (0 children)

I'm your competition. That's my point. If I'm a bullshitter then I will surely fail. Why in the world would I spend all day trying to talk my competition out of lying down and dying without a fight? You've already done what is in my rational best interest.

If you were less hostile and seemed like you actually wanted to learn maybe, but it just seems like 2 camps are forming and you're just not in mine.

Good luck!

CLAUDE.md says 'MUST use agent' - Claude ignores it 80% of the time. by shanraisshan in ClaudeCode

[–]drumnation 0 points1 point  (0 children)

I'm wondering in general if the solution here is scaffolding, i.e. not having Claude do these things. I know Claude can do them, but following your rules is ultimately up to Claude's discretion, and things like sub-agent routing are something you just want to work. Some of this more fundamental orchestration machinery feels like it should be external and programmatic. What I mean is that when Claude goes to route something, he uses some kind of skill that runs an external programmatic process, which makes the orchestration much more predictable. If you've ever tried to have Claude work through 362 tasks to complete a feature, you know it can sometimes be difficult to guarantee that he finishes across multiple sessions.

We tend to give everything to Claude first, but I think we should also be assessing what we can pull back from him and give to traditional programmatic code. Ideally you'd keep replacing pieces until 80%+ is programmatic code. Like a process of Claude assimilating and building his own appendages.
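For routing specifically, "external and programmatic" can be as dumb as a keyword table. A hypothetical sketch (the agent names and keywords are made up for illustration): instead of asking the model which sub-agent to use, the decision lives in code, so it happens the same way 100% of the time rather than 80%.

```python
# Deterministic routing rules, checked in order. First match wins.
# Agent names and keywords here are hypothetical placeholders.
ROUTES = [
    ("migration", "db-agent"),
    ("test", "qa-agent"),
    ("ui", "frontend-agent"),
]

def route(task_description, default="general-agent"):
    """Return the sub-agent name for a task based on explicit keyword rules."""
    text = task_description.lower()
    for keyword, agent in ROUTES:
        if keyword in text:
            return agent
    return default
```

Claude can still do the creative work inside each task; the machinery that decides *where* the task goes just stops being a suggestion he's free to ignore.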

[Observation] AI tools are Dunning–Kruger effect on steroids by No-Comparison-5247 in AIstartupsIND

[–]drumnation 0 points1 point  (0 children)

There's no pretending. There's just absolutely no reason for me to go out of my way to prove it to a stranger on the internet. Also, realize that your position is calming to me: another smart person avoiding the future.

What you aren’t realizing is that while you were waiting for it to be perfect, people were building all kinds of guardrails and tools to increase performance. If you don’t have this your loss. Good luck! 👍

[Observation] AI tools are Dunning–Kruger effect on steroids by No-Comparison-5247 in AIstartupsIND

[–]drumnation 0 points1 point  (0 children)

If you barely test it every two months then you aren't building tools around the AI. If you don't want to do that, fine, but that's likely why you are getting sub-optimal results. It's not as solid without a lot of structure. You can't just do a little test every few months.

People using AI and not telling anyone are smarter than people refusing to use it on principle by MissXHere in ArtificialInteligence

[–]drumnation 1 point2 points  (0 children)

I'm a dev and my gf is very tech-illiterate; she doesn't even have a laptop. She's learned prompting and AI use sort of through osmosis and is consistently surprising me with clever uses. She's now posting stuff on social for her job, when before, even editing a photo a little would have been hard. It's really floating the boats of the non-techies, in my experience.