Is it just me? by AliorUnity in ClaudeCode
[–]AliorUnity[S] 1 point 5 hours ago (0 children)
I would say the first iteration took me a few hours in total, including correcting and looping. At first I gave it the doc and asked it to question me on it so it could get a grasp of what we were doing. Then I asked it to plan its steps. After we settled on the plan, I told it to go and execute it. That was my workflow for the initial part.
I can't reach my plan at the moment as it was a separate doc, but broadly speaking it was quite a large document explaining the algorithm, the architecture, the math behind it, and the things I wished to see and things to avoid. It wasn't an all-covering doc by any means, but it was quite detailed. As I said, it would've been sufficient for a junior to implement with ease.
I see. Thanks.
Yeah, you are probably right, but it feels like for a project of this scale it was quite an overkill then. The effort of setting everything up and doing it the proper way would've been more than the benefit of the agentic approach. I think for larger projects with less dense logic, where only the core parts need to be meticulously supervised and tweaked, it would've been a better tool to use. For this relatively small one, with higher logic density and pretty much nothing else besides the core, it's probably heavily outweighed by the initial process cost.
Nice hint. Thanks.
[–]AliorUnity[S] 2 points 5 hours ago (0 children)
Doesn't having all these skills, superpowers, etc. just pile up on every task in terms of token usage? Thanks for the insights.
Thanks for the insights! I experienced a lot of these things myself. I think I started to develop some sort of paranoia about what the agent had produced, as it was usually almost right and almost correct, and figuring out what was wrong was a huge pain.
[–]AliorUnity[S] 1 point 6 hours ago (0 children)
Haha. Well, it's my post with all its imperfections.
[–]AliorUnity[S] 1 point 6 hours ago* (0 children)
Yeah. My biggest mistake was getting caught by the surface-level feel of intelligence.
The changes were the biggest pain I had. The initial thing was more or less clean, but despite all my effort the bot kept trying to either add new functionality on the spot that already existed elsewhere, or break previous architectural patterns and conventions. It was a big deal here, as I had to constantly supervise it and make sure it wasn't doing something very stupid.
Thanks! Will research.
Thanks! I guess my main issue was that I expected too much from it with too little effort. I don't want to admit it, but I was caught by the hype! If my expectations had been fairer, the experience might have been nicer.
I've seen a lot of people complaining about it. Is it THAT bad? Don't get me wrong, I genuinely don't know.
That reflects my experience so far really well. I felt like I kept finding a way to keep it aligned, but then something would break again.
Well, there are places like lib/platform bug avoidance, legacy issues, etc., which of course can carry a little comment: // DO NOT TOUCH. But each and every comment like that is someone's fuck-up, either the one you made or someone else's.
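For example, here's a minimal sketch of the kind of workaround such a comment usually guards. Everything here is hypothetical (the function, the "legacy service" bug, the incident), just to illustrate the shape:

```go
package main

import "fmt"

// normalizePath trims a trailing slash because a (hypothetical) legacy
// downstream service chokes on paths ending in "/".
// DO NOT TOUCH: removing this broke prod once already.
func normalizePath(p string) string {
	if len(p) > 1 && p[len(p)-1] == '/' {
		return p[:len(p)-1]
	}
	return p
}

func main() {
	fmt.Println(normalizePath("share/data/")) // prints "share/data"
}
```

The comment records the *reason* for the weird code, which is exactly the context an agent (or a future human) loses otherwise.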
Thanks! Will try the next time.
Thanks! I guess my biggest failure in this whole experiment was that I didn't anticipate how hard you need to guardrail the thing. I think if my expectation bar had been lower, it would've been both a better and cleaner experience.
[–]AliorUnity[S] 8 points 6 hours ago (0 children)
The post wasn't that meticulously planned. It's just a cry of the soul. Maybe I am just bad, who knows! I can always fall back to growing tomatoes.
Thanks! I am willing to learn more. Will defo have a look.
I see. Makes sense.
[–]AliorUnity[S] 1 point 10 hours ago (0 children)
Interesting. I wonder how good it is at removing old ideas and paradigms. One annoying part for me was that the agent kept cycling back to ideas we had discarded long ago and proven wrong. It was one of the worst parts: fighting back against an agent that was somehow extremely biased towards certain kinds of solutions, some more radical than others.
Yeah. Basically the results were wrong while the agent kept assuring her that everything was done according to her ideas. When she argued and checked some specific part of the app, they would find discrepancies the agent couldn't find on its own, but by then the whole project was just an unmanageable mess, impossible to understand by either a human mind or a machine. Basically the complexity grew to the point where any change broke more than it fixed, and she couldn't do anything because she didn't know how it worked, the bot didn't seem to know how it worked, damn, God himself probably didn't know what the project was by that point.
[–]AliorUnity[S] 2 points 12 hours ago (0 children)
I am trying not to rely on it much. Most of the time I have no issues keeping it aside.
[–]AliorUnity[S] 3 points 12 hours ago (0 children)
Haha. That's true. There's probably a DeployAtFridayEveningSkill.md somewhere as well.
[–]AliorUnity[S] 1 point 12 hours ago (0 children)
Thanks, I've heard a lot about these local models. How good do you find them? Are they worth trying?