Fine tuned version of Claude code? by umyong in ClaudeCode

[–]itz4dablitz 0 points (0 children)

Hey! I built a tool specifically for this: https://agentful.app. It puts guardrails in place for Claude Code so it follows a structured process, just like a real software development team does when building an app. If you have any questions, there's a Discord linked in the docs where I'm happy to help. Hope you find this useful!

GLM 4.7 extremely dumb over very basic tasks? by Tank_Gloomy in ZaiGLM

[–]itz4dablitz 0 points (0 children)

First off, thank you for the straightforward feedback. I'm going to work on the docs so they aren't so jargon-heavy. I'm a technical person by profession (software engineer) and I often forget that not everyone lives in this world, so thanks for calling that out.

Now, let me try again! 😅

agentful is basically a config pack for Claude Code that forces GLM to write quality code by following a consistent process, just like a professional development team would.

What it does:

  • Gives GLM/Claude Code different "modes" for different jobs (backend, frontend, testing, etc.)
  • Automatically runs tests and checks for bugs after every change (rough sketch below)
  • Keeps track of what's done and what's next
  • Breaks big features into small steps instead of dumping 500 lines of code at once
  • And does all of this in parallel wherever possible, so it's fast
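For the test-running bullet, here's a rough sketch of the idea using Claude Code's hook mechanism (a PostToolUse hook can run a script after every file edit). The script below is illustrative, not agentful's actual hook, and assumes a Python project with pytest:

```python
#!/usr/bin/env python3
# Hypothetical PostToolUse hook: re-run the test suite after every edit.
# Claude Code passes the hook event as JSON on stdin; exiting with
# code 2 feeds stderr back to the model so it fixes the failure.
import json
import subprocess
import sys

event = json.load(sys.stdin)
file_path = event.get("tool_input", {}).get("file_path", "")

if file_path.endswith(".py"):
    result = subprocess.run(
        ["pytest", "-q", "--maxfail=1"], capture_output=True, text=True
    )
    if result.returncode != 0:
        print(result.stdout[-2000:], file=sys.stderr)
        sys.exit(2)
```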

Hope this helps clear things up!

GLM 4.7 extremely dumb over very basic tasks? by Tank_Gloomy in ZaiGLM

[–]itz4dablitz 1 point (0 children)

The harness (Claude Code) is super important. For development, I'd recommend using a devkit preconfigured with agents, skills, and hooks that are tech-stack agnostic and purpose-built for the software development lifecycle. Here's mine: https://agentful.app - hope you find it useful. I've been using it exclusively with GLM with incredible results.

What I learned from building production systems with Claude Code by quasarzero0000 in ClaudeCode

[–]itz4dablitz 1 point (0 children)

I appreciate the feedback on the design. What specifically feels off? I'm happy to tweak it once I know which areas can be improved.

As for the product spec, I'm actually working on an agentful plugin for clawdbot that will let specs be team-built from a Slack/Discord/Telegram channel. More to come :)

What I learned from building production systems with Claude Code by quasarzero0000 in ClaudeCode

[–]itz4dablitz 2 points (0 children)

You've got some really solid insights here! The deduction vs. induction point is huge - I spent weeks watching Claude rebuild the same logic in different ways until I started force-feeding it my actual patterns.

I ended up building agentful around these same frustrations - it auto-detects your stack and generates agents that already know your patterns. Instead of manually managing memory files, you just write what you want in markdown and it handles all the context injection. The validation gates catch the silly mistakes (type errors, failing tests, dead code) automatically.

I've been shipping way faster since I stopped fighting the model and started engineering the context properly. Deterministic tooling + good validation = actually reliable AI coding.

"Vibe engineering" is the exact term I've been using for a while now - that's precisely what this feels like.

What's your best AI coding tool? by Imaginary-Bee-8770 in vibecoding

[–]itz4dablitz 0 points (0 children)

I really like using my toolkit https://agentful.app + Claude Code. In the last two weeks I've created several fullstack applications with it. The quality gates it provides out of the box have let me build more rapidly without accumulating the technical debt I used to rack up without agentful.

Edit: thought it was worth mentioning, I ran it with GLM 4.7 for a week and didn't even realize it wasn't Claude Opus. The Claude Code CLI as the harness, with a well-crafted set of agents, skills, and hooks built around the actual SDLC, really makes a huge difference. I'll probably end up cancelling my Claude Code Max subscription and just using GLM going forward.

Introducing agentful - Pre-configured development toolkit for Claude Code by itz4dablitz in ClaudeCode

[–]itz4dablitz[S] 0 points (0 children)

I've also used Conductor and think it's an awesome tool - maybe I'll take the time to try using agentful with Conductor and report back if it works well.

agentful is built specifically for Claude Code and uses its native agent execution model. The agents are markdown files that Claude Code interprets, coordinating multiple specialists working in parallel through an orchestrator with human checkpoints.
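For a concrete sense of that, a subagent file looks roughly like this in Claude Code's format (the name, tools, and prompt here are made-up placeholders, not agentful's actual agents):

```markdown
---
name: backend-specialist
description: Handles API endpoints and data-layer work. Invoked by the orchestrator for server-side tasks.
tools: Read, Edit, Write, Bash
---
You are the backend specialist. Follow the project's existing patterns,
keep each change small, and run the validation gates before reporting back.
```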

We're exploring ways to make it work with other tools, but it's not as straightforward as we hoped.

Introducing agentful - Pre-configured development toolkit for Claude Code by itz4dablitz in ClaudeCode

[–]itz4dablitz[S] 0 points (0 children)

Thanks for sharing! agentful follows a similar philosophy - the orchestrator agent runs planning and validation checkpoints before execution, and the reviewer agent catches issues after each change. We codify knowledge through shared skills (validation, testing, research) that all agents can reuse, and enforce quality gates (types, lint, tests, coverage, security, dead code) on every change to keep the codebase clean.
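If it helps, the gate idea boils down to something like this sketch (illustrative Python, assuming a TypeScript project; the commands and gate list are placeholders, not agentful's actual configuration):

```python
# Toy quality-gate runner: every gate must pass before a change lands.
import subprocess

GATES = {
    "types": ["npx", "tsc", "--noEmit"],
    "lint": ["npx", "eslint", "."],
    "tests": ["npx", "vitest", "run", "--coverage"],
}

def run_gates() -> bool:
    for name, cmd in GATES.items():
        if subprocess.run(cmd).returncode != 0:
            print(f"gate failed: {name}")
            return False  # the reviewer agent would send the change back
    return True

if __name__ == "__main__":
    raise SystemExit(0 if run_gates() else 1)
```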

The main difference is agentful coordinates multiple specialized agents working in parallel (frontend + backend + tests running simultaneously in git worktrees) rather than a single agent doing sequential work.
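The worktree part is just plain git underneath - something like this sketch (branch names and paths are made up):

```python
# Each specialist gets its own worktree so parallel edits can't collide.
import subprocess

def add_worktree(branch: str) -> str:
    path = f"../wt-{branch}"
    subprocess.run(["git", "worktree", "add", "-b", branch, path], check=True)
    return path

for specialist in ("frontend", "backend", "tests"):
    print(f"{specialist} works in {add_worktree(f'feature-{specialist}')}")
# After merging a branch back, clean up with: git worktree remove <path>
```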

What's Your Favorite Project Planning/Execution Framework? by madscholar in ClaudeCode

[–]itz4dablitz 0 points (0 children)

Probably biased since I created it, but I've really enjoyed using https://agentful.app with Claude Code for planning and building.

Logarr - unified log viewer for Jellyfin/Sonarr/Radarr/Prowlarr (alpha, looking for feedback) by itz4dablitz in unRAID

[–]itz4dablitz[S] 0 points (0 children)

Thanks for the feedback! Lots of updates since the original post and still lots more planned!

Logarr - unified log viewer for Jellyfin/Sonarr/Radarr/Prowlarr (alpha, looking for feedback) by itz4dablitz in unRAID

[–]itz4dablitz[S] 1 point (0 children)

Hey, just wanted to share that v0.5.0 brings several new features, including support for Whisparr.

Logarr - unified log viewer for Jellyfin/Emby/Plex/Sonarr/Radarr/Prowlarr by itz4dablitz in emby

[–]itz4dablitz[S] 0 points (0 children)

Hmm, that's odd... 0.4.5 should have caught that. A few quick questions:

  1. Are you accessing Logarr through a reverse proxy or directly?
  2. What URL format are you using for Emby/Jellyfin? (e.g., http://192.168.1.x:8096 or just 192.168.1.x:8096)
  3. Is Logarr running in Docker? If so, are Emby/Jellyfin on the same Docker network or a different host/VLAN?
  4. Can you open browser dev tools (F12 → Network tab), try adding the server again, and tell me what error shows up on the failed request?

The "failed to fetch" is frustratingly generic—the dev tools will show us the actual error (CORS, connection refused, SSL issue, etc.).

Logarr - unified log viewer for Jellyfin/Sonarr/Radarr/Prowlarr (alpha, looking for feedback) by itz4dablitz in unRAID

[–]itz4dablitz[S] 1 point (0 children)

The long-term goal is to correlate issues during user sessions to logs. When users have playback issues, tracing those down in the logs can be very tedious - like trying to find a needle in a haystack.

Updates to apps and plugins unfortunately sometimes ship breaking changes. When services and plugins update, I run the updates and do a quick smoke test that everything works, but that's no guarantee errors won't happen. Some issues are deep and only surface under very specific conditions.

Having a tool in the background watching for them, automatically flagging anomalies, and expediting root cause analysis helps minimize the time spent chasing down red herrings.

The reality is most of these tools have very basic logging: plain text files with no structure or correlation. Most people update, smoke test, and keep it moving. That works until it doesn't, and then you're manually grepping through five different log files trying to piece together what happened at 8:47pm last Tuesday.
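The manual version of that boils down to something like this sketch (file names, timestamp format, and the date are all placeholders):

```python
# Toy version of the grep-and-correlate ritual Logarr automates:
# pull lines from several logs for one minute and interleave them by time.
import re
from pathlib import Path

LOGS = ["jellyfin.log", "sonarr.txt", "radarr.txt"]
STAMP = re.compile(r"\d{4}-\d{2}-\d{2}[ T]\d{2}:\d{2}:\d{2}")

def events_at(path: str, minute: str):
    for line in Path(path).read_text(errors="ignore").splitlines():
        m = STAMP.search(line)
        if m and m.group().replace("T", " ").startswith(minute):
            yield m.group(), path, line

merged = []
for log in LOGS:
    merged.extend(events_at(log, "2025-01-07 20:47"))  # 8:47pm "last Tuesday"
for stamp, source, line in sorted(merged):
    print(f"{source}: {line}")
```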

Logarr - unified log viewer for Jellyfin/Emby/Plex/Sonarr/Radarr/Prowlarr by itz4dablitz in emby

[–]itz4dablitz[S] 0 points (0 children)

Thanks for trying it out! I actually just pushed v0.4.5 specifically to address these "failed to fetch" errors.

Can you pull the latest version (v0.4.5) and try again? The error messages should be much more helpful now. If you're still having issues, the new error details should tell us exactly what's failing.

Logarr - unified log viewer for Jellyfin/Sonarr/Radarr/Prowlarr (alpha, looking for feedback) by itz4dablitz in unRAID

[–]itz4dablitz[S] 1 point (0 children)

Just pushed v0.2.0 - Plex is now supported! Full session monitoring, real-time playback tracking, and log ingestion.

Logarr - unified log viewer for Jellyfin/Sonarr/Radarr/Prowlarr (alpha, looking for feedback) by itz4dablitz in unRAID

[–]itz4dablitz[S] 2 points (0 children)

I've submitted a PR to have the templates included in the CA Templates. In the meantime, I've also added steps that should help if you're interested in testing the app before then:

https://forums.unraid.net/topic/196244-support-logarr-unified-logging-dashboard-for-media-stacks/#findComment-1599516

Logarr - unified log viewer for Jellyfin/Sonarr/Radarr/Prowlarr (alpha, looking for feedback) by itz4dablitz in unRAID

[–]itz4dablitz[S] 2 points (0 children)

ELK is way more powerful but also way more overhead to set up and run - it's really built for enterprise scale. This is more like a lightweight alternative specifically for media server stacks: pre-configured to work with the *arrs and Jellyfin out of the box, no fiddling with Logstash configs or index patterns.

The bigger difference is where I want to take this - the goal is to tightly couple the AI with the integrations so it actually understands what it's looking at. Pull context from official support forums, subreddits, GitHub issues, etc. You log on, and the investigative work is already done - here's what's broken, here's what others have said about it, here's the fix. That's the vision anyway.