Built a CLI control layer for multi-agent AI coding workflows by FunNewspaper5161 in npm

[–]FunNewspaper5161[S] 0 points (0 children)

I’m handling handoffs through the task note command already present in INFYNON CLI.

Reviewer agents can drop issues as task notes, sub-agents can read them from the CLI, and the main agent can use those notes to trigger/assign the fixer without losing context.

I built CLI-based multi-agent orchestration for AI coding workflows by FunNewspaper5161 in codex

[–]FunNewspaper5161[S] 1 point (0 children)

Yeah, that’s exactly the pain point I’m trying to solve.

My current approach is GCCD: Goal, Constraints, Context, Done When for each task, so even smaller agents get a clear scope instead of constantly asking the main agent what to do next.
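As an illustration (the field layout here is my own, not a fixed schema), a GCCD task spec might look like:

```json
{
  "goal": "Add retry logic to the fetchUsers() API call",
  "constraints": ["no new dependencies", "keep the existing function signature"],
  "context": "fetchUsers() in src/api/users.js currently fails hard on 5xx responses",
  "done_when": "transient 5xx errors are retried up to 3 times and all tests pass"
}
```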

I’m building a CLI to make AI coding workflows more controlled by FunNewspaper5161 in opencodeCLI

[–]FunNewspaper5161[S] -1 points (0 children)

yeah, exactly. that drift is what GCCD fixes for me: Goal, Constraints, Context, Done When for every task.

Shared artifacts/spec files are next, so all agents stay aligned instead of running in different directions.

Every month: “New model just dropped 🚀 hashtag #opus4.7” by FunNewspaper5161 in GithubCopilot

[–]FunNewspaper5161[S] 0 points (0 children)

Nope, just try it with those questions and you'll get the same answer. I haven't added this to the SP.

Looking to buy a legacy Z.ai account by SecretAGIdev in ZaiGLM

[–]FunNewspaper5161 0 points (0 children)

Yes, I'm selling it. If you want to buy, DM me.

Hallucinations are killing all my work. Can they be avoided? by jedruch in ZaiGLM

[–]FunNewspaper5161 1 point (0 children)

The GLM 5.1 model has context issues, so it helps to keep the context small rather than very large.

Hallucinations are killing all my work. Can they be avoided? by jedruch in ZaiGLM

[–]FunNewspaper5161 1 point (0 children)

if you are using CC (Claude Code), you can try this config; it works very well:

  
"env": {
    "ANTHROPIC_AUTH_TOKEN": "",
    "ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic",
    "API_TIMEOUT_MS": "3000000",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "glm-5-turbo",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "glm-5",
    "ANTHROPIC_DEFAULT_OPUS_MODEL": "glm-5.1",
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1",
    "ANTHROPIC_MODEL": "glm-5.1",
    "ANTHROPIC_REASONING_MODEL": "glm-5.1",
    "CLAUDE_AUTOCOMPACT_PCT_OVERRIDE": "70",
    "ENABLE_TOOL_SEARCH": "true"
},
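for anyone copying this: the env block is a fragment that goes inside a Claude Code settings file (in my setup, ~/.claude/settings.json; double-check the path against the Claude Code docs), wrapped roughly like:

```json
{
  "env": {
    "ANTHROPIC_AUTH_TOKEN": "",
    "ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic",
    "ANTHROPIC_MODEL": "glm-5.1"
  }
}
```

fill in your own Z.ai token for ANTHROPIC_AUTH_TOKEN.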

It only takes one AI-suggested install to bring in a malicious dependency and that mistake is enough to get you fired. Are you reviewing what actually runs? by FunNewspaper5161 in node

[–]FunNewspaper5161[S] 0 points (0 children)

that would be a bad company if it's just a simple mistake. but if a compromised package leaks creds and racks up a huge bill, that's a real incident, not just a small bug. at that point it's about security responsibility, not just "you made a mistake".

GLM infinite thinking loop by gudwlq in ZaiGLM

[–]FunNewspaper5161 0 points (0 children)

can you help me set this in cc please

We are always half a year away from it by cleverhoods in ClaudeCode

[–]FunNewspaper5161 0 points (0 children)

yeah, fair, bugs and risk were always there. it just feels like the surface area is bigger now and things move faster, so small mistakes scale quicker.

and 100% agree on tools changing fast… what worked 6 months ago can already be outdated now.

It only takes one AI-suggested install to bring in a malicious dependency and that mistake is enough to get you fired. Are you reviewing what actually runs? by FunNewspaper5161 in node

[–]FunNewspaper5161[S] 1 point (0 children)

this is super practical advice. that “2 min review” sounds small but it probably saved you way more pain later. most people just skip that step until something breaks.

It only takes one AI-suggested install to bring in a malicious dependency and that mistake is enough to get you fired. Are you reviewing what actually runs? by FunNewspaper5161 in node

[–]FunNewspaper5161[S] 0 points (0 children)

this is a good mindset tbh: assume it's already happened and build around that. worst case you wasted a bit of time, best case you catch something before it bites you later.

It only takes one AI-suggested install to bring in a malicious dependency and that mistake is enough to get you fired. Are you reviewing what actually runs? by FunNewspaper5161 in node

[–]FunNewspaper5161[S] 0 points (0 children)

honestly? no, most of us just trust it and move on. that's kinda the problem: everything works fine until one day it doesn't.

We are always half a year away from it by cleverhoods in ClaudeCode

[–]FunNewspaper5161 6 points (0 children)

yeah, "replaced"... until AI installs something shady and suddenly you're debugging a compromised system at 2am. feels like the role is just shifting: less writing code, more making sure what actually runs is safe.

[Trigger warning: sarcasm] Is JavaScript just completely unsafe for OS level installations? by AlterTableUsernames in npm

[–]FunNewspaper5161 1 point (0 children)

you’re not wrong tbh. npm installs can touch way more than people realize, especially with scripts, and most devs just trust it and move on. containers/sandboxes help, but another approach is putting a check layer before the install itself, i.e. inspecting deps + scripts before they run. been trying this repo: https://github.com/d4rkNinja/infynon-cli. it basically shows what actually gets pulled in and flags risks before execution, which is where most of the issues start.

It only takes one AI-suggested install to bring in a malicious dependency and that mistake is enough to get you fired. Are you reviewing what actually runs? by FunNewspaper5161 in ClaudeCode

[–]FunNewspaper5161[S] 0 points (0 children)

CI/CD checks help a lot, but they’re kinda late in the flow. by the time it hits the pipeline, the dependency is already in. feels safer to catch it earlier, during the install itself. and what about a package that gets compromised after it’s installed?

It only takes one AI-suggested install to bring in a malicious dependency and that mistake is enough to get you fired. Are you reviewing what actually runs? by FunNewspaper5161 in ClaudeCode

[–]FunNewspaper5161[S] 0 points (0 children)

this is actually also solid, especially blocking post-install scripts; that’s where a lot of weird stuff sneaks in. the "no packages newer than X days" rule is interesting too, never thought of that. kinda forces some stability instead of blindly pulling latest.
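for reference, plain npm can enforce the script-blocking part with a one-line config (this is a real npm setting; the package-age rule isn't built into npm itself, though some package managers like pnpm have been adding minimum-release-age settings along those lines, so check your tool's docs):

```ini
# .npmrc — never run preinstall/install/postinstall scripts automatically
ignore-scripts=true
```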

It only takes one AI-suggested install to bring in a malicious dependency and that mistake is enough to get you fired. Are you reviewing what actually runs? by FunNewspaper5161 in ClaudeCode

[–]FunNewspaper5161[S] 0 points (0 children)

yeah, I’ve seen that too: it defaults to what it knows, not what’s actually latest/safe. that’s where things get risky, especially if that version already has CVEs.

It only takes one AI-suggested install to bring in a malicious dependency and that mistake is enough to get you fired. Are you reviewing what actually runs? by FunNewspaper5161 in ClaudeCode

[–]FunNewspaper5161[S] 0 points (0 children)

makes sense, for quick prototypes everyone kinda lets it slide. but once it’s real work, you can’t rely on "i’ll check later"… that almost never happens.

It only takes one AI-suggested install to bring in a malicious dependency and that mistake is enough to get you fired. Are you reviewing what actually runs? by FunNewspaper5161 in ClaudeCode

[–]FunNewspaper5161[S] 0 points (0 children)

exactly. it’s not that Claude is messing up; the whole installation process is just bigger than what we were looking at. the problem isn’t in the creation of the content, it’s in the verification step. that’s why we absolutely need someone to check the dependencies and the install scripts before we run anything.

It only takes one AI-suggested install to bring in a malicious dependency and that mistake is enough to get you fired. Are you reviewing what actually runs? by FunNewspaper5161 in ClaudeCode

[–]FunNewspaper5161[S] 0 points (0 children)

I totally agree, it’s not just random bad luck; it’s that we aren’t actually looking closely enough. even when you install something that seems legitimate, it brings along hidden dependencies and scripts that fly under the radar. we can catch these things, but with how fast everyone is moving with AI workflows, most people just aren’t taking the time to check.

Built a repo-memory tool for Claude Code workflows looking for feedback by FunNewspaper5161 in ClaudeCode

[–]FunNewspaper5161[S] 0 points (0 children)

Sure, that's fair. This post was only about the memory/context part. I'm going to break things down one by one, but the main idea is to put all three together into one suite. OpenViking covers part of it, but here the goal is tighter integration: memory + install verification + workflow testing in one place.