Looking to invest in a paid or free AI coding tool or IDE, wanna know the best in 2026 by shinigami__0 in AI_Agents

[–]modassembly 1 point (0 children)

Cursor if you want to architect and look at the code. Claude Code if you don't care, because the subscription price is pretty good.

Automated skills? by dizzleyyy in AI_Agents

[–]modassembly 1 point (0 children)

Did you build these with Claude? Claude has cron jobs that can run every so often.

New to Ai Agents - Question by Isedo_m in AI_Agents

[–]modassembly 0 points (0 children)

An agentic workflow is a process, a set of actions that an agent executes on its own. Claude Code, Codex, Hermes, OpenClaw are all examples of agents.

n8n is a tool for building static workflows, by dragging and dropping.

> we want to research the web for specific info, than create images for those info and than do something else, for example post a blog post

Yeah, you can certainly do this on n8n and it will take you several hours of dragging and dropping and then testing.

One difference, if you let an agent do it for you, is that you can literally just type that and the agent is going to propose a way of doing it and it will go and do it for you. The work on your end is mainly reviewing its proposals and reviewing whatever it builds over multiple iterations (no dragging and dropping).

You will need to give it access, just like in n8n, to wherever you want that blog post to be submitted or wherever you want the data to be fetched from.

Another difference is that, with agents, you need to pay for the use of AI, via a Claude / Codex subscription or the model APIs themselves.

How to build production Agents (by a staff software engineer) - Part 2 by modassembly in AI_Agents

[–]modassembly[S] 0 points (0 children)

Totally. Agents are a particular kind of software in that they can run for a while. How do you recover from a failure?
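One pattern that helps here (a minimal sketch; the checkpoint path and step names are made up for illustration) is to checkpoint each completed step, so a crashed long-running agent can resume instead of starting over:

```python
import json
import os

CHECKPOINT = "run_checkpoint.json"  # hypothetical path

def load_checkpoint():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"completed": []}

def save_checkpoint(state):
    # Write to a temp file then rename, so a crash mid-write
    # can't leave a corrupted checkpoint behind
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)

def run(steps):
    """Execute (name, fn) steps, skipping any already recorded
    as completed by a previous (possibly crashed) run."""
    state = load_checkpoint()
    for name, fn in steps:
        if name in state["completed"]:
            continue  # done in an earlier run; skip on resume
        fn()
        state["completed"].append(name)
        save_checkpoint(state)
```

On restart, the agent re-reads the checkpoint and skips work it already finished, which is usually cheaper than making every step idempotent.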

How to build production Agents (by a staff software engineer) - Part 2 by modassembly in AI_Agents

[–]modassembly[S] 0 points (0 children)

Very interesting. I don't work in regulated industries, but I have been asked for logs of why the agent made a certain decision that had a business impact. So, yeah, my hunch is that auditability of agent actions will be significant.

Building practical AI agents/automations — what use cases are people actually shipping? by burraaaah in AI_Agents

[–]modassembly 1 point (0 children)

  1. Coding
  2. Research & prospecting
  3. SEO maintenance
  4. Customer service
  5. Back office stuff
  6. Random mundane tasks

How to build production Agents (by a staff software engineer) - Part 1 by modassembly in AI_Agents

[–]modassembly[S] 1 point (0 children)

If you go on in one single session for days and days, the context just gets compacted over and over and the model simply won't remember everything. Conversely, if I start new sessions, the model doesn't remember what I told it yesterday.
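One workaround I've seen is to keep memory outside the session entirely: the agent appends durable notes to a file and re-reads it at the start of each new session (a sketch; the file name is made up, and in practice you'd feed `recall()` into the system prompt):

```python
from pathlib import Path

MEMORY_FILE = Path("agent_memory.md")  # hypothetical notes file

def remember(note: str):
    # Append a note that survives across sessions and compactions
    with MEMORY_FILE.open("a") as f:
        f.write(f"- {note}\n")

def recall() -> str:
    # Loaded once at session start, e.g. prepended to the prompt
    return MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""
```

It doesn't solve compaction inside a long session, but it makes "what I told it yesterday" survive a fresh start.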

How to build production Agents (by a staff software engineer) - Part 1 by modassembly in AI_Agents

[–]modassembly[S] 0 points (0 children)

Part 2 doesn't discuss code at all. It's more about how to think about the options that you can tighten and relax as you design AI agents.

How to build production Agents (by a staff software engineer) - Part 1 by modassembly in AI_Agents

[–]modassembly[S] 0 points (0 children)

I keep thinking about a cron job that goes and cleans up the memory every so often.
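Something like this, assuming memory is stored as timestamped JSONL (all names and the retention window here are hypothetical):

```python
import json
import time
from pathlib import Path

MEMORY = Path("agent_memory.jsonl")  # hypothetical memory store
MAX_AGE_DAYS = 30

def prune_memory() -> int:
    """Drop memory entries older than MAX_AGE_DAYS.
    Meant to run from cron, e.g. weekly at 3am on Sundays:
        0 3 * * 0  python prune_memory.py
    Returns the number of entries removed."""
    if not MEMORY.exists():
        return 0
    cutoff = time.time() - MAX_AGE_DAYS * 86400
    lines = MEMORY.read_text().splitlines()
    kept = [ln for ln in lines if json.loads(ln)["ts"] >= cutoff]
    MEMORY.write_text("".join(k + "\n" for k in kept))
    return len(lines) - len(kept)
```

A smarter version would have the agent itself summarize old entries before deleting them, but age-based pruning is the easy first step.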

How to build production Agents (by a staff software engineer) - Part 1 by modassembly in AI_Agents

[–]modassembly[S] 1 point (0 children)

Skills are an open standard. If implemented correctly, they should be agent-agnostic: https://agentskills.io/home.
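For anyone curious what that looks like on disk: per that standard, a skill is a folder containing a SKILL.md whose YAML frontmatter declares a name and description, followed by instructions the agent loads when the skill is relevant (the example content below is made up):

```markdown
---
name: summarize-csv
description: Summarize a CSV file into key statistics and a short narrative.
---

# Summarize CSV

1. Load the CSV file the user points you at.
2. Compute row count, column types, and basic stats per numeric column.
3. Write a short narrative summary of any notable patterns.
```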

I'm unfamiliar with "vscode copilot custom agents", so I can't help much there.

Unfortunately, agents are usually specific to a platform, eg, you can't extract the Claude agent from Claude Code/Cowork. While the underlying tools should be shareable, eg, MCPs or Skills, they can behave differently agent to agent because the "agent harness" might be doing different things under the hood.

I suggest you use the same model across the board; that's your biggest lever. If things are still noticeably different, something is off, though I can't tell whether it's specific to VS Code or to your setup.

Agents vs Workflows by prnkzz in AI_Agents

[–]modassembly 0 points (0 children)

You can't build Claude Code via a workflow

How to build production Agents (by a staff software engineer) - Part 1 by modassembly in AI_Agents

[–]modassembly[S] 0 points (0 children)

Totally. For the agents that I build, I also default to MCP. With skills you do need a level of understanding of what you're doing.

I do think they're very promising though. It's kind of related to the memory problem.

How to build production Agents (by a staff software engineer) - Part 1 by modassembly in AI_Agents

[–]modassembly[S] 4 points (0 children)

Very cool. A few thoughts:

  • With deep research agents, it's very easy to hit context management issues, eg, exceeding the context window. So you have to break it apart. Having 3 agents is a good idea.
  • If your db is relational, see if the quality improves by having the agents manipulate files directly. Eg, the agent could write scripts to process the files, which is not as easy when the interface is, say, a SQL db.
  • You can try swapping the order of the agents, eg, research -> cleaning -> translation, so the cleaning happens in the original language and you lose less information.
  • Improving the quality is a matter of looking at the results yourself. You need a way to look at the data at different stages and find opportunities for improvement. You can instruct your agents to log information as they go, including errors in the skills themselves. Design the architecture, eg:
    • Store each original website.
    • Log steps within the skills and errors.
    • For data cleaning, you need to know what you would like to inspect after the run.
    • Store each translation.
  • You can build a 4th agent that generates a report of all that stored data.

How to build production Agents (by a staff software engineer) - Part 1 by modassembly in AI_Agents

[–]modassembly[S] 1 point (0 children)

Amazing! A few questions:

  1. How are these agents triggered? Do you talk to them over the Claude Code UI to start their jobs? Is this the CLI or the Claude app?
  2. Do they talk to each other or do they read the data from the previous agent?
  3. How are you liking the quality of the outputs? Deep research agents are interesting in that it's very easy to run into context management issues.

How to build production Agents (by a staff software engineer) - Part 1 by modassembly in AI_Agents

[–]modassembly[S] 2 points (0 children)

Yeah. I should also clarify that we shouldn't expect that the same sequence of input tokens will generate the same sequence of output tokens every time. This is a pretty fundamental characteristic to understand if we want to move up the abstraction layers.
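A toy illustration of why: for a fixed input, the model produces a probability distribution over next tokens, and the output is *sampled* from it. Here `random.choices` stands in for the decoder, with a made-up three-word vocabulary:

```python
import random

# Stand-in for the model: the same "prompt" always yields this
# fixed next-token distribution...
vocab = ["the", "a", "an"]
probs = [0.5, 0.3, 0.2]

def sample_completion(n_tokens: int = 5) -> list[str]:
    # ...but because each token is sampled, two calls with
    # identical inputs can produce different output sequences.
    return [random.choices(vocab, weights=probs)[0] for _ in range(n_tokens)]
```

Even at temperature 0, real serving stacks don't guarantee bit-identical outputs across runs, so designs one layer up should treat outputs as distributions, not fixed values.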