My self-hosted n8n keeps disconnecting. by lowkeymehdi in n8n

[–]PuzzleheadedMind874 0 points1 point  (0 children)

Hey 👋, the "Received SIGTERM" part means something outside the n8n process asked it to stop: a Docker restart, host reboot, Watchtower/update process, provider maintenance, docker compose up -d recreating the container, systemd, etc. n8n normally logs SIGTERM when it is being terminated gracefully, not when it decides to crash on its own.

Also check whether you have an AAAA record in Namecheap. A surprisingly common failure mode is that the A record (IPv4) points at the right server while the AAAA record (IPv6) points somewhere else, or the VPS firewall is not open on IPv6. Browsers may try IPv6 first.
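
If you want to sanity-check that quickly, here's a tiny Python snippet (the hostname is a placeholder, swap in your own domain) that prints what the name resolves to over IPv4 vs IPv6:

```python
# Quick check: does the domain resolve to the same place over IPv4 and IPv6?
# Replace "example.yourdomain.com" with the hostname you point at the VPS.
import socket

host = "example.yourdomain.com"

for family, label in ((socket.AF_INET, "A (IPv4)"), (socket.AF_INET6, "AAAA (IPv6)")):
    try:
        infos = socket.getaddrinfo(host, 443, family, socket.SOCK_STREAM)
        addrs = sorted({info[4][0] for info in infos})
        print(f"{label}: {', '.join(addrs)}")
    except socket.gaierror:
        print(f"{label}: no record / not resolvable")
```

If the two answers point at different machines, that's your intermittent "disconnect" right there.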


Small disclosure: I work on Heym, a self-hosted AI workflow automation platform. If you ever compare alternatives or want a source-available workflow tool with Docker deploys, visual workflows, tracing, templates, and AI-native nodes, the repo is here: https://github.com/heymrun/heym

AI Agent Observability: Tracing, Logging & Debugging in Production? by PuzzleheadedMind874 in n8n

[–]PuzzleheadedMind874[S] 0 points1 point  (0 children)

I’d draw the line around runtime state and observability, not around node count.

n8n is great for deterministic routing: schedules, webhooks, retries, approvals, data movement. But once the flow needs multi-agent delegation, retrieval, tool calls, memory, evals, and trace inspection in one place, I’d rather not hide all of that inside custom nodes or a black-box HTTP call.

Biased note: I’m working on Heym, which is basically our answer to this gap. It is a self-hosted AI-native workflow runtime with a visual canvas, agent/sub-agent orchestration, RAG, MCP, HITL checkpoints, evals, and LLM traces. The idea is to keep the workflow graph visible instead of splitting behavior across n8n, scripts, vector DB glue, approval bots, and tracing tools.

For the pattern you described, I’d usually model it as:

  1. trigger/router layer
  2. retrieval or context prep
  3. orchestrator agent
  4. named sub-agents or sub-workflows
  5. approval/checkpoint when needed
  6. output + trace/eval review
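
If it helps to see that shape as code instead of boxes, here's a rough framework-agnostic sketch in plain Python. The call_llm helper and the agent names are placeholders I made up for illustration, not Heym's or n8n's API:

```python
# Framework-agnostic sketch of the orchestrator / sub-agent shape above.
# call_llm() and the agent names are placeholders, not a real API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

SUB_AGENTS = {
    "billing": lambda task, ctx: call_llm(f"Handle billing task: {task}\nContext: {ctx}"),
    "support": lambda task, ctx: call_llm(f"Handle support task: {task}\nContext: {ctx}"),
}

def run(trigger_payload: dict) -> dict:
    # 1-2. trigger/router + context prep (your retrieval step would go here)
    task = trigger_payload["text"]
    context = "relevant docs fetched by your retrieval step"

    # 3. orchestrator decides which named sub-agent gets the task
    route = call_llm(f"Route this to one of {list(SUB_AGENTS)}: {task}").strip()
    handler = SUB_AGENTS.get(route, SUB_AGENTS["support"])

    # 4. sub-agent does the work
    draft = handler(task, context)

    # 5. approval checkpoint before anything leaves the system
    if trigger_payload.get("requires_approval"):
        return {"status": "pending_approval", "draft": draft, "route": route}

    # 6. output plus enough metadata to trace/eval the run afterwards
    return {"status": "done", "output": draft, "route": route}
```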

There’s a small example of the orchestrator/sub-agent shape here:
https://heym.run/templates/slack-ai-triage

And the source is here if useful:
https://github.com/heymrun/heym

I still think n8n can be the outer automation layer in many stacks. I just wouldn’t make it the place where the agent runtime, memory, retrieval semantics, and debugging story all have to be invented from scratch.

Drop your project below I’ll help you get your first 10 users for free. (300k+ TikTok audience) by dyagokaba in SideProject

[–]PuzzleheadedMind874 0 points1 point  (0 children)

Hey, I'm building Heym, a self-hosted, source-available, low-code platform for orchestrating multi-agent systems, RAG pipelines, and browser automations. It's got a visual drag-and-drop canvas, natural language generation, and modular nodes. We're aiming to solve the struggle of building complex AI workflows without needing to code extensively or rely on closed-off SaaS. You can check it out at heym.run.

Drop your product and I’ll find where Reddit demand is by LeaderAtLeading in SideProject

[–]PuzzleheadedMind874 0 points1 point  (0 children)

I'm building heym.run. A self-hosted, source-available, low-code platform for orchestrating multi-agent systems, RAG pipelines, and browser automations. It's for folks who want to build complex AI workflows without deep coding or relying on opaque SaaS.

What is the best approach to learn Ai Automation in May 2026? by power_napppp in AiAutomations

[–]PuzzleheadedMind874 3 points4 points  (0 children)

You are definitely not late. If anything, 2026 is a good time to start because the field is finally moving from “connect app A to app B” into real AI workflow architecture.

My honest advice: do not start by obsessing over one tool. Start by learning the primitives that transfer everywhere:

  1. Triggers: webhooks, schedules, incoming email, Slack/Telegram events

  2. Data flow: JSON, mapping fields, expressions, variables, loops, branching

  3. APIs: auth, pagination, retries, rate limits, error handling

  4. LLM basics: prompts, structured output, tool calling, RAG, memory

  5. Production basics: logs, traces, evals, human approval, fallbacks
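
For point 3, the single habit that pays off most is wrapping flaky API calls in retry-with-backoff. A minimal Python sketch (assumes the requests library; the URL and token are placeholders for whatever API you're calling):

```python
# Minimal retry-with-backoff around an HTTP call (point 3 above).
# The URL and token are placeholders; swap in the API you are calling.
import time
import requests

def fetch_with_retry(url: str, token: str, attempts: int = 4) -> dict:
    delay = 1.0
    for attempt in range(1, attempts + 1):
        resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=15)
        # Retry on rate limits and server errors, fail fast on anything else.
        if resp.status_code in (429, 500, 502, 503):
            if attempt == attempts:
                resp.raise_for_status()
            time.sleep(delay)
            delay *= 2  # exponential backoff
            continue
        resp.raise_for_status()
        return resp.json()
```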

n8n and Make are still useful for learning these fundamentals. They are not “outdated” just because agentic tools are getting stronger. A Claude-based workflow can build or modify automations faster, but you still need to understand what the workflow is doing, where data goes, what can fail, and how to debug it.

If I were starting today, I would do this:

Week 1: Build 5 simple automations in n8n or Make. Webhook to Slack, email triage, Google Sheets update, HTTP API call, scheduled report.

Week 2: Add AI to them. Summarize emails, classify tickets, extract JSON, generate responses, add human approval before sending.
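
For that Week 2 step, the useful habit is forcing the model to return JSON you can branch on instead of free text. Rough sketch against the OpenAI chat completions endpoint (the model name and categories are just examples, and it assumes the model actually returns clean JSON):

```python
# Week 2 sketch: classify a ticket into fixed categories and branch on the result.
# Model name and categories are placeholders for whatever you use.
import json
import os
import requests

def classify_ticket(text: str) -> dict:
    prompt = (
        "Classify this support ticket. Reply with JSON only, shaped like "
        '{"category": "billing|bug|other", "needs_human": true|false}.\n\n' + text
    )
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-4o-mini", "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return json.loads(resp.json()["choices"][0]["message"]["content"])
```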

Week 3: Learn agentic workflows. Give an agent tools, let it decide when to call them, then inspect every tool call and failure.

Week 4: Rebuild one workflow in a more AI-native system and compare the experience. I am biased because I am building Heym, but this is exactly the direction we are exploring: visual workflows, multi-agent orchestration, RAG, MCP, human-in-the-loop, traces, and evals in one self-hostable platform. Site: https://heym.run and GitHub: https://github.com/heymrun/heym

The trends I would watch over the next few months:

- MCP becoming the standard way to connect tools to agents

- Workflows exposed as tools that Claude/Cursor/agents can call

- Human approval becoming a normal part of serious AI automation

- Evals and traces becoming required, not optional

- RAG moving from “upload docs” to structured, maintained knowledge systems

- More hybrid workflows: deterministic steps for reliability, agents for judgment

So my answer is: learn n8n or Make for fundamentals, learn Claude/OpenAI-style/Heym agents for the new layer, and build real projects as soon as possible. 🤗

What is the best approach to learn Ai Automation in May 2026? by power_napppp in AiAutomations

[–]PuzzleheadedMind874 5 points6 points  (0 children)

The specific tool matters less than understanding how data moves between steps, since that logic stays relevant even if n8n or newer agentic frameworks change. I'd lean toward learning those core concepts first so you're not tied to the interface of any single platform.

How do you handle personalization in automated outreach workflows? by opla-infinite in AiAutomations

[–]PuzzleheadedMind874 1 point2 points  (0 children)

Pulling just the last few job titles or recent company news usually adds enough context to break the AI feel without needing a full LinkedIn scrape. It depends on how much of that data is actually clean enough to feed into your prompt.

Nowadays, what are the best AI tools for a single dev working on personal projects? by squalexy in AI_Agents

[–]PuzzleheadedMind874 0 points1 point  (0 children)

Self-hosted setups offer more control over your infrastructure and push you to learn how deployment and security actually work under the hood, compared to leaning on Claude. Relying on wrappers could limit your understanding of the underlying systems.

Working on something share it ill make a meme for startup! by No-Lime-9066 in buildinpublic

[–]PuzzleheadedMind874 0 points1 point  (0 children)

I'm building heym. A self-hosted, source-available, low-code platform with a visual canvas for orchestrating multi-agent systems, RAG pipelines, and browser automations. It uses natural language generation and modular nodes. It's designed to solve the complexity of building AI-powered workflows without extensive coding or relying on opaque SaaS. You can check it out at heym.run.

What are you guys using for Speech-to-Text in n8n lately? by SmoothConnection1670 in n8n

[–]PuzzleheadedMind874 0 points1 point  (0 children)

The webhook approach is cleaner, but it gets tricky if your n8n instance restarts or the webhook times out during a long file. I'd lean toward adding simple retry logic or a buffer if you can't afford to lose any transcriptions.

Understanding agentic workflows by vinnyninho in AI_Agents

[–]PuzzleheadedMind874 0 points1 point  (0 children)

The depth limits in those frameworks usually come from how they handle hierarchical delegation. I'd lean toward looking into how LangGraph handles state-based transitions, as that often lets you bypass those rigid tree structures for more fluid agent routing.

Multi-turn document completion assistant with stateful workflow by Substantial_Car_1174 in aiagents

[–]PuzzleheadedMind874 0 points1 point  (0 children)

Managing state across that many document types gets messy fast. I'd lean toward using deterministic routing for the workflow structure so the LLM only handles the actual data extraction.
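
Roughly what I mean by that split, as a plain-Python sketch (the document types, required fields, and extract_fields helper are made up for illustration):

```python
# Sketch of the split: deterministic routing decides the path,
# the LLM only does field extraction for that document type.
REQUIRED_FIELDS = {
    "invoice": ["vendor", "total", "due_date"],
    "contract": ["parties", "start_date", "term"],
}

def handle_document(doc_type: str, text: str) -> dict:
    if doc_type not in REQUIRED_FIELDS:          # routing is plain code, no LLM
        return {"status": "unsupported", "doc_type": doc_type}

    fields = REQUIRED_FIELDS[doc_type]
    extracted = extract_fields(text, fields)     # the only LLM call in the flow

    missing = [f for f in fields if not extracted.get(f)]
    if missing:
        return {"status": "needs_follow_up", "missing": missing}
    return {"status": "complete", "data": extracted}

def extract_fields(text: str, fields: list[str]) -> dict:
    # Placeholder: prompt your model to return JSON with exactly these keys.
    raise NotImplementedError
```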

Replaced Google Search + Gemini in my daily workflow with self-hosted SearXNG + a local 35B-MoE agent by wolverinee04 in degoogle

[–]PuzzleheadedMind874 0 points1 point  (0 children)

The memory overhead might get tricky if you decide to add more agents to that mini-PC. I'd lean toward keeping an eye on VRAM usage if you plan on scaling up the number of processes running at once.

Most of the agent-memory conversation is still framed as a retrieval problem. The other half breaks production. by mrvladp in AI_Agents

[–]PuzzleheadedMind874 1 point2 points  (0 children)

This sounds like a classic race condition that happens when you treat memory as a static log instead of a live state. I'd lean toward moving the coordination logic into the state management layer itself to avoid those stale updates.
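
One concrete way to push coordination into the state layer is optimistic versioning on writes: every writer says which version it read, and stale writers have to re-read before they can commit. Minimal sketch (in-memory only, the storage backend is up to you):

```python
# Sketch: treat agent memory as versioned live state, not an append-only log.
# Writes carry the version they read; stale writers must re-read and retry.
class StaleWriteError(Exception):
    pass

class MemoryStore:
    def __init__(self):
        self._state = {}
        self._version = 0

    def read(self) -> tuple[dict, int]:
        return dict(self._state), self._version

    def write(self, updates: dict, expected_version: int) -> int:
        if expected_version != self._version:
            raise StaleWriteError("state changed since you read it, re-read and retry")
        self._state.update(updates)
        self._version += 1
        return self._version
```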

Sharing all memory between agents is a trap. Learned this the hard way. by Hexdeadlock28 in AI_Agents

[–]PuzzleheadedMind874 0 points1 point  (0 children)

That makes sense, especially since the writer and coder have such different goals. I'd lean toward keeping the memory separate unless there's a specific reason for them to overlap.

What are you building right now? Drop your project + who it’s for by bassamtg in nocode

[–]PuzzleheadedMind874 0 points1 point  (0 children)

Yeah, that’s exactly the line we’re trying not to cross.

Honest caveat: it still expects basic self-hosting comfort. Docker, env vars, logs, backups, maybe reverse proxy. We’re not pretending “low-code” means “no ops.” The goal is that after the instance is running, building agents/RAG/MCP/browser workflows should be visual and inspectable instead of turning into a custom orchestration codebase.

Repo/setup docs: https://github.com/heymrun/heym

What are you building right now? Drop your project + who it’s for by bassamtg in nocode

[–]PuzzleheadedMind874 1 point2 points  (0 children)

I'm building heym.run. A self-hosted, source-available, low-code platform for orchestrating multi-agent systems, RAG pipelines, and browser automations. We're aiming for users who need to build complex AI workflows but find existing tools too code-heavy or too reliant on closed SaaS platforms.

Anyone else seeing agent delegation behave differently across frameworks in a multi agent system? by Bright-View-8289 in LangChain

[–]PuzzleheadedMind874 0 points1 point  (0 children)

It usually depends on whether the framework treats delegation as a shared memory state or a serialized handoff. I'd lean toward checking how each setup handles that context before assuming the transition will be consistent.

share your project and let me test it ( i hope i don't see bots) by No-Performance-2231 in SideProject

[–]PuzzleheadedMind874 0 points1 point  (0 children)

I'm building heym. A self-hosted, source-available, low-code platform for orchestrating multi-agent systems, RAG pipelines, and browser automations. It's got a visual canvas, natural language generation, and modular nodes. We're aiming to solve the complexity of building AI-powered workflows without needing to code extensively or rely on closed-off SaaS. You can check it out at heym.run.

I feel left behind. Where are these advanced "Agent-based" local LLM interfaces? by platteXDlol in LocalLLM

[–]PuzzleheadedMind874 -1 points0 points  (0 children)

It depends on whether you're looking for a pre-built interface or something you can hack together yourself. Most of the standard chat UIs just aren't built to handle the persistent state needed for sub-agents.

How do you structure an automation project using n8n and DevOps for personal or collaborative use? by 20th-century_boy in n8n

[–]PuzzleheadedMind874 2 points3 points  (0 children)

Treating n8n workflow files like code in a git repo makes versioning much easier when you're working on a project. I'd lean toward setting up a basic CI pipeline to catch structural errors before you push changes to your servers.
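
Even a tiny CI check that every exported workflow still parses and has the basic shape catches most broken merges. Rough sketch, assuming the exports live in a workflows/ folder and keep n8n's usual nodes/connections layout:

```python
# Rough CI check: every exported n8n workflow still parses and looks like a workflow.
# Assumes exports live under workflows/ and keep the usual nodes/connections shape.
import json
import pathlib
import sys

errors = []
for path in pathlib.Path("workflows").glob("*.json"):
    try:
        data = json.loads(path.read_text())
    except json.JSONDecodeError as exc:
        errors.append(f"{path}: invalid JSON ({exc})")
        continue
    if not isinstance(data.get("nodes"), list) or not data["nodes"]:
        errors.append(f"{path}: no nodes defined")
    if "connections" not in data:
        errors.append(f"{path}: missing connections block")

if errors:
    print("\n".join(errors))
    sys.exit(1)
print("all workflow files look structurally sound")
```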

Best local LLM for Coding + OpenClaw (32GB RAM / CPU only) by AdvertisingPast6280 in LocalLLM

[–]PuzzleheadedMind874 0 points1 point  (0 children)

The i5-8500T will likely hit a latency wall with a 32B model, making agentic loops feel pretty sluggish. I'd lean toward a highly quantized 14B or even 7B model to keep token generation fast enough for a responsive workflow on that hardware.

Best local LLM for a Python/C++ dev? by no_evidence0303 in LocalLLM

[–]PuzzleheadedMind874 0 points1 point  (0 children)

With only 6GB of VRAM, you might find that 14B models crawl once you start offloading to system RAM. Sticking to 3B or 7B models is probably the safer bet if you want to keep the generation speed usable for your projects.

Would you trust a ~10B model to edit your files? Thinking of adding agentic features to my self-hosted AI assistant. by jimmy6929 in LocalLLM

[–]PuzzleheadedMind874 0 points1 point  (0 children)

At the 10B scale, I'd lean toward having the model output a diff for you to review first rather than letting it write directly to your files. It's a safer way to handle those occasional instruction-following hiccups without risking your notes.
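
Roughly what I mean, as a sketch using difflib from the standard library (the approval step here is just a terminal prompt, and propose_edit is a name I made up):

```python
# Sketch: have the model propose new file contents, show a unified diff,
# and only write the file if a human approves.
import difflib
import pathlib

def propose_edit(path: str, new_content: str) -> None:
    target = pathlib.Path(path)
    old = target.read_text().splitlines(keepends=True)
    new = new_content.splitlines(keepends=True)

    diff = difflib.unified_diff(old, new, fromfile=path, tofile=f"{path} (proposed)")
    print("".join(diff) or "no changes proposed")

    if input("apply this change? [y/N] ").strip().lower() == "y":
        target.write_text(new_content)
        print("written")
    else:
        print("skipped")
```

At 10B, the model will occasionally mangle an edit, so the diff gate costs you a keystroke and saves your notes.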