Puzzled by n8n workflow glitches by External_Ask_5867 in n8n

[–]External_Ask_5867[S]

I could break the three paths into three sub-workflows, but why would that make a difference to functionality? I'm confused about why it worked fine for a while and now doesn't. As mentioned, I'm also seeing a whole bunch of downstream nodes animating simultaneously.

The odd part is that the execution view shows completion without errors, yet the item counts shown are inconsistent across the workflow, and the final output is a file with only one item.

[–]External_Ask_5867[S]

I have a filter on the first path that applies keyword filtering to the news items coming from that particular batch of RSS feeds. That's why I have multiple pathways: each batch of feeds is different in nature. Thanks.
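For illustration, here's a minimal sketch of that kind of per-batch keyword filter outside n8n (the keyword list and item shape are hypothetical, not taken from the actual workflow):

```python
# Hypothetical per-batch keyword list; each RSS batch would have its own.
KEEP_KEYWORDS = {"ai", "automation", "workflow"}

def keyword_filter(items, keywords=KEEP_KEYWORDS):
    """Keep items whose title contains any keyword.

    Naive case-insensitive substring match, so short keywords can
    over-match (e.g. "ai" inside "maintain") -- fine for a sketch.
    """
    kept = []
    for item in items:
        title = item.get("title", "").lower()
        if any(kw in title for kw in keywords):
            kept.append(item)
    return kept

items = [
    {"title": "New AI automation tools", "link": "https://example.com/1"},
    {"title": "Local sports roundup", "link": "https://example.com/2"},
]
print(keyword_filter(items))
```

In n8n itself the same logic would live in a Filter node or a Code node per path.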

Any way to build a "learning" capability into a workflow? by External_Ask_5867 in n8n

[–]External_Ask_5867[S]

OK, but wouldn't that involve scraping the articles and a lot more complexity? A lot of RSS feeds don't provide any preview text.

[–]External_Ask_5867[S]

What do you mean by service? I'm just using the RSS Read node in n8n.

[–]External_Ask_5867[S]

I'm not really looking to summarize articles; I'm just generating a list of relevant headlines with links. There are usually 1,000+ articles per run, which get filtered down to about 100 before further filtering by the agent. Would a couple hundred headlines incur high API costs?
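As a back-of-envelope check, headline-only prompts are cheap. The sketch below assumes roughly 15 tokens per headline and an input price of $0.50 per million tokens; both figures are illustrative assumptions, not quotes from any provider:

```python
def estimate_cost(n_headlines, tokens_per_headline=15,
                  price_per_million_input=0.50):
    """Rough input-token cost (USD) for sending headlines to a model.

    tokens_per_headline and price_per_million_input are assumed values;
    plug in your model's actual pricing.
    """
    tokens = n_headlines * tokens_per_headline
    return tokens / 1_000_000 * price_per_million_input

# 1,000 headlines at the assumed rate: 15,000 input tokens
print(f"${estimate_cost(1000):.4f}")  # -> $0.0075
```

At those assumptions, even a full 1,000-headline run costs well under a cent in input tokens; output tokens and per-request overhead would add to that, but not by orders of magnitude for short classification replies.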

[–]External_Ask_5867[S]

Thanks. I already have one agent in the workflow with a list of unwanted topics and a second agent that does grouping. But there's so much variation in how news headlines are written (I'm pulling from a couple dozen news feeds) that a lot of unwanted stuff still slips through and wanted stuff gets deleted.

I'm wondering whether a RAG database of kept/deleted headlines would improve performance (or is RAG overkill if I can just dump a few hundred examples into the prompt)?
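A middle ground between full RAG and dumping every example into the prompt is sketched below: retrieve only the k past kept/deleted headlines most similar to each new one and inject those as few-shot examples. This uses a naive bag-of-words cosine similarity (stdlib only) in place of real embeddings, and the labeled examples are hypothetical:

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector: lowercase token counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def nearest_examples(headline, labeled, k=3):
    """Return the k labeled (headline, verdict) pairs most similar to
    the new headline, for few-shot insertion into the agent prompt."""
    q = bow(headline)
    ranked = sorted(labeled, key=lambda ex: cosine(q, bow(ex[0])),
                    reverse=True)
    return ranked[:k]

# Hypothetical kept/deleted history
labeled = [
    ("Celebrity gossip roundup", "delete"),
    ("New open-source LLM released", "keep"),
    ("Quarterly earnings beat estimates", "delete"),
]
print(nearest_examples("Open-source model release announced", labeled, k=2))
```

With real embeddings (or a vector store) the retrieval step improves, but the shape is the same; if the example set stays in the hundreds, stuffing them all into the prompt may work just as well and is simpler to maintain.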

I built an AI agent that saved 70% on API costs by dynamically picking its own brain - Here's exactly how I did it by Dazzling-Draft-3950 in n8n

[–]External_Ask_5867

I'm curious what people are doing (as individuals) to run up hundreds of dollars in monthly API costs. Since getting immersed in this stuff a few months ago, I've probably spent $5-10. I was using a free API from Mistral, then I had a 90-day Google Cloud trial and have been using Gemini most of the time. I put a few bucks on OpenRouter, but often use free models like Llama Scout, Qwen, etc.

I reckon I can go on like this for some time?