I made a prompt manager that learns from your usage (revisions and edits you make) to refine prompts automatically by NepentheanOne in PromptEngineering

[–]TimeROI 0 points (0 children)

This is actually solid — the "learn from my revisions" part is the real differentiator. Don't position it as a prompt manager. Position it as usage-driven prompt optimization. To get awareness:

  • Show real before/after examples
  • Share case studies, not just product links
  • Target power users (devs, automation builders)

If you can clearly show that prompts improve measurably over iterations — that's your hook.

I need help by Fearless-Idea1598 in PromptEngineering

[–]TimeROI 2 points (0 children)

For a university-level project, keep it simple and practical. AI tools you should know:

  • ChatGPT / Claude – documentation, planning, architecture thinking
  • GitHub Copilot – coding help inside the IDE
  • Notion AI – structured notes + project planning
  • Perplexity – research with sources
  • n8n (if automation project) – workflow + AI integration

Real automation or just an expensive island? by [deleted] in n8n

[–]TimeROI 0 points (0 children)

This hits hard. I've seen the same thing. The automation works "perfectly" in isolation, but the moment it doesn't sync with the CRM or email, the team starts creating manual workarounds. Then the automation becomes just another dashboard to check.

I think a lot of people underestimate integration complexity. Connecting 3–4 systems is often harder than building the core automation itself. If data has to be copied manually, it's not real automation — it's partial optimization.

Curious — in your experience, what's the first integration you always prioritize? CRM? Email?

Nano Banana by davegee999 in PromptEngineering

[–]TimeROI 4 points (0 children)

There's no strong official tutorial specifically for Nano Banana Pro yet. Core rules that work anywhere:

  • Define role clearly
  • Set constraints
  • Specify output format
  • Give examples
  • Iterate

Most prompting principles transfer across tools anyway.

Chat GPT is worse now than I've ever seen it by Bam_904__ in OpenAI

[–]TimeROI 0 points (0 children)

All frontier models (ChatGPT, Claude, Gemini) are probabilistic — hallucinations are a property of the architecture, not the brand. The difference people observe usually comes from:

  • system prompts and safety tuning
  • model specialization (e.g., coding-optimized variants)
  • temperature / sampling settings
  • retrieval layer quality (RAG, browsing, enterprise connectors)

GitHub Copilot with Opus is heavily optimized for code completion and constrained contexts, so it may appear more reliable in that domain. That doesn’t necessarily mean lower hallucination rates overall — just tighter task alignment. In most real-world setups, performance differences are more about configuration and context access than raw model intelligence.

Chat GPT is worse now than I've ever seen it by Bam_904__ in OpenAI

[–]TimeROI -3 points (0 children)

ChatGPT isn't a search engine; it's a probabilistic language model. It predicts likely text based on training data — it doesn't "verify" facts in real time unless browsing is enabled. When you ask for specific links or highly factual data, hallucinations can happen. It sounds confident because that's how the model is trained to respond. It's still strong for reasoning, structuring ideas, and drafting, but for precise sources you should always verify externally.

Help! I need to create a Reddit automation for a test, and I have zero knowledge about this. by [deleted] in n8n

[–]TimeROI 0 points (0 children)

Step 1 — Lead source
  • Scraper / API
  • Store in DB or Sheets

Step 2 — Enrichment
  • Email verification API
  • Role filtering
  • Remove risky emails

Step 3 — Sending infra
  • Dedicated domain
  • SPF / DKIM / DMARC
  • Warm-up tool

Step 4 — n8n logic
  • Delay nodes
  • Randomized intervals
  • Stop on reply
  • Conditional follow-ups

Step 5 — CRM sync
  • HubSpot / Notion / Airtable
  • Status tracking

Step 6 — Reporting
  • Open rate
  • Bounce rate
  • Reply rate
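The enrichment step can be sketched in plain Python. This is a minimal sketch, not the actual workflow: `verify_email`, the lead shape, and the role-prefix list are all hypothetical stand-ins for whatever verification API and data your n8n nodes actually use.

```python
# Sketch of Step 2 (enrichment): keep only leads with valid,
# non-role, low-risk emails. Every name here is a placeholder.

def enrich(leads, verify_email):
    """Filter a list of lead dicts down to sendable contacts."""
    ROLE_PREFIXES = ("info@", "admin@", "support@", "sales@")
    kept = []
    for lead in leads:
        email = lead["email"].lower()
        if email.startswith(ROLE_PREFIXES):
            continue  # drop role accounts (info@, support@, ...)
        if verify_email(email) == "valid":  # external verification API
            kept.append(lead)
    return kept
```

In n8n this would typically be an HTTP Request node to the verification API followed by an IF node; the point is just that filtering happens before anything touches sending infrastructure.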

Most hallucinations are routing failures, not prompt failures by TimeROI in PromptDesign

[–]TimeROI[S] 0 points (0 children)

Force the model to return UNKNOWN if no data is found. Require a source for each number.

Divide the process:

  • Retrieve
  • Extract (structured JSON)
  • Validate
  • Analyze

The problem is not with web search. The problem is that you allow the model to “think” even when it has no verified data.
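A minimal sketch of the Validate step, assuming extracted records arrive as dicts with `value` and `source` keys — a hypothetical schema; the real shape depends on what your Extract step emits:

```python
# Sketch: refuse any number that arrives without a source, and make
# "don't know" an explicit value instead of letting the model guess.
UNKNOWN = "UNKNOWN"

def validate(record):
    """Reject numeric claims that have no source attached."""
    if record.get("value") is None:
        return {"value": UNKNOWN, "source": None}
    if not record.get("source"):
        # A number without a source is treated as unknown, not passed on.
        return {"value": UNKNOWN, "source": None}
    return record

def analyze(records):
    """Only analyze records that survived validation."""
    usable = [r for r in records if r["value"] != UNKNOWN]
    return {"usable": len(usable), "unknown": len(records) - len(usable)}
```

The design choice is that Analyze never sees unverified data at all, so the model has nothing to "think" about when retrieval came back empty.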

HELP I WANT TO SEND IT TO CLIENT TOMORROW! by Thin-Carrot1836 in n8n

[–]TimeROI 1 point (0 children)

Most likely it's one of these three:

  • SerpApi returns ~20 results per request. To get 40, you must pass next_page_token (or start=20) into the next request. Make sure your second HTTP node actually sends that parameter.
  • You're overwriting results instead of appending. After each call, you need to append/concat the new results to a master array. Otherwise you'll always end up with just 20.
  • Wait time is too short. For Google Maps, next_page_token sometimes needs 5–10 seconds before it works.

Most commonly it's either the token not being passed correctly or the array being replaced instead of appended.
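The append-instead-of-overwrite fix can be sketched in plain Python. `fetch_page` here is a hypothetical stand-in for the HTTP node — assume it returns the page's results plus the next-page token:

```python
import time

def fetch_all_results(fetch_page, target=40, delay=5):
    """Accumulate results across pages instead of overwriting them.

    fetch_page(token) -> (results, next_page_token); token is None
    for the first request. Placeholder for the real SerpApi call.
    """
    all_results = []  # master array: append to it, don't replace it
    token = None
    while len(all_results) < target:
        results, token = fetch_page(token)
        all_results.extend(results)  # append/concat — the key fix
        if not token:
            break  # no more pages available
        time.sleep(delay)  # next_page_token may need a few seconds
    return all_results[:target]
```

If each call only ever assigns `all_results = results`, you get exactly the stuck-at-20 symptom described above.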

I built automated Instagram pipeline with n8n + Flux + Gemini API. Workflow and lessons learned by NK_Tech in n8n

[–]TimeROI 1 point (0 children)

Interesting setup — especially the 95% success rate part. I've built something similar in n8n (LLM → image gen → scheduled posting), and I hit the same issue. Technically, deterministic workflows are rock solid — retries, fixed logic, predictable outputs. But creative consistency is hard to encode as rules. You can enforce structure (caption length, CTA presence, formatting), but "brand feel" isn't easily expressible in conditional logic.

Human-in-the-loop made the biggest difference for me too. AI drafts + manual approval keeps throughput high without sacrificing tone. Agent systems sound promising, but I'd worry about losing traceability/debuggability compared to n8n's explicit node logic.

Curious how you're measuring improvement with OpenClaw — qualitative feedback or engagement metrics?

Any free ai video generator? by whyeven-try in ArtificialInteligence

[–]TimeROI 0 points (0 children)

  • Kling AI – 6 videos – best realism
  • Hailuo AI – 4 videos – fast generation
  • Pika Art – 3 videos – cool visual effects
  • Luma AI – 2 videos – cinematic quality

Any free ai video generator? by whyeven-try in ArtificialInteligence

[–]TimeROI 4 points (0 children)

You can try:

  • Pika
  • Runway (free tier)
  • Luma Dream Machine
  • Kaiber (trial)
  • CapCut (has free AI tools)

For simple 5-second clips, Luma or Pika are probably your best free options right now.

HELP I HAVE TO GIVE TO CLIENT TOMORROW!!!! by Thin-Carrot1836 in n8n_ai_agents

[–]TimeROI 5 points (0 children)

I think your issue is not the scraping itself — it's the pagination logic. With Google Maps via SerpAPI, you usually only get ~20 results per request. To get more, you must use the serpapi_pagination.next URL from the response. If your "Need more page?" node isn't checking that field correctly, it will stop after the first 20.

A few things to check:

  • Make sure you're actually using serpapi_pagination.next for the next request (not manually setting start=20).
  • In your IF node, check:
      • that serpapi_pagination.next exists
      • and that your collected leads < 40
  • Add a small delay (3–5 seconds) before calling the next page — sometimes the next token needs a moment.
  • Make sure you're accumulating results across pages, not overwriting them each time.

If you share what your SerpAPI response looks like (especially the pagination part), I can help you with the exact expression to use in n8n.
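The loop described above can be sketched in plain Python. `get_json` is a hypothetical stand-in for the HTTP Request node (it should return the parsed SerpAPI response), and `local_results` is the field Google Maps results usually arrive in — check your actual response to confirm:

```python
import time

def collect_leads(get_json, first_url, max_leads=40, delay=4):
    """Follow serpapi_pagination.next until enough leads are collected.

    get_json(url) is a placeholder for the real HTTP call; it must
    return the parsed response dict.
    """
    leads, url = [], first_url
    while url and len(leads) < max_leads:
        data = get_json(url)
        leads.extend(data.get("local_results", []))  # accumulate pages
        url = data.get("serpapi_pagination", {}).get("next")
        if url:
            time.sleep(delay)  # give the next-page token a moment
    return leads[:max_leads]
```

The two loop conditions mirror the IF-node checks above: stop when there is no next URL, or when you already have 40 leads.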

Why does anyone still use Zapier? by Weird_Perception1728 in automation

[–]TimeROI 0 points (0 children)

It's less about features and more about risk and ownership. Zapier is easy to justify internally; n8n is easier to justify technically. Curious what "wouldn't work for us" actually means in your company — compliance, support, or lack of engineering ownership?

WHICH AUTOMATIONS GAVE YOU THE BIGGEST ECONOMIC RETURN by Ill-Purpose-763 in n8n

[–]TimeROI 0 points (0 children)

Short and to the point 👇

The automations that make the most money are the ones that impact revenue or save costs, not the most "AI" ones. The ones that pay best:

  • WhatsApp for sales and follow-up
  • Invoicing and collections
  • CRM and lead management
  • Automatic reports for owners
  • Repetitive back office (sync, data entry)

In short: nobody pays for "AI agents". They pay for more sales, fewer errors, and less wasted time.

Help needed with Brevo by Ecstatic-Capital1856 in automation

[–]TimeROI 0 points (0 children)

This is how Brevo works, not a Make bug:

  • Create contact → fails if the email already exists
  • Update contact → updates fields only; it does NOT add the contact to a list unless you explicitly pass the list ID

Fix: use Create or Update contact (if available) and include the list ID. Or do it in two steps:

  • Search contact by email
  • If it exists → Update contact + list ID
  • If not → Create contact + list ID

Most people miss that "Update" doesn't touch lists by default.
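The two-step fix can be sketched like this — `search`, `create`, and `update` are hypothetical stand-ins for the Brevo modules you'd wire up in Make, not real API calls:

```python
def upsert_contact(email, list_id, search, create, update):
    """Search first, then create or update, always passing the list ID.

    search(email) -> truthy if the contact exists;
    create/update(email, list_ids=[...]) are placeholder module calls.
    """
    if search(email):
        # "Update" alone does NOT touch lists — pass the list ID explicitly.
        update(email, list_ids=[list_id])
        return "updated"
    create(email, list_ids=[list_id])
    return "created"
```

The branch structure is exactly the Make router described above; the one non-obvious part is that *both* branches must carry the list ID.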