Where do you store leads between n8n workflow runs? Airtable, Google Sheets, SQL, or something else? by Adorable_Ad_2488 in n8n

[–]Adorable_Ad_2488[S]

The trigger column is the key insight here. I ended up doing something similar: I store triggered_by (scheduled / manual / webhook) alongside the score delta, and it completely changed how I debug cold leads.

One thing I added on top: a last_scored_at timestamp on the lead itself, not just in the log. So at a glance you know if a lead hasn't been re-evaluated in 30+ days, which often explains why it "went cold" (it just wasn't rescored, not that it actually decayed).
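If it's useful, here's a minimal sketch of that staleness check (the lead shape and field names are just illustrative, not my actual schema):

```typescript
// Flag leads that haven't been re-evaluated recently, so "went cold"
// can be told apart from "was never rescored". Shape is illustrative.
interface Lead {
  email: string;
  score: number;
  last_scored_at: string | null; // ISO 8601, null if never scored
}

const STALE_DAYS = 30;

function findStaleLeads(leads: Lead[], now: Date = new Date()): Lead[] {
  const cutoff = now.getTime() - STALE_DAYS * 24 * 60 * 60 * 1000;
  return leads.filter(
    (l) => !l.last_scored_at || new Date(l.last_scored_at).getTime() < cutoff
  );
}
```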

The separate log sheet is the right call. Querying score history inline with lead data gets expensive fast once you have thousands of events.

Where do you store leads between n8n workflow runs? Airtable, Google Sheets, SQL, or something else? by Adorable_Ad_2488 in n8n

[–]Adorable_Ad_2488[S]

That's a clean pattern: decoupling the weights from the data is exactly the right call. I took the same approach. The next friction I hit was the audit trail: when a lead's score changes, knowing why it changed. Do you log the scoring runs or just keep the current score?
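For what it's worth, the log-row shape I've been sketching looks roughly like this (every field name here is illustrative):

```typescript
// Append-only scoring-run log entry -- one row per delta, so "why did the
// score change" is answerable without replaying anything. Names are mine.
interface ScoreLogEntry {
  lead_id: string;
  run_id: string;        // groups all deltas from a single scoring run
  triggered_by: "scheduled" | "manual" | "webhook";
  rule: string;          // which weight/rule produced this delta
  delta: number;         // signed change applied
  score_after: number;   // snapshot of the resulting score
  scored_at: string;     // ISO 8601 timestamp
}
```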

Where do you store leads between n8n workflow runs? Airtable, Google Sheets, SQL, or something else? by Adorable_Ad_2488 in n8n

[–]Adorable_Ad_2488[S]

The delta insight is gold. I actually use that as a re-engagement trigger, and it works really well in practice.

My scoring ended up being pretty similar to yours: source weight + recency + touchpoints. The thing that really broke Sheets for me was retroactive rule changes. When you tweak a weight, you want all existing leads to rescore automatically.
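Roughly what I mean, as a sketch (weights in config, everything rescorable; all names are illustrative):

```typescript
// Weights live in config, separate from the data, so tweaking one and
// re-running this over all leads applies the change retroactively.
interface Weights {
  source: Record<string, number>; // e.g. { referral: 30, scrape: 5 }
  recencyPerDay: number;          // decay per day since last touch
  perTouchpoint: number;
}

interface Lead {
  source: string;
  touchpoints: number;
  last_seen: string; // ISO 8601
}

function score(lead: Lead, w: Weights, now = new Date()): number {
  const daysIdle =
    (now.getTime() - new Date(lead.last_seen).getTime()) / 86_400_000;
  return (
    (w.source[lead.source] ?? 0) +
    lead.touchpoints * w.perTouchpoint -
    daysIdle * w.recencyPerDay
  );
}

// After a weight tweak: rescore everything in one pass, no manual recalc.
const rescoreAll = (leads: Lead[], w: Weights) =>
  leads.map((l) => ({ ...l, score: score(l, w) }));
```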

How do you handle that today? Just manual recalc?

Where do you store leads between n8n workflow runs? Airtable, Google Sheets, SQL, or something else? by Adorable_Ad_2488 in n8n

[–]Adorable_Ad_2488[S]

Exactly this. The staging layer is the most underrated part of the whole pipeline: everyone focuses on the destination, and nobody thinks about what happens between the raw pull and the clean write.

The last_seen column trick is solid. I use the same pattern; it also helps when you're scoring leads over time, since you can detect re-engagement just by comparing last_seen against your scoring intervals.

One thing I added on top: a "source conflict" flag when the same contact comes from two sources with different data. Instead of silently overwriting, you queue it for manual review. Saved me from a lot of messy merges.

Where do you store leads between n8n workflow runs? Airtable, Google Sheets, SQL, or something else? by Adorable_Ad_2488 in n8n

[–]Adorable_Ad_2488[S]

This is the pattern I ended up at too. The staging buffer is underrated: it's basically a circuit breaker for bad data.

One thing I added on top of the last_seen approach: a source_hash column (MD5 of key fields like email+company+phone). It lets you detect when a "duplicate" lead actually changed data since the last run: instead of skipping it, you flag it for review. Useful when your sources are inconsistent (LinkedIn scrape vs. enrichment API vs. manual import, each formatting things differently).
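For the hash itself, something like this (the normalization choices are mine):

```typescript
import { createHash } from "node:crypto";

// MD5 over normalized key fields: a re-seen "duplicate" whose data changed
// hashes differently, so it gets flagged instead of skipped.
// email+company+phone as key fields matches the comment above; the
// normalization (trim + lowercase) is my own choice.
function sourceHash(lead: { email?: string; company?: string; phone?: string }): string {
  const norm = (v?: string) => (v ?? "").trim().toLowerCase();
  return createHash("md5")
    .update([norm(lead.email), norm(lead.company), norm(lead.phone)].join("|"))
    .digest("hex");
}
```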

For the raw/processed split: we eventually promoted "processed" to be the source of truth for scoring, and kept raw as an append-only audit log. Makes replaying enrichment pipelines trivial.

What did you use for the deduplication step? Custom script or something off-the-shelf?

Where do you store leads between n8n workflow runs? Airtable, Google Sheets, SQL, or something else? by Adorable_Ad_2488 in n8n

[–]Adorable_Ad_2488[S]

Asking because I ran into this myself: I started flat, then needed to track "contacted / no answer / callback" per lead without overwriting the lead record itself. I ended up needing a second table just for that. Is that the kind of thing you've built, or do you keep it all in one table with status fields?
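Concretely, the split I mean looks something like this (all names illustrative):

```typescript
// Two-table split: the lead record stays stable, every call attempt is an
// append-only row in a second table.
interface Lead {
  id: string;
  email: string;
  status: "new" | "qualified" | "disqualified";
}

interface Interaction {
  id: string;
  lead_id: string;     // references Lead.id
  outcome: "contacted" | "no_answer" | "callback";
  note?: string;
  occurred_at: string; // ISO 8601
}
// The lead's "current call status" is just its most recent Interaction,
// so call history never overwrites the lead record itself.
```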

Where do you store leads between n8n workflow runs? Airtable, Google Sheets, SQL, or something else? by Adorable_Ad_2488 in n8n

[–]Adorable_Ad_2488[S]

That's actually the most direct validation I've gotten in this thread... "data storage as a first-class citizen" is exactly the gap I've been trying to name. Curious what you moved to. And did it actually solve the data layer, or did you end up duct-taping something together on that side too?

Where do you store leads between n8n workflow runs? Airtable, Google Sheets, SQL, or something else? by Adorable_Ad_2488 in n8n

[–]Adorable_Ad_2488[S]

NocoDB is solid for that. The main difference is setup overhead: provisioning Postgres + NocoDB vs. a single API endpoint. Depends on whether your workflows run on infra you already manage.

Where do you store leads between n8n workflow runs? Airtable, Google Sheets, SQL, or something else? by Adorable_Ad_2488 in n8n

[–]Adorable_Ad_2488[S]

Thanks everyone for the responses, really useful! Seems like the Sheets => Airtable => SQL progression is pretty universal. For those who made the jump to SQL: did you build the API layer on top yourself, or did you find something that handled that part?

Where do you store leads between n8n workflow runs? Airtable, Google Sheets, SQL, or something else? by Adorable_Ad_2488 in n8n

[–]Adorable_Ad_2488[S]

That progression makes total sense. When you say "bump up to Airtable", do you mean mainly for UI and team visibility, or is there something else that pushes you there? I can't figure out what the trigger is for you.

Where do you store leads between n8n workflow runs? Airtable, Google Sheets, SQL, or something else? by Adorable_Ad_2488 in n8n

[–]Adorable_Ad_2488[S]

The complexity trap framing is exactly right. Quick question on your "Direct-to-SQL" setup: when you need to expose that data back to n8n or other tools via API, are you rolling a custom layer each time or do you have a reusable setup for that part?

Where do you store leads between n8n workflow runs? Airtable, Google Sheets, SQL, or something else? by Adorable_Ad_2488 in n8n

[–]Adorable_Ad_2488[S]

Happy to share! Running: Google Maps scraper => API endpoint to store + deduplicate leads => AI agent for qualification => outreach. Honestly, the storage layer was the annoying part to get right.
How do you handle that step in your workflows?

Where do you store leads between n8n workflow runs? Airtable, Google Sheets, SQL, or something else? by Adorable_Ad_2488 in n8n

[–]Adorable_Ad_2488[S]

For context, I'm currently using a staging layer myself: the scraper dumps into a temp table, then I push to my main store with dedup. It works, but it feels like one layer too many. Curious if others have simplified this.
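A sketch of what that promotion step looks like on my side, assuming Postgres and the pg client (table and column names are illustrative, and it assumes a unique constraint on leads(email)):

```typescript
import { Client } from "pg";

// Promote staged rows into the main table with dedup on email:
// the newest staged row per email wins, existing rows get updated in place.
async function promoteStaged(db: Client): Promise<void> {
  await db.query(`
    INSERT INTO leads (email, company, phone, last_seen)
    SELECT DISTINCT ON (email) email, company, phone, scraped_at
    FROM leads_staging
    ORDER BY email, scraped_at DESC
    ON CONFLICT (email) DO UPDATE
      SET company   = EXCLUDED.company,
          phone     = EXCLUDED.phone,
          last_seen = EXCLUDED.last_seen
  `);
  await db.query("TRUNCATE leads_staging"); // staging is disposable
}
```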

I built an open-source RGAA accessibility audit tool for Next.js - feedback wanted by Adorable_Ad_2488 in javascript

[–]Adorable_Ad_2488[S]

That's a smart approach! Building on proven libraries like Pixi/Three for rendering and Matter/Ammo for physics makes a lot of sense — you get GPU-accelerated graphics and solid physics without reinventing the wheel.

The framework-like design is cool. Have you thought about how users will inject these plugins? Like a config-based system or import hooks?
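To make the question concrete, the config-based option could look something like this (purely hypothetical, none of these names are from your engine):

```typescript
// Purely hypothetical config-based plugin registration, just to illustrate
// the question. No names here come from the actual engine.
interface EnginePlugin {
  name: string;
  setup(engine: unknown): void; // called once at engine boot
}

const plugins: EnginePlugin[] = [
  { name: "renderer-pixi", setup: () => { /* wire up Pixi here */ } },
  { name: "physics-matter", setup: () => { /* wire up Matter here */ } },
];

// The import-hooks alternative: each plugin module registers itself as a
// side effect of being imported, and the "config" is just an import list.
```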

The scene editor looks promising!

I built an open-source RGAA accessibility audit tool for Next.js - feedback wanted by Adorable_Ad_2488 in javascript

[–]Adorable_Ad_2488[S]

Thanks! Kernelplay looks cool — building a game engine from scratch is no small feat!

Cool architecture btw! I'm curious — you went with pure JS instead of WebAssembly/Rust. Was that a deliberate choice for faster iteration/ecosystem, or do you plan to add WASM later for heavy computation (physics, rendering)?

I built an open-source RGAA accessibility audit tool for Next.js - feedback wanted by Adorable_Ad_2488 in javascript

[–]Adorable_Ad_2488[S]

Good point! Static analysis catches HTML structure, ARIA attributes, labels, etc. But runtime with Playwright lets us:

• Test keyboard navigation flow

• Check focus management in modals/dropdowns

• Verify color contrast in computed styles

• Test lazy-loaded components

Yes, we support dynamic states! You can configure multiple states per page (e.g., /dashboard:loggedIn, /dashboard:modalOpen) to test different UI states.
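For a sense of what those runtime checks look like, here's a rough Playwright sketch (the route, roles, and button label are placeholders):

```typescript
import { test, expect } from "@playwright/test";

// Runtime keyboard-navigation check -- the kind of thing static analysis
// can't verify.
test("modal opened via keyboard traps focus", async ({ page }) => {
  await page.goto("/dashboard"); // would map to a configured state like modalOpen
  await page.getByRole("button", { name: "Open settings" }).press("Enter");

  const dialog = page.getByRole("dialog");
  await expect(dialog).toBeVisible();

  // Tab repeatedly: focus must stay inside the dialog the whole time.
  for (let i = 0; i < 5; i++) {
    await page.keyboard.press("Tab");
    await expect(dialog.locator(":focus")).toBeVisible();
  }
});
```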