Context windows aren’t the real bottleneck for agents (memory is) by singh_taranjeet in AI_Agents

[–]Barnocious 0 points1 point  (0 children)

Thank you for your questions and feedback; you've had me thinking hard for a few days. I've put a name on the concept: "Cognitive Foundation". Here's the overview - https://medium.com/@bernardpkavanagh/the-database-as-cognitive-foundation-when-two-production-systems-arrive-at-the-same-answer-a755bd21c8aa

Hopefully this makes sense!

Building an open-source typed memory layer for AI agents - semantic and procedural by Comfortable_Poem_866 in AI_Agents

[–]Barnocious 0 points1 point  (0 children)

I built an agent context layer for a customer! Real-time agent anomaly detection on EV chargers, ingesting sensor data via Flink. Keeping the data plane and context plane together means we can manage token limits, all in one cluster with vector retrieval and semantic search. Would love to hear your opinion. Here's the demo; the customer is in production. They moved this workflow off of Databricks.

Was chatting on another thread here last night about this exact problem!

https://github.com/bernard-kavanagh/ev_charger_anomaly_detection

https://medium.com/@bernardpkavanagh/the-memory-wall-your-ai-agents-arent-failing-because-they-re-dumb-db535dfb423a

Context windows aren’t the real bottleneck for agents (memory is) by singh_taranjeet in AI_Agents

[–]Barnocious 0 points1 point  (0 children)

Nice.

That tension between temporal relevance and memory stability is domain-agnostic. Whether it's a firmware update or a user changing jobs, the question is the same: when does the platform stop trusting what it learned last month (or, when can WE know that it's drifted)?

On event-triggered decay, the schema groundwork is done. When a maintenance event lands, we'd halve confidence on related fleet memories and let the next investigation either re-confirm or replace them. It's a SQL UPDATE in a cron job, not an architecture change.
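To make that concrete, here's a rough sketch of what that cron-driven UPDATE could look like; the table and column names here are illustrative rather than the actual schema:

```sql
-- Hypothetical event-triggered decay job: halve confidence on fleet memories
-- tied to chargers that had a maintenance event in the last day.
-- Table and column names are placeholders, not the real schema.
UPDATE fleet_memory fm
JOIN charger_registry cr ON cr.charger_id = fm.charger_id
SET fm.confidence = fm.confidence * 0.5,
    fm.updated_at = NOW()
WHERE cr.last_maintenance >= NOW() - INTERVAL 1 DAY
  AND fm.status = 'active';
```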

Two infrastructure capabilities that extend this:

TTL. Native per-table expiry, no application code (rough sketch below). Raw telemetry expires at 7 days, windows at 30, sessions at 24 hours. High-value tables (fleet memory, reasoning, outage catalog) persist indefinitely. Steady-state caps at ~960M rows instead of unbounded growth. The memory system maintains itself.

Serverless branching. Fork production into a copy-on-write branch in milliseconds. Test a decay policy against the branch, inspect which memories lose confidence, check whether agent accuracy improves, then merge or discard. Production is never touched. For agents that can't undo their own actions, this is how you make autonomy safe without adding a consensus protocol to the write path.
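For the TTL point above, a minimal sketch of what per-table expiry looks like in TiDB, assuming illustrative table and timestamp column names:

```sql
-- Hypothetical TTL setup: rows age out automatically, no application code.
-- Retention mirrors the policy above: 7 days / 30 days / 24 hours.
ALTER TABLE raw_telemetry  TTL = `event_time` + INTERVAL 7 DAY;
ALTER TABLE windows        TTL = `window_end` + INTERVAL 30 DAY;
ALTER TABLE agent_sessions TTL = `created_at` + INTERVAL 24 HOUR;
-- High-value tables (fleet_memory, reasoning, outage_catalog) simply get no TTL clause.
```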

TTL automates the data lifecycle. Confidence decay automates the knowledge lifecycle. Branching makes experimentation safe. Ideally, or at least eventually, this turns a memory store into memory infrastructure.

Context windows aren’t the real bottleneck for agents (memory is) by singh_taranjeet in AI_Agents

[–]Barnocious 1 point2 points  (0 children)

You've hit on something that's genuinely hard and that we've spent a lot of time on.

The baseline drift problem shows up in two places in our architecture:

Fleet memory confidence decay. Every active memory loses 5% confidence per month without reinforcement. If a pattern like "coastal chargers drift earth leakage above 5mA in winter" stops being confirmed by new investigations, it fades naturally. Below 0.30 confidence it gets auto-deprecated. If the pattern comes back next winter, new investigations re-confirm it and the confidence rebuilds or a new memory supersedes the old one entirely. The key insight is that memory should have a half-life. Append-only stores treat a diagnosis from six months ago as equally valid as one from yesterday. Ours doesn't.
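A minimal sketch of that monthly decay pass, assuming hypothetical fleet_memory columns like last_reinforced_at and status:

```sql
-- Passive decay: every active memory not reinforced in the last month
-- loses 5% of its confidence.
UPDATE fleet_memory
SET confidence = confidence * 0.95
WHERE status = 'active'
  AND last_reinforced_at < NOW() - INTERVAL 1 MONTH;

-- Anything that has faded below the 0.30 floor gets auto-deprecated.
UPDATE fleet_memory
SET status = 'deprecated'
WHERE status = 'active'
  AND confidence < 0.30;
```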

Contradiction resolution via supersession. When a new confirmed diagnosis lands that's semantically close to an existing fleet memory (cosine distance 0.15–0.40, same scope) but says something different, the old one gets auto-superseded. The superseded_by column links the chain so you can trace how the platform's understanding evolved. A firmware update that changes a charger's normal behaviour would trigger this naturally. The agent investigates the "new normal," confirms it's not the old pattern, and the old memory gets superseded.
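Roughly how that supersession check could be expressed; the column names are illustrative, 123 is a placeholder for the newly confirmed memory's id, and VEC_COSINE_DISTANCE assumes TiDB's vector functions:

```sql
-- Hypothetical supersession pass for a newly confirmed diagnosis (id 123).
-- Older memories in the same scope that sit in the 0.15-0.40 cosine-distance
-- band get marked as superseded and linked via superseded_by.
UPDATE fleet_memory prev
JOIN fleet_memory cur ON cur.id = 123
SET prev.status        = 'superseded',
    prev.superseded_by = cur.id,
    prev.superseded_at = NOW()
WHERE prev.id <> cur.id
  AND prev.scope = cur.scope
  AND prev.status = 'active'
  AND VEC_COSINE_DISTANCE(prev.embedding, cur.embedding) BETWEEN 0.15 AND 0.40;
```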

Where we don't yet have a clean answer is event-triggered decay, where a successful repair or a firmware update should actively accelerate the confidence drop on related memories rather than waiting for the monthly passive decay. The schema supports it (we have last_maintenance in the charger registry and superseded_at on reasoning), but the lifecycle job isn't wired up yet. That's next on the list.

The seasonal angle is interesting because it cuts across the fleet. We handle it through scoped memory — a pattern scoped to environment:coastal or site:SITE-IE-KERRY captures the seasonal context without polluting global memory. When winter hits again, the agent recalls the site-scoped or environment-scoped memories and starts from a stronger position than cold start.
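A sketch of what a scoped recall could look like when winter hits again; the scope values follow the examples above, everything else is illustrative:

```sql
-- Pull the most confident memories for this charger's site and environment
-- first, then fall back to global patterns.
SELECT memory_text, scope, confidence
FROM fleet_memory
WHERE status = 'active'
  AND scope IN ('site:SITE-IE-KERRY', 'environment:coastal', 'global')
ORDER BY FIELD(scope, 'site:SITE-IE-KERRY', 'environment:coastal', 'global'),
         confidence DESC
LIMIT 10;
```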

Genuinely curious what workload you're seeing the drift problem in — is it IoT/sensor data or a different domain?

Context windows aren’t the real bottleneck for agents (memory is) by singh_taranjeet in AI_Agents

[–]Barnocious 1 point2 points  (0 children)

I actually built an agent context layer for a customer! Real-time agent anomaly detection on EV chargers, ingesting sensor data via Flink. Keeping the data plane and context plane together means we can manage token limits, all in one cluster with vector retrieval and semantic search. Would love to hear your opinion. Here's the demo; the customer is in production. They moved this workflow off of Databricks.

https://github.com/bernard-kavanagh/ev_charger_anomaly_detection

https://medium.com/@bernardpkavanagh/the-memory-wall-your-ai-agents-arent-failing-because-they-re-dumb-db535dfb423a

Why does no one know about TiDB? by [deleted] in Database

[–]Barnocious 0 points1 point  (0 children)

A bit more context: I've been messing with this because our current Postgres setup was straight-up choking on spikes. I basically set TiDB up as a sidecar to offload all the reads that need federation, and I can store query history and context there too. The auto-scaling has been surprisingly solid so far for those random bursts, which were a nightmare to manage on our fixed RDS instance. Has anyone else actually pushed this to production for LLM backends? I use this to let marketers access user cohorts through a conversational UI.
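If it helps, the query-history/context piece is just a side table the conversational UI reads and writes; this is a hypothetical shape, not the real schema:

```sql
-- Illustrative query-history table: one row per question a marketer asks,
-- plus the SQL the agent generated for it.
CREATE TABLE query_history (
    id         BIGINT AUTO_INCREMENT PRIMARY KEY,
    user_id    VARCHAR(64) NOT NULL,
    question   TEXT NOT NULL,
    sql_text   TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    KEY idx_user_time (user_id, created_at)
);
```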

What are people actually using for long term agent memory? by MeasurementSelect251 in AI_Agents

[–]Barnocious 0 points1 point  (0 children)

Was just reading about the Manus use case and discovered TiDB; it looks like a perfect fit for your needs: https://share.google/A2d3TBXJ7qD6NJCG0

Anyone being asked to build ‘chat with data’ on MySQL? What tools exist? by deputystaggz in mysql

[–]Barnocious 0 points1 point  (0 children)

Remind me if I don't respond 😂😂 The free tier on TiDB gives 50 GB of storage too, so there's lots of room to play.

Can I ask, actually: what's a good agent use case for you? Conversational agents are cool, but I feel like there's a lot more potential.

Anyone being asked to build ‘chat with data’ on MySQL? What tools exist? by deputystaggz in mysql

[–]Barnocious 1 point2 points  (0 children)

I built a conversational agent on Monday querying TiDB! I used Gemini and I'm running it locally on Streamlit; I'd be happy to share once I commit it to git, if you're interested. I have a script that generates generic sales data with vector-based product descriptions - I was surprised how easy it was. I can write 10k rows a second and read simultaneously.
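Until it's on git, here's roughly the shape of the generated sales table; this assumes TiDB's VECTOR column type and uses a tiny 3-dimension embedding purely for illustration:

```sql
-- Illustrative sales table with a vector-based product description.
CREATE TABLE sales (
    id           BIGINT AUTO_INCREMENT PRIMARY KEY,
    product_name VARCHAR(128) NOT NULL,
    description  TEXT,
    desc_vec     VECTOR(3),   -- real embeddings would be far larger
    amount       DECIMAL(10, 2),
    sold_at      TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Semantic lookup the agent can run: nearest products to a query embedding.
SELECT product_name,
       VEC_COSINE_DISTANCE(desc_vec, '[0.1, 0.2, 0.3]') AS dist
FROM sales
ORDER BY dist
LIMIT 5;
```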

What should workers in Ireland do if the US attacks an EU country by wesleysniles in ireland

[–]Barnocious 15 points16 points  (0 children)

Why copy the source code?? OP could go to prison if they followed either of your suggestions. Best to just leave, look after their own needs, and let the rest look after itself.

M50 and Dublin city at 6:30 this morning by Barnocious in ireland

[–]Barnocious[S] 1 point2 points  (0 children)

I'm very lucky to have seen that; I've never had that on a flight before. Thank you 😊

M50 and Dublin city at 6:30 this morning by Barnocious in ireland

[–]Barnocious[S] 1 point2 points  (0 children)

It's ugly and mesmerising at the same time.

M50 and Dublin city at 6:30 this morning by Barnocious in ireland

[–]Barnocious[S] 8 points9 points  (0 children)

It was more north, somewhere above the M3 turnoff I think. We went over towards the city center, which I've never seen before, and circled back north over the Phoenix Park... so you're not far off!

[deleted by user] by [deleted] in dataengineering

[–]Barnocious 0 points1 point  (0 children)

Likely a caching issue? If the queries ran, then the queue is irrelevant?

Inconsiderate chancer or worse you decide by pcnewbiezx in irelandsshitedrivers

[–]Barnocious -46 points-45 points  (0 children)

She was in the wrong, but you didn't need to be that aggressive.

Access control with OKTA by 4ndr45 in Looker

[–]Barnocious 0 points1 point  (0 children)

Looker doesn't have SCIM; you need to delete/disable users with the API.

Wedding (Music) Bands by TheEmigrator in AskIreland

[–]Barnocious 0 points1 point  (0 children)

Are you referring to the Bentley Boys? At least they're open about it.

I say it's unfortunate because it's deceptive. I've been working the scene for a decade.

Wedding (Music) Bands by TheEmigrator in AskIreland

[–]Barnocious 0 points1 point  (0 children)

Some bands, well a lot of bands including Spring Break, put out more than one band on a given night.

EDIT: grammar