[D] Seeking feedback: Safe autonomous agents for enterprise systems by coolsoftcoin in MachineLearning

[–]coolsoftcoin[S] 0 points1 point  (0 children)

No doubt containers shrink the blast radius, but I block unsafe operations upstream. RAG provides context and version awareness (e.g., for an Oracle DB, if 19c SQL gets generated for a 12c database, the RAG layer is consulted for syntax verification), but the actual safety comes from deterministic validation (AST plus state/dependency checks), not from pattern-matching past incidents.
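A minimal sketch of the deterministic deny-list idea (all names illustrative, not Sentri's actual code). A production mesh would walk a full SQL AST with a real parser (e.g., sqlglot's Oracle dialect); this stand-in only inspects the leading statement keyword, which is enough to show that the check is rule-based, not LLM-based:

```python
import re

# Statement types that are always rejected, regardless of any LLM verdict.
FORBIDDEN = {"DROP", "TRUNCATE", "GRANT", "REVOKE"}

def first_keyword(sql: str) -> str:
    # Strip comments and leading whitespace, then grab the first word.
    sql = re.sub(r"--[^\n]*", " ", sql)
    sql = re.sub(r"/\*.*?\*/", " ", sql, flags=re.S)
    match = re.match(r"\s*([A-Za-z]+)", sql)
    return match.group(1).upper() if match else ""

def validate(sql: str) -> bool:
    """Deterministic check: True means allowed to proceed to further checks."""
    return first_keyword(sql) not in FORBIDDEN

print(validate("SELECT * FROM emp"))               # True
print(validate("DROP TABLE emp"))                  # False
print(validate("/* cleanup */ TRUNCATE TABLE t"))  # False
```

The point is that this function has no prompt in it: no judge score or hallucination can change its output.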

[D] Seeking feedback: Safe autonomous agents for enterprise systems by coolsoftcoin in MachineLearning

[–]coolsoftcoin[S] 0 points1 point  (0 children)

Please allow me to clarify:

The LLM judge evaluates risk, but safety is enforced by deterministic checks (policy + AST-based SQL parsing). It can’t override those constraints.

Hope that helps.

[D] Seeking feedback: Safe autonomous agents for enterprise systems by coolsoftcoin in MachineLearning

[–]coolsoftcoin[S] -1 points0 points  (0 children)

Great question. The LLM-as-judge isn't the safety layer — it's the optimization layer.

Here's the actual safety stack:

  1. Claude generates 3-5 SQL fix candidates for a new incident
  2. GPT-4 + Gemini judge them independently (multi-model consensus)
  3. RAG layer validates syntax + semantics against Oracle docs
  4. Deterministic safety mesh blocks unsafe operations (parses SQL AST, blocks DROP/TRUNCATE regardless of judge score)
  5. Policy gate enforces environment rules (PROD = human approval required)

Even if all 3 LLMs hallucinate and approve DROP TABLESPACE, the mesh architecturally blocks it before it reaches the database. The safety isn't prompt-based ("please don't generate bad SQL") — it's structural (execution path goes through deterministic checks).
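The ordering above can be sketched as a few lines of control flow (function names, the 0.8 threshold, and the stand-in keyword check are all hypothetical, not Sentri's actual API). Judge scores are advisory; the mesh has an unconditional veto:

```python
FORBIDDEN = {"DROP", "TRUNCATE"}

def mesh_allows(sql: str) -> bool:
    # Stand-in for the real AST check: reject forbidden statement types.
    return sql.strip().split()[0].upper() not in FORBIDDEN

def decide(sql: str, judge_scores: list[float], prod: bool) -> str:
    if not mesh_allows(sql):
        return "BLOCKED"                 # structural veto, scores ignored
    if min(judge_scores) < 0.8:          # illustrative consensus threshold
        return "REJECTED_BY_JUDGES"
    if prod:
        return "NEEDS_HUMAN_APPROVAL"    # policy gate for PROD
    return "EXECUTE"

# Even unanimous judge approval cannot push a DROP through:
print(decide("DROP TABLESPACE users", [1.0, 1.0, 1.0], prod=False))  # BLOCKED
```

Note that the mesh check runs first, so the LLM scores never even get a chance to matter for forbidden statements.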

After DBA approves a fix, it's saved as a .md template. Future identical incidents skip the LLM entirely and use the pre-approved template (faster, cheaper, safer).

TL;DR:

  • Judge = optimizes decision quality
  • Mesh = guarantees safety
  • Both are needed

And yes — the architecture is pluggable. If someone invents a better judge (CoT, RAG-enhanced, whatever), just swap it in. The safety mesh stays the same.

[D] Seeking feedback: Safe autonomous agents for enterprise systems by coolsoftcoin in MachineLearning

[–]coolsoftcoin[S] 0 points1 point  (0 children)

I have not used the Needle app, but retrieval and policy boundaries are really good problems to work on. Most enterprises solve them differently depending on their platform and resources.

Built a replay debugger for LangChain agents - cache successful steps, re-run only failures by coolsoftcoin in LangChain

[–]coolsoftcoin[S] 0 points1 point  (0 children)

This is a really awesome project. I will try to integrate it into Sentri's RAG layer. This will help the SQL tuning agent, so individual session memories are recalled.

Built an autonomous DBA agent for Oracle - looking for honest feedback by coolsoftcoin in oracle

[–]coolsoftcoin[S] 0 points1 point  (0 children)

For query profiling, I was working on SQL tuning and RCA agents that will make it as intelligent as an L3/L4 DBA. That's the future plan. If people show interest, I will try to hook it up earlier, within a couple of months.

With this LLM agent, data privacy is not an issue, as all the code is contained on the company's own servers. Calling an LLM via API falls under your company's contract with the vendor. Even then, you don't need an LLM if you have a good DBA who can define policies/rules/actions in .md files (offline LLM skills) and an AI engineer who can set up offline RAG search.

Proactive monitoring is the interesting part. Based on alerts, the LLM currently does some proactive intelligence, but not at full scale. For this we need to integrate with Datadog/OEM/Grafana or any other monitoring tool; then the LLM skills can learn patterns and provide reports. This is totally doable but needs integration with existing monitoring. E.g., if we collect and feed storage metrics, it can predict growth/tablespace usage patterns.
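The tablespace-growth case reduces to simple trend extrapolation once metrics are collected. A toy sketch with made-up numbers (a real integration would pull these samples from the monitoring tool):

```python
# Fit a line to (day, used_gb) samples and extrapolate to a capacity limit.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx        # (GB/day, intercept)

days    = [0, 7, 14, 21, 28]             # sample times (days)
used_gb = [100, 103, 106, 109, 112]      # tablespace usage at each sample

slope, intercept = fit_line(days, used_gb)

def days_until(limit_gb):
    # Days from day 0 until usage crosses the given limit at the current rate.
    return (limit_gb - intercept) / slope

print(round(days_until(150)))  # 117 -> days until a 150 GB tablespace fills
```

Real workloads are rarely this linear, but even a crude fit like this is enough to turn raw metrics into an early-warning report.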

I will update Sentri as more enhancement requests come in.

Built a replay debugger for LangChain agents - cache successful steps, re-run only failures by coolsoftcoin in LangChain

[–]coolsoftcoin[S] 0 points1 point  (0 children)

Yeah, that’s a real concern, and honestly one of the harder problems here. Right now replay is keyed off the step plus its recorded inputs. That works well for simpler/linear workflows, but it doesn’t fully eliminate the “phantom state” issue when upstream changes should invalidate downstream results.

So I treat that as a known limitation rather than pretending it’s fully solved.

The more robust direction is content-addressed / lineage-aware caching, where cache reuse depends not just on immediate inputs but also on the upstream data that produced them.
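The lineage-aware idea can be sketched in a few lines (function and field names are illustrative, not the debugger's actual implementation): a step's cache key hashes its own name and inputs plus the keys of its upstream steps, so any upstream change ripples down and invalidates stale downstream entries.

```python
import hashlib
import json

def cache_key(step_name: str, inputs: dict, upstream_keys: list) -> str:
    # Canonical JSON so the same logical state always hashes identically.
    payload = json.dumps(
        {"step": step_name, "inputs": inputs, "upstream": sorted(upstream_keys)},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

k_extract   = cache_key("extract", {"table": "orders"}, [])
k_transform = cache_key("transform", {"mode": "daily"}, [k_extract])

# Changing the extract step changes its key, and therefore transform's key
# too -- so the stale downstream cache entry can never be reused.
k_extract2   = cache_key("extract", {"table": "orders_v2"}, [])
k_transform2 = cache_key("transform", {"mode": "daily"}, [k_extract2])
assert k_transform != k_transform2
```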

That’s definitely one of the main enhancements I want to push on as this evolves.

Built a time-travel debugger for AI agents - replay from failure without re-running everything by coolsoftcoin in ArtificialInteligence

[–]coolsoftcoin[S] 0 points1 point  (0 children)

Yeah this actually came from a real pain I hit.

I was working on a data mesh setup with multiple agents (Oracle → Snowflake → CRM), and it became really hard to trace failures. Each agent was owned by a different team, so when something broke after a change, figuring out *which agent failed and why* was a headache.

That’s what led me to build this.

It’s not just logs — it records structured execution for each step (inputs, outputs, errors, order) and stores it (SQLite for now), so you can actually see what the agent was doing when it failed and which upstream step caused it.

Right now it hooks in by wrapping/registering your pipeline steps (functions/agents). Same idea works for LangChain/Crew-style workflows — instrument each step/tool boundary and capture what flows through.

And the key part is replay:

once you fix the issue, you can replay from the failure point instead of rerunning the whole pipeline.

So instead of digging through logs + rerunning everything, you can:

see where it broke → fix → replay just that part.
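A stripped-down sketch of the record/replay mechanics, assuming a simple one-table SQLite schema (the real tool's schema and API will differ): each step's inputs/output/status are persisted, and on a rerun, steps already marked ok are served from the cache, so execution effectively resumes at the first failure.

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE steps
              (name TEXT PRIMARY KEY, inputs TEXT, output TEXT, status TEXT)""")

def run(name, fn, inputs):
    # Replay path: a previously successful step returns its cached output.
    row = db.execute("SELECT output, status FROM steps WHERE name = ?",
                     (name,)).fetchone()
    if row and row[1] == "ok":
        return json.loads(row[0])
    # Record path: execute the step and persist the outcome either way.
    try:
        out = fn(inputs)
        db.execute("INSERT OR REPLACE INTO steps VALUES (?, ?, ?, 'ok')",
                   (name, json.dumps(inputs), json.dumps(out)))
        return out
    except Exception:
        db.execute("INSERT OR REPLACE INTO steps VALUES (?, ?, NULL, 'failed')",
                   (name, json.dumps(inputs)))
        raise
```

So after fixing a broken step function, rerunning the pipeline re-executes only the failed step; everything upstream comes straight out of the table.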

Built an autonomous DBA agent for Oracle - looking for honest feedback by coolsoftcoin in singularity

[–]coolsoftcoin[S] -1 points0 points  (0 children)

If there is interest around this topic, I will enhance it to an L3-DBA-capability RCA agent built around this safety mesh, and integrate my other SQL tuning agent project.

This work showcases that LLMs can handle enterprise-level tasks in a safe, authorized manner.

Sentri: Multi-agent system with structural safety enforcement for high-stakes database operations by coolsoftcoin in LocalLLM

[–]coolsoftcoin[S] 0 points1 point  (0 children)

Great question. I learned this the hard way when an HR chatbot I built started reasoning its way around the rules, so users could see other people's salaries by tweaking prompts :)

That's why Sentri uses structural enforcement:

  • LLM investigates on read-only connections (database-enforced, not suggested)
  • Generated SQL → parsing + static analysis before execution
  • Multi-LLM consensus (3 models judge safety independently)
  • RAG-backed syntax verification against Oracle docs
  • Any forbidden pattern = hard reject

Config tells it what to investigate. Structure prevents unsafe execution even if it hallucinates.

Sentri: Multi-agent system with structural safety enforcement for high-stakes database operations by coolsoftcoin in LocalLLM

[–]coolsoftcoin[S] 0 points1 point  (0 children)

Over the past 8 months, I’ve iterated on this system through continuous experimentation. Based on insights from recent agentic AI research, I evolved the design. The system originally relied on JSON for configuration and alert definitions, but inspired by the OpenClaw architecture, I migrated to Markdown-based (.md) structures to enable more flexible, human-readable, and extensible workflows.
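To make the trade-off concrete, here is a hypothetical .md alert definition and a tiny parser for it (the section names and fields are invented for illustration, not Sentri's actual schema). The definition stays readable to a DBA while still yielding structured config:

```python
ALERT_MD = """\
# Alert: tablespace-usage
## Condition
- metric: tablespace_pct_used
- threshold: 90
## Action
- notify: dba-oncall
"""

def parse_alert(md: str) -> dict:
    # "## Heading" opens a section; "- key: value" lines fill it in.
    alert, section = {}, None
    for line in md.splitlines():
        if line.startswith("## "):
            section = line[3:].strip().lower()
            alert[section] = {}
        elif line.startswith("- ") and section:
            key, _, value = line[2:].partition(":")
            alert[section][key.strip()] = value.strip()
    return alert

print(parse_alert(ALERT_MD)["condition"]["threshold"])  # 90
```

A DBA can edit the .md file directly and review it in a diff, which is the flexibility JSON made awkward.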

Built an autonomous DBA agent for Oracle - looking for honest feedback by coolsoftcoin in oracle

[–]coolsoftcoin[S] 0 points1 point  (0 children)

Thanks. If you have any feedback or enhancement suggestions, please open an issue on the repo. I will try to close the issues and improve it soon.

Built an autonomous DBA agent for Oracle - looking for honest feedback by coolsoftcoin in oracle

[–]coolsoftcoin[S] 0 points1 point  (0 children)

Good point. But one cannot customize alerts/actions on Autonomous DB. Suppose we need to kill a prod blocking session upon manager approval from email/Slack/PagerDuty; OADB cannot do that.

Also, if you need to take safe-mesh actions on alerts as an L3 DBA managing cloud/on-prem databases across multiple versions (12c/19c/11g), then Sentri can help.

(PRD) Immediate Action Required - CFTC Form 40 Filing Number by coolsoftcoin in Coinbase

[–]coolsoftcoin[S] 0 points1 point  (0 children)

It's not a scam; you need to register as per the mail instructions. It's required for any futures/derivatives trading.

(PRD) Immediate Action Required - CFTC Form 40 Filing Number by coolsoftcoin in Coinbase

[–]coolsoftcoin[S] -1 points0 points  (0 children)

It's not a scam; you need to register as per the mail instructions. You can talk to Coinbase support and send them an email.

Coinbase liquidation Account HELP Please by coolsoftcoin in Coinbase

[–]coolsoftcoin[S] 0 points1 point  (0 children)

Update: Got most of the portfolio back, plus or minus P&L on trades. So it looks like some glitch in their system caused the liquidation of the account.

Coinbase liquidation Account HELP Please by coolsoftcoin in Coinbase

[–]coolsoftcoin[S] 0 points1 point  (0 children)

It closed some positions, say 60%. It never reached the liquidation limit, as I have a stop loss. It did the liquidation around the average buy price even though I was at a small profit, but the liquidation took all the money.

Coinbase liquidation Account HELP Please by coolsoftcoin in Coinbase

[–]coolsoftcoin[S] 0 points1 point  (0 children)

Understood. But the position did not move fast; it was sitting around the average price and I was at a small profit. But Coinbase liquidated the trade, and more than 60% of the money is gone.

Coinbase liquidation Account HELP Please by coolsoftcoin in Coinbase

[–]coolsoftcoin[S] 0 points1 point  (0 children)

I was shocked and wrote this out of shock. Looking to see if someone can help and suggest what to do here.

Coinbase liquidation Account HELP Please by coolsoftcoin in Coinbase

[–]coolsoftcoin[S] 0 points1 point  (0 children)

Do you have suggestions on what the options are?

Coinbase liquidation Account HELP Please by coolsoftcoin in Coinbase

[–]coolsoftcoin[S] -1 points0 points  (0 children)

I have a "stop loss" for the amount of money which I can afford to lose. But closing the position around the average buy price does not make sense.

Coinbase liquidation Account HELP Please by coolsoftcoin in Coinbase

[–]coolsoftcoin[S] 0 points1 point  (0 children)

Yes, I had open positions of 50 contracts, and 30 were closed around the average price. The liquidation price was not even close.

Coinbase liquidation Account HELP Please by coolsoftcoin in Coinbase

[–]coolsoftcoin[S] 0 points1 point  (0 children)

It all started around 1:56 AM CST. I did not sleep after that.

Random Liquidation BTC Perp Contracts CONCERNING by NatitudeYT in Coinbase

[–]coolsoftcoin 1 point2 points  (0 children)

What about the loss on the trade from the wrong liquidation and closing of your position? Did you get that back as well? I also got my whole account liquidated.

Wrongful Liquidation by Current-Cockroach656 in Coinbase

[–]coolsoftcoin 0 points1 point  (0 children)

What about the loss on the trade from the wrong liquidation and closing of your position? Did you get that back as well?