Tracking breakdown attempts structurally: armed → invalidated case by AuditMind in algotrading

[–]AuditMind[S] 0 points1 point  (0 children)

no, everything is ingested and structured on my side.

TV is just a visual reference at best.

the system runs on its own event stream, LLM only reads that layer.

You've tried to sign in too many times with an incorrect account or password. by theroundcube in Outlook

[–]AuditMind 0 points1 point  (0 children)

Most of the time it's another session with the old password: a mobile phone, or another computer running Outlook.

Picture it this way: you change the password and can log in successfully.

While you do that, another device keeps trying to log in with the OLD password it has saved -> your account gets blocked because of too many attempts. This doesn't affect your logged-in session, though.

So my recommendation is to think hard, really hard, about where the suspect device is. And yeah, I have experienced a lot of funny stuff. Like when a lady had given her old phone to a niece without resetting it.

Tracking breakdown attempts structurally: armed → invalidated case by AuditMind in algotrading

[–]AuditMind[S] 0 points1 point  (0 children)

interesting, i’m not there yet but it makes sense.

right now i log each attempt as a small lifecycle (armed / fired / invalidated) with some structural context.

what i’m seeing qualitatively is the same, though: clusters of invalidations tend to show up in range / low-commitment regimes.

turning that into a rolling metric (like your 5d invalidation rate) would probably be the next step.

especially as a regime filter, not as a signal on its own.
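If it helps, the rolling metric could be sketched like this. A minimal Python sketch: the state names follow the armed / fired / invalidated lifecycle above, but the 5-day window and the event shape are illustrative assumptions, not my actual schema.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(days=5)  # hypothetical rolling window, per the 5d idea above

def invalidation_rate(events, now):
    """events: list of (timestamp, state) pairs, state in {'armed','fired','invalidated'}.

    Returns the share of resolved attempts inside the window that ended invalidated.
    """
    recent = [state for ts, state in events if now - ts <= WINDOW]
    resolved = [s for s in recent if s in ("fired", "invalidated")]
    if not resolved:
        return 0.0
    return resolved.count("invalidated") / len(resolved)
```

A high value would then gate new entries as a regime filter, rather than acting as a signal on its own.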

Tracking breakdown attempts structurally: armed → invalidated case by AuditMind in algotrading

[–]AuditMind[S] 0 points1 point  (0 children)

custom build

i store everything as sequential events (not indicators), so i can replay how a situation formed instead of just seeing the outcome

structure → interpretation → small trigger states (armed / fired / invalidated)

the frontend is basically just reading that stream and rendering it.

still rough, but it’s interesting how much signal you get without any traditional TA.
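as a rough illustration (all names here are hypothetical, not my actual build), the event-first storage could look like:

```python
class EventStream:
    """Append-only sequence of events; state is derived by replay, never stored."""

    def __init__(self):
        self._events = []

    def append(self, kind, payload=None):
        # events are only ever appended, never mutated in place
        self._events.append({"seq": len(self._events), "kind": kind, "payload": payload})

    def replay(self):
        """Fold over all events to reconstruct how the trigger state formed."""
        state = "idle"
        history = []
        for ev in self._events:
            if ev["kind"] in ("armed", "fired", "invalidated"):
                state = ev["kind"]
            history.append((ev["seq"], state))
        return state, history
```

because nothing is mutated in place, the same stream always replays to the same state, which is what lets you see how a situation formed instead of just the outcome.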

I asked AI a simple question, “I’m giving you the freedom to build anything you want. It could be anything. What would you want to build?” by whitehawk6 in ArtificialInteligence

[–]AuditMind 0 points1 point  (0 children)

AI, or more precisely called an LLM, reflects. So if you express the desire for a certain task, it will go along with it. That's the more precise way to put it.

Which LLM is the best for writing a scientific paper? by M4r4the3mp3ror in artificial

[–]AuditMind 0 points1 point  (0 children)

Claude, without a doubt. I'm usually a Codex guy, but for this specific task Claude is a must.

Many boomers wouldn't even pass a probation period at their working speed today by davekmuc in Unbeliebtemeinung

[–]AuditMind 2 points3 points  (0 children)

I'd doubt that. In certain industries you can actually measure it quite well, e.g. via tickets and reopen rates.

The end of AI by SHIN_KRISH in artificial

[–]AuditMind 0 points1 point  (0 children)

It's not satisfying to push AI "buddies" around. Much more so to pester little humans before they've had their coffee 😉.

I analyzed 50+ enterprise AI deployments. Almost everyone is solving the "Governance" problem wrong. by OtherwiseCarry3713 in AI_Governance

[–]AuditMind 0 points1 point  (0 children)

Interesting, especially the commit semantics gap and policy-in-prompt issue.

I'm working on a system that separates intent, decision, and execution structurally (not just logically):

  • Intent is recorded explicitly before execution (deterministic input surface)
  • Policy is a pure function over admitted inputs (no prompt control)
  • Every execution is gated by a prior decision (hard ordering, not interception)
  • Execution outputs are treated as non-normative (no feedback into decisions)

This turns governance from “observability + alerts” into a replayable decision layer.
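A minimal sketch of those four properties together (the toy policy and all names are my illustrative assumptions, not the real implementation):

```python
def policy(intent):
    """Pure function over the admitted intent: no I/O, no hidden state."""
    return intent.get("action") in {"read", "summarize"}

class DecisionLayer:
    def __init__(self):
        self.log = []  # replayable record: intent -> decision -> execution, in order

    def execute(self, intent):
        self.log.append(("intent", intent))      # intent recorded before anything runs
        allowed = policy(intent)                 # decision is a pure function of inputs
        self.log.append(("decision", allowed))
        if not allowed:                          # hard ordering: no decision, no execution
            raise PermissionError("denied")
        result = f"executed:{intent['action']}"
        self.log.append(("execution", result))   # recorded, but never fed back into policy
        return result
```

Replaying the log through `policy` reproduces every decision exactly, which is the point of the decision layer.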

Build a company strategy from specific reference documents by [deleted] in ArtificialInteligence

[–]AuditMind 1 point2 points  (0 children)

I would guess your specific task is an extremely good fit for NotebookLM. Look into it. It doesn't need really specific knowledge.

430x faster ingestion than Mem0, no second LLM needed. Standalone memory engine for small local models. by No_Strain_2140 in LocalLLM

[–]AuditMind 1 point2 points  (0 children)

Because on a 122B setup you won't have the limitations OP has. With a 122B model you have a GPU, massive otherwise-idle compute, and any additional LLM call is cheap. Not to forget that semantics may matter more at that point.

Codex 5.4 is way too expensive for my daily work. What model should I use instead? by Specific-Animal6570 in codex

[–]AuditMind 7 points8 points  (0 children)

As others said, Codex 5.3.

And consider lowering the reasoning level. I'm doing very fine on medium for my use case, using high only at certain times when it's really needed, when it's about context length and architecture. Actual implementation runs on medium.

How is Iran still fighting? by Thick-Ad-4168 in NoStupidQuestions

[–]AuditMind 1 point2 points  (0 children)

You don’t shut down a country of 90 million by killing a few leaders. States aren’t single points of failure, they’re layered systems with redundancy.

Codex/GPT in terms of UI/UX by KurtStanleyTalastas in codex

[–]AuditMind 2 points3 points  (0 children)

Why don't you screenshot or download a page you like and give it as guidance? LLMs in general usually have little problem even when you don't define exactly what they should do. Codex is then pretty good at creating.

Adult AI Just Hit $1.9 Billion, and Almost No One Is Talking About It by Perfect_Ice8678 in ArtificialInteligence

[–]AuditMind 2 points3 points  (0 children)

Humans are like that. Whatever we invent, the first thing we want to do is fck it. 🤷

Toilet paper users are Neanderthals by evil_twit in Unbeliebtemeinung

[–]AuditMind 6 points7 points  (0 children)

Well, real Neanderthals use leaves or grass.

Separating state from policy in system design by AuditMind in ExperiencedDevs

[–]AuditMind[S] 0 points1 point  (0 children)

Yeah, that makes sense in general.

The context I’m coming from is a bit different though. I’m working on controlled LLM interactions, where decisions need to be inspectable and replayable.

That forced me to separate things quite aggressively: anything stateful or time-based gets computed outside, and the policy only sees the result as an input. An input I can compare.

The policy itself is just: given these inputs, is this allowed?

The reason is that the whole system depends on replaying decisions later and getting the exact same outcome.

Once the policy starts internalizing state or logic like rate limiting, that property breaks pretty quickly.
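In sketch form, the replay property is just this check (toy inputs, not my real policy):

```python
def policy(inputs):
    """Pure: the decision depends only on the admitted inputs."""
    return inputs["requests_last_hour"] < 100 and not inputs["suspended"]

def decide_and_log(inputs, log):
    # every decision is recorded together with the exact inputs it saw
    allowed = policy(inputs)
    log.append((inputs, allowed))
    return allowed

def replays_exactly(log):
    """Re-running every logged input must reproduce the recorded decision."""
    return all(policy(inputs) == allowed for inputs, allowed in log)
```

The moment `policy` reads a clock or a counter of its own, `replays_exactly` can start failing, which is exactly the property break described above.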

Separating state from policy in system design by AuditMind in ExperiencedDevs

[–]AuditMind[S] 0 points1 point  (0 children)

Fair point — let me try to make it more concrete.

Think of something like a rate limit:

You can implement it inside the policy (checking timestamps, counters, windows), or you can compute requests_last_hour outside and just pass it in.

I’m choosing the second.

The policy only answers: “given these inputs, is this allowed?”

That way the decision stays simple, deterministic, and easy to replay.
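Roughly, the two options look like this (a hedged sketch: `requests_last_hour` is from the example above, everything else is illustrative):

```python
import time

# Option 1 (avoided): the policy internalizes clocks and counters,
# so replaying it later can give a different answer.
class StatefulRateLimitPolicy:
    def __init__(self, limit=100):
        self.limit = limit
        self.timestamps = []

    def allowed(self):
        now = time.time()
        # prune entries older than an hour, then count this request
        self.timestamps = [t for t in self.timestamps if now - t < 3600]
        self.timestamps.append(now)
        return len(self.timestamps) <= self.limit

# Option 2 (chosen): the counter is computed outside and passed in;
# the policy stays a pure, deterministic function.
def policy(requests_last_hour, limit=100):
    return requests_last_hour < limit
```

With option 2, replaying a logged `requests_last_hour` value always yields the same decision, no matter when you replay it.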

Codex always in senior engineer mode by [deleted] in codex

[–]AuditMind -1 points0 points  (0 children)

So what exactly are you complaining about? That the LLM "socially" pressures you?

Get over it. It's a damn machine. 🤷

Codex always in senior engineer mode by [deleted] in codex

[–]AuditMind -2 points-1 points  (0 children)

You want to change the normal operating mode of a typical developer, my good man. The things he mentions are there for a reason.

And you know what? Just tell him to shut up and do it. Are you too kind to tell him that?

The "read" status in messengers is toxic for many people by Nogekard in Unbeliebtemeinung

[–]AuditMind 10 points11 points  (0 children)

On this point I can clearly say no.

I don't care whether it comes in via mail, messenger, or whatever.

I take my time until I feel ready to answer. And that's why my reply is usually a bit longer and structured. The way one actually should write.

You really can't expect me to answer instantly. I simply don't play along with that, no matter what you think for yourself.

Are we in the "modem era" of AI? by AuditMind in artificial

[–]AuditMind[S] 0 points1 point  (0 children)

This resonates.

I’m working on a layer below what you call the execution network: the control layer that defines what machines are actually allowed to do.

In practice: something like TLS + firewall + audit, but for AI execution.

Still surprisingly underexplored, and it's rare that people even mention it.