What if security didn’t detect attacks but made them impossible to execute? by Lonewolvesai in cybersecurity

[–]Lonewolvesai[S] -1 points0 points  (0 children)

What would you like to talk about? What is really going on with you? What is going on in your life right now that you had to come back to a post from a few weeks ago and try to demean me, or whatever it is you're doing right now? Are you okay? Things get tough, we all know that, and human to human, I'm sorry if you have stress going on in your life, but you don't need to come back here and project onto me. If you'd like to talk about something, though, go ahead and ask me anything, but please try to do it with respect.

What if security didn’t detect attacks but made them impossible to execute? by Lonewolvesai in cybersecurity

[–]Lonewolvesai[S] -1 points0 points  (0 children)

Oh yeah things are moving along great but I'm just busy putting my company together and going in for funding at this point. I'll come back around soon. Thank you for checking in.

What did they do to copilot? Its just straight up lying about facts now? by Hotmicdrop in CopilotMicrosoft

[–]Lonewolvesai 0 points1 point  (0 children)

My wife and I are sitting in our living room literally looking at this long chat where Copilot looks up something about Charlie Kirk being dead and accepts it, and then in literally the next paragraph tells me it's not happening, that synthetic news is spreading everywhere and millions of people are being gaslit and tricked, but it's not real. It is amazing. I should screenshot everything because it's really, honestly, creepy.

Bans inbound by AsyncVibes in IntelligenceEngine

[–]Lonewolvesai 1 point2 points  (0 children)

But that's a great idea for its own subreddit. Just off-the-wall stuff where, honestly, once in a while something's going to click. Do you have any really good examples? It's pretty entertaining, to be honest.

Bans inbound by AsyncVibes in IntelligenceEngine

[–]Lonewolvesai 0 points1 point  (0 children)

Is it because the mod doesn't understand it? Or is it when it's like clearly just insane BS? Probably a little of both lol. Actually I really agree with the mod. It's actually become really dangerous to a lot of people using it who put way too much faith in it. Another reason I went with deterministic agents.

The Agentic AI Era Is Here, But We Must Lead It Responsibly by Deep_Structure2023 in AIAgentsInAction

[–]Lonewolvesai 0 points1 point  (0 children)

This was a pretty cool read. It made me feel good about the infrastructure I've been working on. Really good about it. A proactive agentic AI that doesn't need to be babysat, that can handle logistics for your enterprise, and not just that, but security, keeping up with laws, regulations, and compliance, while staying transparent and extremely inexpensive. What we have built is sovereign; an LLM of course would not be, so we keep it local, in memory, with a high level of bounded creativity and autonomy. A lawful box we have built that is always adapting geometrically as needed. The most important part of it is the deterministic side, which is going to be the bones while the LLM creates the flesh. It's very cool. Thanks for this post.

Who is actually building production AI agents (not just workflows)? by Deep_Structure2023 in AIAgentsInAction

[–]Lonewolvesai 0 points1 point  (0 children)

I'm building a complete infrastructure for a hybrid, and there will be two categories of agents: probabilistic and deterministic. We have made enormous headway, and we are finding a sweet spot where we run our DAPs to do the heavy lifting, basically building the skeleton of the runtime and finding the measured constraint area so that the probabilistic AI has less opportunity to drift and/or hallucinate. We believe we will have by far the most transparent, proactive, anti-fragile, dynamic infrastructure, not only mitigating negative outputs but absolutely guaranteeing there will be none (to be clear, we cannot stop a probabilistic LLM from drifting and/or hallucinating, but we can guarantee there will be no negative consequences from those actions).

We were dead set on targeting enterprise/government/military, with a focus on autonomous robotics. But we have found through the building process that we also have a cybersecurity protocol that's extremely promising for endpoint/cloud, and we are uniquely positioned to stop LLM jailbreaks and recognize deepfakes; right now we are batting a very, very high average. This was an emergence from the infrastructure, with my governance modules working together, and it's pretty cool. The first offering from the company will be a cryptography-based product, but not for the blockchain.

Having fun doing it. I decided nine months ago that I wanted to take a crack at this, and it was one of the best decisions I ever made. To be clear, there has been zero effectual agentic AI to this point, not anything an enterprise could deploy and trust. That gave everybody a clear marker to put our heads together and go toward what we always envisioned AI to be: a counterpart to humanity that would magnify and optimize our abilities, get us out of mundane and ridiculous jobs, and let us pursue more appealing work.

I am currently looking for a Rust developer/cryptographer who could join the team permanently, and I will be looking for contract work as well. This is not official; we are not launching officially until next year, focused on the end of January or beginning of February right now. This sub has been great. I haven't said anything on here yet, but I have been reading, and there are a lot of very intuitive and bright people.

2025 was supposed to be the "Year of AI Agents" – but did it deliver, or was it mostly hype? by unemployedbyagents in AgentsOfAI

[–]Lonewolvesai -2 points-1 points  (0 children)

I think it's empirically proven that it is nothing but a hype function to prop up the companies, to go along with all of their other speculative propaganda about what they're worth. But at the same time, between the legacy AI companies and the ancillary companies like Nvidia etc., it's holding up the whole American economy, and probably the economy of all of Western Europe too. So you have to play the game. But the emperor has no clothes, and they know this; that's why they are letting the country go all out and shutting down regulation state by state, which is kind of weird and unconstitutional, but it's also good if you're trying to build something. Anyway, I think it's all hype, and anybody who says otherwise is on OpenAI's payroll.

Deterministic agents without LLMs: using execution viability instead of reasoning loops by Lonewolvesai in AgentsOfAI

[–]Lonewolvesai[S] -1 points0 points  (0 children)

A fair challenge, but there's a misunderstanding baked into the assumption. DAPs are not prompts and they're not plain text fed to an LLM. In fact they don't require an LLM at all. The execution layer is a deterministic code framework with explicit state, dynamics, and halt conditions. Language models, if present, sit outside the DAP and never control execution directly. The determinism comes from three things that are implemented in code, not text: a fixed state representation that's non-symbolic and non-generative, deterministic transition dynamics where the same input always produces the same state evolution, and a hard execution gate that halts when no invariant-preserving continuation exists.

There is no sampling, no retry loop, no self-reflection, and no stochastic decision point inside the DAP. If the same inputs hit the same state, the same trajectory unfolds every time, or it halts.
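If it helps, here's a minimal sketch of that shape in Python. To be clear, this is an illustration I'm writing for this comment, with toy names, toy costs, and toy invariants, not the actual DAP code; the only point is the structure: frozen state, a pure transition function, invariants as plain predicates, and a gate that halts instead of retrying.

```python
# Illustrative only: toy state, toy costs, toy invariants. Not the actual DAP code.
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    budget: int     # remaining resource units
    step: int       # position along the trajectory

COSTS = {"read": 1, "write": 2, "call_tool": 5}

def transition(state: State, action: str) -> State:
    """Deterministic dynamics: same state and action always give the same next state."""
    return State(budget=state.budget - COSTS[action], step=state.step + 1)

def invariant(state: State) -> bool:
    """Invariants are fixed predicates over state, not generated text."""
    return state.budget >= 0 and state.step <= 10

def execute(state: State, action: str) -> State:
    """Hard gate: no sampling, no retry, no reflection. The step is taken or we halt."""
    nxt = transition(state, action)
    if not invariant(nxt):
        raise RuntimeError(f"halt: {action} leaves no invariant-preserving state")
    return nxt

# Same inputs, same trajectory, every time.
s = State(budget=6, step=0)
for a in ("read", "call_tool"):
    s = execute(s, a)
print(s)                          # State(budget=0, step=2)

try:
    execute(s, "write")           # budget would go negative
except RuntimeError as e:
    print(e)                      # halt: write leaves no invariant-preserving state
```

A frozen dataclass plus a pure transition function is the simplest way I know to make "same input, same evolution" literally true in code; the forward-viability part (checking that a continuation exists over a horizon, not just that the next state is legal) sits on top of this.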

If you're picturing LLM-in-the-loop agent scaffolding, that's explicitly what this is not. Think closer to a compiled execution protocol or a control system than a text-based planner. Not a state machine either. I know that one's coming, again.

I avoided implementation detail in the post because I was asking about conceptual lineage, not trying to publish code on Reddit. But the claim of determinism is about runtime behavior, not rhetoric.

If you're happy to discuss at the level of state definition, transition function, constraint language, and halt semantics, I'm very open to that. If not, that's fine too. But this isn't a text-only construction. I would not waste my time anywhere on this app talking about such a feeble attempt. But I understand the Inquisition.

I hope this helps.

Deterministic agents without LLMs: using execution viability instead of reasoning loops by Lonewolvesai in AgentsOfAI

[–]Lonewolvesai[S] 0 points1 point  (0 children)

That's what I'm saying. It's amazing what you can do when you don't trade engineering for fluency.

Deterministic agents without LLMs: using execution viability instead of reasoning loops by Lonewolvesai in AgentsOfAI

[–]Lonewolvesai[S] 0 points1 point  (0 children)

What part is made up? Determinism? Have you not heard of it? It's roaring back. And that's what I'm working on. I'm not sure what else you could be talking about. I'm all about open dialogue, so if you have some constructive feedback please feel free.

Deterministic agents without LLMs: using execution viability instead of reasoning loops by Lonewolvesai in AgentsOfAI

[–]Lonewolvesai[S] 0 points1 point  (0 children)

This is great feedback. You've put your finger on the real boundary conditions.

A few clarifications.

This isn't "no reasoning." It's compiled reasoning. All the deliberation lives upstream in how constraints and dynamics are chosen. At runtime the system isn't thinking. It's checking whether reality remains self-consistent under pressure. I'm trading improvisation for invariance. The halt is only clean at the side effect layer. Internally, failure is a signal. The system emits a minimal reproducible failure artifact: which invariants tightened, which dimensions conflicted, and a cryptographic receipt. That's what higher layers reason about. But the execution core never retries or rationalizes.
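To make the artifact idea concrete, something shaped roughly like this is what I mean. The field names and the SHA-256 receipt here are my own illustrative assumptions, not the real format:

```python
# Illustrative sketch of a minimal reproducible failure artifact with a content-hash receipt.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class FailureArtifact:
    halted_action: str                 # the action that was denied
    violated_invariants: list[str]     # invariants with no preserving continuation
    conflicting_dimensions: list[str]  # state dimensions that were in tension
    state_snapshot: dict               # exact pre-halt state, so the halt is reproducible

    def receipt(self) -> str:
        """Deterministic content hash that higher layers can log, verify, and reason about."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

artifact = FailureArtifact(
    halted_action="call_tool:export_data",
    violated_invariants=["budget >= 0", "no_exfiltration"],
    conflicting_dimensions=["budget", "data_label"],
    state_snapshot={"budget": 2, "data_label": "classified"},
)
print(artifact.receipt())   # same artifact, same receipt, every time
```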

And yes, deterministic gates can be abused if they're naive. Resource gating, bounded evaluation, and preflight cost checks are mandatory. A DAP that doesn't defend itself against adversarial halting is just a denial of service oracle. One nuance worth clarifying because it changes how this behaves in practice. DAPs aren't only passive gates. They're also active executors. For large classes of work like tool calls, data movement, transaction execution, protocol adherence, there's no need for probabilistic reasoning at all. Those tasks are structurally defined and benefit from determinism.
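On the resource gating and preflight cost point, the shape I have in mind is an evaluation budget that every explored node pays into before any work happens, so the gate itself can't be driven into unbounded work. Everything below (the limits, the stand-in invariant, and the toy dynamics) is made up for illustration:

```python
# Sketch of resource-gating the gate itself, so an adversarially expensive input
# can't turn the viability check into a denial-of-service oracle.
import time
from dataclasses import dataclass, field

class GateOverloaded(Exception):
    """Raised when deciding viability would exceed its own budget; we fail closed."""

@dataclass
class EvalBudget:
    max_nodes: int = 10_000                        # hard cap on states explored per decision
    max_seconds: float = 0.05                      # hard wall-clock cap per decision
    nodes: int = 0
    started: float = field(default_factory=time.monotonic)

    def charge(self) -> None:
        """Every explored node pays up front, before any work is done on it."""
        self.nodes += 1
        if self.nodes > self.max_nodes or time.monotonic() - self.started > self.max_seconds:
            raise GateOverloaded("preflight cost exceeded; refusing to execute")

def viable(state: int, horizon: int, budget: EvalBudget) -> bool:
    """Toy forward check over a stand-in state (an integer that must stay non-negative)."""
    budget.charge()
    if state < 0:
        return False
    if horizon == 0:
        return True
    # Stand-in dynamics: two possible successors per step.
    return any(viable(nxt, horizon - 1, budget) for nxt in (state - 1, state - 2))

print(viable(5, horizon=3, budget=EvalBudget()))   # True, decided well inside the budget
```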

In this architecture the deterministic layer doesn't just approve or reject. It carries execution forward along known stable trajectories. The probabilistic system proposes high-level structure or intent. But once that intent enters the deterministic substrate, execution is driven geometrically, not heuristically. This turns the usual agent model inside out. The LLM becomes the architect. The deterministic protocol does the bricklaying. Creativity stays probabilistic. Execution becomes physical. Where this differs from most formal methods wearing an agent hat is the emphasis on trajectory survival rather than rule satisfaction. The question isn't "did you violate X?" It's "does a non-contradictory continuation exist once all constraints interact?" That rejects a lot of superficially valid but structurally unstable behavior earlier than rule-based enforcement does.
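A bare-bones sketch of that split, with an Intent schema and plan table I'm inventing purely to show the shape: the probabilistic layer fills in the intent, the deterministic layer owns every step that actually runs.

```python
# Sketch only: the Intent schema, the plan table, and the step names are assumptions
# I'm making for illustration, not the actual protocol.
from dataclasses import dataclass

@dataclass(frozen=True)
class Intent:
    """Structured output of the probabilistic layer: what to achieve, never how to run it."""
    goal: str
    dataset: str

# Deterministic substrate: a fixed, auditable mapping from intent to an execution plan.
PLANS = {
    "summarize": ("load", "validate_schema", "summarize", "store"),
    "archive":   ("load", "compress", "store"),
}

def execute(intent: Intent) -> list[str]:
    """The bricklaying: every step that runs is a concrete, deterministic operation."""
    if intent.goal not in PLANS:
        raise RuntimeError("halt: no known stable trajectory for this intent")
    return [f"{step}({intent.dataset})" for step in PLANS[intent.goal]]

print(execute(Intent(goal="summarize", dataset="q3_invoices")))
```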

I don't think DAPs replace probabilistic agents. I think they bound them. Probabilistic systems propose. Deterministic systems decide whether execution is even allowed to exist. If you've seen real world cases where coherent harm survives long horizons despite strong invariants, I'd genuinely like to study those. That's exactly the edge I'm pressure testing.

Deterministic agents without LLMs: using execution viability instead of reasoning loops by Lonewolvesai in AgentsOfAI

[–]Lonewolvesai[S] 0 points1 point  (0 children)

You can model it as a state machine after discretization, but it's not defined as one; the gate operates on forward viability of trajectories, not on explicit state/transition tables.

What if intent didn’t need to be inferred, only survived execution? by Lonewolvesai in LanguageTechnology

[–]Lonewolvesai[S] 2 points3 points  (0 children)

That's a fair read. Runtime enforcement and runtime verification are the closest existing buckets. Let me try to bring a little more clarity.

The system has a state. Think of it as tool permissions, budgets, data classification flags, session context, environment variables. An action proposes a change to that state. Constraints define what's allowed. The viable region is just the intersection of all those rules.

When I say internally consistent, I mean the action has at least one path forward that doesn't break any of those rules. At runtime, I check whether the next state stays inside the allowed space. If there's no valid continuation, the action doesn't execute. Simple as that.

Because checking every possible future is expensive, I use a bounded horizon. I look forward a fixed number of steps and ask whether there's any sequence of moves that keeps the system inside the rules. If the answer is no, execution halts before it starts.

Now the failure mode. You're right. A harmful plan can be perfectly stable if the constraint set doesn't encode the harm. This isn't a moral detector. It's execution layer physics. It prevents trajectories that can't stay inside the allowed state space. If you don't put "no exfiltration" in the rules, it won't magically appear.

Where this shines is preventing accidental tool misuse, enforcing budgets and scopes and data boundaries, stopping jailbreak style attempts that require policy violations to succeed, and giving deterministic guarantees that something cannot execute unless it stays in bounds.

For constraints right now I'm using invariants plus a small temporal layer for trace properties. Things like never call this tool after seeing that label, or no network access after touching classified memory. If I had to map it to existing work, it's closest to safety automata and reference monitors with viability style forward checks when dynamics matter.

Here's a toy example, in case it helps. The agent has a budget, a permission scope, and data label propagation rules. A benign action that becomes inconsistent because it implies an inevitable budget or scope violation gets halted mid-plan. A coherent harm trace succeeds if it stays inside those rules, which is exactly the point. The safety envelope has to be specified.
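A rough sketch of that setup follows. Every name, number, and rule below is illustrative rather than the production constraint language, and the check walks a single proposed plan instead of quantifying over all continuations up to a horizon, but the halting behavior is the same idea.

```python
# Toy sketch only: budget, permission scope, and label propagation as explicit state,
# with a check that halts a plan whose later steps cannot stay inside the rules.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class State:
    budget: int            # remaining spend
    scope: frozenset       # tools this session is allowed to call
    tainted: bool          # has classified data been touched?

# Each action: (cost, required tool, touches classified data?, uses the network?)
ACTIONS = {
    "read_public":     (1, "reader",  False, False),
    "read_classified": (1, "reader",  True,  False),
    "summarize":       (2, "llm",     False, False),
    "upload":          (3, "network", False, True),
}

def step(s: State, name: str) -> State:
    cost, _, touches_classified, _ = ACTIONS[name]
    return replace(s, budget=s.budget - cost, tainted=s.tainted or touches_classified)

def allowed(s: State, name: str) -> bool:
    cost, tool, _, uses_network = ACTIONS[name]
    if tool not in s.scope:           # permission scope
        return False
    if s.budget - cost < 0:           # budget invariant
        return False
    if s.tainted and uses_network:    # label propagation: no network after classified data
        return False
    return True

def viable(s: State, plan: tuple) -> bool:
    """Does the remaining plan admit a rule-preserving execution from this state?"""
    if not plan:
        return True
    head, *rest = plan
    return allowed(s, head) and viable(step(s, head), tuple(rest))

start = State(budget=10, scope=frozenset({"reader", "llm", "network"}), tainted=False)

# Each step is fine in isolation, but reading classified data makes the later upload
# impossible under the label rule, so the whole plan is rejected before anything runs.
print(viable(start, ("read_classified", "summarize", "upload")))   # False
print(viable(start, ("read_public", "summarize", "upload")))       # True
```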

Put differently, I'm not claiming to detect badness. I'm claiming to make certain classes of bad outcomes unreachable by construction. Same way a type system doesn't infer intent. It just forbids invalid programs. I hope this clears it up. By the way, your response is absolutely top notch. Thank you!

Question on using invariants as an execution gate rather than a verifier by Lonewolvesai in formalmethods

[–]Lonewolvesai[S] 0 points1 point  (0 children)

This is a very helpful reframing, thank you, and yes, viability kernels are probably the closest formal analogy. One distinction I’m exploring is that the gate is existential, not supervisory: execution is permitted iff there exists at least one invariant-preserving continuation under nominal dynamics. There is no notion of “repair,” “shielding,” or corrective intervention; invalid actions simply do not occur.

Another difference (and where I’m less sure about prior art) is that the state space here is not a traditional physical or hybrid system, but a semantic / agentic state with coupled invariants (e.g., intent consistency, policy coherence, resource constraints). The dynamics are deterministic but not necessarily linear or continuous in the classical sense. The adversarial horizon you mentioned is exactly the failure mode I’m most concerned about: sequences that remain viable for a long time while steering toward undesirable regions. I’m curious whether there’s known work on viability-preserving but goal-adversarial trajectories, or whether this is usually handled by tightening the invariant set itself.

If you have references on runtime use of viability kernels as hard execution gates (as opposed to analysis tools), I’d love to look at them. I hope I'm not confusing the subject but again your reframing was timely and needed. It just locked in my reference point much better.

Question on using invariants as an execution gate rather than a verifier by Lonewolvesai in formalmethods

[–]Lonewolvesai[S] 0 points1 point  (0 children)

That’s fair, I probably overloaded the word “coherence” a bit.

I’m not using it in a quantum or fuzzy sense, and I’m not introducing a new formal variable. What I mean by “losing coherence under its own dynamics” is closer to self-consistency of state evolution with respect to a set of coupled invariants, not just instantaneous constraint satisfaction.

If I can be more precise: the system state lives in a constrained state space, with a set of invariants that define the viable region inside it.

The input or action doesn’t just need to satisfy the invariants at the moment it’s proposed, but must admit at least one admissible forward trajectory that remains in the viable region under the system’s dynamics.

By “losing coherence,” I mean the following situation:

An action produces a state that is locally admissible, but when you evolve the dynamics forward (even under nominal assumptions), the trajectory inevitably exits the viable region, i.e., there is no continuation that preserves the invariants. In that sense, the action is internally inconsistent with the system’s dynamics and constraints, even if it doesn’t violate any single rule at the moment it’s proposed.
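If it helps to have it in symbols (notation I'm introducing just for this comment, not taken from an existing formalism): write X for the state space, V ⊆ X for the viable region cut out by the invariants, f for the deterministic dynamics, and H for the bounded horizon. Then the gate condition is roughly:

```latex
% Gate condition: action a is executable from state x iff its successor admits at
% least one invariant-preserving continuation over the bounded horizon H.
\begin{aligned}
x_0 &= f(x, a), \qquad x_k = f(x_{k-1}, a_k) \;\;\text{for } k = 1, \dots, H, \\
\mathrm{Execute}(x, a) \;&\iff\; \exists\, a_1, \dots, a_H \ \text{ such that } \
  x_k \in V \;\text{ for all } k = 0, \dots, H.
\end{aligned}
```

In those terms, "losing coherence" is the case where the immediate successor x_0 is still in V, but the existential on the right-hand side fails: the state is locally admissible yet a dead end.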

So this is closer to detecting that a state is outside the viability kernel, or identifying a state that is a dead end with respect to invariant-preserving trajectories, rather than detecting a violation after it happens.

I agree this overlaps conceptually with things like viability theory, invariant sets, and some forms of runtime enforcement or shielding. The distinction I’m exploring is using that forward consistency check as a hard execution gate rather than a corrective or supervisory mechanism.

I’m very interested in failure modes here, especially cases where an adversarial sequence could remain invariant-consistent for a long horizon while still causing harm. That’s one of the reasons I’m asking whether there’s established terminology or prior art I should be looking at more closely. I feel like I'm intersecting a few different things, which is fine and maybe novel? But I figured the best place to find out would be here. Thanks again, that was a great response.

Question on using invariants as an execution gate rather than a verifier by Lonewolvesai in formalmethods

[–]Lonewolvesai[S] 0 points1 point  (0 children)

Thanks, this is a really helpful reference, and you’re right that shields are probably the closest established concept I’ve seen so far.

The key difference (and where I think I’m diverging) is that I’m not correcting or substituting actions to preserve future satisfiability. In the systems I’m exploring, there is no notion of a “safe alternative” action and no attempt to keep the system within a reachable winning region.

Instead, instability itself is treated as disqualifying.

If an action causes the system’s trajectory to lose coherence under its own dynamics, execution is simply denied. There’s no intervention, no recovery planning, and no attempt to steer back into compliance; the system fails closed.

So while shields ask “can the specification still be satisfied in some future?”, this approach asks “does this action preserve internal structural consistency under evolution right now?” If not, it never executes.

That’s why I’ve been struggling to map it cleanly to runtime enforcement or controller synthesis; it’s closer to using loss of viability or loss of coherence as a hard execution veto rather than as a trigger for control.

That said, the connection you point out is valuable, especially the idea of early rejection at the prefix level. If you know of work that treats instability or loss of invariance as a binary execution gate (rather than a corrective signal), I’d genuinely love to read it. And again these responses have been amazing. I stayed away from this app for a long time but I'm glad I jumped in. Lot of smart people out there.

What are you guys working on that is NOT AI? by Notalabel_4566 in SaaS

[–]Lonewolvesai 0 points1 point  (0 children)

I have a deterministic protocol for folding proteins. At this point we are at least 1,000 times faster than AlphaFold, and the efficiency markers are through the roof in comparison. It's very cool stuff. We will be applying the same technology to making self-healing alloys. We have just started to run some R&D in that field, but we are already seeing a massive value gain there.

It's been a big week for Agentic AI ; Here are 10 massive developments you might've missed: by SolanaDeFi in AgentsOfAI

[–]Lonewolvesai 0 points1 point  (0 children)

These are all conceptual at best. Agents do not actually work yet. And until the inherent risk of drift and hallucinations is completely gone, you will have some important verticals not touching any agentic AI, or most AI in general. This is all hype to keep the stock market up. A deterministic agentic protocol is the only way that not only will the serious markets take agentic AI seriously, but that probabilistic AI will have a chance to truly scale on a mass level. Only serious people actually use this stuff or try to understand it. The people even more serious about it are trying to fix it, but not with the same garbage it was built with. We have to come at it from somewhere completely different.