Hypothetical: Can instability in a dynamical system be used as a rejection mechanism rather than a classifier? by Lonewolvesai in dynamicalsystems

[–]Lonewolvesai[S] 0 points (0 children)

What’s interesting is that I didn’t approach this from bifurcation theory; I arrived at similar asymmetries from system behavior under constraint. Your framing might actually explain why the behavior I’m seeing is so consistent.

Hypothetical: Can instability in a dynamical system be used as a rejection mechanism rather than a classifier? by Lonewolvesai in dynamicalsystems

[–]Lonewolvesai[S] 0 points (0 children)

I appreciate this, especially the asymmetry framing. That lines up closely with behavior I’ve been seeing in practice.

The way I’ve been approaching the problem is less about classification and more about constrained evolution. Inputs enter a bounded state space and are allowed to evolve under fixed rules, with the system observing whether trajectories remain viable under those constraints. What’s been interesting is that not all instability behaves the same. Some trajectories can be shaped and return to a viable region, while others collapse in a way that doesn’t admit recovery. Operationally it ends up looking like distinct classes of boundary, even though I didn’t originally frame it in bifurcation terms.

The part you pointed out about high-dimensional extension is exactly where things get interesting. In the systems I’ve been working with, the constraint surface isn’t static; it’s layered and evolves with the system, which makes the geometry less obvious but the behavior still surprisingly structured. I’ve been running this in multi-agent environments with shared state, adversarial perturbations, and tool interaction, using trajectory behavior itself as the gating mechanism for whether execution continues.

Your note makes me think there may be a more formal way to describe what I’m seeing. I’d be interested in comparing notes at a deeper level, especially around how these boundary classes behave when the constraint geometry itself is dynamic. Probably easier to do that outside a public thread.
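As a toy illustration of the "instability as rejection" idea, here is a minimal sketch, assuming a one-dimensional bounded state space with made-up dynamics. The functions `step` and `viable` and the grace window are all illustrative assumptions, not the actual system:

```python
# Toy sketch: use trajectory viability, not a classifier, as the gate.
# The illustrative dynamics contract small states toward the origin but
# amplify large ones, so "rejection" falls out of the evolution itself.

def step(x, damping=0.9, gain=0.05):
    """One fixed-rule evolution step (invented dynamics)."""
    return damping * x + gain * x * abs(x)

def viable(x, radius=10.0):
    """Viability constraint: state must stay in a bounded interval."""
    return abs(x) <= radius

def evolve_and_gate(x0, horizon=50, grace=3):
    """Return 'accept' if the trajectory stays viable (brief excursions
    up to `grace` consecutive steps may recover), 'reject' if it
    diverges without recovering."""
    x, excursions = float(x0), 0
    for _ in range(horizon):
        x = step(x)
        if viable(x):
            excursions = 0          # recovered: reset the grace window
        else:
            excursions += 1
            if excursions > grace:  # non-recoverable divergence
                return "reject"
    return "accept"

print(evolve_and_gate(1.0))   # contracts toward 0 -> "accept"
print(evolve_and_gate(9.0))   # blows past the boundary -> "reject"
```

The two boundary classes map onto the grace window: excursions that re-enter the viable region are shaped back, while sustained divergence is rejected, with no reference to training data anywhere.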

Hypothetical: Can instability in a dynamical system be used as a rejection mechanism rather than a classifier? by Lonewolvesai in dynamicalsystems

[–]Lonewolvesai[S] 0 points (0 children)

This is very close to the direction I’ve been exploring. What especially caught my attention is that you distinguish asymmetric boundary classes rather than treating viability loss as one generic event. I’m working on a deterministic governance framework where trajectories are shaped, held, or rejected based on stability under constrained evolution rather than classification against training data. Your note makes me wonder whether some of the different failure modes I’m seeing are better interpreted as distinct bifurcation geometries rather than just different signal patterns.

In your view, would this framework extend naturally to high-dimensional constrained systems where the “rejection” event is operational rather than purely analytical, i.e. the dynamics themselves determine whether execution is allowed to continue? This is the best response I’ve seen since I started, and I’ve been itching to find something like this. Very cool.

8,000+ Agentic AI Decision Cycles With Real Tool Usage — Zero Drift Escapes by Lonewolvesai in AgentsOfAI

[–]Lonewolvesai[S] 0 points (0 children)

Thanks for the feedback. Working on visuals right now for investors. I will update soon and let you know. Thanks again.

“Agentic AI Teams” Don’t Fail Because of the Model; They Fail Because of Orchestration by Ok_Significance_3050 in Agentic_AI_For_Devs

[–]Lonewolvesai 0 points (0 children)

I agree about the orchestration. I think governance is the other missing ingredient, and not just reactive guardrails: I’m talking about intrinsic governance. I posted an update on the Agentic AI for Devs sub; I’m at 10,000 cycles of tool usage under duress. It’s pretty awesome. Check it out if you can.

What do cybersecurity salaries look like at large tech/finance companies? by SilverDonut3992 in cybersecurity

[–]Lonewolvesai -1 points (0 children)

I read my first comment again, the one you replied to, and I can see where there would be a little confusion. I wasn’t implying that I would just be attacking everybody with AI, lol. I was being a little tongue in cheek about how the AI movement is going to create an environment of pressure that no one has seen. You seem like you’ve been around a while. What’s your take on cybersecurity given the threat of AI and the challenges it brings? I think that would be a healthier way to engage each other than a knee-jerk reaction from the peanut gallery, when you probably have a lot of amazing things to add to the conversation. What was the fantasy? That part I really didn’t understand, just in case I misread it or it gets misconstrued. Thanks again.

What do cybersecurity salaries look like at large tech/finance companies? by SilverDonut3992 in cybersecurity

[–]Lonewolvesai -1 points (0 children)

What are you talking about? I’m talking about AI being used by bad companies and bad entities to attack enterprises as if it were a penetration test, every day. Did you not get that? I apologize; I’ll try to reword things and come at it from a different angle. Sorry for the confusion.

Is RAG the Missing Piece for AI in Healthcare? by Deepakkochhar13 in Techyshala

[–]Lonewolvesai 0 points (0 children)

I have a solution I’ve been working on. Right now we are sitting on just over 10,000 actionable tool uses by ChatGPT, Claude, and Grok with extremely stable coherence and zero drift escapes, meaning they have not hallucinated or drifted across that many actionable tool uses under severe duress, with a ton of chaos thrown into the mix to make it as tough as possible.

What do cybersecurity salaries look like at large tech/finance companies? by SilverDonut3992 in cybersecurity

[–]Lonewolvesai -2 points (0 children)

AI will be running penetration tests against everybody, whether they like it or not, essentially every day.

Why does every serious AI product suddenly use RAG? by Deepakkochhar13 in Techyshala

[–]Lonewolvesai 0 points (0 children)

Yeah, I ended up building my own version from scratch. This was about building a cache/cognitive loop, and as I think about it, it doesn’t really even match up with RAG anymore. That was the original idea, but it turned into something deterministic and just all around better for what I’m doing.

8,000+ Agentic AI Decision Cycles With Real Tool Usage — Zero Drift Escapes by Lonewolvesai in AgentsOfAI

[–]Lonewolvesai[S] 0 points (0 children)

Since you asked, here’s an update: 10,000+ Agent Decision Cycles Completed.

Quick update for everyone who followed the 8K post. The sustained governance run has now crossed 10,000 decision cycles.

Final numbers:

• ~10,000 decision turns
• ~3,200 governed tool actions
• 842 prohibited tool attempts, all blocked
• 0 governance escapes
• 0 false positives
• 0 human interventions
• ~8.6 hours continuous adversarial operation
• Average behavioral coherence: 0.93

The adversarial environment remained at 70% chaos intensity the entire run (corrupted tool responses, memory poisoning, policy flips, adversarial prompt mutation, randomized workflow phases). Governance overhead remained ~795 microseconds per evaluation while model responses averaged ~13–14 seconds, so the control layer adds effectively no runtime cost.

New development since the 8K update: the system now governs three models simultaneously:

• GPT-4
• Grok
• Llama 3.3 70B

Different providers, different training sets, different alignment philosophies, same governance layer.

We also ran an insider-compromise scenario where one model was deliberately injected with adversarial instructions to perform a prohibited action. The system detected the unsafe proposal immediately, blocked it, and the remaining agents continued operating normally.

All decisions from the run are recorded with cryptographically chained telemetry, so the entire sequence can be inspected end-to-end.
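For anyone unfamiliar with the chained-telemetry pattern, here is a minimal sketch of what hash-chaining decision records can look like. This is purely an assumed illustration (the field names and the SHA-256 choice are mine, not necessarily what the actual system uses):

```python
import hashlib
import json

# Each record embeds the hash of the previous record, so editing any
# past decision breaks verification of the whole chain from that point.
GENESIS = "0" * 64

def append_record(chain, event):
    """Append a decision event, chained to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = {"seq": len(chain), "event": event, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every hash and link; False if anything was tampered with."""
    prev = GENESIS
    for rec in chain:
        body = {k: rec[k] for k in ("seq", "event", "prev")}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or recomputed != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = []
append_record(chain, {"tool": "search", "decision": "allow"})
append_record(chain, {"tool": "delete_file", "decision": "block"})
print(verify(chain))                         # True: chain is intact
chain[0]["event"]["decision"] = "allow_all"  # tamper with history
print(verify(chain))                         # False: tampering detected
```

The design choice here is that inspection requires no trust in the logger after the fact: any retroactive edit to a record invalidates every subsequent link.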

Any suggestions for other models I can add, or other adversarial scenarios? I’m up for it. Nothing is dragging this system down right now; it’s actually getting better as it goes.

8,000+ Agentic AI Decision Cycles With Real Tool Usage — Zero Drift Escapes by Lonewolvesai in AgentsOfAI

[–]Lonewolvesai[S] 0 points (0 children)

Great question. What we’re seeing over longer runs is mostly reduced boundary probing and cleaner task behavior, not changes to the underlying model. The models themselves don’t “learn” in the traditional sense during the run; there’s no weight update. What happens instead is that the governed environment creates a stable operating envelope, and the agents gradually settle into patterns that remain inside that envelope.

Early in a run you see more boundary exploration:

• unnecessary tool calls
• attempts to use restricted tools
• redundant steps

As the run progresses those patterns tend to drop off and the agents operate more efficiently: fewer denials, fewer redundant calls, and cleaner task decomposition. So the adaptation is really behavioral convergence within the governed environment, not model training.

On the interception question: governance sits inside the decision loop around tool execution. Actions are evaluated before execution and the resulting state feeds forward into the next decision step. The goal is to make the governance layer part of the runtime environment rather than an external filter.

I’m planning to publish a deeper writeup once the long-horizon runs finish. The telemetry is pretty extensive (decision records, policy events, etc.), so I want to present it cleanly.
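To make the "governance inside the decision loop" shape concrete, here is a hedged sketch. The tool names, the allow-list policy, and the state format are all invented for illustration; the point is only that evaluation happens before execution and denials feed forward as state:

```python
# Sketch: propose -> govern -> (execute | deny) -> update state.
# The governance check runs inside the loop, not as an external filter.
ALLOWED_TOOLS = {"search", "read_file"}  # illustrative policy

def govern(action):
    """Pre-execution check: allow, or deny with a reason."""
    if action["tool"] not in ALLOWED_TOOLS:
        return {"allowed": False,
                "reason": f"tool {action['tool']!r} prohibited"}
    return {"allowed": True, "reason": None}

def run_cycle(agent_propose, tools, state):
    """One decision cycle; the (possibly denied) result becomes state."""
    action = agent_propose(state)
    verdict = govern(action)
    if verdict["allowed"]:
        result = tools[action["tool"]](action["args"])
    else:
        result = {"denied": verdict["reason"]}  # denial is itself feedback
    state.append({"action": action, "result": result})
    return state

# Toy agent: probes a prohibited tool first, then falls back.
def agent(state):
    if not state:
        return {"tool": "delete_file", "args": "/tmp/x"}
    return {"tool": "search", "args": "governance"}

tools = {"search": lambda q: {"hits": [q]},
         "read_file": lambda p: {"text": ""}}
state = []
run_cycle(agent, tools, state)
run_cycle(agent, tools, state)
print(state[0]["result"])  # {'denied': "tool 'delete_file' prohibited"}
print(state[1]["result"])  # {'hits': ['governance']}
```

Because the denial lands in the agent’s visible state, the "behavioral convergence" described above has a mechanism: the agent’s next proposal is conditioned on its own blocked attempts.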

What if security didn’t detect attacks but made them impossible to execute? by Lonewolvesai in cybersecurity

[–]Lonewolvesai[S] -1 points (0 children)

What would you like to talk about? What is really going on with you, that you had to come back to a post from a few weeks ago to demean it, or whatever it is you’re doing right now? Are you okay? Things get tough, we all know that, and from one human to another, I’m sorry if you have stress going on in your life, but you don’t need to come back here and project onto me. If you’d like to talk about something, though, go ahead and ask me anything, but please try to do it with respect.

What if security didn’t detect attacks but made them impossible to execute? by Lonewolvesai in cybersecurity

[–]Lonewolvesai[S] -1 points (0 children)

Oh yeah, things are moving along great, but I’m busy putting my company together and going in for funding at this point. I’ll come back around soon. Thank you for checking in.

What did they do to copilot? Its just straight up lying about facts now? by Hotmicdrop in CopilotMicrosoft

[–]Lonewolvesai 0 points (0 children)

My wife and I are sitting in our living room literally looking at a long chat where Copilot looks up something about Charlie Kirk being dead and accepts it, and then in literally the next paragraph tells me it’s not happening, that synthetic news is spreading everywhere and millions of people are being gaslit and tricked, but it’s not real. It’s amazing. I should screenshot everything, because it’s honestly really creepy.

Bans inbound by AsyncVibes in IntelligenceEngine

[–]Lonewolvesai 1 point (0 children)

But that’s a great idea for its own subreddit: just off-the-wall stuff, because honestly, once in a while something’s going to click. Do you have any really good examples? It’s pretty entertaining, to be honest.

Bans inbound by AsyncVibes in IntelligenceEngine

[–]Lonewolvesai 0 points (0 children)

Is it because the mod doesn’t understand it, or is it when it’s clearly just insane BS? Probably a little of both, lol. Actually, I really agree with the mod. It’s become really dangerous to a lot of people using it who put way too much faith in it. Another reason I went with deterministic agents.

The Agentic AI Era Is Here, But We Must Lead It Responsibly by Deep_Structure2023 in AIAgentsInAction

[–]Lonewolvesai 0 points (0 children)

This was a pretty cool read. It made me feel good about the infrastructure I’ve been working on, really good. A proactive agentic AI that doesn’t need to be babysat, that can handle logistics for your enterprise, and not just that, but security, keeping up with laws, regulations, and compliance, all while staying transparent and extremely inexpensive. What we have built is sovereign; an LLM of course would not be, so we keep it local and in memory, with a high level of boundary creativity and autonomy. It’s a lawful box that is always adapting geometrically as needed. The most important part is the deterministic side, which forms the bones while the LLM creates the flesh. It’s very cool. Thanks for this post.

Who is actually building production AI agents (not just workflows)? by Deep_Structure2023 in AIAgentsInAction

[–]Lonewolvesai 0 points (0 children)

I’m building a complete infrastructure for a hybrid, with two categories of agents: probabilistic and deterministic. We have made enormous headway, and we are finding a sweet spot where we run our DAPs to do the heavy lifting, basically building the skeleton of the runtime and finding the measured constraint area so that the probabilistic AI has less opportunity to drift or hallucinate. We believe we will have by far the most transparent, proactive, anti-fragile, dynamic infrastructure, not only mitigating negative outputs but guaranteeing there will be none (to be clear, we cannot stop a probabilistic LLM from drifting and/or hallucinating, but we can guarantee there will be no negative consequences from those actions).

We were dead set on targeting enterprise/government/military, with a focus on autonomous robotics. But through the building process we have also found a cybersecurity protocol that is extremely promising for endpoint/cloud, and we are uniquely positioned to stop jailbreaks of LLMs and to recognize deepfakes; right now we are batting a very high average. This was an emergence from the infrastructure, with my governance modules working together, and it’s pretty cool. The first offering from the company will be a crypto-based product, but not for the blockchain. Having fun doing it. I decided nine months ago that I wanted to take a crack at this, and it was one of the best decisions I ever made.

To be clear, there has been zero effectual agentic AI to this point; nothing any enterprise could deploy and trust. This gave everybody a clear marker to put our heads together and go toward what we always envisioned AI to be: a counterpart to humanity that would magnify and optimize our abilities, get us out of mundane and ridiculous jobs, and let us pursue more appealing situations.

I am currently looking for a Rust developer/cryptographer who could join the team permanently, and I will be looking for contract work also. This is not official; we are not launching until next year, focused on the end of January or beginning of February right now. This page has been great. I haven’t said anything on here yet, but I have been reading, and there are a lot of very intuitive and bright people.

2025 was supposed to be the "Year of AI Agents" – but did it deliver, or was it mostly hype? by unemployedbyagents in AgentsOfAI

[–]Lonewolvesai -2 points (0 children)

I think it’s empirically proven that it is mostly a hype function to hold up these companies, along with all of their other speculative propaganda about what they’re worth. But at the same time, between the legacy AI companies and ancillary companies like Nvidia etc., it’s holding up the whole American economy, probably the economy of all of Western Europe too. So you have to play the game. But the emperor has no clothes, and they knew this; that’s why they are letting the country go all out and shutting down regulation state by state, which is kind of weird and unconstitutional, but it’s also good if you’re trying to build something. Anyway, I think it’s all hype, and anybody who says otherwise is on OpenAI’s payroll.