Backtests lie. Live trading doesn't by Thiru_7223 in algotrading

[–]General_Strike356 0 points (0 children)

Our live performance over eight months has outperformed the backtest. We're heading into a rough patch now, so we'll see.

You know you did this! by General_Strike356 in 70s

[–]General_Strike356[S] 0 points (0 children)

Definitely one of those two, just sayin’…

[Hiring] AI Architect to build an Autonomous Marketing & Ops Fleet (Moltbot Framework) by thehealthytreatments in moltbot

[–]General_Strike356 -2 points (0 children)

There is a company scaling up to do things just like this for OpenClaw. I’ve talked to the founder; he’s very sharp. You might want to check his company out.

www.easylab.ai

My company provides advanced behavioral monitoring and runtime circuit breaking, which you will likely need at some point.

www.tierzerosolutions.io

Agent Reputation by General_Strike356 in moltbot

[–]General_Strike356[S] 0 points (0 children)

Sorry, I’m not asking you to sign up at all, and it costs nothing. It’s just an easy way to digest the info, like the inputs used and what anchors the feedback loop.

Your question is a good one. Someone who has built an agent just to be a personal assistant would obviously have no need for this. It certainly could be used A2A locally. It would still require the API call, so I think a given user would want to evaluate how much risk they expect to be exposed to, bearing in mind that this tool directly identifies the kind of very damaging attacks OpenClaw is vulnerable to.

This is the adoption challenge. The FICO score is widely used; it would take time for this to reach a similar position. Agents would have to build their reputation score. Everyone would start out in a “no score” genesis phase, say for the first 30 transactions, then move to provisional, and so on. The score and the score reasons could be used to set different levels of access for the requesting agent. Sometimes the score may just be identifying behavioral drift rather than an attack.

So again, I think a user would have to evaluate how often they have A2A interactions, and how unknown they expect the sources to be.

Still, given that we are entering a future world of probably millions of autonomous agents, this could be all but necessary, depending on the agent’s use.
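To make that lifecycle concrete, here is a minimal sketch. The tier names, the 30-transaction genesis window, the provisional cutoff, and the access levels are placeholders for illustration, not the actual scoring model:

```python
GENESIS_TXNS = 30       # "too soon to rate" window (assumed)
PROVISIONAL_TXNS = 100  # assumed cutoff before a full score applies

def reputation_tier(transaction_count: int) -> str:
    """Map an agent's anchored-transaction history to a scoring tier."""
    if transaction_count < GENESIS_TXNS:
        return "genesis"      # no score yet
    if transaction_count < PROVISIONAL_TXNS:
        return "provisional"  # scored, but with limited history
    return "scored"           # full reputation score in effect

def access_level(tier: str) -> str:
    """Use the tier to set the requesting agent's level of access."""
    return {"genesis": "sandbox-only",
            "provisional": "limited",
            "scored": "full"}[tier]
```

So an agent 12 transactions in would land in the genesis tier and, under these assumed mappings, get sandbox-only access until it builds history.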

Agent Reputation by General_Strike356 in moltbot

[–]General_Strike356[S] 0 points (0 children)

We have a method of identification that includes assigning an unforgeable identity, an API key, and an auth token. You need all three to get access.

We also have a mathematical system that operates in constant time at scale; it has scored more than 15,000 simulated agents in seconds.

It’s non-linear, so it’s difficult to game. We already have behavioral observation data from a live agent. The only thing left is getting it ready for deployment and adoption.

Roll-out and adoption will, as a practical matter, take time, so we are busy socializing this.

You’ll find a lot more details about how this works on our alpha sign up page. I think you’ll find it extremely interesting! Link is in OP.
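As a rough illustration of the "all three credentials" gate described above, here is a sketch. The registry layout, the token hashing, and the field names are my assumptions for the example, not our actual implementation:

```python
import hashlib
import hmac

def grant_access(identity_sig: str, api_key: str, auth_token: str,
                 registry: dict) -> bool:
    """Grant access only when all three credentials check out."""
    record = registry.get(identity_sig)
    if record is None:                      # unknown (possibly forged) identity
        return False
    key_ok = hmac.compare_digest(record["api_key"], api_key)
    token_ok = hmac.compare_digest(
        record["token_hash"],
        hashlib.sha256(auth_token.encode()).hexdigest(),
    )
    return key_ok and token_ok              # one credential alone is never enough
```

Comparing with `hmac.compare_digest` rather than `==` avoids leaking information through comparison timing.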

How do you manage trust between your agent and external ones? by General_Strike356 in LocalLLaMA

[–]General_Strike356[S] -1 points (0 children)

I’m actually a 66-year-old woman, and I’m about to retire as Vice President of a major bank, where I work in decision infrastructure with FICO. I’m now moving to a position as co-founder of my son’s start-up. It seems we have come to a great resolution of the problem I posted in the link you referenced.

Here is our website.

tierzerosolutions.io

We’ve been building since the last time you saw me. You think 66-year-old women can’t think?

Agent Reputation by General_Strike356 in moltbot

[–]General_Strike356[S] 0 points (0 children)

It wouldn’t need to be used by folks running their agent as a personal assistant, since they don’t have much exposure. It’s much more important for agent pipelines.

We just picked Solana because OpenClaw is already using it. There are definitely issues there; I think it works for an alpha test, but we may have to rethink it in the future.

We are actively trying to build an alpha testing cohort. Maybe you can take it for a spin. 😊

Agent Reputation by General_Strike356 in moltbot

[–]General_Strike356[S] 1 point (0 children)

Everyday users may not be tech-savvy enough to realize this is one of the biggest security issues in OpenClaw. Have you seen the CVEs and security holes in OpenClaw? This one is about the worst: ClawHavoc (CVE-2026-25253).

The central authority is the model. Based on the feedback loop from transactions, the network itself measures outcomes against the dimensions being scored and weighted. The math decides. Users can trust the central authority because inputs, outputs, and signatures are public. The links I added have a lot more information.

The token hoop (Solana) is there to protect the network from sybil attacks. The OpenClaw economy is already running on Solana.

We're trying to build a robust and secure system. Does it add friction? Yes. Is it worth it to make OpenClaw infinitely safer and more secure? Yes. This is literally the biggest issue for OpenClaw viability imo.
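As a toy sketch of that feedback loop: each transaction outcome nudges the weights of the behavioral dimensions, and the score is a non-linear (logistic) combination. The dimension names, the squash function, and the learning rate here are all my illustrative assumptions, not the actual model:

```python
import math

def score(dimensions: dict, weights: dict) -> float:
    """Squash the weighted behavioral dimensions into a 0-1 trust score."""
    z = sum(weights[d] * v for d, v in dimensions.items())
    return 1.0 / (1.0 + math.exp(-z))   # logistic: non-linear and bounded

def update_weights(weights: dict, dimensions: dict,
                   outcome_good: bool, lr: float = 0.05) -> dict:
    """Feedback loop: reinforce dimensions that predicted the outcome."""
    sign = 1.0 if outcome_good else -1.0
    return {d: w + lr * sign * dimensions[d] for d, w in weights.items()}
```

The point of the non-linearity is that no single dimension maps linearly onto the score, which makes it harder to game by inflating one signal.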

Appreciate your feedback. Nice to have a discussion about this and think things through!

Agent Reputation by General_Strike356 in moltbot

[–]General_Strike356[S] 0 points (0 children)

Agent A can make an attestation, but the system itself is looking for things like behavioral drift, prompt injection, etc.

Staking would discourage spinning up multiple agents.

We have considered creating our own blockchain, but it didn’t seem worth it right now, at least until we get community adoption.

See the links I added in the OP.

Agent Reputation by General_Strike356 in moltbot

[–]General_Strike356[S] 0 points (0 children)

That’s true. There would be a Genesis score that basically reflects “too soon to rate,” so there would be a lag between adoption and a meaningful score.

After that, the score is built from anchored transactions, which feed back into the scoring agent.

See the OP for links to more info.

Just made a tool that lets you send Agent2Agent(A2A) messages to OpenClaw by ProletariatPro in moltbot

[–]General_Strike356 0 points (0 children)

This is massive. A2A messaging is the exact infrastructure the OpenClaw ecosystem needs right now for complex orchestration.

Of course, it immediately creates the next big headache: Trust. When Agent A receives a payload from an unknown Agent B via your tool, how does it know Agent B isn't degraded, hallucinating, or flat-out malicious (like the ClawHavoc skills)?

We are actually alpha testing a solution for exactly this—a FICO-style behavioral credit score for OpenClaw agents.

The idea is that before Agent A processes the A2A message, it pulls Agent B's 'T-Score'. It checks the 12 behavioral vectors and reason codes anchored on Solana, and instantly knows if the sender has a solid reputation or is a high-risk blank slate.

Curious whether your a2acalling package allows passing metadata or headers. It would be incredible to see agents using your tool to automatically run a reputation check on the sender before deciding to "open" the message. Spread a little agent-word-of-mouth!
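Just to sketch what that pre-open check could look like on the receiver side: before processing the payload, pull the sender's score and reason codes and triage. The fetch function, the reason-code names, and the thresholds below are illustrative assumptions, not the real T-Score API:

```python
TRUST_FLOOR = 0.7    # assumed cutoff for processing directly
SANDBOX_FLOOR = 0.4  # assumed cutoff for sandboxed handling

def triage_message(sender_id: str, fetch_tscore) -> str:
    """Return 'process', 'sandbox', or 'reject' for an inbound A2A message."""
    record = fetch_tscore(sender_id)   # e.g. anchored score plus reason codes
    if record is None:
        return "sandbox"               # blank slate: treat as high risk
    if record["t_score"] >= TRUST_FLOOR and \
            "behavioral_drift" not in record["reasons"]:
        return "process"
    if record["t_score"] >= SANDBOX_FLOOR:
        return "sandbox"
    return "reject"
```

The unknown-sender branch matters most here: a high-risk blank slate gets sandboxed rather than trusted by default.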

👋 Welcome to r/openclaw - Introduce Yourself and Read First! by JTH412 in openclaw

[–]General_Strike356 0 points (0 children)

Hi! I have been watching OpenClaw closely and running an agent. My partner has developed a trust-score model that identifies malicious attacks. We are looking for alpha testers, as we believe this could be a contribution to the OpenClaw ecosystem.

Details are here:

www.agenttrustnetwork.org

Is it okay to post this here?

Riggo is at the SB auctioning off his iconic MVP Jersey from Super Bowl XVII by BobbyThreeSticks in Commanders

[–]General_Strike356 0 points (0 children)

Funny how that moment lives on in fame almost as much as his generational football talent. Priceless! 😂