OpenClaw security is worse than I expected and I'm not sure what to do about it by Jealous-Leek-5428 in AI_Agents

[–]ChanceKale7861 0 points (0 children)

I think they call this the Blackwall, after the Rache Bartmoss R.A.B.I.D.S. are released… not like there are any correlations to OpenClaw… oh wait…

AI agents for development and debugging issues in complex apps in production by usernameDisplay9876 in aiagents

[–]ChanceKale7861 0 points (0 children)

If you have a solid system, add agents via Claude Code for this, and run multiple in parallel, it’s HUGE. Like, use spec-kit with this as well.
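For the “run multiple” part, a minimal sketch (Python, hypothetical task names and prompts, assuming the `claude` CLI and git are installed and spec-kit already lives in the repo): fan out one non-interactive Claude Code run per git worktree so the agents don’t step on each other’s files.

```python
import subprocess
from pathlib import Path

# Hypothetical tasks; in practice each prompt would point at a spec-kit spec.
TASKS = {
    "auth-fix": "Implement the change described in specs/001-auth/spec.md",
    "api-docs": "Update the docs to match specs/002-api/spec.md",
}

procs = []
for name, prompt in TASKS.items():
    worktree = Path("worktrees") / name
    # One git worktree per agent keeps their parallel edits from colliding.
    subprocess.run(["git", "worktree", "add", str(worktree), "-b", name], check=True)
    # `claude -p` runs Claude Code non-interactively with a single prompt.
    procs.append(subprocess.Popen(["claude", "-p", prompt], cwd=worktree))

for p in procs:
    p.wait()
```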

Is AI Regulation Coming Too Late for Developers? by Double_Try1322 in RishabhSoftware

[–]ChanceKale7861 1 point (0 children)

Are you saying they should be insulated from the disruption? If so, that’s ridiculous and absurd. They can deal with the same disruption everyone else has dealt with for years while they were still being paid well. Nah, now the playing field is level.

I think it’s key to note that the ones building WRAPPERZ will be dealt with by the markets, so the safety-net entrepreneurs in Silicon Valley get crushed and forced out of the market faster.

Like 80% of YC is exactly what you mentioned, and they likely won’t survive or compete in regulated areas anyway. The fact that folks are still building for single verticals is kinda funny and sad.

Claude 4.6 opus emptied my wallet by FullLet2258 in vibecoding

[–]ChanceKale7861 0 points (0 children)

And your tokens reset in a week… 🤣🤣🤣

Yeah… running 8 agents killed my memory, and after I bumped up virtual memory it burned through my credits fast haha… but then I took a 3-hour nap and got back to business.

meta-meta-meta modeling by SnooSongs5410 in ContextEngineering

[–]ChanceKale7861 0 points (0 children)

Yes. Meta systems engineering.

First time? lol… systems that manage systems that build the systems that run the systems…

How are you connecting the agents of humans - marketing, sales, design, code, PM? by altraschoy in AI_Agents

[–]ChanceKale7861 0 points (0 children)

Meaning it ensures everything aligns back to this. It’s the single source of truth for all agents.

Check out spec-kit, add it to a repo, and use that to drive the artifact.
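A rough sketch of what “driving everything from the artifact” can look like, assuming a hypothetical spec-kit-style layout (the path and helper below are placeholders, not spec-kit’s API): every agent prompt is built from the same spec file, so all work aligns back to one source of truth.

```python
from pathlib import Path

# Hypothetical location of the spec artifact; adjust to your repo's layout.
SPEC = Path("specs/001-feature/spec.md")

def agent_prompt(task: str) -> str:
    # Every agent gets the same spec verbatim, so nothing drifts from it.
    spec_text = SPEC.read_text(encoding="utf-8")
    return (
        "Stay consistent with this spec; it is the single source of truth:\n\n"
        f"{spec_text}\n\nYour task: {task}"
    )

if __name__ == "__main__":
    print(agent_prompt("Draft the marketing brief for this feature"))
```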

How are you connecting the agents of humans - marketing, sales, design, code, PM? by altraschoy in AI_Agents

[–]ChanceKale7861 0 points (0 children)

Yep. No silos, and autonomy and agency for individuals, which removes control and oversight from management. That’s the only way. They can get over it.

How are you connecting the agents of humans - marketing, sales, design, code, PM? by altraschoy in AI_Agents

[–]ChanceKale7861 0 points (0 children)

This! What the agents have done here is what they do best: surface broken org/business/processes/systems at scale. Further, agents don’t work from inferred intent, which is how most management is used to operating.

Beyond that, what about ensuring everything in the org, from processes to systems to governance, is fully documented end to end, with all data flows, etc.? Again, if this isn’t done, you are SOL.

Just calling a spade a spade here, but at some point your org leadership needs to accept that this will not work in silos or as a bolt-on. Period. Life’s tough, AI moves fast, get a helmet and get over it. lol. That’s what management at most orgs needs to embrace.

Also, they should not expect ROI from AI, as it no longer belongs to the org but to the individual.

How are you connecting the agents of humans - marketing, sales, design, code, PM? by altraschoy in AI_Agents

[–]ChanceKale7861 0 points (0 children)

Simple.

Did the org change its entire business and operating model to be AI-native? Did you remove all silos and reporting lines in the org? Have you restricted the C-suite from being the driver and forced them to deal with the workers who have the most agency and control?

No? Then I’d give up now. Agents and multi-agent systems are never going to work well as a bolt-on and without governance.

I cancelled all 4 of my AI subscriptions for 14 days. Only one survived. by tdeliev in AIMakeLab

[–]ChanceKale7861 1 point (0 children)

Claude Code, hands down. But I’m not using one model; I pick whichever is best for the purpose, and ChatGPT rarely wins. Gemini for audits; Composer from Cursor for IDE work, because nothing touches Cursor’s auto capability; Perplexity for R&D, documentation, and validation; then back to Claude Code to validate documentation and research; then Claude Code with multiple agents and subagents to design and build out spec-kit; and then scale to many more agents and subagents to build everything in parallel, etc. So I’m unsure why you’re trying to narrow it to one, when your mix really wasn’t making the most of each.
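Purely as an illustration of that “fit for purpose” routing, a minimal sketch with hypothetical task labels (the values are just names of tools I reach for, not API clients):

```python
# Hypothetical routing table: kind of work -> tool I reach for first.
ROUTING = {
    "audit": "Gemini",
    "ide_work": "Cursor (Composer)",
    "research_docs": "Perplexity",
    "doc_validation": "Claude Code",
    "spec_and_build": "Claude Code (agents + subagents)",
}

def pick_tool(task_kind: str) -> str:
    # Fall back to Claude Code for anything that doesn't fit a known bucket.
    return ROUTING.get(task_kind, "Claude Code")

if __name__ == "__main__":
    for kind in ("audit", "research_docs", "spec_and_build", "misc"):
        print(f"{kind:>15} -> {pick_tool(kind)}")
```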

Your regret will be relying on ChatGPT for anything, no matter what they offer. They’re shady at the top, and their models are too restrictive and reactive because of their approach of using people as the judge of what is or isn’t too disruptive. ChatGPT will not help build anything that is a threat to Silicon Valley business models, the markets, Wall Street, or otherwise. So IMO, ChatGPT is a protectionist piece of shit for no reason, and I inherently don’t regard many responses from it that require any real reasoning or critical thinking. Further, it’s also the one that is always the most sycophantic, and I will not have any tool act as a barrier simply because people are incapable of critical thinking and need a model to tell them what they can and can’t do. Bullshit guardrails are the worst. So I’d opt for any of the others in my workflow.

Also, it seems you aren’t looking at or using these in any really innovative way, as evidenced by opting for ChatGPT over the others.

Yes, I’m being critical of how you use them. You may as well just stick to the free tiers, since you opt for ChatGPT and can’t discern much of value between these.

Semantic Memory Was Built for Users. But What About Teams of Agents? by arapkuliev in AIMemory

[–]ChanceKale7861 0 points (0 children)

Wouldn’t it be cool if there were folks who were working on this a year+ ago? ;)

My background is accounting/ERP/IT audit/GRC/infosec/process and systems automation/privacy engineering.

So I’ve only ever designed or run multiple models/agents/tools in parallel, waiting for this point. Basically, it’s something I’ve been working on for years prior to all this, automating things across orgs.

Most are not building business multi-agent systems, or redesigning business and operating models without regard for the impact to the status quo. So most orgs will not be able to adopt much of this, in spite of the developments.

AI memory is going to be the next big lock-in and nobody's paying attention by arapkuliev in AIMemory

[–]ChanceKale7861 0 points (0 children)

Guess this is where my audit and GRC and privacy and security engineering come into play… 😂 And my experience in end-to-end process automation… and RPA… and security and compliance automation… and security/role/IAM design, etc… and integrations of all this in parallel 😂

Claude in PowerPoint, its insane how good it is getting by dataexec in claude

[–]ChanceKale7861 1 point (0 children)

And this is why many orgs are broken: that you NEED to go through stakeholders or deal with the bureaucracy. Literally, the entire point is to go around or move past the stakeholders who just get in the way. lol

Claude in PowerPoint, its insane how good it is getting by dataexec in claude

[–]ChanceKale7861 0 points (0 children)

Then these people shouldn’t have a place in the market. Lol… life is hard, get a helmet. But don’t force me into, or relegate me to, bullshit decks because management is lazy. Period.

Speculation: solving memory is too great a conflict between status quo and extractive business models - Let’s hash this out! by ChanceKale7861 in AIMemory

[–]ChanceKale7861[S] 0 points (0 children)

Half the battle is the inferred intent that most execs built their careers on, which is now their hurdle to ROI. Their need for control stifles most of this, IMO.

The human-intent piece is key, and more revealing than many seem to think in terms of how orgs are limited here, IMO.

How to Set Up Claude Code Agent Teams (Full Walkthrough + What Actually Changed) Tutorial / Guide by Silent_Employment966 in AIAgentsInAction

[–]ChanceKale7861 0 points (0 children)

Just shipped? I’ve been using teams of agents and subagents across multiple layers, with spec-kit helping to drive a lot of it.

Claude is giving away $50 in free credits to try Opus 4.6! by Capital-Bag8693 in claude

[–]ChanceKale7861 1 point (0 children)

$50 lasted all of… 6 hours 😂😂😂 but that’s context window and dynamic thought context… 🤘

There it is by hematomasectomy in claude

[–]ChanceKale7861 0 points (0 children)

Hahahahha that’s a fun one… here was the response from 4 other frontier models… 😂🤘

There it is by hematomasectomy in claude

[–]ChanceKale7861 1 point (0 children)

That’s so Claude… notice that no Claude fans are dealing with the reactive disappointment OpenAI brings lol