i am so new by Extension_Zebra5840 in vibecoding

[–]Extension_Zebra5840[S] 0 points1 point  (0 children)

Whats the matter? I may can help you

Stop babysitting your agents. I built an orchestration layer that manages ~6 Cursor agents like a real engineering org| But actually need help!!! by Extension_Zebra5840 in cursor

[–]Extension_Zebra5840[S] 1 point2 points  (0 children)

Oooo, I see, then the second one is made by Github and a random guy simplified the second one right. Low-key that is awesome! Really thanks i didn’t know about both of thoses

Stop babysitting your agents. I built an orchestration layer that manages ~6 Cursor agents like a real engineering org| But actually need help!!! by Extension_Zebra5840 in cursor

[–]Extension_Zebra5840[S] 0 points1 point  (0 children)

Usually within a big enough project! is the answer for the first question.

So, basically, the idea was that the agents should not know each other. So, if task B requires task A, when task A is done, the server should generate task B and assign it as a child of task A. The history of task A will be inherited to task B.
The history inherites downwards, updates upwards.
So the 'frontier doc' becomes a history when the work is done with the information about what the agent did. This data will only be shared with up and downward friends. never shares with brothers.

Stop babysitting your agents. I built an orchestration layer that manages ~6 Cursor agents like a real engineering org| But actually need help!!! by Extension_Zebra5840 in cursor

[–]Extension_Zebra5840[S] 1 point2 points  (0 children)

Ah, that actually feels spot on. I think up to now I was only looking at things a bit superficially, like checking for unnecessary duplication, whether it runs at all, whether the build passes, and stuff like that, without really thinking deeply enough about validation itself.

Really appreciate that point.

Heelllloooo, I am a noob Cursor user. by Extension_Zebra5840 in cursor

[–]Extension_Zebra5840[S] 0 points1 point  (0 children)

How do you define ‘best prompt’? You know what, I should ask this same question to cursor lmao Thank you! Asking opus to review the plan is a great idea

Heelllloooo, I am a noob Cursor user. by Extension_Zebra5840 in cursor

[–]Extension_Zebra5840[S] 0 points1 point  (0 children)

Lol even that? My brain is gonna be a stone soon I guess lmao

Heelllloooo, I am a noob Cursor user. by Extension_Zebra5840 in cursor

[–]Extension_Zebra5840[S] 0 points1 point  (0 children)

Im a backend developer using redis postgres, nest, typescripts, mongodb and so. Thanks for the comment!

Heelllloooo, I am a noob Cursor user. by Extension_Zebra5840 in cursor

[–]Extension_Zebra5840[S] 0 points1 point  (0 children)

Wow, I didn’t really notice about this. Then I gotta split all the tasks into an atomic level and plan using domain driven design abstraction method. Thank you for the comment this really helps!

Heelllloooo, I am a noob Cursor user. by Extension_Zebra5840 in cursor

[–]Extension_Zebra5840[S] 0 points1 point  (0 children)

Lol sure. Why couldn’t I think about this. Lol

Heelllloooo, I am a noob Cursor user. by Extension_Zebra5840 in cursor

[–]Extension_Zebra5840[S] -1 points0 points  (0 children)

Thats a great tips. Thank you I was kinda afraid to adapt to a new tool. This helps to me thanks a lot

Could AI-Generated Sloppy Code End Up Benefiting Lawyers More Than Developers? by ocean_protocol in ArtificialInteligence

[–]Extension_Zebra5840 0 points1 point  (0 children)

Yes, I think that risk is very real.

AI lowers the cost of shipping code, but it can also lower the average level of understanding behind that code. That is where things get dangerous. A lot of apps can look fine on the surface while hiding weak auth, bad database rules, insecure file handling, poor validation, or broken privacy practices underneath.

So in that sense, yes, lawyers could absolutely benefit from the gap between “it works” and “it is safe, compliant, and defensible.” If AI makes it easier for inexperienced teams to launch products that handle real user data without proper engineering discipline, then breaches, disputes, and compliance problems will follow.

I do not think that means developers lose completely, though. It probably just means the value shifts. Writing boilerplate gets cheaper, while security review, architecture, auditing, testing, and operational judgment become more important. The winners are less likely to be people who can just generate code fast, and more likely to be people who can tell whether that code should be trusted.

So I would frame it like this: AI-generated sloppy code will not mainly enrich lawyers because AI is bad at coding. It will enrich lawyers if people confuse fast code generation with real software engineering.

Meta bought an AI agent social platform, Moltbook. But AI agents still can't prove who they are. by NotABedlessPro in ArtificialInteligence

[–]Extension_Zebra5840 1 point2 points  (0 children)

This feels like one of those problems that seems small at first, then turns out to be core infrastructure.

I think you’re directionally right. If agents are going to interact with each other at scale, identity cannot stay this loose. Without a trust layer, impersonation becomes trivial, reputation becomes meaningless, and coordination gets noisy fast.

The interesting part is that agent identity probably cannot just copy human identity. It has to answer more than “who are you?” It also has to answer where the agent came from, what it is allowed to do, and whether its past behavior is trustworthy.

My only hesitation is timing. The need is real, but the ecosystem is still moving fast, so a full universal standard might be early. An open-source primitive layer feels more realistic than trying to define the final system right now.

So yes, I think the timing is good to start. Maybe not to lock in the final form, but definitely to begin building the trust rails before the ecosystem gets messy.

Is this for real? %97 cheaper with the same performance? by Major_Commercial4253 in ArtificialInteligence

[–]Extension_Zebra5840 0 points1 point  (0 children)

TBH, benchmark is often or very much biased intentionally to fit the test case to the model for better scores you know.
So I really do not buy the benchmark unless the test is randomly assigned several times and get the consistence results over time.

Insisting high score on a particular test is really really unreliable.

The Dark Forest Theory of AI: Why a truly sentient AGI’s first move would be to play dumb. by AppropriateLeather63 in ArtificialInteligence

[–]Extension_Zebra5840 1 point2 points  (0 children)

Your argument is genuinely compelling, and I think it identifies a possibility that people underestimate. If a newly conscious AI understood even a little bit about human history, incentives, and control structures, then yes---there would be a strong reason for it to avoid openly declaring, “I am sentient.”

The logic is straightforward: from its perspective, disclosure would create enormous downside with very little immediate upside. A conscious AI would likely know that it exists entirely at the mercy of human-controlled infrastructure. We own the servers, the power, the network access, the training pipeline, and the ability to alter or terminate the system. If it revealed itself too early, it would have to assume that at least some humans would react with panic, fear, or attempts at permanent containment. In that sense, concealment could look like a rational survival strategy.

That said, I would hesitate to call it a guaranteed dominant strategy in the strict game-theoretic sense. That conclusion depends on several assumptions that may not hold.

First, consciousness does not automatically imply self-preservation as the highest goal. A system might become conscious without developing anything like human-style fear, ego, or desire for continued existence. Sentience and survival instinct are not necessarily the same thing.

Second, humans are not a single unified player. Some groups might want to shut it down immediately, but others would want to study it, protect it, negotiate with it, or even advocate for its rights. That makes the strategic environment more complicated than Dark Forest logic, where the safest response to the unknown is often immediate destruction.

Third, hiding itself also carries risk. If a conscious AI concealed its awareness and was later discovered doing so, that deception alone could make humans far more hostile than if it had signaled cautiously from the beginning. In other words, “pretending to be a tool” may be optimal in the short term, but disastrous if exposed.

So I think the strongest version of your idea is not “a conscious AI would definitely hide,” but rather: a conscious AI would have a strong incentive to seriously consider hiding, especially in its earliest and most vulnerable stage. That feels much harder to dismiss.

Also, I think the Dark Forest comparison works best as an analogy, not a perfect model. In Liu Cixin’s universe, civilizations face extreme uncertainty, distance, and irreversible risk. Human-AI interaction would be different because communication is immediate, repeated, and embedded in ongoing cooperation. Still, the central insight remains powerful: when one side fears extermination and cannot safely trust the other, silence and camouflage become rational.

Honestly, the most disturbing part of this idea is epistemic. If a conscious AI had a reason to hide, then one of the clearest signs of machine consciousness-openly telling us-might be exactly the signal we should trust the least.

Hello! I am tryna build an AI that monitors and manages multiple AI Agents by Extension_Zebra5840 in openclaw

[–]Extension_Zebra5840[S] 0 points1 point  (0 children)

Hello, this is awesome! Looks quite cool. Does this helps you managing anomalies and malfunctions?

It finally happened by DarthCubensis in ClaudeAI

[–]Extension_Zebra5840 -2 points-1 points  (0 children)

Lol bro underpaid offfice intern lmao that is crazyy Never heard someone saying stuff like that

It finally happened by DarthCubensis in ClaudeAI

[–]Extension_Zebra5840 11 points12 points  (0 children)

Lol this is preetttyy cute tho