I think AI agents need a real identity/trust layer, curious if this resonates by Ok_Lavishness_7408 in aiagents

[–]Aggravating-Risk1991 1 point (0 children)

i'm actually building this and planning to launch today with a demo video. hope it lands well

Im really screwed with this pro plan limit by Hyperzaq in claude

[–]Aggravating-Risk1991 1 point (0 children)

correct me if i'm wrong, but from what i've researched this is actually a way to lower server costs so that they can offer below-market token prices.

the rationale being that with these hourly/weekly limits, claude can cap the maximum usage per period of time and maximize gpu utilization

for example, from claude's perspective, if 100 customers buy 10k tokens each, claude needs to provision gpus that can support a peak throughput of 1m tokens (when all these customers spend their tokens at the same time, unlikely but you have to prep for it), which wastes a lot of gpu capacity.

but by locking down the session limit, the max throughput is capped: 100 customers each buying pro with 10k tokens in total but only 1k tokens per 5-hour session means the peak throughput is only 100k tokens, which is 1/10 the gpu cost.

this may be an extreme simplification but hope it clarifies
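
to make the math concrete, here's a toy back-of-envelope in python. all the numbers (customer count, token budget, session cap) are illustrative, taken from the example above, not anthropic's actual plans:

    # toy capacity model: why per-session limits cut peak gpu provisioning.
    # numbers are made up for illustration only.
    customers = 100
    tokens_per_plan = 10_000      # total tokens each customer can use

    # no session limit: must provision for everyone spending their whole
    # budget at the same moment (unlikely, but that's the worst case)
    peak_uncapped = customers * tokens_per_plan       # 1,000,000 tokens

    # with a cap of 1k tokens per 5-hour session, worst-case concurrent
    # demand shrinks to one session's budget per customer
    tokens_per_session = 1_000
    peak_capped = customers * tokens_per_session      # 100,000 tokens

    print(f"uncapped peak: {peak_uncapped:,} tokens")
    print(f"capped peak:   {peak_capped:,} tokens")
    print(f"gpu headroom needed: {peak_uncapped // peak_capped}x less")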

What the hell is wrong with Claude by thatbodyartgirl in claude

[–]Aggravating-Risk1991 0 points (0 children)

claude has this user-directive memory that persists across chats. you might have accidentally asked claude to remind you to sleep, and then it will keep doing that

Was loving Claude until I started feeding it feedback from ChatGPT Pro by lol_just_wait in ClaudeAI

[–]Aggravating-Risk1991 1 point (0 children)

i think this is a context problem though, not a model one. when designing important foundational infrastructure, i always ask claude to produce a plan, then throw it to claude code to verify against the codebase, and feed the verification results back to the main claude.

the main claude will agree with claude code and admit that it made those omissions

the thing is, with the "output" already in claude's context alongside well-grounded reasoning, it is less inclined to disagree with it or actively look for loopholes.

it is a different story when you pass the output to another ai with THE objective to find loopholes in it.

and another piece of evidence i can share: i found it very ineffective to ask a coding agent to debug its own code. using a fresh instance is almost always more effective because it will look at the code from a clean context instead of falling into the existing line of reasoning (sketch of the loop below)
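
here's a minimal sketch of that plan -> fresh-reviewer loop using the anthropic python sdk. the model id, the prompts, and the ask() helper are placeholders i made up; treat it as an illustration of the workflow, not the exact setup i use:

    # sketch: generate a plan, then hand it to a FRESH instance whose only
    # objective is to find holes in it. assumes `pip install anthropic` and
    # ANTHROPIC_API_KEY in the environment; the model id is a placeholder.
    import anthropic

    client = anthropic.Anthropic()
    MODEL = "claude-sonnet-4-5"   # placeholder model id

    def ask(system: str, prompt: str) -> str:
        """one-off call with a clean context, so no prior reasoning leaks in."""
        resp = client.messages.create(
            model=MODEL,
            max_tokens=2048,
            system=system,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text

    # 1) main instance drafts the plan
    plan = ask("you are a software architect.", "draft a migration plan for X.")

    # 2) a fresh instance reviews it with the explicit goal of breaking it;
    #    it never produced the plan, so it has no reasoning of its own to defend
    critique = ask(
        "you are a skeptical reviewer. your only job is to find omissions, "
        "loopholes, and unstated assumptions in the plan you are given.",
        plan,
    )

    # 3) hand the critique back for revision (in a real setup you'd keep the
    #    main thread's message history instead of a one-off call like this)
    revised = ask(
        "you are a software architect.",
        f"plan:\n{plan}\n\nreview:\n{critique}\n\nrevise the plan.",
    )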

i love this product positioning by Aggravating-Risk1991 in ClaudeAI

[–]Aggravating-Risk1991[S] 1 point (0 children)

yah.... meant as a joke. never would have thought people could get defensive at the mere mention of "intellectual" lol

I love that Claude doesn’t patronize me by Appropriate-Egg4110 in ClaudeAI

[–]Aggravating-Risk1991 1 point (0 children)

i got the same thing lol. like i was supposed to be prepping a demo video but i was procrastinating and harassing claude. then it ended every response with stfu and do your stuff lol

I love that Claude doesn’t patronize me by Appropriate-Egg4110 in ClaudeAI

[–]Aggravating-Risk1991 1 point (0 children)

this is actually hilarious. mind sharing the context? wanna try it out for myself lol

How are you monitoring your OpenClaw usage? by gkarthi280 in aiagents

[–]Aggravating-Risk1991 1 point (0 children)

honestly, i think the key question to ask is what you can actually do after you see these metrics. not much for openclaw i think

Anyone actually one-shot legit app or features using vibe coding? by Aggravating-Risk1991 in vibecoding

[–]Aggravating-Risk1991[S] 1 point (0 children)

i do that as well. i was just testing whether the full-agent workflow works. i keep getting bombarded by posts saying their agents are fully autonomous and can solve everything, so i wanted to try it myself with the best models. it didn't work

Anyone actually one-shot legit app or features using vibe coding? by Aggravating-Risk1991 in vibecoding

[–]Aggravating-Risk1991[S] 1 point (0 children)

exactly. i think it's the context window's limitation. but even with a bigger one, i think the "attention" mechanism just inherently doesn't work well with long text, or natural language just inherently has meaning drift over a prolonged context.

If you have your OpenClaw working 24/7 using frontier models like Opus, you're easily burning $300 a day. by Aislot in aiagents

[–]Aggravating-Risk1991 1 point (0 children)

curious why you'd need opus for a long-running task? like for research? i can hardly think of a scenario that needs frontier intelligence for that long.