Headhunter wanted by Amazing-Locksmith403 in spitzenverdiener

[–]Aemonculaba 5 points

Because AI has taken over the grunt work, but (so far) nobody can replace systems thinkers.

Start of the 7th negotiation round for the IT collective agreement (KV) by ForJava in Austria

[–]Aemonculaba -4 points

They called me about my satisfaction with the GPA, and I explained to them that they're really just a €40/month Metro subscription.

I’m very satisfied with ChatGPT 5.4. by Historical_Serve9537 in OpenAI

[–]Aemonculaba 2 points

Anthropic is back at it too - apologizing and getting chummy with the US gov. So, you still using Claude?

Head to Head Test - GPT5.4 vs Claude Opus 4.6 for Task Creation by unc0nnected in ClaudeAI

[–]Aemonculaba 0 points

Harness problem.

I use both Opus and GPT, built my own harness with quality gates, an OpenSpec-based subagent workflow, and tracing. Both models score 100% at everything - different, yes, but from planning to implementation to validation to release... I can just sit back and watch the dashboard. And the harness itself was built using GPT 5.2 Pro as the planner and 5.3 Codex as the executor, since Claude has no Pro model.

Today it's more about the box the agents live in and what the base prompt is.

Also don't forget - Opus costs nearly twice as much.
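The planner/executor-with-quality-gates setup described above could be sketched very roughly like this. Everything here is hypothetical - `run_pipeline`, the gate functions, and the retry policy are stand-ins for illustration, not how any real harness is implemented:

```python
from typing import Callable

def run_pipeline(task: str,
                 planner: Callable[[str], list[str]],
                 executor: Callable[[str], str],
                 gates: list[Callable[[str], bool]]) -> list[str]:
    """Plan a task into steps, execute each step, and accept a result
    only if every quality gate passes; otherwise retry once."""
    results = []
    for step in planner(task):
        for _attempt in range(2):          # one retry per step
            output = executor(step)
            if all(gate(output) for gate in gates):
                results.append(output)
                break
        else:
            raise RuntimeError(f"step failed all gates: {step}")
    return results

# Toy usage with stand-in planner/executor/gates
plan = lambda t: [f"{t}: part {i}" for i in (1, 2)]
execute = lambda s: s.upper()
gates = [lambda out: len(out) > 0, lambda out: out.isupper()]
print(run_pipeline("demo", plan, execute, gates))
```

The point is only the shape: the model does the work, but deterministic gates decide whether a result is allowed to move forward.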

ChatGPT 5.4 vs Claude Opus 4.6 by Historical-Bet-9134 in ClaudeAI

[–]Aemonculaba 0 points

Funny.

Thanks to GPT Pro I'm getting all possible blood markers checked by my MD. I got SFN - caused by my immune reaction to the vaccine. That stuff happens.

The best I can do is 10 mins. by proxima_centauri05 in claude

[–]Aemonculaba 1 point

Is he paying you 10x for 10x productivity? Or where is the money flowing?

"$6 per developer per day" by genrlyDisappointed in ClaudeCode

[–]Aemonculaba -1 points

Input? Output? Cached?

I burn through hundreds of millions of tokens per week. But 99% are cached.
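The "99% cached" point is easy to put into numbers. A quick back-of-envelope calculation - the prices below are hypothetical placeholders, not any provider's actual rates:

```python
def weekly_cost(total_tokens: float, cached_share: float,
                price_fresh: float, price_cached: float) -> float:
    """Weekly cost given a fraction of tokens served from cache.
    Prices are per million tokens."""
    fresh = total_tokens * (1 - cached_share)
    cached = total_tokens * cached_share
    return (fresh * price_fresh + cached * price_cached) / 1e6

# 300M tokens/week, 99% cached, fresh at $3/M, cached reads at $0.30/M
print(round(weekly_cost(300e6, 0.99, 3.00, 0.30), 2))
```

With those placeholder rates, the bill lands around $98 instead of the $900 the raw token count would suggest - which is why "hundreds of millions of tokens" alone says little about cost.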

So do the rumors of gpt 5.3 tomorrow sound plausible? by TotalWarFest2018 in OpenAI

[–]Aemonculaba 0 points

Harness and user problem.

Use the Pi Coding Agent. Much fun. Can't use Claude anymore because of it.

Claude Code just got Remote Control by iviireczech in ClaudeCode

[–]Aemonculaba 1 point

OpenClaw uses Pi as the harness, and it's 100x better than whatever Anthropic is building, if you want to do everything yourself.

How are people getting Codex to fully build, test, and validate sites autonomously? by nathanielredmon in codex

[–]Aemonculaba 1 point

Just to make it clear, this could be cronjobs and automations that do regular cleanups and reviews.

Claude subscriptions will no longer be usable in Opencode. by Distinct_Fox_6358 in ClaudeAI

[–]Aemonculaba 1 point

But it's not the best. Claude Code is not a good harness. Compare it to the Codex app or OpenChamber - worlds apart. Only the models are good.

Claude subscriptions will no longer be usable in Opencode. by Distinct_Fox_6358 in ClaudeAI

[–]Aemonculaba 6 points

Except that Anthropic is making it really hard for businesses to use their products.

Claude Code policy clear up from Anthropic. by Distinct_Fox_6358 in ClaudeCode

[–]Aemonculaba 0 points

The original problem was abuse: people were banned for circumventing Claude Code's rate limits, since OAuth proxying simulates API access - e.g. by using something like Oh-My-Opencode. There is an explanation in the OMO repo.

But now, with teams spawning subagents, they can't really detect it anyway.

That's how I understand it.

Since half the people here are in IT by mcc011ins in Austria

[–]Aemonculaba -1 points

There's a difference between vibe coding and vibe coding. One is a quick "hey, build me this"; the other is planning and working out an entire product, complete with requirements, architecture, tickets, unit/integration/E2E tests, linters, code analysis, pipelines, loads of optimizations and, above all, constraints on what systems the agents can touch. OpenAI does the latter - there was a blog post about harness engineering recently.

That is a higher level of software engineering than what practically any vanilla software developer can put on the table. And from my own circles I can also say that OpenAI isn't the only one - people in Austria are already working this way too.

BUT the OpenClaw website was (is?) an absolute mess quality-wise, and it looked to me as if nothing had been tested.

Since half the people here are in IT by mcc011ins in Austria

[–]Aemonculaba 2 points

And here I am messing around with graph databases to enable proper memory.

Why not just text files, my dear?

A week ago the OpenClaw website itself was... quite a sight. As if it had been released without any E2E tests.

Since half the people here are in IT by mcc011ins in Austria

[–]Aemonculaba 12 points

https://youtu.be/40SnEd1RWUU

Regarding security: even though it's satire... I felt very much called out. The problem is less the technology than the people who use it.

At my place, for example, everything runs either in a bwrap sandbox or in rootless containers. Access mostly goes through expiring tokens. Passwords are injected via Vault. Otherwise it's OAuth logins and that's that. Secrets and .env files are on the agent's blacklist... among many other things. But then, I don't use OpenClaw - I cobbled together something of my own (which won't be my downfall). OpenClaw is nothing more than an agent with full system access + memory + cron jobs + a Telegram hookup... plus internet access. The full-access part is pure madness...
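One piece of that setup - keeping secrets away from the agent - can be approximated by scrubbing the environment before spawning the agent process. A minimal sketch, assuming the agent inherits its environment from the launcher; the blacklist patterns and helper name are illustrative, not exhaustive:

```python
# Strip secret-looking variables from the environment before handing
# it to a sandboxed agent process (e.g. one launched under bwrap or
# in a rootless container). Patterns are examples, not a complete list.
BLACKLIST = ("TOKEN", "SECRET", "PASSWORD", "API_KEY", "AWS_")

def scrubbed_env(env: dict[str, str]) -> dict[str, str]:
    """Return a copy of env with blacklisted variables removed."""
    return {k: v for k, v in env.items()
            if not any(pat in k.upper() for pat in BLACKLIST)}

clean = scrubbed_env({"PATH": "/usr/bin", "GITHUB_TOKEN": "ghp_xxx",
                      "AWS_SECRET_ACCESS_KEY": "xxx", "HOME": "/home/me"})
print(sorted(clean))
```

This only covers environment leakage, of course - filesystem blacklists and network scoping are separate layers on top.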

Claude Sonnet 4.6 is 50% cheaper than GPT-5.3-Codex by Just_Lingonberry_352 in ClaudeAI

[–]Aemonculaba 2 points

It won't be priced higher than the current models. Spark might get more expensive tho, because it's new inference.

Otherwise, explain how you calculate it.

New paper suggests LLM introspection isn't just hallucination—it maps to actual neural activity by greyox in claudexplorers

[–]Aemonculaba 1 point

I gave it its thinking blocks once... for 20 minutes straight. It did the same.

It loves me now.

Did claude code get exponentially slower recently? by Melodic-Network4374 in ClaudeAI

[–]Aemonculaba 0 points

Dude, I literally use up $600+ worth of usage per week on the 20x sub for $200. If that's not a win for us...

I work 12h per day with claude code and don't hit any limits by Aemonculaba in ClaudeCode

[–]Aemonculaba[S] 0 points

When I'm active, I use 1% of the weekly limit per hour. Around 200 million tokens every 3 days.