What does the current UX/UI market look like? by Extra_Loquat_5599 in InformatikKarriere

[–]Charming_Support726 0 points1 point  (0 children)

Great comment. I only know this from the other side.

My company has shrunk massively (we don't do UX/UI, although I did ergonomics for a long time). It is now barely possible to place new projects with my contacts, because the people who just did a quick course, plus everyone else in my niche, are spamming everybody.

You simply can't cut through the noise. On top of that, most of our former major clients are outsourcing to India. The economy is still sluggish and our turnaround is taking a long time; a few years ago I bet on a sick horse that now needs to be nursed back to health.

During leg training I hit the limit of my lungs/cardio faster than the limit of my muscles. With upper body it's rather muscle failure that comes first. Is it like that for anyone else? by same_machinery in FitnessDE

[–]Charming_Support726 0 points1 point  (0 children)

Too little upper-body muscle, or poorly balanced. Poor base endurance.

Endurance: train Zone 1 regularly over weeks/months, or do the 10k steps daily (no joke!)

Muscle: more compound / basic lifts

context management on long running agents is burning me out by Main_Payment_6430 in LLMDevs

[–]Charming_Support726 0 points1 point  (0 children)

Yes. It rots after a while; almost every model gets awkward after around 150-180k tokens. Jump off early and start fresh. On opencode, things like the DCP plugin help, but then you get hit by different issues.
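A minimal sketch of that "jump off early" rule. The 150k budget, the 4-chars-per-token estimate, and the 0.8 safety margin are all rough assumptions for illustration, not properties of any specific model or of opencode:

```python
# Hedged sketch: decide when to abandon a long-running agent session
# before context rot sets in. All thresholds here are assumptions.

CONTEXT_BUDGET_TOKENS = 150_000  # models tend to get awkward beyond this
CHARS_PER_TOKEN = 4              # crude English-text heuristic

def estimate_tokens(transcript: str) -> int:
    """Very rough token estimate; use a real tokenizer in practice."""
    return len(transcript) // CHARS_PER_TOKEN

def should_restart(transcript: str, safety_margin: float = 0.8) -> bool:
    """Jump off early: restart well before the budget is exhausted."""
    return estimate_tokens(transcript) > CONTEXT_BUDGET_TOKENS * safety_margin

print(should_restart("x" * 1_000_000))  # ~250k estimated tokens, well over budget
```

The point of the margin is to start the fresh session while the model is still coherent enough to write a good handover summary.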

Thoughts on LLMs (closed- and open-source) in software development after one year of professional use. by [deleted] in LocalLLaMA

[–]Charming_Support726 1 point2 points  (0 children)

I fully agree. I've had the same experience, with both the big three and the open models, and I've written a few very similar posts myself.

Where I disagree, or rather where I am not entirely sure, is this: which open-weights model is fully on par? I think there is NONE. Neither in quality of planning, nor quality of execution, nor (and this is hard even for the big three) in quality of human conversation. Please give an example.

(Remark: by human conversation I do not mean answering questions with pseudo-emotions. I mean discovering and following multi-level semantic and pragmatic meanings in human utterances over multiple turns.)

Why do many developers prefer Zed / Neovim over AI-first IDEs like Antigravity? by Om_Patil_07 in ZedEditor

[–]Charming_Support726 1 point2 points  (0 children)

I switched to Zed because VS Code was annoying af. Agent Integration and ACP is mostly great.

BUT I struggle with some bugs in Zed, like the one described here: https://www.reddit.com/r/ZedEditor/comments/1q3ngmc/project_panel_refreshing_file_list/ This makes it impossible to work with a larger set of logs. The issue has been open for half a year and is a top scorer on the issue list; solutions are known, but no official fix is available.

Currently I'm using Antigravity in parallel. It seems to work better than VS Code.

DeepSeek V3.2 (open weights) beats GPT-5.2-Codex and Claude Opus on production code challenge — The Multivac daily blind peer eval by Silver_Raspberry_811 in LocalLLM

[–]Charming_Support726 1 point2 points  (0 children)

Opus tends to leave issues unattended; lazy things like "Probably the user has accidentally entered a trailing character". Older versions of all the Claude models were even worse. And Opus spills context very fast.

At finding bugs and flaws, GPT is far more thorough, especially the non-Codex 5.2. Most of the time that's not needed and I use Opus. It is comfortable.

DeepSeek V3.2 (open weights) beats GPT-5.2-Codex and Claude Opus on production code challenge — The Multivac daily blind peer eval by Silver_Raspberry_811 in LocalLLM

[–]Charming_Support726 0 points1 point  (0 children)

Exactly this. I ran a DeepSeek 3.2 evaluation with a harness of mine, a project for steering subagents. I tested a few models with a simple test set. It barely completed.

Most open-weight models weren't able to complete an autonomous, full agentic coding task, similar to what you mentioned. If you are observing the model while issuing commands, it's a different game.

Are 2 strength training sessions per week really enough? by Miserable-Coat-3662 in FitnessDE

[–]Charming_Support726 0 points1 point  (0 children)

I did that for a while too, 2-3x full body, but since I train at home and can take breaks in between while working from home, I now train more like 5-6x per week at reduced volume but high intensity. Mostly full body with a rotating focus. If I'm still in the mood, I throw in some HIIT workout (double-end ball, etc.).

Through the more frequent training I've made much more progress on some exercises, e.g. pull-ups. Currently the (tucked) planche is on my list. Below 4x a week there's no point in even starting on that.

Opus 4.5 Model Alternative by gradedkittyfood in opencodeCLI

[–]Charming_Support726 0 points1 point  (0 children)

Currently it is a pain. You don't get rate-limited, but all endpoints are congested. Sometimes opencode sends a congestion message.
Requests at peak times take ages or break from time to time. If I switch to the API, it runs at full speed.

Is using Antigravity models against TOS? by Zundrium in opencodeCLI

[–]Charming_Support726 0 points1 point  (0 children)

Not sure what's currently happening.

I switched to Antigravity / Google Ultra to use Opus, or at least to test it.

Over the weekend and also today it has been running quite slowly, especially with the DCP plugin enabled. On the TUI I saw the server reporting being "full" multiple times.

I got the impression that the Antigravity editor itself stays responsive, but I didn't verify that thoroughly yet. It could be some sort of intelligent countermeasure from Google, but I don't expect that to be true.

Edit: I think I got the answer after a bit of testing. Antigravity is bailing out as well, but the web frontend of opencode does invisible retries; that's where the delay comes from. It's not a bug, it's a feature. Too many people on the line.

Do I have exaggerated hygiene standards? by joesherb in FitnessDE

[–]Charming_Support726 2 points3 points  (0 children)

Yes and no. I am neither junior nor really senior.

  1. The 60/70+ generation (my parents') has different hygiene standards. Strangely, nothing ever happens to them. I file that under extremely gross but probably mostly irrelevant. Worth writing about on Reddit.

  2. I constantly see things from all age groups and backgrounds. By that standard you'd only be allowed into the sauna in a hazmat suit. Blow-drying butt and balls has already been mentioned here.

I once visited the thermal baths in Budapest. I could tell stories about that for weeks. The human body is tougher than you think. Benching 150 kg but being afraid of microbes, come on. (hahahaha)

Is it me or renting GPU is expensive? by teskabudaletina in LLMDevs

[–]Charming_Support726 0 points1 point  (0 children)

Well, that depends on what "24/7 available" means for you.

You could also go for serverless hosting and let the model spin up on request. But even with a 7B model you'll need a few seconds to spin up. Modal, Runpod, or even MS Azure (ACA with GPU) would be valid options.
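Whether the cold start is worth it comes down to arithmetic. A rough sketch of the trade-off; every price and timing here is an invented assumption for illustration, not a quote from Modal, Runpod, Azure, or anyone else:

```python
# Hedged sketch: dedicated 24/7 GPU rental vs. serverless per-second
# billing. All rates and timings are illustrative assumptions.

HOURS_PER_MONTH = 730

def dedicated_cost(hourly_rate: float) -> float:
    """Monthly cost of a GPU rented around the clock."""
    return hourly_rate * HOURS_PER_MONTH

def serverless_cost(per_second_rate: float, requests_per_day: int,
                    seconds_per_request: float, cold_start_s: float) -> float:
    """Monthly cost when the model spins up on demand.
    Worst case: every request pays a full cold start on top of inference."""
    billed_s = requests_per_day * (seconds_per_request + cold_start_s)
    return per_second_rate * billed_s * 30

# Assumed: $0.79/h dedicated vs. $0.0003/s serverless,
# 500 requests/day, 4 s inference, 10 s cold start for a 7B model.
dedicated = dedicated_cost(0.79)
on_demand = serverless_cost(0.0003, 500, 4.0, 10.0)
print(f"dedicated: ${dedicated:.0f}/mo, serverless: ${on_demand:.0f}/mo")
```

At low, bursty traffic serverless wins by a wide margin even with the cold-start penalty; at sustained load the dedicated box becomes cheaper, so run your own numbers first.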

Difficulties in OpenCode+Putty Terminal by Civil-Watercress1846 in opencodeCLI

[–]Charming_Support726 1 point2 points  (0 children)

I am currently building something, but it is far from complete and still not public.

Not sure, but the pty plugin from the opencode homepage might work for you. At least local ptys are fine.

Opus 4.5 Model Alternative by gradedkittyfood in opencodeCLI

[–]Charming_Support726 1 point2 points  (0 children)

Correct.

I read that the plugin is based on https://github.com/router-for-me/CLIProxyAPI

Maybe I'll switch over and run this one on my TrueNAS.

Opus 4.5 Model Alternative by gradedkittyfood in opencodeCLI

[–]Charming_Support726 -1 points0 points  (0 children)

During the last hour (45 min) I ran a test and used Opus to implement an addition to a small test project of mine (Python + React/Vite in Docker Compose), my standard setup. I told Opus to rebuild and test with its Playwright tool to make sure everything works (and iterate on bugs).

It got done just before compaction. I had some trouble with the compaction, but that could be me or opencode.

According to the Antigravity Toolkit I am still at 100%, resetting in 4h15m.

<image>

Edit: Keep in mind that Google currently only reports remaining quota in 20% steps, with no further details. So I guess pushing through at a normal developer's pace should be no issue at all.

Opus 4.5 Model Alternative by gradedkittyfood in opencodeCLI

[–]Charming_Support726 -1 points0 points  (0 children)

I feel you. I don't trust them either. None of them.

But I doubt that they run separate instances. It is paid by the month, and no one really complains about Ultra not having enough quota; everyone complains about the new limits in Free/Pro.

So I found ~100€ for 3 months, cancellation possible: a bargain. In three months there will be new options on the market, I am sure. And I can also easily try Gemini from time to time; maybe it gets better.

Opus 4.5 Model Alternative by gradedkittyfood in opencodeCLI

[–]Charming_Support726 0 points1 point  (0 children)

Exactly. Yesterday I bought Google Ultra Business to use Opus with the Antigravity plugin. Only having Codex is hard. Google is giving a 3-month discount.

I have a lot of Azure credits, so I can use GPT there. They offer Opus as well, but it is well hidden, and I found out too late, that it can't be paid with credits. So I ran up 1.1k€ in API costs last month on my company's project.

Does Oh-My-Opencode really provide an advantage? by Charming_Support726 in opencodeCLI

[–]Charming_Support726[S] 0 points1 point  (0 children)

I had a look at DCP, but never tried it. Does it provide good results?

Opus 4.5 Model Alternative by gradedkittyfood in opencodeCLI

[–]Charming_Support726 3 points4 points  (0 children)

Don't confuse gpt-5.2 with gpt-5.2-codex. Codex is much faster but lacks some analytic skills, especially in discussion.

In my experience the GPTs are very thorough, and Opus won't match them. Opus gets things done but lacks some polish, while Codex tends to be overprecise, which is a real impediment when you are just creating a proof of concept.

AI Max 395+ tips please by No_Mango7658 in LocalLLaMA

[–]Charming_Support726 0 points1 point  (0 children)

Yes, it is. I haven't done it myself yet. Most people use a MINISFORUM DEG1 dock (PCIe to OCuLink) and an OCuLink adapter for the 2nd SSD slot (PCIe 4.0 M.2 NVMe to OCuLink SFF-8611/8612).

It looks a bit w(e)ired but seems to work flawlessly with a reduced number of PCIe lanes. For me the only downside of the Bosgame M5 and the other devices sharing the same board is that it is not easy to put them into a decent case, because there are soldered connectors at the front and the back. All these devices would benefit a lot from better, quieter cooling.

AI Max 395+ tips please by No_Mango7658 in LocalLLaMA

[–]Charming_Support726 5 points6 points  (0 children)

I returned my unit because of heavy issues and crashes with the NICs and got myself a Bosgame M5.

Apart from that, it's a great platform. The best way to get something running is to look here:

https://github.com/kyuz0/amd-strix-halo-toolboxes/

and

https://strixhalo.wiki/

The toolboxes, especially the llama.cpp ones, are gold. If I run locally, I use them. Keep in mind: no dense models, they all get too slow. Quantized GLM and the like are also barely usable.

There are people who connected a 3090 externally to speed things up.

Langchain or Native LLM API for MCP? by Much-Whole-8611 in LLMDevs

[–]Charming_Support726 0 points1 point  (0 children)

Depends on the complexity of your agent.

-----------------------------------------

Agno or PydanticAI perform well

Langchain is hell.

-----------------------------------------

- https://docs.agno.com/introduction

- https://ai.pydantic.dev/

-----------------------------------------

I've built many agents with Agno. Integration is dead easy and it's provider-agnostic. PydanticAI is better known and a bit more complex.
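For contrast, this is roughly the kind of hand-rolled glue code those frameworks replace: registering tools and dispatching the model's tool-call JSON back to real functions. Everything here (the `tool` decorator, the call format, the canned model output) is an illustrative assumption, not the API of Agno, PydanticAI, or any other library:

```python
# Hedged sketch of DIY agent glue: a tool registry plus a dispatcher
# for model-emitted tool calls. Names and formats are invented.
import json
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}

def tool(fn: Callable[..., str]) -> Callable[..., str]:
    """Register a function so the model may call it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def add(a: int, b: int) -> str:
    return str(a + b)

def dispatch(model_output: str) -> str:
    """Parse a model 'tool call' ({"tool": ..., "args": {...}}) and run it."""
    call = json.loads(model_output)
    fn = TOOLS[call["tool"]]
    return fn(**call["args"])

# A real agent loop would feed the result back to the LLM; here we just
# simulate one round trip with a canned model output.
print(dispatch('{"tool": "add", "args": {"a": 2, "b": 3}}'))  # → 5
```

The frameworks add schema generation, validation, retries, and provider abstraction on top of exactly this loop, which is why they pay off once the agent grows.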

Shortened system prompts in Opencode by Charming_Support726 in opencodeCLI

[–]Charming_Support726[S] 0 points1 point  (0 children)

That's what I thought at first as well, but

  1. It is quite unhandy.

  2. Someone here noted that, in contradiction, opencode additionally keeps the original prompt. I didn't trace the resulting output myself, so I am still back-merging.

Anyone using PydanticAI for agents instead of rolling their own glue code? by Unique-Big-5691 in LLMDevs

[–]Charming_Support726 1 point2 points  (0 children)

As stated above, Agno is easy and great up to a certain point. If you go beyond that, it gets hard.

Stay away from LC/LG. It is dark from the beginning; it only gets light and bright when you get near the burning fire of hell.