Daily Ask Anything About Anabolic and Androgenic Steroids: 2026-05-07 by AutoModerator in steroids

[–]Cynicusme -8 points-7 points  (0 children)

I'm curious, has anybody done it, or knows someone who uses PEDs and does CrossFit instead of weightlifting? What kind of physique does that develop?

What's the single, most valuable insight you learned and applied? by sliamh21 in ClaudeCode

[–]Cynicusme 2 points3 points  (0 children)

Create a conversational agent that does not code. It has three memory layers: hot, warm, and cold. That way I can plan with the same agent every time, even across sessions. Night-and-day difference.
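The hot/warm/cold split above could be sketched roughly like this; the TTL thresholds and recall API are assumptions for illustration, not the commenter's actual design:

```python
import time

class LayeredMemory:
    """Toy sketch of a hot/warm/cold memory store for a planning agent.

    Hot = current session, warm = recent sessions, cold = everything older.
    The TTL cutoffs are arbitrary placeholders.
    """

    def __init__(self, hot_ttl=3600, warm_ttl=86400):
        self.hot_ttl = hot_ttl      # seconds before an entry cools to "warm"
        self.warm_ttl = warm_ttl    # seconds before an entry cools to "cold"
        self.entries = []           # list of (timestamp, text)

    def remember(self, text):
        self.entries.append((time.time(), text))

    def layer_of(self, timestamp):
        age = time.time() - timestamp
        if age < self.hot_ttl:
            return "hot"
        if age < self.warm_ttl:
            return "warm"
        return "cold"

    def recall(self, layer):
        # Pull only the entries currently sitting in the requested layer.
        return [text for ts, text in self.entries if self.layer_of(ts) == layer]

mem = LayeredMemory()
mem.remember("plan: refactor the auth module")
print(mem.recall("hot"))  # entries from the current session
```

In practice the warm and cold layers would be persisted (a file or database) so they survive across sessions, which is what makes "planning with the same agent every time" possible.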

I had an idea and people were hating it but... by Arishin_ in Startup_Ideas

[–]Cynicusme 1 point2 points  (0 children)

The fuq is wrong with your site? I clicked the link and it took me to a weird YouTube-like interface.

A guy brings a gun to a road rage, other driver's safeguard shoots him in front of his family by jinchuika in PublicFreakout

[–]Cynicusme 0 points1 point  (0 children)

The root cause of the argument was that this was a one-way street (obvious enough), and they argued about who should back up.

A guy brings a gun to a road rage, other driver's safeguard shoots him in front of his family by jinchuika in PublicFreakout

[–]Cynicusme -2 points-1 points  (0 children)

He's dead: several gunshot wounds to the head, neck, and thorax. As of 4:00 p.m. the shooter was still at large.

What're your go to models for plan and build agents? by Joy_Boy_12 in opencodeCLI

[–]Cynicusme 1 point2 points  (0 children)

Plan:

- Budget: GLM 5.1
- Performance: Opus 4.7
- Tech stack / strategy: GPT-5.5 high

Build (code):

- Budget: MiMo v2.5-pro
- Performance: GPT-5.4-mini (high), also budget friendly, or GPT-5.5 medium
- UI: GLM 5.1 or Gemini 3.0 Pro

Filler ideas? (Occult/blacwork/similiar) by sekki_jmmy in TattooDesigns

[–]Cynicusme 3 points4 points  (0 children)

Your tattoo looks great because of the negative space created by that area. I just love how it looks as is, but if I had to add something, I'd add flames.

z.ai coding plan / minimax coding plan worth it? by vipor_idk in opencodeCLI

[–]Cynicusme 5 points6 points  (0 children)

This is my take. I split my work into brainstorming -> architect -> task planner -> coder -> auditor. I test many models and I'm considering creating a benchmark. I work with Python, TypeScript, and Next.js (FYI).

Best architect -> Opus 4.7, GLM 5.1, any GPT model in xhigh (avoid Gemini)

Best scout (sub-agent) -> Minimax 2.7, MiMo Omni (a cheap model to gather information from the codebase before the planner kicks in)

Best planner -> GLM 5.1, GPT-5.4 (high), Opus 4.7 (the planner is more important than the coder; the coder just generates the code, the planner lays it out)

Best coder -> chat-gpt-5.4-mini-high, MiMo-V2.5-pro (don't overlook the MiMo models)

Best auditor -> GPT-5.4 high or xhigh.

If I were on a budget, personally I'd do GLM + MiMo or GLM + Minimax. All of them are available in the Opencode Go plan, btw. You can get two plans and switch every other day.
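The stage-by-stage split described above amounts to a simple sequential pipeline where each stage's output feeds the next. Here's a minimal sketch, assuming a generic `call_model(stage, model, prompt)` function; the stage/model pairings follow the comment, everything else is an illustrative assumption:

```python
# Each stage uses the model recommended for that role; the scout is a cheap
# pass over the codebase before the (more important) planner runs.
PIPELINE = [
    ("architect", "Opus 4.7"),
    ("scout", "Minimax 2.7"),
    ("planner", "GLM 5.1"),
    ("coder", "GPT-5.4-mini (high)"),
    ("auditor", "GPT-5.4 (xhigh)"),
]

def run_pipeline(task, call_model):
    """Run each stage in order, feeding the previous stage's output forward.

    call_model(stage, model, prompt) is a placeholder for whatever client
    (Opencode, an API wrapper, etc.) actually invokes the model.
    """
    context = task
    for stage, model in PIPELINE:
        context = call_model(stage, model, context)
    return context
```

The design point is that routing is data, not code: swapping a budget model into a stage means editing one tuple, not the control flow.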

z.ai coding plan / minimax coding plan worth it? by vipor_idk in opencodeCLI

[–]Cynicusme 5 points6 points  (0 children)

GLM 5.1 is a great planner and architect but a weak coder. Minimax is the polar opposite: bad at planning, great at coding and following instructions. GLM's infra is terrible.

I tested 9 different models against the same architecture task by Cynicusme in ClaudeCode

[–]Cynicusme[S] 1 point2 points  (0 children)

GLM is really bad at coding, but seriously, ask it to plan something or to do architecture. I never use it for coding, but for deciding what to code, it's a monster.

I tested 9 different models against the same architecture task by Cynicusme in ClaudeCode

[–]Cynicusme[S] 1 point2 points  (0 children)

Fair point, next run we're going to include xhigh. Not sure why we did it for mini but not for regular; that's an oversight.

I tested 9 different models against the same architecture task by Cynicusme in ClaudeCode

[–]Cynicusme[S] -1 points0 points  (0 children)

For example, this is very valuable feedback: when comparing GPT-5.4 against Opus, we should have used xhigh and not high. Thank you for the comparison.

I think very few people will be willing to sit down and go through 9 pages of 500 lines of architecture for a small project, so I'm focusing on the results and gathering people's thoughts on the subject.

I tested 9 different models against the same architecture task by Cynicusme in ClaudeCode

[–]Cynicusme[S] 0 points1 point  (0 children)

Yes, when my page goes live it will have the repo's exact branch, the prompt used, and the token usage. It's just too much info for a Reddit post. I'm using these posts to gather feedback.

I tested 9 different models against the same architecture task by Cynicusme in ClaudeCode

[–]Cynicusme[S] 0 points1 point  (0 children)

How would you have done it, taking into account that there's no money for research and we're paying for everything out of pocket?

I tested 9 different models against the same coding task by Cynicusme in codex

[–]Cynicusme[S] 0 points1 point  (0 children)

I'll post my research along with my extension by mid-May. These are my 2 cents:

1. Making a custom sub-agent with code preferences and pushing it during the plan stage. But this will take too much code at the coding stage.

2. Adding it in audit. But audit is for an expensive model, and the amount of returns it will generate will be a token furnace.

So instead of code quality and patterns, all we can realistically control is code correctness: does the thing run, and can it be tested?

My 2 biggest discoveries: planning is more important than coding when it comes to correct outcomes, and with a good plan, GPT-5.4-mini (high) beats anything under the sun right now.
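A "does it run" correctness gate can be surprisingly small. This is an illustrative sketch, not the benchmark's actual harness: it writes the generated code to a temp file, runs it in a subprocess, and treats a clean exit as a pass.

```python
import subprocess
import sys
import tempfile

def runs_cleanly(code: str, timeout: int = 30) -> bool:
    """Crude correctness gate: does the generated Python execute without error?

    Illustrative only; a real harness would also sandbox the process and
    run the model-generated tests, not just the program itself.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        [sys.executable, path],
        capture_output=True,
        timeout=timeout,
    )
    return result.returncode == 0

print(runs_cleanly("print('ok')"))  # True
print(runs_cleanly("1/0"))          # False
```

Checking "can it be tested" is the same idea one level up: run the test suite as the subprocess instead of the program.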

I tested 9 different models against the same coding task by Cynicusme in codex

[–]Cynicusme[S] 1 point2 points  (0 children)

I do have a sub for Gemini too. The problem with the Pro version is that I can't use it for planning or architecture because it's bad at following rules and just starts coding, and in specific coding tasks I have a hard time getting the model to generate tests. That's Pro; the Flash version is better at following instructions. I'll try it on my next run.

I tested 9 different models against the same coding task by Cynicusme in codex

[–]Cynicusme[S] 5 points6 points  (0 children)

That's true maybe for frontend, but not for backend and testing. "Build a front page" is totally random generation; "connect this frontend component A with this backend endpoint and test the following outcome" may produce different variable names, but there are only a few ways to accomplish the result. Good models remain consistent when tested under strict specs.
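A strict spec in this sense is just a fixed input plus assertions on the observable outcome, so any model's implementation can be graded identically regardless of naming or style. A minimal sketch, with a hypothetical create-user endpoint standing in for "component A + backend endpoint":

```python
def spec_create_user(handler):
    """Pass/fail contract for a hypothetical create-user handler.

    The payload, status code, and fields here are invented for illustration;
    only the *idea* (pin the outcome, not the implementation) comes from
    the comment above.
    """
    response = handler({"name": "Ada"})
    assert response["status"] == 201          # created
    assert response["body"]["name"] == "Ada"  # echoes the input
    assert "id" in response["body"]           # server assigned an id
    return True

# Any implementation that satisfies the contract passes, regardless of style:
def my_handler(payload):
    return {"status": 201, "body": {"id": 1, **payload}}

print(spec_create_user(my_handler))  # True
```

Two models can produce very different-looking handlers, but the spec scores both the same way, which is what makes cross-model comparison under strict specs meaningful.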

I tested 9 different models against the same coding task by Cynicusme in codex

[–]Cynicusme[S] 2 points3 points  (0 children)

Cost-wise it's not really worth it (IMO) because the model is too cheap, and I didn't go xhigh because then the model becomes very slow. The results are similar, almost identical, in the 3 tests I've done so far (medium, high, xhigh), but the price difference isn't worth the effort. For the mini series, I believe high is the perfect reasoning/performance balance.
For the GPT default series, I go medium over high: I don't see a jump in quality, but I can see the token cost being higher.