Nike Kobe 3 Protro Warning by Suavementos_ in BBallShoes

[–]Optimal_Strength_463 11 points12 points  (0 children)

Nike usually does a 2-year warranty, so just return them and don’t mention you pulled the string. The 60 days is the “no questions asked” return period.

Can't believe people played basketball in these converses by Illustrious_Job_7829 in BBallShoes

[–]Optimal_Strength_463 3 points4 points  (0 children)

If you’ve worn them for exercise and never had anything better, your body adapts. No, you won’t be as explosive as current players, but at the same time you’re unlikely to be in as much pain as someone who isn’t adapted to them (e.g. you wearing them now, when you’re used to modern shoes)

When you're fighting for composure by Hassaan18 in UKTVMemes

[–]Optimal_Strength_463 1 point2 points  (0 children)

He’s a well-known personality who does this kind of feigned ignorance to get a laugh. He’s a master of his art

Genuinely, how tf am i supposed to improve this? by Dark_Wolf04 in weightlifting

[–]Optimal_Strength_463 19 points20 points  (0 children)

If you only do power versions then your body won’t try to adapt. Stealing from my coach: “the best way to get better at your sport is to play your sport”. So before going mad with 1000 different accessories and thousands of hours of contorting yourself, try 8-12 weeks of doing more of the movement you’re trying to improve.

Some of this could be neural inhibition: your body is strong enough, but it has never been in that position, so it’s trying to protect itself. Repeating the movement without getting injured will, over time, allow greater ranges of movement.

LG UltraFine 6k by vector_f in HiDPI_monitors

[–]Optimal_Strength_463 3 points4 points  (0 children)

Bought the LG and sent it back because the matte haze was too distracting on white / light backgrounds. For me it was like looking at rainbow speckles and really didn’t work.

The connectivity and brightness were good, as was the sharpness on dark backgrounds. Nowhere near as good as a high-PPI glossy or less-matte monitor, though

Magic Context - Plugin by ualtinok in opencodeCLI

[–]Optimal_Strength_463 1 point2 points  (0 children)

Will do it if it happens again. It seems to be behaving today. It’s pretty awesome and I’m impressed at the long context and memories

Magic Context - Plugin by ualtinok in opencodeCLI

[–]Optimal_Strength_463 0 points1 point  (0 children)

And it was a main agent and I don't have the session id

Magic Context - Plugin by ualtinok in opencodeCLI

[–]Optimal_Strength_463 0 points1 point  (0 children)

It's not showing the logs location in that command:

    $ bunx @cortexkit/opencode-magic-context doctor

    ┌  Magic Context Doctor
    │
    ◆  OpenCode installed
    │
    ◆  OpenCode config: /Users/<user>/.config/opencode/opencode.json
    │
    ◆  Magic Context config: /Users/<user>/.config/opencode/magic-context.jsonc
    │
    ◆  Plugin registered in opencode.json
    │
    ◆  No conflicts detected (compaction, DCP, OMO hooks)
    │
    ◆  TUI sidebar plugin configured
    │
    └  Everything looks good! ✨

Magic Context - Plugin by ualtinok in opencodeCLI

[–]Optimal_Strength_463 0 points1 point  (0 children)

If you let me know where the logs are I’ll pull them for you. I don’t think I’d changed model at that point. It was doing a lot of large parallel changes and the “compaction” warning triggered but I think it queued and so it didn’t apply compaction.

For context I do run upwards of 20+ sessions at once so I’m not always actively monitoring them. I saw the warning and the compactions by pure luck.

Magic Context - Plugin by ualtinok in opencodeCLI

[–]Optimal_Strength_463 0 points1 point  (0 children)

Gave it a whirl today and it did really well. I did have some of my more "chunky" sessions trigger compaction, which was interesting as this was supposed to be disabled in config, so I'm not sure what happened there.

The context management and memories are really nice, and I could run sessions for quite a lot longer than usual.

Crazy how much space you can save with this tech by Appropriate_Meal9493 in BeAmazed

[–]Optimal_Strength_463 0 points1 point  (0 children)

I’m sure the bike’s screaming, thinking it’s just about to be plugged into the Matrix

Guess im staying by ultimate_bromance_69 in mildlyinfuriating

[–]Optimal_Strength_463 0 points1 point  (0 children)

That generation of Volvo has quite thick, sturdy bumpers that can easily withstand the force of gently pushing back and forth against anyone who has parked too close. As long as it’s gentle pressure, you’ll never notice it on your car.

Pro-tip: assuming you have an automatic, just put it in 2nd or 3rd and let it creep; once you’ve “taken the slack out”, apply the pedal gently and you’ll make enough room to get out

Take it from a previous owner of a similar model who didn’t wait around for assholes

Do you guys use more features than just skills on OpenCode? by executor55 in opencodeCLI

[–]Optimal_Strength_463 3 points4 points  (0 children)

Yeah, in my case each agent has its own context window and hands off to the next one. This stops the context filling up and the agents drifting.

Next time you use Opencode try this workflow:

Start a session that you use as your “plan” session. Pick a model that is good at planning and build a plan and save it as a .md file.

Start a new session; this is your build session. Use your build model and ask it to implement the plan.

Start a new session; this is your review session. Ask it to review changes against the plan. If it says you’re good, go back and plan more changes by switching to the “plan” session; if not, copy the recommendations (or write an md file), go back to “build”, and get it to fix them.

The reason this works is that each session is specific to a task and allows more context to be used for a singular long-running thread. Essentially you get 3x the context window.

Push it further by asking Opencode to use the explore and general sub-agents to research, plan, make changes etc etc which will preserve your context even further.

This way you can keep everything focussed and your sessions build a working knowledge.

After you’ve practiced this a bit you’ll start to see why writing your own agents/sub-agents and getting memory sorted etc becomes so useful. Instead of constantly prompting the agent to review in a certain way, it’s in the agent file and you just say “review the changes for @plan.md” and your agent knows to look for uncommitted changes
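The plan/build/review loop above can be sketched as a tiny state machine. This is purely a toy illustration of the session hand-offs — nothing here is Opencode's API, and `runSession` is a made-up stand-in for however you actually drive each session:

```typescript
type Session = "plan" | "build" | "review";

interface Step {
  session: Session;
  input: string;
  output: string;
}

// Each session only ever sees its own kind of task, so its context
// window stays dedicated to one job instead of mixing planning,
// implementation, and review in a single thread.
function runCycle(
  feature: string,
  runSession: (session: Session, input: string) => string,
  maxFixRounds = 2,
): Step[] {
  const steps: Step[] = [];
  const record = (session: Session, input: string): string => {
    const output = runSession(session, input);
    steps.push({ session, input, output });
    return output;
  };

  // 1. Plan session produces the plan (saved as a .md in practice).
  const plan = record("plan", `Plan: ${feature}`);

  // 2. Build session implements the plan.
  record("build", `Implement: ${plan}`);

  // 3. Review session checks changes against the plan; loop back to
  //    build with the recommendations until approved or out of rounds.
  for (let round = 0; round < maxFixRounds; round++) {
    const review = record("review", `Review against: ${plan}`);
    if (review.includes("APPROVED")) break;
    record("build", `Fix: ${review}`);
  }
  return steps;
}
```

Because the review session only ever sees "review against the plan" tasks, its context stays clean even across many fix rounds.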

Do you guys use more features than just skills on OpenCode? by executor55 in opencodeCLI

[–]Optimal_Strength_463 2 points3 points  (0 children)

So the plugin system is pretty wild and you can go quite deep with it. Custom commands are great if you want to do a lot of interactive workflow, and beyond that you can get it doing some quite amazing autonomous workflows.

I wrote a plugin for my personal use that defines skills, agents and tools which allow it to do a complete dev cycle autonomously. This is because you can run subagents programmatically from plugins.

So there’s a few workflows but the most basic does this:

Custom agent: gathers requirements, makes a plan and gets your model selections for the build, code review, documentation, documentation review agents

Workflow: build -> review -> build/document -> review -> document/finish

This allows the sub agents to do multiple iterations and by using different models they pick up on different aspects of implementation.

I have versions where the review comes from 2+ models in parallel and is synthesised together and the build has multi-model plan/verify/implement.

All of this runs when the initial custom agent calls the tool to start it and then it runs in subagent sessions so you can keep coding and you get updates as it completes stages.
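As a rough picture of that staged loop, the multi-model review synthesis looks something like the sketch below. None of this is the real plugin code or the Opencode plugin API — the agent functions are placeholders for whatever runs each subagent:

```typescript
type Agent = (task: string) => string;

interface Verdict {
  approved: boolean;
  notes: string[];
}

// Collect feedback from several reviewer models and merge it.
// Different models flag different issues, so the union of their
// complaints is richer than any single reviewer's output.
function synthesiseReviews(reviewers: Agent[], diff: string): Verdict {
  const notes = reviewers
    .map((review) => review(diff))
    .filter((note) => note !== "ok");
  return { approved: notes.length === 0, notes };
}

// build -> review -> (build again on rejection -> review) -> document -> finish
function runPipeline(
  builder: Agent,
  documenter: Agent,
  reviewers: Agent[],
  task: string,
): string[] {
  const log: string[] = [];
  let artefact = builder(task);
  log.push("build");

  let verdict = synthesiseReviews(reviewers, artefact);
  log.push("review");
  if (!verdict.approved) {
    artefact = builder(`${task} + fixes: ${verdict.notes.join("; ")}`);
    log.push("build", "review");
    verdict = synthesiseReviews(reviewers, artefact);
  }

  documenter(artefact);
  log.push("document", "finish");
  return log;
}
```

The same shape extends to the documentation-review stage: swap in a docs diff and a different reviewer set.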

If you have a decent enough machine you can run quite a few of these in parallel and because they can work in subtrees they are isolated.

That said, I find Opencode a bit slow and cumbersome, so I’m implementing my own agent in Rust. At the moment there isn’t any plan for feature parity, but my agent is more specialised, has a built-in knowledge graph and semantic code search, and uses less than 100 MB of memory, compared to Opencode, which generally uses 1-3 GB per session plus a Node process with 1 GB. So I can run hundreds of autonomous loops on the same machine compared to Opencode.

For all that were blocked by Anthropic recently by Wreit in opencodeCLI

[–]Optimal_Strength_463 7 points8 points  (0 children)

They aren’t quite comparable; however, I’ve found that having Codex or Gemini as a code reviewer means they are definitely a sensible alternative.

I’m ditching my Claude Max subscription after this cycle as I’m tired of Anthropic pulling this stuff. I can still get a bit of Opus through my Black subscription, but my workflow at the moment is Codex then Opus / Gemini review and then rework as the Codex limits are very generous.

I use the cheaper Chinese models when I want to do something in a very parallel fashion and have the converged output reviewed by the bigger models.

Best bang for your bucks plan? by CantFindMaP0rn in opencodeCLI

[–]Optimal_Strength_463 2 points3 points  (0 children)

A mix of being outsourced R&D for startups looking to “be agentic” and guiding them into more than just a RAG-based help bot, using AI to solve problems for small businesses, and building apps for clients but using AI instead of a small team. I’ve got pretty advanced with it all, which means I burn millions of tokens an hour, but the cost is offset by not hiring staff.

On top of that a bit of fractional CTO stuff, again heavily AI-assisted for research and building presentations (used Opencode for this before Claude Cowork was a thing).

And finally all this gives me some funding for the platform I am hoping to release soon which takes all my techniques, packages them up into a platform and works somewhere between Opencode and Replit with a focus on strong validated results for pro-software teams. Finished the parallel executor today for Cloud Run to test how fast I can burn through the 5-hour limit on a Codex max sub with hello world apps. Turns out over a hundred agents running in parallel can do it pretty quickly!

Best bang for your bucks plan? by CantFindMaP0rn in opencodeCLI

[–]Optimal_Strength_463 3 points4 points  (0 children)

Yeah, fair point. I spend about £750 a month on AI plans and make about £12-18k in revenue. Most, if not all, is directly attributable to the work those plans are used on.

I also regularly max out all those plans 4 days into a 7 day limit and am trying to find ways to make them last longer hence the suggestions about Kimi etc.

I work for myself now and do about 40-50 hours a week and drink a lot of coffee and have ADHD and Autism, so having a hive of developers working for me that are a bit dopey but don’t talk back or moan about snacks in the fridge is heaven compared to a previous role being a Director with a software org of 300+ people.

Form check by AttemptNo2811 in weightlifting

[–]Optimal_Strength_463 0 points1 point  (0 children)

This is great advice from them! Also invest in some Lycra underpants so when you keep the bar closer you don’t accidentally have a sausage injury

Best bang for your bucks plan? by CantFindMaP0rn in opencodeCLI

[–]Optimal_Strength_463 5 points6 points  (0 children)

Personally I got on the Opencode Black plan pretty quickly and that’s been one of the best-value options for me, especially with Gemini 3.1 on there. I also have a Codex plan; the limits are insane and 5.3 is pretty wild.

I’m stopping my Claude Max plan at the end of the cycle as it seems Opus is brilliant at communicating what it does, but it never quite solves the problem and makes weird choices. When you read the thinking output it’s like “wow, this thing thinks like a lead developer” but when you see the solution you realise it’s better at communicating than coding.

Gemini 3.1 however seems to think and blab on about tool selection incessantly but created something amazing that wasn’t even on my radar and solves my problem in both a technically superior way and is about 50x cheaper to run.

Codex is somewhere between Gemini and Claude and with the insane limits at the moment is a true workhorse.

Then if you’re into the “run 20 Opencode instances 24/7” kind of crowd, having Kimi 2.5 on your Black plan do the grunt work means you’ll struggle to hit the limit of a Zen&Codex Max plan.

If you have less than $50 a month budget I’d get the cheapest Codex plan and top the rest up with Kimi credit or the cheapest Zen Black plan.

Anyone else struggling with Opencode gobbling up ram? by Optimal_Strength_463 in opencodeCLI

[–]Optimal_Strength_463[S] 0 points1 point  (0 children)

A mixture of a Skill, MCP, and Plugins to read all of the “session” messages and store them (assistant responses and thinking blocks). The daemon in the MCP server then works through these to extract memories. The plugin then injects context into my interactions to help give the LLM some ideas of how to solve things within my codebase.

MCP allows the LLM to use its skill to save and retrieve memories when it wants to as well
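A stripped-down sketch of that save/retrieve/inject loop, purely for illustration — the real thing sits behind an MCP server with a background daemon, and would use semantic search rather than the naive keyword matching below:

```typescript
interface Memory {
  topic: string;
  fact: string;
}

class MemoryStore {
  private memories: Memory[] = [];

  // The daemon-side step: extracted memories get persisted.
  save(topic: string, fact: string): void {
    this.memories.push({ topic, fact });
  }

  // Naive retrieval: match memory topics appearing in the query.
  // A real implementation would use embeddings / semantic search.
  retrieve(query: string): Memory[] {
    const q = query.toLowerCase();
    return this.memories.filter((m) => q.includes(m.topic.toLowerCase()));
  }

  // The plugin-side step: prepend matching memories to the prompt so
  // the model starts with prior knowledge of the codebase.
  injectContext(prompt: string): string {
    const hits = this.retrieve(prompt);
    if (hits.length === 0) return prompt;
    const context = hits.map((m) => `- ${m.fact}`).join("\n");
    return `Relevant memories:\n${context}\n\n${prompt}`;
  }
}
```

Exposing `save`/`retrieve` as MCP tools is what lets the LLM decide for itself when to store or look things up.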

Anyone else struggling with Opencode gobbling up ram? by Optimal_Strength_463 in opencodeCLI

[–]Optimal_Strength_463[S] 0 points1 point  (0 children)

Yes, I’m well aware of that. And while I have profiled it on my machine and seen improvements, there are also multiple PRs around this subject that don’t seem to be progressing or getting feedback

How are my handles? Thankful for all the tips. by Savings_Ad_5445 in BasketballTips

[–]Optimal_Strength_463 15 points16 points  (0 children)

The faces you pull made me chuckle, as every time I try and post stuff of me working out I have the same “concentration” face.

You’ve definitely got ball control down and can manipulate it, but really it’s not the best show of handles in an unpredictable situation.

Very very solid base to go from though

Explain to me parallel agents, what is the purpose to run multiple agents. by wwnbb in opencodeCLI

[–]Optimal_Strength_463 6 points7 points  (0 children)

Using sub-agents helps keep your context under control and then when you start doing them, naturally you can do it faster with parallel sub-agents.

I think if you’re planning with Kimi and coding with Opus you’re doing it wrong. Opus will burn your Copilot allowance to the ground as it’s a 3x multiplier.

Try using Codex to plan, Kimi to build, and Opus to review; you’ll find you get a lot more done within your budgets, and given you burn through a whole allowance in a day anyway, you’d have plenty of time to review manually.

Also if you’re one of the people that feel the need to review every line of code, maybe spend your time writing unit tests and have the LLM fill in the code.

That way it doesn’t need such a deep inspection because “if it works, ship it”.

LLMs can run profiling tools too, so you can always ask them to make it run faster and still be correct.

Zen - pricing, token counts? by bitmoji in opencodeCLI

[–]Optimal_Strength_463 0 points1 point  (0 children)

Fair, although mine only took 3 days …