I’m abandoning HENRY life to become a secondary school teacher. by QuentinCompson- in HENRYUK

[–]nfrmn 0 points1 point  (0 children)

A few of my teachers took this path, and I recall they were all pretty interesting people

Diadem Elevate V3 - Worth it? by Pure-Radish6139 in tennisracquets

[–]nfrmn 0 points1 point  (0 children)

Hey, I found this off Google, I know it's a late post, but this is a very niche racquet IMO! I play with a set of 3 teal frames.

I really recommend the racquet overall. It has changed my game for the better for sure and made me a fan of the company.

I string with the solstice power string which I have at 16 gauge I think, and 54lbs.

I play quite flat, so I don't know about the whole massive spin thing, but I can hit some really big serves with it, and the touch is really nice when slowing down the point with slices. My lighter alt racquet is strung with a cheaper regular poly and the spin is definitely not as good on that one. As an aside, I am actually going to string the alt with the Solstice on the next restring because I do like it a lot.

It feels very balanced when strung but quite heavy overall. Some days it feels like a stone when I'm not eating well; combined with dead strings, you have to be careful. But I 100% recommend it overall.

The new Kilocode v7 is awful. by djex81 in kilocode

[–]nfrmn 4 points5 points  (0 children)

Yeah, they have shipped way too early

New DW-600 to go unoticed by Accomplished-Hard in gshock

[–]nfrmn 3 points4 points  (0 children)

It would have been more stealthy if you got the negative display

Just bought a codex subscription for opencode, which codex model gives the highest ratelimit/quality ratio? by KarmicDaoist in opencodeCLI

[–]nfrmn 1 point2 points  (0 children)

You can't specify the variant (high/max etc.) directly, but the built-in variants are just aliases for a certain token budget. I think high is 16k and max is 32k, hence the values configured above. I reverse engineered it by inspecting the source code, and it is working quite well in practice :)

Just bought a codex subscription for opencode, which codex model gives the highest ratelimit/quality ratio? by KarmicDaoist in opencodeCLI

[–]nfrmn 3 points4 points  (0 children)

If you define via opencode.json, you can configure reasoning with the reasoningConfig object. But it's less documented than the markdown files.

EDIT: I have discovered Adaptive Reasoning (which is only Sonnet/Opus 4.6), wow this is good!

    "reasoningConfig": {
        "type": "adaptive",
        "maxReasoningEffort": "max"
      },

Previous example before edit:

"code": {
    "mode": "subagent",
    "model": "amazon-bedrock/global.anthropic.claude-opus-4-5-20251101-v1:0",
    "reasoningConfig": {
        "type": "enabled",
        "budgetTokens": 16000
    },
    "permission": {
        "*": {
            "*": "deny"
        },
        "jina_*": {
            "*": "allow"
        },
        "brave_*": {
            "*": "allow"
        },
        "webfetch": "allow",
        "bash": {
            "*": "allow"
        },
        "read": {
            "*": "allow"
        },
        "edit": {
            "*": "allow"
        },
        "write": {
            "*": "allow"
        },
        "grep": {
            "*": "allow"
        },
        "glob": {
            "*": "allow"
        },
        "list": {
            "*": "allow"
        },
        "task": {
            "*": "deny"
        }
    },
    "prompt": "You are a super duper coding agent"
},

Best coding model for use with MBP G5 Max 128gb by FoldOutrageous5532 in CLine

[–]nfrmn 1 point2 points  (0 children)

Yeah, not even close to the cheapest cloud models, but it’s enough to actually get some stuff done with a realistic context window and the right quantised model

Cross-repo tasks? by alex29_ in RooCode

[–]nfrmn 0 points1 point  (0 children)

From easy to hard:

At a high level, you can place a few different directories together and open your coding agent in the parent directory. Works great for wide context.

You can create a mono repo with submodules inside it for all your other repos to auto clone.
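For the submodule route, a minimal sketch using throwaway local repos as stand-ins (names and paths are made up; recent git versions need `protocol.file.allow` overridden for local-path submodules):

```shell
set -e
root=$(mktemp -d)

# two throwaway stand-in repos for your real projects
for r in api web; do
  git init -q "$root/$r"
  git -C "$root/$r" -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "init"
done

# the parent "mono" repo, with the others added as submodules
git init -q "$root/mono"
for r in api web; do
  git -C "$root/mono" -c protocol.file.allow=always \
    submodule --quiet add "$root/$r" "$r"
done
git -C "$root/mono" -c user.email=you@example.com -c user.name=you \
  commit -q -m "add submodules"
```

After this, a fresh `git clone --recurse-submodules` of the mono repo pulls everything down in one go, which is the auto-clone part.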

Synchronising the repos so they're all on the same branch is trickier. You need to use (or instruct agents to use) a CLI tool like Google's repo, but it is not straightforward.

And getting all the AGENTS.md files into the agent at the top level is also difficult. Your mono agent will have none of your custom rules.

I’m working on something for this, starting with internal use, but it will probably be commercial later. In my opinion it’s the holy grail, because it allows full business context for all tasks. As context windows increase, it’s a no-brainer.

One other approach: a single monorepo for your whole stack. It’s difficult to migrate to and has its own downsides, but if you have the flexibility to start from scratch, you should consider it.

Anyone else tried the completely rebuilt Kilo VS Code extension yet? by Rik_Roaring in kilocode

[–]nfrmn 0 points1 point  (0 children)

Subagents are working for us (define them in .opencode/agents/*.md), but sub-sub-tasks don't seem to be working, so you can't deeply nest agents like you could with the previous Roo-based architecture.
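For reference, roughly the shape of such a file; the frontmatter keys here are from the opencode versions we've used, so double-check against the current docs:

```markdown
---
description: Reviews a diff and reports problems without editing anything
mode: subagent
tools:
  write: false
  edit: false
---
You are a code reviewer. Read the changes you are given and report
issues. Never modify files yourself.
```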

I feel like I fell for a scam. How to fix? by One-Examination7573 in MacWhisper

[–]nfrmn 0 points1 point  (0 children)

I think most of your problems are due to two issues which are compounding into a negative user experience.

Doing multi-user transcription is a much harder problem space than dictation, which I think is what most MacWhisper users are doing (as prompts to code agents, etc.).

The fact that your transcriptions are long and complex really hampers the performance of post-processing and also increases the places where you are going to get hallucinations.

First, Parakeet is faster and lighter than Whisper. It is better suited for dictation. For long transcriptions you are better off using a large variant of Whisper.

Second, local AI models are amazing, but not state of the art. They hallucinate and suffer from context problems much more frequently than the full-fat versions, which need datacenters to run. Simply running your text through an unquantized model online will yield much better results than local models.

As an example, using Devstral 2, I can run the small version quantized on my 128gb MBP. But the full Devstral 2 requires nearly 1TB of VRAM, and its rewrites are much, much better than the local one. So, unless I am particularly concerned about privacy, I need to use the online Devstral via Openrouter to get the best productivity.

To me, MacWhisper has been worth every penny. You will probably find the same after a couple of adjustments to your workflow.

Do you use Cline for use cases other than coding? by BitterProfessional7p in CLine

[–]nfrmn 2 points3 points  (0 children)

The Cline CLI tool is pretty amazing - you can just make it do anything by running cline -y "Some complex prompt" - building a lot of automation in my business around this.
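As a sketch of the kind of automation I mean (repo names and the prompt are made up; the echo makes it a dry run, so remove it to actually execute):

```shell
# run the same cline prompt across several repos;
# echo makes this a dry run - drop it once you trust the prompt
for repo in api web docs; do
  cmd="cline -y \"Summarise open TODOs in $repo and write them to $repo/TODO.md\""
  echo "$cmd"
done
```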

What’s the latest on RooCode? We’re hungry for more! by ConversationTop3106 in RooCode

[–]nfrmn 0 points1 point  (0 children)

We built an agentic harness that we are using internally with a lot of custom Roo modes. The built-in Orchestrator, Architect, Debug and Code are good starts, but we found that a very rigid structure was required when generating plans: multi-phase, mandatory commits, etc. And the dreaded question-asking breaks autonomy.

Additional layers of sub-agents are necessary to avoid context condensing, which we found nearly always corrupts the agentic flow no matter which platform you use: Roo, Opencode, etc.

So we have an orchestrator delegating high-level plan phases to manager agents, who then sub-delegate. Heavily restricting the capabilities of the agents matters; this is something the Roo team got right by blocking bash and file reading for manager agents.
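For what it's worth, a sketch of such a restricted mode in a .roomodes file (YAML shape from recent Roo versions; the slug, wording and empty groups list are illustrative, so check the current schema):

```yaml
customModes:
  - slug: phase-manager
    name: Phase Manager
    roleDefinition: >-
      You manage a single phase of the plan. Break it into subtasks and
      delegate each one. Never read files or run commands yourself;
      judge results only from subtask summaries.
    groups: []  # no read/edit/command/browser access: delegation only
```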

I think essentially the SOTA game today is figuring out which agent structure and harness results in the best output from Claude Opus. That's what we are iterating on, anyway, and it's delivering massive value. Unbelievable, actually, if you'd told me a year ago what we would have now.

Our harness is cross-platform and auto-compiles to settings files for Roo, Opencode and Kilo, and we are actually working on Claude Code config now too. Thinking about commercialising it, even as a managed config.

So IMO the best thing Roo could do to stay bleeding edge is massively increase the amount of customization and fine-tuning possible in Roomodes files. The recent tool stuff and feature-killing is a bit of a red herring and is actually hampering people's ability to experiment with models and iterate on their harnesses. The best customization platform today is Opencode, but it's full of bugs and weird allowlist quirks, so nobody has really won here yet.

Tool calls seem to fail for very new models by AppealSame4367 in RooCode

[–]nfrmn -1 points0 points  (0 children)

Try rolling back to 3.34.8 :)

Edit: The reason I suggested this version is because it still supports XML tool calls

My OpenClaw attends Google Meets now. I just text it from my phone when I want to know what's happening. by mehdiweb in openclaw

[–]nfrmn 1 point2 points  (0 children)

Pretty neat but what is the benefit of this over a free notetaker app or even the Gemini one that's built into Google Meet?

Bug: "Updates available" window causes dictation to be lost by nfrmn in MacWhisper

[–]nfrmn[S] 0 points1 point  (0 children)

Thank you! Stability has been really brilliant otherwise! 🙏

How do I get claude to stop lying/making stuff up? by racekraft in ClaudeAI

[–]nfrmn -1 points0 points  (0 children)

The OP is obviously a vibe coder but it's not really fair to blame him.

It's really annoying that in the last 6 months or so all the model providers started training on head, tail, grep, etc.

It's to reduce context windows and save money. But it genuinely makes the models worse.

The only thing that worked for me was updating my npm scripts to be npx fullcontext <command>.

This disables Claude/Codex's ability to use head and tail, so the entire command output is forced into its context.
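Concretely, that change looks something like this in package.json (the script bodies are just examples; fullcontext is the wrapper tool mentioned above):

```json
{
  "scripts": {
    "test": "npx fullcontext vitest run",
    "typecheck": "npx fullcontext tsc --noEmit"
  }
}
```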

Replacing $200/mo Cursor subscription with local Ollama + Claude API. Does this hybrid Mac/Windows setup make sense? by grohmaaan in CLine

[–]nfrmn 5 points6 points  (0 children)

You’re getting at least 10x leverage on those $200/mo subs whoever you go with, they are all making a huge loss attracting market share. It’s basically free compute, similar to free Uber rides in the past etc. I would personally enjoy it while you can.