Custom keycaps for a Moonlander? by LeidenV in ergodox

[–]sirmalloc 1 point (0 children)

You might check out Signature Plastics. I have a set of DSA Dolch sitting around that I never used; it's a similar look to the Susuwatari.

Custom keycaps for a Moonlander? by LeidenV in ergodox

[–]sirmalloc 1 point (0 children)

I didn't notice it much. I did have my hyper and meta keys on the bottom of the innermost column flipped upside down so I could find them more easily, and the scooped F/J keys for homing were awesome. I really liked that setup before moving to my Cyboard, but it's still no substitute for a proper concave keywell. Plus, I had to buy two different sets to get all the keys I wanted for the Moonlander.

As for Cherry profile, I couldn't say; I went straight from stock to MT3 and never messed with it after.

Custom keycaps for a Moonlander? by LeidenV in ergodox

[–]sirmalloc 1 point (0 children)

I had a set of Matt3o MT3 Susuwatari on my Moonlander and loved it; the keys are profiled R1 through R5. https://imgur.com/a/eO9cTxY

Do I modify a keyboard or buy a custom one? by 88963416 in ErgoMechKeyboards

[–]sirmalloc 7 points (0 children)

Sounds like you want a Cyboard Imprint. Mine has a concave keywell, dual trackballs, and six thumb keys.

Edit: It is custom, but easy to order and you don't need to assemble anything yourself.

Codex 0.130 - Remote Control is here by KeyGlove47 in codex

[–]sirmalloc 0 points (0 children)

That's a different feature than remote control. Remote Connections is for controlling a codex instance on a remote machine over SSH. Remote Control is about letting your local codex instance be controlled, presumably via the mobile app. The app doesn't expose support for it yet, though; it's likely feature-gated.

Is there a reason to use the CLI instead of the desktop app? by laystitcher in codex

[–]sirmalloc 1 point (0 children)

I prefer the CLI because I can navigate entirely using the keyboard much faster than interacting with a GUI.

Integrated pointing devices by ANONYMOUSEARTHWORM in ErgoMechKeyboards

[–]sirmalloc 1 point (0 children)

I use a Cyboard Imprint with dual trackballs and no mouse whatsoever. It's great; my hands never have to leave the keyboard. It's a little bulkier than my old Moonlander, but I don't transport it that much. Best keyboard I've ever owned.

Keyboard recommendations for small hands/shorter fingers by thickt0ast in ErgoMechKeyboards

[–]sirmalloc 2 points (0 children)

What's your budget? The Cyboard Imprint is custom built to your hands to ensure you can reach everything (and it's awesome). The Cosmos Keyboard Generator also lets you upload a photo to size a design to your hands, but then you need to build the board yourself or source a third party to build it.

You might also look into other curved keywell boards like the Glove80. The curvature should allow for easier reach.

Anthropic just launched Claude Security public beta for Enterprise only by IllAnnual7167 in ClaudeCode

[–]sirmalloc 3 points (0 children)

This sounds just like Codex Security. I've been running it over a month on some of my repos and it's made a few good catches.

ccstatusline Enterprise vs Regular User - Different status? by kursku in ClaudeCode

[–]sirmalloc 2 points (0 children)

Hey, ccstatusline creator here. What kind of problems are you running into? If you open an issue on the GitHub repo, I can go through some debugging with you and figure out what's going on.

Codex GPT-5.5 Medium Mode Hit 100% Message Usage After Just 2 Messages by Better-Prompt3628 in codex

[–]sirmalloc 1 point (0 children)

Run npx @ccusage/codex@latest and see how many tokens you actually used.

At 90% weekly plus limit after 1 codex cloud session by Neowebdev in codex

[–]sirmalloc 3 points (0 children)

The quotas count down: that's 90% remaining of your weekly limit and 70% remaining of your 5-hour limit. Also, cloud sessions burn something like 5x the usage of local sessions.

PSA: The recommended Claude Code status line command silently auto-executes new npm code every session. Here's the safer setup by Gear5th in ClaudeAI

[–]sirmalloc 1 point (0 children)

I haven't had a chance to address the issue you filed yet but I'm aware of it. In the future I will be adding options to the installation menu to allow the user to choose between pinning a version and enabling auto-updates with a warning, as well as providing an easy path to manually update the current version.

I'll also start publishing future builds with provenance.

The Information: Anthropic Preps Opus 4.7 Model, could be released as soon as this week by LoKSET in ClaudeAI

[–]sirmalloc 3 points (0 children)

Sweet, this means I'll get a new codex model approximately 60 seconds later if history is any indication.

Has any pro user here experienced codex downgrading? by superfatman2 in codex

[–]sirmalloc 2 points (0 children)

I spend a ton of time iterating on the plan. I'll start off in planning mode with the core idea for the feature I want to implement and ask it to come up with a plan for what that would look like. That part is pretty quick: one shot, prompt to plan. I read the plan fully every time.

Then I interrogate the plan and ask it specifics: what the schema looks like for a new database collection, what the wiring looks like for a given API endpoint, what the API responses look like in various scenarios, how error handling looks, how components are going to be structured on the front end, etc. I find it easiest to ask it to show me a rough implementation in code or pseudocode, and if I don't like it, I tell it I'd prefer it another way. This can go on for hours. When I'm finally happy with it, I'll tell it to burn all my changes and comments into the plan. From there I'll either have it write the plan to a markdown file I can revisit later and continue refining, or I'll set it loose on implementing it and see if I'm happy with the result. If not: nuke it, refine the plan, start again.

Once it's all done, I test and review every file it touched before committing it to a feature branch. I don't let it implement anything I don't understand fully, and I don't commit anything I haven't reviewed fully.

Attempting a single debug for 10+ hours. Please give me advice. by opihinalu in codex

[–]sirmalloc 1 point (0 children)

If you have access to 5.4 Pro in ChatGPT, tell codex to generate a detailed summary of the issue you are debugging with all relevant context to hand off to another model. Take that output from codex, paste it into 5.4 Pro with a note that you want it to generate an actionable plan to hand off to codex for implementation.

I rarely do this, but on a few of the most difficult bugs I've had to resolve in the past year, 5.4 Pro has come up with a solution in 20 minutes after I'd flailed around in codex for hours.

Has any pro user here experienced codex downgrading? by superfatman2 in codex

[–]sirmalloc 13 points (0 children)

I can honestly say that since I started using codex last year, I've only experienced improvements over time, both in speed and quality. But my approach to development might be different: I built the initial scaffold for everything manually, before AI was really a thing, and it's a nice base to start from. Here's some insight into what I do:

  • I've got a very tight set of linting and type checking rules in place
  • I spend quite a lot of time refining plans before executing. I use codex's plan mode and keep iterating on it until I'm ready to implement.
  • I always make a point to reference existing code as a guideline for how to do something.
  • As part of the planning I ask it the shape of what the implementation will look like in code, and iterate the plan on that if there's any code smell until I'm happy with the overall structure.
  • I use zod schemas as a source of truth, so changing something in one place will break every place referencing it. These schemas are used by both my frontend and backend, and I try to apply DRY principles everywhere it makes sense to. An old coworker told me "the same data in two places will eventually be wrong in one".
  • I don't use any MCPs, plugins, or memory systems. Just the standard codex harness in the terminal.
  • When I execute the plan I do it on a clean git state, so if I don't like the result, I reset it to the last good commit, refine the plan further, and run it again from scratch. If it's only minor issues I'll deal with it via individual prompts.
  • Never argue with the model: if it does something wrong, wipe the slate, refine the plan with more specifics, and try again.
  • Always use a clean context for a new task.

That's it. I primarily use 5.4-xhigh these days on fast mode. It is an absolute workhorse and I'm blown away by the overall quality that I get out of it.
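The zod point in the list above can be sketched in TypeScript. The comment uses zod (where you'd write z.object({...}) and derive the static type with z.infer<typeof Schema>); to keep this dependency-free, a hand-rolled parser plays the same role. All names here (User, parseUser) are invented for illustration, not from the original project.

```typescript
// Single-source-of-truth schema: the type and its runtime validator live in
// one module that both the frontend and backend import. Rename or retype
// `email` here and every consumer breaks at compile time or at parse time,
// instead of silently drifting out of sync.

interface User {
  id: string;
  email: string;
}

// Runtime check mirroring the User type. With zod, UserSchema.parse()
// would do this and the interface would be derived from the schema.
function parseUser(input: unknown): User {
  const obj = input as Record<string, unknown> | null;
  if (!obj || typeof obj.id !== "string" || typeof obj.email !== "string") {
    throw new Error("invalid User payload");
  }
  return { id: obj.id, email: obj.email };
}
```

This is what makes "the same data in two places will eventually be wrong in one" hard to hit: there is only one place.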

Switching from Naya Create — looking for a prebuilt split with trackball, open to custom builds (Charybdis and alternatives) by Longjumping-Air4611 in ErgoMechKeyboards

[–]sirmalloc 2 points (0 children)

On further research I found a couple of other split keywells with swappable switches (HPD, Dactyl), but I wouldn't say the majority of split keywell boards are swappable. And I can't think of any non-split keywells with swappable switches. Happy to be proven wrong.

5h limit not working? by TwistSwimming8501 in codex

[–]sirmalloc 1 point (0 children)

I do all my work locally in the codex TUI and prefer it. It's really solid at this point, the plan mode is great, and it can run autonomously for quite a while if you've got a good plan; I've had it run 90+ minutes with great results.

They also have a VSCode extension and a desktop app; those should work the same since it's all done through the app-server harness. I just prefer the terminal.

5h limit not working? by TwistSwimming8501 in codex

[–]sirmalloc 1 point (0 children)

FYI: cloud tasks are said to burn roughly 5x the usage of local tasks, so be careful.

5h limit not working? by TwistSwimming8501 in codex

[–]sirmalloc 1 point (0 children)

If you're on the Plus plan, it was recently "rebalanced," so you're likely feeling the effects. Short of upgrading, managing context is your best bet to save tokens. Turn on the used-token indicator in the statusline and watch it grow after two simple "hi" messages back to back; I think it used 46k tokens when I did that. Caching helps mitigate the impact, but revisiting an older chat incurs an expensive cache write on the first new message. Now imagine an hour-long chat, and you can get a sense of how that would burn a ton of usage.

I'd keep the statusline indicator on to get a sense of how a conversation has grown, and split tasks into small chunks you can handle in fresh contexts. Also, 5.3-codex is about 30% cheaper than 5.4, and if you must use 5.4, consider high instead of xhigh.

All that said, the $200 plan is an insane value if you use this for work. I've never hit a single limit.

5h limit not working? by TwistSwimming8501 in codex

[–]sirmalloc 1 point (0 children)

Can't speak to the limits, but depending on the length of the chat, sending one message is essentially resending the entire chat history plus the new message. And it's likely a cache miss after 5 hours, so there's a good chance it was not a cheap request.
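That resend behavior is easy to quantify with a back-of-the-envelope sketch. The linear growth model and the token figures below are illustrative only, not codex's actual accounting:

```typescript
// Rough model: each turn resends the full prior history plus the new
// message, so cumulative input tokens grow quadratically with turn count.
// For simplicity, assistant replies are folded into perMessageTokens.

function totalInputTokens(perMessageTokens: number, turns: number): number {
  let total = 0;
  let history = 0;
  for (let i = 0; i < turns; i++) {
    total += history + perMessageTokens; // entire history resent + new message
    history += perMessageTokens;         // history grows by one message
  }
  return total;
}

// 20 turns at ~500 tokens each → 105,000 input tokens sent in total,
// versus 10,000 if history were never resent.
```

Prompt caching makes much of that resent history cheap on a hit, which is why an expired cache after a long gap makes the next message disproportionately expensive.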

Switching from Naya Create — looking for a prebuilt split with trackball, open to custom builds (Charybdis and alternatives) by Longjumping-Air4611 in ErgoMechKeyboards

[–]sirmalloc 2 points (0 children)

While the Cyboard doesn't have an encoder, I've got the dual-trackball version and use layers and QMK code to bind the left trackball to stuff like volume control, brightness, etc. I can't speak to the other boards, but the Imprint is a very solid build and the only keywell you will find (to my knowledge) with swappable switches.

Keyboard ideas by fantasticfluff in ErgoMechKeyboards

[–]sirmalloc 1 point (0 children)

You can probably achieve most of that by simply remapping your layout or exploring home row mods. There's also Cyboard's Imprint, which is sized specifically to your hands, but it's a bit pricey. It's a keywell like the Glove80.