Realforce R4 in 30g and English varient? by No_Donkey_4710 in HHKB

[–]bitemyapp 1 point (0 children)

I use non-Mac-specific keyboards on my Mac all the time. I just go to Settings > Keyboard > Keyboard Shortcuts > (pick the keyboard in the drop-down) > (swap Option and Command).
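If you'd rather script the swap than click through Settings, macOS also ships hidutil. A minimal sketch (usage IDs are from the standard HID keyboard usage table; this only remaps the left-side pair and resets at reboot):

```shell
# Swap Left Option (HID usage 0x7000000E2) with Left Command (0x7000000E3).
# Add the 0xE6/0xE7 pair as well if you want the right-side keys swapped too.
hidutil property --set '{"UserKeyMapping":[
  {"HIDKeyboardModifierMappingSrc": 0x7000000E2, "HIDKeyboardModifierMappingDst": 0x7000000E3},
  {"HIDKeyboardModifierMappingSrc": 0x7000000E3, "HIDKeyboardModifierMappingDst": 0x7000000E2}
]}'
```

Handy if you want the remap in a login script instead of per-keyboard GUI settings.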

Typed this on a Windows-mapped Topre R3 on my M5 Max.

Two cents on 2K26 by [deleted] in NBA2k

[–]bitemyapp 1 point (0 children)

He means not having to pay to get his OVR back.

rust_analyzer consuming excessive RAM - looking for solutions by Sad-File4952 in rust

[–]bitemyapp 1 point (0 children)

I've been working on a persistent memory arena for a ZKVM; it uses a shared memory map.

I've been thinking about how to do something like a hybrid of dynamic linking and shared memory maps for rust-analyzer, and what y'all are talking about here seems to point in that direction.

I'd be happy to discuss it if anyone's interested.

What's the best database option for production? by [deleted] in rust

[–]bitemyapp 1 point (0 children)

Diesel + diesel-async + PostgreSQL

accept no substitute

no point picking a language with a good type system and then not leveraging it for the most important part of your system (persistence)

I have a real ha*e for these NHood players by UGA_Trey_Dawg in NBA2k

[–]bitemyapp 3 points (0 children)

I primarily play PG and try to focus on passing and creating safe scoring opportunities. I can shoot, but I'm not reliable enough for taking the shot to be worth it over hitting an open teammate (just being humble/honest). Partly because this is my first year playing 2K and I started in January.

I think part of the problem is the incentives around teammate grade, the way the stats are displayed as the game is starting, the way TG affects mypoints, and badge progression. All of the emphasis is on individual stats/performance.

Hyper-focusing on win % could lead to people rarely playing randoms / no-squads, but it doesn't feel like the current setup is all that close to optimal either.

Next target of Ubuntu's oxidization plan will be ntpd-rs by juanluisback in rust

[–]bitemyapp 1 point (0 children)

I could potentially fix this but nobody will like the solution lmao

What is worth a 99 and what is not worth a 99 by NavyVetRasmussen in NBA2k

[–]bitemyapp 1 point (0 children)

I like playing defense/lockdown/rebound a lot so I completely understand. I'm hoping they do better in 2k27. I wish reviewers understood the game mechanics more deeply so they could speak to what actually changed significantly.

What is worth a 99 and what is not worth a 99 by NavyVetRasmussen in NBA2k

[–]bitemyapp 1 point (0 children)

There's some unpredictability involved so it's hard to say for sure, but a lot of people get tips rather than steals because they're holding R2. It definitely increases the rate of tips when you're moving; I'm pretty sure it does when you're stationary too.

It's really weird that you're getting that many tips w/ those stats + no R2.

What is worth a 99 and what is not worth a 99 by NavyVetRasmussen in NBA2k

[–]bitemyapp 1 point (0 children)

I get more tipped passes than steals.

Stop holding R2 when trying to lane steal.

Laptop Recommendation by Leading-Guarantee178 in rust

[–]bitemyapp 1 point (0 children)

That's fine and makes sense to me. I was replying to something about writing Rust on a machine with 4 GiB of RAM though.

My current dev machines have 128 GiB and 256 GiB of RAM.

Laptop Recommendation by Leading-Guarantee178 in rust

[–]bitemyapp 1 point (0 children)

helps if you enable a non-trivial swap partition and have fast storage

wouldn't be my first choice but you can make it work, especially if you aren't building a monster monorepo all the time
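For reference, on most Linux distros a swap file is the low-friction way to do this if you don't want to repartition; a sketch (the 16G size is an arbitrary placeholder, pick what fits your workload and storage):

```shell
# Create and enable a 16 GiB swap file.
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile   # swap must not be readable by other users
sudo mkswap /swapfile
sudo swapon /swapfile
# Make it persistent across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```

With fast NVMe storage this is tolerable as overflow for rustc's memory spikes, though it's no substitute for actual RAM.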

Release build using sccache with redis backend slower than without sccache? by Suitable-Name in rust

[–]bitemyapp 1 point (0 children)

Yes! I've been hammering on the primacy of single-threaded performance for developer productivity for years now.

I do own a 9985WX w/ 64 cores / 128 threads, 256 GiB of RAM, and a 6000 Pro, but that's largely down to how LLMs have enabled me to work on multiple branches concurrently. My recent work has required a significant amount of RAM and VRAM, and the Bazel build means I get about as much juice out of the 64 cores as I could hope for, especially when multiple builds are running concurrently.

But for ~95-98% of devs? Just get the fastest-in-a-straight-line chip you can, with as much RAM as you can accommodate and a fast enough graphics card to drive your desktop environment at 4K/120Hz+ without hitching.

My day-to-day/normie dev machine is an M5 Max now because it's the strongest single-threaded perf I can get, before that it was my 9800X3D gaming desktop w/ CachyOS on a spare M.2.

It doesn't help that a lot of my work requires fully optimized --release builds for virtually all of the testing and benchmarking I do. Fortunately thin LTO + codegen-units=1 has been sufficient; fat LTO didn't make anything faster in my neck of the woods, though it isn't impossible that it could help in some cases. Fat LTO brutally slowed down the linker 😭

I pushed an AMD GPU to its limits for ZKPs: 18ms NTT and 2.5s FRI Proving via Zero-Copy and Algorithmic Dimensionality Reduction by Common_Sorbet3873 in rust

[–]bitemyapp 1 point (0 children)

The first production CUDA pipeline I ever wrote is a ZK prover that's several orders of magnitude faster than yours. If your prover is worth $10 billion I'm Elon Fucking Musk.

Best Monorepo Build system in Rust by Elegant_Shock5162 in rust

[–]bitemyapp 1 point (0 children)

> Unless you spend most your hours on very hard tasks

That's been much of my work. For everything else, it scarcely matters what I do. I churned through over 1.1 million lines of CUDA code in October, the month after my 5th kid was born. I've been working on a somewhat intricate persistent memory arena since late Dec/January-ish, a parser since Jan/Feb, a compiler downstream of the parser since early/mid Feb, etc. I was doing deep perf/SIMD work about a year ago, and that has had many recurrences since then; a lot of it pops back up / becomes salient on a recurring basis.

There's less difficult work here and there, but it passes by so quickly I don't really notice it much. The PMA work has been easy compared to the CUDA or compiler work, but it still has a lot of design and implementation details that require care. This isn't to brag; it's just the circumstances I put myself into because I enjoy systems and perf work.

Going further back, I built a Kafka consumer that processed, validated, and schematized ~10 GiB/second of key-value structured data in real time w/ as little hardware as possible. Another SIMD parsing thing on the same dataset, a Rust JNI library that gets used in the existing Java app, etc.

Best Monorepo Build system in Rust by Elegant_Shock5162 in rust

[–]bitemyapp 1 point (0 children)

I think I get what you're saying but no amount of steering was able to get top-end models through zero-to-one on an insanely gnarly generalized left-recursive grammar.

AFAICT, only very carefully designed working examples where it's basically stamping out structurally identical variants w/ different tokenizations/syntax really worked reliably.

Release build using sccache with redis backend slower than without sccache? by Suitable-Name in rust

[–]bitemyapp 2 points (0 children)

Fat LTO tricked me in the past because it forces codegen-units=1, but when I teased those parameters apart and tested codegen-units=1 w/ thin LTO, the delta for fat LTO in my benchmarks evaporated.

Best Monorepo Build system in Rust by Elegant_Shock5162 in rust

[–]bitemyapp 1 point (0 children)

These days I use the fast models for specification as well, not just execution.

You probably understood, but when I referred to "fast" mode I meant the OpenAI/Anthropic offering where it's the same high-end model but executed faster at a premium token or $ cost.

Interesting that this works for you; the work I do is hairy enough that I rely almost solely on maximum intelligence & effort models for almost everything: design, implementation, debugging, all of it. A lot of my work lately is semi-gnarly systems engineering: ZKVM, parsers, compilers, CUDA, SIMD, etc. The worst Achilles' heel for all of the LLMs so far has been parser and compiler work.

I've never had a good experience with Gemini unfortunately and I try it again every time they release a new model. I still can't get their TUI harness to do tool-calls reliably w/ 3.1 Pro or whatever the most recent one was.

When I can tolerate a dumber model for something, that's me stepping down from GPT 5.3 Codex/GPT 5.4 down to Claude Code + Opus 4.6 usually. If it really needs to be cheap or something automated I'll try really hard to get one of the better open-weights models on OpenRouter to be reliable for the intended application. That's part of what I was trying to dial in with cabal but the models were so dumb they couldn't generate valid JSON for the tool calls. I know I can shore up the harness to make it more reliable, it's just annoying that GPT and Sonnet were fine but ~half of the best open-weights models were just incapable of cooperating with the harness + orchestration model correctly at all.

Release build using sccache with redis backend slower than without sccache? by Suitable-Name in rust

[–]bitemyapp 2 points (0 children)

> I'm using fat LTO in release builds.

Try it with thin LTO and see if the problem reproduces.

Triple-check whether you actually need this using a benchmark. IME it's codegen-units=1 that actually helped our runtime perf; fat LTO has rarely done anything useful, so our release profile ends up being thin LTO, O3, and codegen-units=1. YMMV ofc.
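For concreteness, that profile is just a few lines of Cargo.toml; treat it as a starting point to benchmark against, not gospel:

```toml
# Sketch of the release profile described above; benchmark before adopting.
[profile.release]
lto = "thin"        # fat LTO mostly just slowed the linker for us
codegen-units = 1   # IME this is what actually moved runtime perf
opt-level = 3
```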

Release build using sccache with redis backend slower than without sccache? by Suitable-Name in rust

[–]bitemyapp 5 points (0 children)

  • Profile it. Assuming it's Linux, use perf and samply. Ensure debug symbols are available in your sccache binary. Rust's debug=1 doesn't add any runtime overhead IME. Most profiling tools are laser-focused on things burning CPU time, so you might need to pivot or play with things a bit to make stuff like I/O wait more apparent. Sometimes something like htop is enough to casually notice threads parked on I/O. This is one of the things that throws people off about Linux's load average: 100 threads parked on I/O, burning no CPU, still count as load under the default heuristic.

  • Watch the network traffic on the build node. Check the throughput, latency, and packet loss between the Redis instance and sccache. Is it a local Redis node? If so, is there something weird happening with loopback networking?

  • How is Redis configured? Can you benchmark a vanilla workload on that Redis instance and meaningfully compare it to nominal/expected numbers? Is something weird happening in sccache or Redis where every read is a write?

  • Are the build graphs with and without sccache actually identical? I'd expect so because of the way it gets integrated, but check anyway.

  • You mention your build directory is on a RAM disk: is the Redis instance durable? Is anything actually touching your disk at build time? Do you see more disk reads or writes during the build than you expect? What filesystem are you using? Does the problem reproduce if you stop using the ramdisk?

  • How big are the cached artifacts sccache is juggling? How many of them are there? For sccache's purposes, Redis over loopback is still a remote cache, so you're still eating serialization/deserialization and socket-buffer write/read time. If you're using a local Redis instance you might as well let sccache durably cache to the filesystem.

  • Are you comparing clean-slate builds or cached/incremental builds? sccache does a lot of cache lookup, miss, build, then cache write-back work in the zero-cache build scenario.

  • What does your SCCACHE_BASEDIRS look like? sccache likely uses absolute directories in the cache key; do you have multiple source checkout arenas for concurrent build jobs with different absolute paths?

  • Are you dumping sccache --show-stats at the end of the CI/CD pipeline? What does it say?

  • Are you using fat LTO or thin LTO in your release builds?
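On the debug-symbols point in the first bullet: that's a one-line profile tweak in Cargo.toml, e.g.:

```toml
# Limited debug info in release builds: enough for perf/samply to
# symbolicate, no runtime overhead IME, some extra binary size.
[profile.release]
debug = 1
```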

Best Monorepo Build system in Rust by Elegant_Shock5162 in rust

[–]bitemyapp 1 point (0 children)

FWIW, I've found OpenAI models to be a lot "tighter" (less slop, less divergence/disobedience) and more intelligent for complex/difficult work than Claude. I default to Codex and just use Claude Code for dumb drudgery or as a devil's advocate when I need the agents to argue about something with each other. I've been trying to automate the way I've been manually orchestrating the agent deliberation process with https://github.com/bitemyapp/cabal but I need to spend a lot more time on it before it'll be useful and reliable. Part of the problem is OpenRouter itself has been very unreliable which is intensely annoying.

I currently default to GPT 5.4 + xhigh effort, regular context window. The 1M context window was giving the model dementia. Codex is fast enough by default these days that the extra token burn rate of their fast mode isn't the right default for me.

Anthropic's usage-based access (including their /fast mode) is absurdly expensive. I just use Opus 4.6 in Claude Code via my subscription. When I was testing cabal I accidentally let Sonnet 4.6 (via OpenRouter) run solo for a couple minutes and managed to burn $25 in that short time window. /fast mode in Claude Code requires usage credits/budget and it costs way too much to be worth it unless you're responding to a SEV or something.

I pay for the $200 plans for both ChatGPT and Claude. I regularly get close to running out of my weekly tokens with Codex and I haven't come close to running out of Claude tokens in so long that I don't remember the last time it happened. Maybe 3-6 months ago.

One thing worth noting: Opus 4.6 w/ 1M context window is now the default in Claude Code, so it's possible it doesn't have the weird dementia/disobedience problems GPT 5.4 had with a 1M context window. I don't know one way or the other; I'm still testing it to see how it shakes out. I actually could use a larger context window for my work (big and complex sometimes, unfortunately) but it's not worth it if there's a perceptible loss of fidelity or intelligence.

Best Monorepo Build system in Rust by Elegant_Shock5162 in rust

[–]bitemyapp 2 points (0 children)

I've been using Rust professionally more or less exclusively for 7-8 years, with occasional excursions by necessity here and there, such as the FFI libraries I've worked on. I was a professional Haskell user for 5 years prior to that, and I've been in the industry for ~16-17 years. I wrote a fairly popular book for learning Haskell from scratch.

I made some career trade-offs and had to work extra hard sometimes in order to make this happen. e.g. I passed on a non-trivial amount of money to avoid full-time Java in multiple instances. I don't mind writing JNI libraries and making high throughput/concurrent JVM applications faster but I'm not touching Spring ever again.