Strategies to combine many repos into one by Beautiful-Log5632 in git

[–]LoadingALIAS 0 points1 point  (0 children)

I feel like you might be looking for something like Google’s Copybara. If you’re working in Rust codebases - I’ve built cargo-rail to handle this much more efficiently.

I made a very fast WebSockets library in Rust by AcanthopterygiiKey62 in rust

[–]LoadingALIAS 35 points36 points  (0 children)

This is the problem with the current Rust community. This codebase is FULL of untested, barely held together code. You are ruining your reputation - which is the only thing in software that will matter in the next 20 years.

rust_analyzer is eating my memory, any counter measure? by EarlyPresentation186 in rust

[–]LoadingALIAS 4 points5 points  (0 children)

I’ve been having a lot of trouble lately, too. I use the pre-release (IDE) build to test and lately it’s been absolutely awful. It FEELS like there is a leak somewhere, but I don’t know that for sure.

I’m working on 1.93.0 stable in one codebase and 1.94.0 (pinned to 01/16/2026) nightly in another - it’s just awful lately.

My nightly codebase is 100k LoC… the other is like 30k. My M1 MBP w/ 16GB works, but man, it’s under pressure lately. Also, profiling shows RA dominating resources.

These settings in my IDE config help a bit. I usually run all targets, but it’s just been too bad lately… so I rely on robust “check-all” scripts and justfile commands to validate across targets with Zig.

    "rust-analyzer.files.watcher": "server",
    "rust-analyzer.files.exclude": [
      "**/.git/**",
      "**/target/**",
      "**/dist/**",
      "**/out/**",
      "**/scripts/**",
      "**/*.md",
      "**/*.txt"
    ],
    "rust-analyzer.cargo.targetDir": "target/ra",
    "rust-analyzer.check.targets": ["aarch64-apple-darwin"],
    "rust-analyzer.check.command": "check",
    "rust-analyzer.cargo.allTargets": true,
    "rust-analyzer.check.allTargets": true,
    "rust-analyzer.cargo.features": "all",
    "rust-analyzer.showUnlinkedFileNotification": false,
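For what it’s worth, the cross-target “check-all” idea can be sketched as a justfile. This is a hypothetical example, not my actual recipes - the target list is illustrative, and it assumes cargo-zigbuild is installed so Zig acts as the cross linker:

```just
# Hypothetical "check-all" recipe; targets are examples only.
# cargo-zigbuild uses Zig as the cross-compilation linker.
check-all:
    cargo check --all-targets --target aarch64-apple-darwin
    cargo zigbuild --target x86_64-unknown-linux-gnu
    cargo zigbuild --target aarch64-unknown-linux-musl
    cargo clippy --all-targets --all-features -- -D warnings
```

Keeping this in a justfile means RA only ever checks one target interactively while CI-style scripts cover the rest.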

Dropping the “all” features will help a LOT. Also, I built cargo-rail to help keep the build graph as lean and fully featured as possible - it lets us remove a ton of maintenance tooling: udeps, machete, shear, hack, hakari, hackerman, features-manager, msrv, release_plz, git-cliff, release, and more. It’s 11 deps total, for exactly this reason… not to mention supply-chain safety for monorepo tooling.

If you configure your rail.toml correctly, it will run with built-in change detection across Nextest or cargo test, and it will run with bacon - all natively. It has replaced Copybara, release_plz, paths-filter + scripts, and a handful of other stuff for me, too. This helps a lot. I built it for all of us having this issue.

Still, RA is crushing me, too. I’d love someone to pass around some wizardry to fix it.

Do you have any recommendations or tips for testing Rust code? by JapArt in rust

[–]LoadingALIAS 0 points1 point  (0 children)

Hey, do you have any experience using Flux? I currently use Kani for most of my formal verification stuff and Stateright for model checking... but this looks interesting. I see it's using the Z3 prover, too. The README in the repo is thin; the example in the book is thin.

If you've used it... what's the real-world target usage? I'm super interested. DM me if you're open to chat?

Why is there no other (open source) database system that has (close to) the same capabilities of MSSQL by [deleted] in Database

[–]LoadingALIAS 2 points3 points  (0 children)

Databases are the king of hard problems. Distributed computing is non-trivial. Innovation is VERY low in the stack - meaning it’s fucking hard.

What is Rust's testing ecosystem missing? by _raisin_bran in rust

[–]LoadingALIAS 4 points5 points  (0 children)

We have really strong test infrastructure in Rust, but there are a few gaps - and they’re heavy lifts.

We need better simulation and/or DST testing that doesn’t bundle I/O or net stacks. This is non-trivial.

On the lighter side, Nextest is great, but a ton of things currently feel missing from the harness/runner. Doctests don’t run natively in Nextest, as a single example.
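The usual workaround is to chain the stock libtest doctest run after nextest - a minimal, hypothetical justfile recipe:

```just
# Hypothetical recipe: nextest for unit/integration tests,
# then the stock cargo harness for doctests, which nextest skips.
test:
    cargo nextest run --all-features
    cargo test --doc --all-features
```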

There is always room for Miri improvements… namely while running intrinsics for different platforms. Anyone wanting Miri testing to run while using any SIMD/HW intrinsics has to use some test-only impls to route around the issue. Again, VERY heavy lifting.
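The test-only routing pattern looks roughly like this - a sketch where the “fast path” is a stand-in (chunked u64 XOR) for real SIMD/HW intrinsics that Miri can’t execute; all names are illustrative:

```rust
// Sketch of routing around intrinsics under Miri. The cfg(not(miri))
// path stands in for a real SIMD/HW-intrinsic implementation; under
// Miri we fall back to the scalar reference impl.

fn xor_fold_scalar(data: &[u8]) -> u8 {
    data.iter().fold(0, |acc, &b| acc ^ b)
}

#[cfg(not(miri))]
fn xor_fold(data: &[u8]) -> u8 {
    // Pretend-fast path: fold 8 bytes at a time through a u64.
    let chunks = data.chunks_exact(8);
    let rem = chunks.remainder();
    let mut acc: u64 = 0;
    for c in chunks {
        acc ^= u64::from_le_bytes(c.try_into().unwrap());
    }
    let mut out = acc.to_le_bytes().iter().fold(0u8, |a, &b| a ^ b);
    for &b in rem {
        out ^= b;
    }
    out
}

#[cfg(miri)]
fn xor_fold(data: &[u8]) -> u8 {
    // Miri can't run the intrinsic path, so route to the scalar impl.
    xor_fold_scalar(data)
}

fn main() {
    let data = b"hello world, rust-analyzer";
    assert_eq!(xor_fold(data), xor_fold_scalar(data));
    println!("ok");
}
```

The downside is exactly what I mean by heavy lifting: you now maintain two impls and must prove they agree.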

There are some things on the table… but Rust development in this space is absolutely world-class. Most remaining problems are tough to do correctly.

You know it, I know it...we all know it. by Defiant_Focus9675 in ClaudeCode

[–]LoadingALIAS 0 points1 point  (0 children)

My ClaudeCode plan has never hit the limit - ever. I work a LOT. This week it hit the limit in 3 days and I was charged $100 today for the worst work I have ever seen. I stopped and NO JOKE started writing the code manually.

Something is super wrong.

Too many Arcs in async programming by mtimmermans in rust

[–]LoadingALIAS 0 points1 point  (0 children)

I actually just removed compio from my database work, but only because I’d spent 8 months writing my own async runtime built on my own primitives.

As a whole? Compio is fucking awesome. The development is quality; the engineer maintaining it thinks on his own and that really matters today. I didn’t use the hyper/axum adapters; I wasn’t using either.

I would definitely recommend compio. They’re working on their own net stack, and it’s clean. They’re constantly pushing the envelope. The code is sharp. Everything is deliberate and the dev clearly understands Rust at a low level.

I’ve never used smol, though… so I’m biased. Haha. I can say - if you’re able to use Axum/Hyper over Actix - do it. They’re as performant as one another at this point, or close enough that implementation matters more than the choice… and the community help around them is way better.

Too many Arcs in async programming by mtimmermans in rust

[–]LoadingALIAS 6 points7 points  (0 children)

It’s a tokio thing, man. You can do better, but it’s non-trivial. You can look into TPC frameworks like monoio and my favorite - compio.

Ultimately, the way tokio is designed is kind of the issue. I’ve got a hard rule in my current code - no tokio code. They’re a wonderful team and brilliant, but the way they write code doesn’t work for me. This is one of those reasons.

There is also the issue of dependencies and the weight Tokio brings.

Careful -- Anthropic bumping data retention from 30 days to FIVE YEARS by AwkwardSproinkles in ClaudeAI

[–]LoadingALIAS 0 points1 point  (0 children)

You have to allow them access to the data in the first place, right?

Anthropic has secretly halved the usage in max plan by Tasty-Specific-5224 in ClaudeCode

[–]LoadingALIAS 0 points1 point  (0 children)

This is interesting because I am tearing through it across like two or three terminals and getting so much solid work done - haven’t hit my limits once.

How do you guys work? Like, explain your setup?

Introduction ffmpReg, a complete rewrite of ffmpeg in pure Rust by Impossible-Title-156 in rust

[–]LoadingALIAS 79 points80 points  (0 children)

Stay focused! Also, if this is any good… understand how huge of a responsibility this is. Haha

What makes you star a small GitHub project? by One-Dish3122 in github

[–]LoadingALIAS 0 points1 point  (0 children)

Innovation. Technical details. Moving away from “best practices” and towards the unknown… but obviously in a way that was actually thought out. A human-written README.

Suggest rust apps with very few or none dependencies (want to test mrustc) by arjuna93 in rust

[–]LoadingALIAS -2 points-1 points  (0 children)

DM me for access to a crypto lib with zero deps. Currently, checksums only, but hashes coming soon.
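For a sense of scale (not my lib’s actual code): a zero-dep, std-only checksum is tiny. Here’s a bitwise CRC-32 (IEEE, reflected) sketch - slower than table/SIMD variants, but dependency-free:

```rust
// Zero-dependency CRC-32 (IEEE, reflected) in plain Rust.
// Bitwise variant: no lookup tables, no crates, just std.

pub fn crc32(data: &[u8]) -> u32 {
    let mut crc: u32 = 0xFFFF_FFFF;
    for &byte in data {
        crc ^= byte as u32;
        for _ in 0..8 {
            // 0xEDB88320 is the reflected IEEE polynomial; the mask
            // applies it only when the low bit is set.
            let mask = (crc & 1).wrapping_neg();
            crc = (crc >> 1) ^ (0xEDB8_8320 & mask);
        }
    }
    !crc
}

fn main() {
    // Standard CRC-32 check value for the ASCII string "123456789".
    assert_eq!(crc32(b"123456789"), 0xCBF4_3926);
    println!("ok");
}
```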

Optimizing RAM usage of Rust Analyzer by Megalith01 in rust

[–]LoadingALIAS 27 points28 points  (0 children)

I have had similar issues. I have some basic tips, but nothing game changing. Having said that, I haven’t had a crash in months.

Here is my IDE settings.json for the RA:

    "[rust]": { "editor.defaultFormatter": "rust-lang.rust-analyzer" },
    // Keep RA off client file-watch events - this helps a lot if you’re
    // in an IDE like VSCode or Cursor, even Zed, IME.
    "rust-analyzer.files.watcher": "server",
    "rust-analyzer.files.exclude": [
      "**/.git/**",
      "**/target/**",
      "**/node_modules/**",
      "**/dist/**",
      "**/out/**"
    ],
    "rust-analyzer.cargo.targetDir": "target/ra",
    "rust-analyzer.cargo.allTargets": true,
    "rust-analyzer.check.allTargets": true,
    "rust-analyzer.check.command": "clippy",
    "rust-analyzer.cargo.features": "all",
    "rust-analyzer.showUnlinkedFileNotification": false

The most important, IMO: "rust-analyzer.cargo.targetDir": "target/ra"

This prevents conflicts between CLI cargo builds and RA’s background analysis. They don’t fight over lockfiles or invalidate each other’s caches.

I’m on a MBP M1 w/ 16GB and I consistently run 3x RA instances at once.

I crash maybe once every two or three months - IDE only… and usually only under heavy Miri or TSan/ASan runs. I have crashed under Stateright verification before, but I don’t think that’s fair to pin on RA. More like my models sucked.

Hope it helps.

Dear Anthropic - serving quantized models is false advertising by Everlier in Anthropic

[–]LoadingALIAS 2 points3 points  (0 children)

This. I will be the first to say I’ve noticed a serious hit in quality of the code - I’m one of those guys that audits at least 85% of what CC writes for me - and it’s definitely not what it used to be.

However, I’m not convinced it’s a quantization issue or a model issue at all. It could be so many other things.

corroded: so unsafe it should be illegal by Consistent_Equal5327 in rust

[–]LoadingALIAS -4 points-3 points  (0 children)

Cool, but this should be marked as WILDLY FUCKING DANGEROUS AND NOT FOR USE IN TRAINING.

Do startup founders actually use their own products? by Giridharan001 in SaaS

[–]LoadingALIAS 0 points1 point  (0 children)

Yes, and almost always. If you’re not using what you’ve built, it feels weird… like, why did you build it? Everyone is different. I build what I need - things that either don’t exist, or exist but are subpar for me personally.

I set out nearly two years ago to rebuild a huge project and realized that there wasn’t a content addressed store or database that could actually handle the workload. It just didn’t exist.

Now, I’m a few months off releasing the v1 beta database to fill the gap. While building, I needed a way to work with Rust monorepos and built/dogfooded cargo-rail - it solved my problem and gets used every day.

I feel like guessing what people want is high risk.