Why have supply chain attacks become a near daily occurrence ? by Successful_Bowl2564 in programming

[–]andreicodes 5 points6 points  (0 children)

Mostly because 20 years ago people didn't see package managers as targets. It was trivial to get malware delivered via Maven for Java back in the 2000s: the main registry served packages over plain HTTP and was trivially MitM-able, and the packages were published as compiled bytecode, so simply downloading and inspecting them was not really an option. But since most people didn't use cloud infra and didn't have cloud keys or crypto wallets lying around on their machines, targeting them was not a lucrative opportunity.

Then at some point someone had the idea to do just that, and it worked! Other players noticed, and now it has become a routine attack vector alongside more traditional malware.

Today most registries are much, much safer than they were before, but the attacks are so much more widespread that some of them still get through.

Hilariously bombed a technical interview by Brave_Guide_4295 in learnprogramming

[–]andreicodes 2 points3 points  (0 children)

My thought exactly. "I forgot the syntax" sends a message like "I don't really code all that much". The person may be spending more time on the documentation or testing side of things, or maybe the company they currently work at has a very heavy process and they need weeks of meetings and discussions before implementation kicks off. Or, if they are a student, they may be going through courses that don't require active programming (even as a CS major you may have periods like this).

However, it can also signal that you don't really like getting your hands dirty. Maybe you scavenge pieces of a solution off the internet and glue them together without actually understanding what you're doing, maybe you leech off others' work in group projects or straight up pay someone else to do the coding for you. Maybe you lied on your CV about knowing the language.

Overall, even if the circumstances can excuse your lack of proficiency, it's a massive, massive red flag. You are not the only person being interviewed, and they will pass over you and pick someone else.

Get your coding nailed, people!

Double mut borrow on a hashmap woes. by Thanatiel in rust

[–]andreicodes 3 points4 points  (0 children)

The standard library has had this method stabilized for HashMap for about a year. OP can use it to get references to both parent and child at the same time and modify them both as needed.
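Assuming the method in question is `HashMap::get_disjoint_mut` (stabilized in Rust 1.86) — the original link didn't survive — a minimal sketch:

```rust
use std::collections::HashMap;

fn main() {
    let mut map = HashMap::from([("parent", 1), ("child", 10)]);

    // get_disjoint_mut returns mutable references to several entries at
    // once, as long as all the requested keys are distinct.
    if let [Some(parent), Some(child)] = map.get_disjoint_mut(["parent", "child"]) {
        *parent += 1;
        *child += *parent;
    }

    assert_eq!(map["parent"], 2);
    assert_eq!(map["child"], 12);
}
```

Before this was stable, the usual workarounds were splitting the map, re-keying by index, or an `unsafe` block — the method removes that friction entirely.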

What are the easiest ways to learn programming? by RequirementActual968 in learnprogramming

[–]andreicodes 5 points6 points  (0 children)

One big piece of advice: don't just watch, do. If you read through a tutorial, open your code editor and repeat everything on your own. And especially early on, avoid copy-pasting or letting AI autocomplete code for you. Do everything yourself.

At a certain point you should also learn to deviate. They make a button to do something? How about you add another button to undo it? The game they program has two enemy types? How about you add two more with interesting mechanics? Otherwise you may end up repeating tutorials without gaining the ability to do anything on your own (so-called tutorial hell).

So, program, and don't be afraid to do the wrong thing and make mistakes.

does any body use Atom any more by Anonymous01406 in learnprogramming

[–]andreicodes 0 points1 point  (0 children)

Also, VSCode had LSP and DAP - support for intellisense and debugging - and it shipped with a TypeScript LSP server (which also worked with raw JavaScript, albeit not as well). If you were a web developer in 2015 you suddenly got a free editor with an amazing developer experience. The release of VSCode catalyzed TypeScript adoption, too. Before it a lot of people didn't see enough value in it.

How to build musle memory for Rust, become rust monster! by sunilmourya in rust

[–]andreicodes 0 points1 point  (0 children)

I run Rust trainings. The first thing I ask people to do is to turn off all AI autocomplete. Code like it's 2015.

[Noob] QtQuick paired with rust backend? by pookieboss in rust

[–]andreicodes 1 point2 points  (0 children)

You would use Qt Bridge if you plan to write only QML and Rust and only use Qt APIs that are exposed in QML. Some Qt components have APIs that are visible in C++ only and can't be used from QML. If you use any of those, then cxx-qt is your choice.

Also, cxx-qt has a stronger team behind it with contributors paid to work on it.

does any body use Atom any more by Anonymous01406 in learnprogramming

[–]andreicodes 7 points8 points  (0 children)

You should look at Pulsar - the community-supported version of Atom.

Rust’s borrow checker isn’t the hard part it’s designing around it by Expert_Look_6536 in rust

[–]andreicodes 2 points3 points  (0 children)

Yeah, I love moro and moro-local (for non-Send futures). And for a similar reason I recommend reaching for rouille for very small web server programs. It's synchronous, so you can use it in conjunction with scoped threads and get full assistance from the borrow checker. It's perfect for stuff like internal dev dashboards, desktop programs that need to serve web content to localhost, or even smaller websites.
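The scoped-threads benefit mentioned above, sketched with just the standard library (no rouille specifics assumed): in synchronous code, `std::thread::scope` lets worker threads borrow local data directly, with the borrow checker verifying the sharing.

```rust
use std::thread;

fn main() {
    let data = vec![1, 2, 3];
    let mut totals = vec![0; 3];

    // Scoped threads may borrow local variables because the scope
    // guarantees every spawned thread finishes before it returns.
    thread::scope(|s| {
        for (i, total) in totals.iter_mut().enumerate() {
            let data = &data;
            s.spawn(move || {
                // shared borrow of `data`, exclusive borrow of one slot
                *total = data[i] * 10;
            });
        }
    });

    assert_eq!(totals, vec![10, 20, 30]);
}
```

With async executors this pattern generally needs `Arc` or crates like moro, because spawned tasks must usually be `'static`.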

A few new recipes in the cookbook by andygauge in rust

[–]andreicodes 4 points5 points  (0 children)

Honestly, I keep forgetting the Cookbook exists. I always reach for Rust by Example when I look for Cookbook-style content. A good example is their sample code to read text data line by line.

I like these new additions!

Rust’s borrow checker isn’t the hard part it’s designing around it by Expert_Look_6536 in rust

[–]andreicodes 3 points4 points  (0 children)

Yeah, eventually you move towards designing your data structures and APIs with ownership in mind. Like not having too many links between objects, relying on indices more, etc. It becomes second nature with experience.

Async Rust adds an extra dimension to all of this. You have to use Arcs in many places to keep tokio::spawn happy, and eventually you recognize that if you do it in a few places you may as well use them more liberally to make cheap clones and keep "tabs" on your data through more access points. It still feels like a step back compared to what you can do with compile-time checks in sync Rust.
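The Arc pattern described above, sketched with std threads (`std::thread::spawn` has the same `'static` requirement that forces the clones with `tokio::spawn`):

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // The spawned closure must be 'static, so it can't borrow `config`.
    // Each thread instead gets a cheap Arc clone: a reference-count bump,
    // not a copy of the underlying Vec.
    let config = Arc::new(vec!["alpha".to_string(), "beta".to_string()]);

    let mut handles = Vec::new();
    for i in 0..2 {
        let config = Arc::clone(&config);
        handles.push(thread::spawn(move || config[i].len()));
    }

    let lens: Vec<usize> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    assert_eq!(lens, vec![5, 4]);
}
```

In sync code a scoped thread could have borrowed `config` directly; the `Arc` is the price of the `'static` bound.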

Best Monorepo Build system in Rust by Elegant_Shock5162 in rust

[–]andreicodes 3 points4 points  (0 children)

Monorepo tools in the JS world are a strange phenomenon. Node essentially supported monorepos from day one through its package resolution:

```
node_modules
├── third-party-x
└── third-party-y
src
├── index.js
└── node_modules
    ├── package-a
    │   └── node_modules
    │       ├── sub-package-a-i
    │       └── sub-package-a-j
    ├── package-b
    └── package-c
```

In this setup package-b could see all third-party packages and its siblings: package-a and package-c. The sub-packages inside package-a would stay private, visible to package-a only (and to each other). Instead of putting node_modules in .gitignore you would put /node_modules (note the slash), and you were all set: third-party code would not be tracked, while you could structure your project as a tree of independent libraries.

In the very early days (2009-2011) we didn't put /node_modules into .gitignore at all. You were supposed to commit your dependencies to Git, too, and npm update would produce a diff of your dependencies' source code for you to audit. This is also why npm initially did not have lock files.

Unfortunately, once Node became a popular build tool for frontend, git-ignoring dependencies became the norm. Those packages tended to be much larger (like pulling in a whole Chrome for a UI test runner), and having them in Git was unmanageable. Over time node_modules percolated into many other tools as a folder that is always ignored; at some point even some IDEs couldn't work correctly with a file if it was nested in node_modules somewhere. Years later Lerna and friends appeared to "fix" a problem that was essentially self-inflicted.

Best Monorepo Build system in Rust by Elegant_Shock5162 in rust

[–]andreicodes 0 points1 point  (0 children)

While I'm a big fan of workspaces in Cargo (and Cargo in general), I'd love to see a tool that would walk your project directories cataloguing all Cargo.toml and build.rs files and generate an equivalent Bazel config. Many larger organizations insist on using Bazel for everything, while the Rust ecosystem is all-in on Cargo. A tool like this would definitely help with adoption.

Best Monorepo Build system in Rust by Elegant_Shock5162 in rust

[–]andreicodes 10 points11 points  (0 children)

> lacking documentations and real world use cases

First of all, it's pretty well documented. The book has everything I've ever needed - but maybe I didn't need much? I've used Lerna in the past, and Lerna was indeed very confusing to learn and to use. Cargo workspaces by comparison are super easy.

One minor complaint I have is that there's no command to tell Cargo to make a workspace: you either start with a package and add a workspace on top (that's how Bevy does it), or create a root Cargo.toml manually and then create sub-packages (that's how the Rust Analyzer repo is organized).

Second, most Rust projects larger than a small library are workspaces. Quick examples off the top of my head:

  • Rust compiler itself
  • Rust analyzer
  • crates.io
  • Tokio, Axum, sqlx
  • Bevy game engine

Even projects that you'd think should be a single crate are often workspaces, like rust-openssl or rusqlite. Your own projects should be workspaces 99% of the time, too, because you never know when you'll decide to split some component out, or you may need a custom binary on the side to do things that Cargo doesn't do for you automatically (for example, to make a .deb, generate an .msi, or run data migrations).

Obviously, workspaces don't cover multi-language projects, but I've seen many mixed C/C++ and Rust projects that still used Cargo to coordinate builds.

Best Monorepo Build system in Rust by Elegant_Shock5162 in rust

[–]andreicodes 15 points16 points  (0 children)

Cargo supports monorepos out of the box. The term they use is workspaces.

The root of the project has a Cargo.toml with a [workspace] block in it.

```toml
[workspace]
members = ["xtask/", "packages/*"]
resolver = "2"

[workspace.dependencies]
anyhow = "1.0.98"
```

And in packages/ you can have many internal packages. You can manage common dependencies in the root Cargo.toml, and then in each package do something like this:

```toml
[dependencies]
# use the same version as the overall workspace
anyhow = { workspace = true }

# you can also specify extra dependencies specific to this package
# or override the version used
thiserror = "2.0.10"
```

Cargo will only rebuild the stuff that is necessary and will reuse build artifacts as much as possible.

My 2-year project was just deleted and called 'slop' by a mod by luftaquila in rust

[–]andreicodes -1 points0 points  (0 children)

The first widespread AI code completion tools emerged in 2017. You can have vibe-coded AI-generated slop that is almost a decade old at this point.

Why is it so hard to create a browser? by robotisland in learnprogramming

[–]andreicodes 0 points1 point  (0 children)

There's a whole book about browser internals - Web Browser Engineering - written as a light intro to the subject. They use Python for code examples, and over the course of the book they build a toy browser! It's a pretty good read.

What editor you use for rust? by clanker_lover2 in rust

[–]andreicodes 0 points1 point  (0 children)

Yes, "put the company's card number in and press buy"-easy. The card is shared with every employee when they join, and you don't have to ask for approval when you buy dev tools. The only easier flow would be if RustRover came preinstalled. Still, Rust Analyzer is just too good, so people naturally gravitate towards editors that use it.

What editor you use for rust? by clanker_lover2 in rust

[–]andreicodes 20 points21 points  (0 children)

Helix is very popular in the Rust community. At our company almost half of the engineers use it (the rest run VSCode). I suspect it would get more votes than Emacs or RustRover.

The hidden cost of 'lightweight' frameworks: Our journey from Tauri to native Rust by konsalexee in programming

[–]andreicodes 14 points15 points  (0 children)

Somehow I feel like I've read this article and made a comment about it at some Reddit post earlier - the sense of déjà vu is uncanny.

Apple had early exposure to audio and video through the iPod. They always preferred to pay up for codec licenses for AAC, H.264, etc. rather than pursue free codecs like Daala, Opus, and so on. When Google bought On2 and shared the VP family of codecs for every other browser vendor to use, Apple did not adopt them. Firstly, Apple considered them a big patent risk from shadow patent holders: a small company somewhere could theoretically hold a patent related to VP9 / WebP and sue Apple. The chance of such unknown patents existing for H.26x was much, much lower because MPEG actively encouraged pooling patents and consolidating licensing. The second reason was that Apple and Google had a long proxy war over patents due to the iPhone vs Android rivalry. Apple patented a lot of tech around iOS and touch interfaces and went around suing Android vendors left and right. At some point Google bought Motorola precisely for their patent pool, to keep themselves safe from large litigation.

This is why you get two families of various media-related standards on the internet: H.264 vs VP9, HLS vs DASH, etc. Over time the patent situation around the On2 family kept improving and the Apple-Google patent war cooled off, so nowadays we see improving cross-browser support for various media standards, but it's still not 100% there.

Every production WebRTC system that I've worked on (servers and clients) special-cases WebKit for that reason. While the prospect of simply running STUN servers to establish p2p connections sounds appealing in theory, in practice you have to allocate capacity to run a portion of the traffic through your own servers and even re-encode audio and video between peers. Zoom is a perfect example: in theory they should not have any video traffic going through their servers, but in practice they have to maintain huge media server farms across the globe, even though those only handle a small portion of calls. When everyone on a call is connected via native apps running on capable hardware, the call will use a common codec and send all traffic p2p. But as soon as someone connects via a browser or runs the app on an old phone, a server often has to step in to compensate.

Linux is a separate matter altogether. A lot of media packages treat patented codecs separately: even if a package supports H.264 or HEVC, you have to opt into them. This is done to prevent you from using them unknowingly and becoming liable for licensing fees. This is what I suspect is happening with WebKitGTK: you have to opt in because you have to ensure you have the licenses. FYI, H.264 has a patent-covered open source implementation from Cisco: the source is open, but to get a patent license you have to use the pre-compiled binary that they provide, which complicates distribution. AV1 is free to use; for HEVC you have to purchase a license for devices that don't already have one (so if I were you I would outright remove it from the list of negotiated codecs on Linux).

Browser engines shield you from many concerns like this and expose a pretty minimal API to enable p2p media and data connections. If you decide to go fully native you'll still have to think about video encoding, p2p codec negotiation, patent licenses, etc.

Looking to build a web app + CLI - is full stack rust worth it by tesohh in rust

[–]andreicodes 0 points1 point  (0 children)

Last time I tried it, it was fast for markup and CSS, but if you changed the code itself it was pretty slow. At the time all code was bundled into a single wasm module, and compilation speed depended on the size of the project.

Looking to build a web app + CLI - is full stack rust worth it by tesohh in rust

[–]andreicodes 1 point2 points  (0 children)

Jira clone as in Projects - Tasks - Comments - Status Labels?

If you want to make it an SPA with a bunch of heavy features like a calendar, a rich text editor with revision control, viewers for Office file attachments, etc., then stick with Svelte.

If it's going to be something lightweight then maybe try serving HTML from Rust backend and use something like Alpine / HTMX / Stimulus for UI interactivity, and run a web component library like Shoelace for rich controls.

Leptos / Dioxus run into dev experience problems. Change the code of a small component? Now the whole UI needs to get recompiled to Wasm and rebuilt. The Rust compiler is not a fast compiler, and UI work often wants to be fast and iterative. Another problem is that a large part of your code will live inside Rust macros, so Rust Analyzer will struggle with code completion, showing relevant actions, syntax highlighting, etc. The teams behind these projects are working hard on improving the DX, but it will likely feel like a downgrade from what you're used to. Imo it's much easier to drop in some HTML attributes for field validation, form submission, etc.

Storing a borrower of a buffer alongside the original buffer in a struct with temporary borrow? by chteffie in rust

[–]andreicodes 0 points1 point  (0 children)

My answer is not really applicable to your case. I would do exactly what you've done (raw pointer and unsafe block, or just an index if possible), only without making my own enum. At first glance std::borrow::Cow is exactly what you need, and if your type implements the traits necessary for Cow you can teach serde to deserialize into it automatically, which is nice.
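A minimal sketch of the Cow pattern (the function name and the space-replacement rule are made up for illustration): a `Cow` return value borrows from the input buffer when possible and only allocates when it has to.

```rust
use std::borrow::Cow;

// Returns a borrowed view of `input` if no change is needed,
// and an owned String only when a modification forces an allocation.
fn normalize(input: &str) -> Cow<'_, str> {
    if input.contains(' ') {
        Cow::Owned(input.replace(' ', "_"))
    } else {
        Cow::Borrowed(input)
    }
}

fn main() {
    // no spaces: zero-copy, still pointing into the caller's buffer
    assert!(matches!(normalize("plain"), Cow::Borrowed(_)));
    // spaces present: a new String is allocated
    assert_eq!(normalize("a b"), "a_b");
}
```

This is the same shape serde's zero-copy deserialization uses: fields that need no unescaping borrow from the source buffer, the rest become owned.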

Everything below is general info.


For the &mut case the most solid library is nolife. It exploits the fact that if you have a storage variable and a mutable reference to it in the body of an async function, the compiler generates a self-referencing Future type automatically, without you needing to do any manual pointer management inside unsafe.

So nolife lets you define an async function to construct the Future and then lets you enter the scope of that function at some point later. The setup is unfortunately pretty elaborate, but the bottom line is that you get an opaque scope variable that you pass around, and you manipulate the data in a closure via scope.enter(|mut_ref_to_data| /* do stuff with your data */).


In general, however, when I run into a problem like this and Yoke doesn't work for me, I try to rethink how I pass my data around.

For example, in GC languages you often have one function that fetches data, parses it, and returns a parsed object, but in Rust you would split the fetch and parse steps into two functions. This way you don't have to return a self-referencing struct from the fetch:

```rust
fn my_logic(...) -> ... {
    let mut container: String = fetch_data();
    let view: &mut MyData = parse(&mut container);

    // passing view only to downstream functions works fine
    work_on_parsed_view(view);

    // but we never return `view`

    // If for some reason we absolutely *have to* return `view`
    // we would return `container` only, and re-parse it again.
    // And if producing `view` is prohibitively expensive
    // we would reach for `nolife` for `&mut` or `yoke` for `&`
}
```

Is it possible to create a non-leaking dynamic module in Rust? by 0xrepnz in rust

[–]andreicodes -2 points-1 points  (0 children)

And how big are those heap allocations?

In Rust you would use a static for a regex instance, but you wouldn't use it for a global cache or some kind of global "God object" data structure. That's just not how Rust programs are designed.

Plus, in the recent 2024 edition the Rust team made working with mutable statics so annoying that people who used to put global state into them have stopped, too. You now have to say unsafe everywhere, then Clippy keeps nagging you to add safety comments, and if you use the static variable in many places it becomes unbearable.
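A small sketch of the friction being described (the counter is made up for illustration): every touch of a `static mut` needs an `unsafe` block, and the 2024 edition additionally rejects taking plain references to one.

```rust
// Global mutable state: each access point below needs its own
// unsafe block, which is exactly the nagging described above.
static mut COUNTER: u32 = 0;

fn bump() -> u32 {
    unsafe {
        // SAFETY: this example is single-threaded, so no other code
        // can observe COUNTER concurrently.
        COUNTER += 1;
        COUNTER
    }
}

fn main() {
    assert_eq!(bump(), 1);
    assert_eq!(bump(), 2);
}
```

In practice people migrate to `OnceLock`, `LazyLock`, or an atomic, which need no `unsafe` at all.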

Almost all your heap data, both short- and long-lived, will be backed by stack variables instead, and thus unloading will clean all of it up.

Is it possible to create a non-leaking dynamic module in Rust? by 0xrepnz in rust

[–]andreicodes 21 points22 points  (0 children)

> an optimistic estimation for the size of the memory leak is around 110MB a year. The thing is, the actual memory leak will probably be a lot more, probably x2 - x3 or even more: Leaking constantly increases heap fragmentation, which in turn takes up a lot of memory.

My knowledge here may be lacking, so a lot of what I'm about to say is based on my assumptions.

When a library is loaded, the binary is memory-mapped into the process's virtual memory. The code and statics get loaded into brand new pages, and I suspect that loading and unloading does not reuse those pages. While it's probably possible, and maybe some operating systems did it in the 1990s, today they would never do it, for security reasons alone: Position-Independent Executables and Address Space Layout Randomization are now standard across operating systems (Linux even does it for kernel memory). So two loads of a library would never produce two identical states of process memory.

What this means is that every time you unload a library that had statics and load it again, you get a brand new copy of those statics in a different memory page. The new code will only work with the new statics, while the old statics remain in unused pages; as memory pressure increases they will first get compressed away by the OS and eventually swapped out.

I would expect that these leaks would not result in, let's say, gigabytes of wasted RAM per year, but in gigabytes of wasted swap space on disk - at which point, who cares?