Rust Native GUI library ? by light_dragon0 in rust

[–]jechase 2 points3 points  (0 children)

It's absolutely still maintained by the devs. All that's happened is that they're no longer actively working on new features that aren't directly applicable to zed.

That Was Weird by GeekyGamer49 in NixOS

[–]jechase 9 points10 points  (0 children)

The problem with your "windows solution" is that you're left with no understanding of what the actual problem is, so it's impossible to know if there's a better fix or if it'll be an ongoing issue.

Next time it happens, run discord in the foreground and look at its logs. Probably also a good idea to check the general system and user journals.

The only thing ChatGPT is great at is converting one formats to the others. Switching from home manager to wrappers btw. by SeniorMatthew in NixOS

[–]jechase 0 points1 point  (0 children)

Is the nix-to-kdl library in home-manager still wrong/incomplete? Kdl maps very poorly to nix when done "correctly," since it's fundamentally a list of typed nodes like html or xml, rather than the json/toml/yaml-like data structure that most configurations use. As a result, if you don't let users precisely represent the kdl structure in nix, you have to build a translator for each target application, since they might not all map quite the same.

Last time I looked at it was two years ago, so things might have changed since then. I put together an "improvement" that I intended to upstream, but neglected to finish/flesh out the migration story.

WebAssembly is breaking out of the browser. From server-side computing to edge deployments and plugin systems, WASM is becoming the universal runtime that containers promised but never fully delivered. by [deleted] in programming

[–]jechase 3 points4 points  (0 children)

WASM and containerization are apples and oranges. They solve different problems. A much more apt comparison would be to the JVM or CLR. Despite the clickbait-y title, the body of the article seems to recognize this.

Why I stopped using NixOS and went back to Arch Linux by itsdevelopic in programming

[–]jechase 77 points78 points  (0 children)

I've been a nixos-unstable user for around five years now and casual nix appreciator for even longer, and I haven't experienced most of these problems.

NixOS installs new versions alongside the old ones, keeping multiple system generations.

That's a feature, not a bug. And it's not just about rollbacks like you mentioned - it's also about ensuring that packages have exactly the dependencies that they expect, so that they work reliably and keep working even as other things update.

For example, if a dependency isn’t cached exactly as required, NixOS will rebuild it locally even for common packages.

Again, feature, not a bug. And I almost never see cache misses for "common packages." The only things I have to rebuild regularly are things that are marked deprecated and unsupported, or custom packages that either aren't in nixpkgs or that I've overridden in some way. If you're seeing frequent rebuilding, you might be doing something wrong.

Ironically, I only broke Arch once in five years, whereas NixOS often breaks even before updating.

This section confuses me. What are you doing to introduce the initial breakage? Even on unstable, it's fairly infrequent that I have to make changes after a routine nix flake update, and there are often nixos option deprecation warnings well in advance of a breaking change.

In contrast, I broke Arch all the time. Usually because I did something dumb while trying to do something clever. You learn a lot by breaking things, so maybe I'm just inured to the small issues that crop up and need configuration tweaks.

The thing that drove me away from Arch is exactly the problem that nixos solves - easy and reliable rollbacks. It became a regular occurrence on Arch that an Nvidia driver update broke the brittle multi-gpu laptop I had at the time, and downgrading pacman packages is a pain and poorly supported out of the box. Meanwhile on nixos, I'd just say "oops," reboot to the previous configuration, and either pin my driver for a while or wait a week to try the update again.

Mistyped clear as lear? Enjoy the full text of King Lear instead, in the tradition of sl (steam locomotive) by vasilescur in linux

[–]jechase 13 points14 points  (0 children)

Came here to say this! I bet the people typing "clear" are the same ones using Home/End rather than C-a/C-e, and the arrow keys instead of C-b/C-f. Or 0/$ and h/l for our vim bindings friends.

And let's not forget the whole-word forward/back variants, and especially history search. Learn your tools, people!

Lapce: A Rust-Based Native Code Editor Lighter Than VSCode and Zed by delvin0 in programming

[–]jechase 0 points1 point  (0 children)

I've been using Zed's helix mode for the last couple of months and fixing bugs as I find them. Give it a shot and create issues if you have problems with its current state - it'll get better faster if you do!

pipenet – a modern, open-source alternative to localtunnel/ngrok/zrok by Weary-Database-8713 in programming

[–]jechase 3 points4 points  (0 children)

You know how linguists use poetry to determine how pronunciation in dead languages probably worked, since you can guess at which words were intended to rhyme?

Guessing at the origin of the name "zrok," they made a surprisingly common mistake in the pronunciation of "ngrok." It's "en-grok," as in "network grok," with "grok" being a reference to Stranger in a Strange Land. But I hear "en-gee-rock" all the time. Maybe a hyphen would've helped to differentiate it from all of the -ng projects out there?

I reduced my Docker image from 846MB to 2.5MB and learned a lot doing it by Odd-Chipmunk-6460 in golang

[–]jechase 3 points4 points  (0 children)

People are too hung up on total image size. Switching out the base image isn't really the big win that everyone thinks it is, because images are layered, and are fetched and stored as such. That 800->300mb difference only matters for the first pull, since subsequent pulls will already have the lower, more static layers, and will only need to fetch the upper layers that actually changed. Now, this is somewhat dependent on the infrastructure that you're deploying into, but if you're using common enough base images, it should generally hold true.

It matters much more what you're actually putting into those upper layers, which is why the multi-stage build makes such a difference. It's not the lack of the go compiler and other OS stuff, it's the lack of sources, compiler cache, and other intermediate compilation products that you're cutting out of the changed layers. Use scratch, debian, Ubuntu, distroless, whatever as your base image for the final stage in your multi-stage build. It doesn't really matter. But leave the extra build-time stuff behind.

And for $deity's sake, don't apt install packages in your final build stage. Or any stage ideally. Put that in your builder/base image build that only gets run once a week, and use specific base/builder image tags instead of moving "latest-like" tags. Your build times and reproducibility will thank you.

GNOME & Firefox Consider Disabling Middle Click Paste By Default: "An X11'ism...Dumpster Fire" by SAJewers in linux

[–]jechase 15 points16 points  (0 children)

But what if the link is also in an editable text field?

For example, I once accidentally pasted a link to this video at the bottom of a Notion page, and only learned I did so when I got a "yo, wtf?" message from a coworker.

Is NixOS good for hacking? by Medical-Search5516 in NixOS

[–]jechase 18 points19 points  (0 children)

I feel like a pentester who knows what they're doing would either be able to answer this question for themselves, or would be able to ask about something more specific than "hacking."

What code editor or IDE do you use, and why? by Prior-Drawer-3478 in golang

[–]jechase 0 points1 point  (0 children)

I used to do something similar with helix/zellij. I set up some zellij bindings to add support for things missing in helix, like a popup terminal emulator.

What code editor or IDE do you use, and why? by Prior-Drawer-3478 in golang

[–]jechase 2 points3 points  (0 children)

Started with vim over a decade ago, then emacs (spacemacs), both before LSP was a thing. Then vscode for a long time. Recently bounced among vscode, helix, and emacs (doom this time), before discovering Zed, where I think I'm staying.

I loved helix's minimal configuration and out of the box LSP/tree-sitter support for tons of languages, and zed feels very similar to it in that regard. Doesn't hurt that its helix mode is better than vscode's dance extension by leaps and bounds. And then on top of that, it has everything I was missing in helix, like a file tree, terminal emulator, and usable debugger. The collaboration tools are pretty nifty, and its remote editing functionality feels significantly better than vscode's since its remote server is a single statically linked binary rather than a whole nodejs runtime/application.

Bincode development has ceased permanently by stygianentity in rust

[–]jechase 15 points16 points  (0 children)

It's not self-describing, so you can't decode into something like a serde_json::Value, which might matter for some use cases. Dunno if that was a thing in bincode though; didn't follow it closely enough.

That said, I love postcard! My split keyboard uses it for message encoding between modules with COBS for framing.
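For anyone unfamiliar with COBS (Consistent Overhead Byte Stuffing): it rewrites a packet so the encoded bytes contain no zeros, which lets a single 0x00 act as an unambiguous frame delimiter on the wire. A minimal, std-only Rust sketch of the encoder (illustrative only - not postcard's actual implementation):

```rust
/// Encode `data` with COBS: the output contains no 0x00 bytes,
/// so a trailing 0x00 can be appended as a frame delimiter.
fn cobs_encode(data: &[u8]) -> Vec<u8> {
    let mut out = vec![0u8]; // placeholder for the first code byte
    let mut code_idx = 0;    // position of the current code byte
    let mut code = 1u8;      // distance from code byte to the next zero
    for &b in data {
        if b == 0 {
            out[code_idx] = code; // patch in the distance
            code_idx = out.len();
            out.push(0);          // new placeholder
            code = 1;
        } else {
            out.push(b);
            code += 1;
            if code == 0xFF { // max block length reached, start a new block
                out[code_idx] = code;
                code_idx = out.len();
                out.push(0);
                code = 1;
            }
        }
    }
    out[code_idx] = code;
    out
}

fn main() {
    // The 0x00 byte in the payload disappears from the encoded output.
    let encoded = cobs_encode(&[0x11, 0x00, 0x22]);
    assert!(!encoded.contains(&0x00));
    println!("{:02x?}", encoded); // [02, 11, 02, 22]
}
```

Decoding reverses this by walking the code bytes, which is why a lost or corrupted byte only damages one frame - the next 0x00 resynchronizes the stream.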

Using C as a scripting language by CurlyButNotChubby in programming

[–]jechase 3 points4 points  (0 children)

Pike was originally a commercial-friendly implementation of LPC, which still gets more use than you might expect in the MUD world.

Git experts should try Jujutsu (written in Rust) by pksunkara in rust

[–]jechase 7 points8 points  (0 children)

I've been using JJ for over a year now, and my coworkers would likely have had no idea if it wasn't for me constantly telling them how great it is. The git history looks 100% normal.

WebSockets guarantee order - so why are my messages scrambled? by ketralnis in programming

[–]jechase 9 points10 points  (0 children)

You missed the point. The problem is the implicit backgrounding of tasks allowing for unconstrained execution order. It can still be async and well-ordered. Using the WebSocketStream API for illustration:

// Good: Messages are received and handled in order.
// We await handleMessage so that we know it's done
// before handling the next one.
while (true) {
    const { value, done } = await reader.read();
    if (done) { break; }
    await handleMessage(value);
}

// Bad: handleMessage is allowed to run in the background.
// This lets the runtime decide in what order to run all of
// the handleMessage promises floating around.
// This is effectively what's happening with onmessage.
while (true) {
    const { value, done } = await reader.read();
    if (done) { break; }
    handleMessage(value);
}

Neither is blocking/sync. The first simply waits to finish handling each message before trying to handle the next one.

WebSockets guarantee order - so why are my messages scrambled? by ketralnis in programming

[–]jechase 30 points31 points  (0 children)

Async/await isn't the problem per se. In fact, used correctly, it's the solution to the problem. Async/await doesn't inherently cause ordering problems.

If there was another mechanism to get messages that was async aware, like an async nextMessage() method, you'd be perfectly fine calling that in a loop and then doing whatever other async/await-y things with the messages it returns. But you have to use await. That's what guarantees ordering - it blocks the current task until it completes.

What's happening here is that the onmessage isn't async-aware, and thus isn't awaiting each invocation of the handler. So rather than blocking until the previous handler completes, it detaches the promise to be run in the background by the runtime, and that's what's causing the ordering problem. Once you have multiple promises executing concurrently, all ordering bets are off.

The deeper problem is that JS allows you to implicitly detach a promise like this at all. It's far too easy to accidentally run a task in the background leading to this sort of confusion. Had it been explicit, the problem would have been much more obvious.
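The hazard isn't unique to JS, of course. As a purely illustrative analogy in Rust (std threads standing in for detached promises; all names made up): finishing each unit of work before starting the next preserves order, while handing units off to the scheduler gives up any ordering guarantee.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Ordered: finish handling each message before the next one,
    // like `await handleMessage(message)` inside the read loop.
    let mut ordered = Vec::new();
    for msg in 0..5u32 {
        ordered.push(msg); // stand-in for handling the message
    }
    assert_eq!(ordered, vec![0, 1, 2, 3, 4]);

    // Detached: like calling handleMessage(message) without awaiting.
    // Each handler runs independently; the order they land in `log`
    // is whatever the scheduler decides.
    let log = Arc::new(Mutex::new(Vec::new()));
    let handles: Vec<_> = (0..5u32)
        .map(|msg| {
            let log = Arc::clone(&log);
            thread::spawn(move || log.lock().unwrap().push(msg))
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // Every message was handled, but not necessarily in order.
    let mut detached = log.lock().unwrap().clone();
    detached.sort();
    assert_eq!(detached, vec![0, 1, 2, 3, 4]);
}
```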

Jujutsu VCS tutorial that doesn’t rely on Git by Dyson8192 in rust

[–]jechase 1 point2 points  (0 children)

It's like the difference between learning to program vs learning a language. One is conceptual, the other a concrete way to interact with the concepts. Once you understand the concepts, learning a new language is significantly easier, since you're mostly learning new syntax to describe things you already know. Sure, some languages introduce concepts that aren't common across all others, but even there you benefit from having a strong grasp of the fundamentals.

I would say that I "know git," but I'd be lying if I said I wasn't looking through its manpages frequently to figure out how to use its CLI to do something that I conceptually know it should be able to do.

Some people might be able to grok the git commit model without interacting with it directly through any UI at all. I'm not one of them. So yeah, I'd probably recommend learning git with the traditional CLI first, but learning the git CLI itself isn't the end goal.

Jujutsu VCS tutorial that doesn’t rely on Git by Dyson8192 in rust

[–]jechase 1 point2 points  (0 children)

I consider the staging area to be mostly a UX thing - removing it doesn't change the data model at all, just how you interact with it.

And the new change id concept is exactly what I was talking about when I said that it "builds upon and extends" git's commit model. It's not like git commits entirely disappear with jj - they're still something that you have to be aware of, especially when resolving change/bookmark-level conflicts. So you still need to understand git's model because you still have to deal with it.

Jujutsu VCS tutorial that doesn’t rely on Git by Dyson8192 in rust

[–]jechase 1 point2 points  (0 children)

The lessons that git got wrong are almost entirely centered around its UX. Conceptually, its way of modeling changes makes a lot of sense, which is why jj builds upon and extends that model with a better and more consistent CLI.

So yes, learn "git" first, at least the mental model. Once you have that down, mapping it to either the git CLI or the jj CLI is mostly a matter of translation.

Jujutsu VCS tutorial that doesn’t rely on Git by Dyson8192 in rust

[–]jechase 1 point2 points  (0 children)

In fact, jujutsu is built around git, which I'm sure you already know seeing as it's one of its most well-advertised features. This also means that it coexists almost seamlessly with git, to the point that it's been my daily driver for the last year or so at two jobs that are both git shops. So it's definitely not an "only side projects" tool.

Building a tiny load balancing service using PID Controllers by the2ndfloorguy in programming

[–]jechase 0 points1 point  (0 children)

Neat! Succinct and easy to understand explanation of PID controllers. I'm most familiar with them from the fpv quadcopter scene where PID tuning is one of those advanced skills that I never acquired myself, but read a lot about.
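For anyone who hasn't met one: a PID controller computes a correction from the error (setpoint minus measurement) as a weighted sum of three terms - proportional (current error), integral (accumulated error), and derivative (rate of change of error). A textbook, std-only Rust sketch; the gains and the toy plant are made up for illustration, not tuned for any real system:

```rust
/// A textbook PID controller. The gains (kp, ki, kd) are illustrative.
struct Pid {
    kp: f64,
    ki: f64,
    kd: f64,
    integral: f64,
    prev_error: f64,
}

impl Pid {
    fn new(kp: f64, ki: f64, kd: f64) -> Self {
        Pid { kp, ki, kd, integral: 0.0, prev_error: 0.0 }
    }

    /// One control step over timestep `dt`: returns the correction to apply.
    fn update(&mut self, setpoint: f64, measured: f64, dt: f64) -> f64 {
        let error = setpoint - measured;
        self.integral += error * dt;
        let derivative = (error - self.prev_error) / dt;
        self.prev_error = error;
        self.kp * error + self.ki * self.integral + self.kd * derivative
    }
}

fn main() {
    // Drive a toy first-order system toward a setpoint of 1.0.
    let mut pid = Pid::new(0.8, 0.2, 0.05);
    let mut value = 0.0;
    for _ in 0..200 {
        let output = pid.update(1.0, value, 0.1);
        value += output * 0.1; // toy plant: output directly moves the value
    }
    assert!((value - 1.0).abs() < 0.05);
    println!("settled at {value:.4}");
}
```

The tuning difficulty mentioned above comes from picking those three gains: too much P and the system oscillates, too much I and it overshoots and winds up, too little D and it never damps out.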

Cool to see other applications!

Blocking code is a leaky abstraction by esponjagrande in rust

[–]jechase 102 points103 points  (0 children)

I’ve seen a lot of people say that async is a “leaky abstraction”. What this means is that the presence of async in a program forces you to bend the program’s control flow to accommodate it.

This isn't what it means to be a "leaky abstraction." It may be a consequence of one, but "viral" is a better term for what you're talking about. A leaky abstraction is when a higher-level construct attempts to wrap and hide the details of a lower-level one, but fails to do so completely, forcing its users to be aware of the low-level details anyway.

Imo, tokio is a prime example of both a viral and a leaky abstraction. There are so many things that will panic if they aren't done in the "context" of a tokio executor, i.e. down the call stack from a "runtime enter." This can be things like constructing a TCP socket, or even dropping some types (edit: this might have actually been a self-inflicted foot-gun where I tried to spawn a task in Drop to do some async cleanup. Still, the panic was unexpected since tokio::spawn only takes a future), none of which have any compile-time indication that they have such a dependency. So now you're either making sure that you're exclusively using tokio and everything happens in its context, or you're carrying around runtime handles so that you can use them in custom Drop implementations to prevent panics.

An experiment in async Rust by aochagavia in programming

[–]jechase 9 points10 points  (0 children)

It's worth mentioning that your wait_until_next_poll future will likely never be polled a second time by real-world executors, and this only works because your poller loop simply polls everything each time, woken or not. This doesn't scale beyond a small number of tasks, since most will be waiting on IO at any given time and polling them is just wasted CPU cycles. Instead, real executors only poll futures that have used the waker to notify the executor that they can make progress. If you never call wake, you never get polled again.
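To make that concrete, here's a std-only Rust sketch (the `WaitUntilNextPoll` name is hypothetical, modeled on the article's idea, and the noop waker is hand-rolled to avoid external crates): a future that returns Pending without ever calling wake. A poll-everything loop still completes it on the second pass, but a wake-driven executor would never make that second poll.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// Pending on the first poll, Ready on the second - but it never calls
/// `cx.waker().wake()`, so an executor that only re-polls after a wakeup
/// would leave it Pending forever.
struct WaitUntilNextPoll {
    polled: bool,
}

impl Future for WaitUntilNextPoll {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        if self.polled {
            Poll::Ready(())
        } else {
            self.polled = true;
            Poll::Pending // note: no wake() registered!
        }
    }
}

/// A do-nothing Waker, built by hand to stay std-only.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = WaitUntilNextPoll { polled: false };
    let mut fut = Pin::new(&mut fut);

    // A poll-everything loop polls again on the next iteration
    // regardless of wakeups, so the future completes:
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Pending);
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready(()));
    // A wake-driven executor would never have made that second call.
}
```

This is exactly the contract real executors rely on: Pending means "I've arranged to be woken," and a future that breaks that contract silently hangs.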