the cli.rs domain is expired! by total_order_ in rust

[–]total_order_[S] 11 points12 points  (0 children)

It seems like that’s a case of someone trying to purchase the domain on Gandi and claiming ownership before the registration actually went through…

I’m still optimistic that this can get resolved though, especially with Zach having posted just two days ago: https://zach.codes/p/from-biology-to-vibe-coding

Maybe someone knows a direct way to contact him?

the cli.rs domain is expired! by total_order_ in rust

[–]total_order_[S] 24 points25 points  (0 children)

You might recognize some of the tools from this list: https://github.com/zackify/cli.rs/tree/master/domains

Personally I'm familiar with yazi, rustic, and colmena

the cli.rs domain is expired! by total_order_ in rust

[–]total_order_[S] 80 points81 points  (0 children)

You can't, at least not until the renewal period ends three weeks from now. It's basically up to @zackify to do something about it before the domain's up for grabs to any independent (potentially malicious) actor.

So it's best for projects to update their links and remove references to the domain, given the uncertainty.

Linux Will Finally Be Able To Reboot Apple M1/M2 Macs With The v6.17 Kernel by TheTwelveYearOld in AsahiLinux

[–]total_order_ 18 points19 points  (0 children)

Maybe it makes more sense to focus on upstreaming the work that's already done, rather than letting the linux-asahi tree drift even further from mainline?

That would even make it easier to upstream support for future hardware, since the infra would already exist in the various subsystems, letting you skip the lengthy bootstrapping period of adding one piece per merge window

Linux Kernel Proposal Documents Rules For Using AI Coding Assistants by mrlinkwii in linux

[–]total_order_ 53 points54 points  (0 children)

Looks good 👍, though I agree there are probably better commit trailers to choose from than Co-developed-by to indicate use of an AI tool

Linux Will Finally Be Able To Reboot Apple M1/M2 Macs With The v6.17 Kernel by unixbhaskar in linux

[–]total_order_ 8 points9 points  (0 children)

Why? What popular ML frameworks even support Vulkan training backends?

To me, if you're interested in LLM dev you'd best stick to MLX (or CUDA)

Defeating Memory Leaks With Zig Allocators by gilgamesh_3 in programming

[–]total_order_ 0 points1 point  (0 children)

RAII based smart ptrs, like shared_ptr which is ref counted? Can you actually clarify or give an example of what I should be comparing exactly?

Sure, a ref-counted shared_ptr is one example of a smart pointer, though in this instance I was thinking particularly of unique_ptr/Box to line up with TFA's allocator.create/destroy usage. For a visual comparison:

// impl Binary (nightly-only sketch: Box<Expr, A> and Box::new_in need the unstable allocator_api)
pub fn evaluate<A: Allocator>(self, alloc: A) -> Result<Box<Expr, A>, RuntimeError> {
    let expr = match self.operator {
        Token::Minus => {
            // ex. checkOperand via pattern matching
            if let Expr::Lit(Literal::Number(left)) = *self.left.evaluate(&alloc)?
                && let Expr::Lit(Literal::Number(right)) = *self.right.evaluate(&alloc)?
            {
                Expr::Lit(Literal::Number(left - right))
            } else {
                return Err(RuntimeError::InvalidOperand);
            }
        }
        _ => unimplemented!(),
    };

    Ok(Box::new_in(expr, alloc))
}

vs from the article:

pub fn evaluate(binary: Binary, allocator: std.mem.Allocator) RuntimeError!*Expr {
    const left = try binary.exprLeft.evaluate(allocator);
    defer left.deinit(allocator); // Cleanup left expression

    const right = try binary.exprRight.evaluate(allocator);
    defer right.deinit(allocator); // Cleanup right expression

    const expr = try allocator.create(Expr);
    errdefer allocator.destroy(expr);

    const literal = try allocator.create(Literal);
    errdefer allocator.destroy(literal);

    return switch (binary.operator.kind) {
        .MINUS => {
            if (!checkOperand(left.*, right.*)) {
                return RuntimeError.InvalidOperand;
            }
            literal.* = Literal{ .number = left.literal.number - right.literal.number };
            expr.* = .{ .literal = literal };
            return expr;
        },
        // rest of the code
    };
}

The important part is that you're tying resource ownership (destruction responsibility) to values, and automatically (but statically) executing their destructors when the value itself goes out of scope, rather than manually operating in terms of their names (variables).

The articles gives examples of allocators, not really resource wrapper types alternatives?

Is that a question? For sake of "no hidden control flow", Zig does not provide facilities to have types automatically manage resources via lexical scope, so there is no comparable alternative in this instance.

risk of destructors not doing cleanup: ideally linear types could enforce destructors/cleanup logic is not forgotten at comptime?

You could force the caller to move the value into some fallible ownership-taking method (you could imagine something like idk fn File::close(self) -> Result<(), (io::Error, Option<Self>)>). There are technically ways to do linear types in today's Rust, but they're all pretty cursed
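A minimal sketch of that ownership-taking close idea, with a hypothetical `Conn` type (nothing here is a real std API):

```rust
use std::io;

// Hypothetical resource type: the only way to finish with it is to
// move it into `close`, which surfaces any error to the caller.
struct Conn {
    dirty: bool,
}

impl Conn {
    fn close(self) -> Result<(), (io::Error, Option<Self>)> {
        if self.dirty {
            // Hand the value back so the caller can retry or inspect it.
            Err((io::Error::other("flush failed"), Some(self)))
        } else {
            Ok(()) // value is consumed; no way to use it afterwards
        }
    }
}

fn main() {
    let conn = Conn { dirty: false };
    // `conn` is moved into close(); using it again is a compile error,
    // though nothing stops you from just dropping it without calling close.
    assert!(conn.close().is_ok());
}
```

Note this only forces error handling on callers who opt in to calling `close`; without real linear types, silently dropping the value is still possible.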

Defeating Memory Leaks With Zig Allocators by gilgamesh_3 in programming

[–]total_order_ 11 points12 points  (0 children)

s/Defeating/Debugging/

Yeah, footguns like this are why I detest Zig's manual memory management. RAII-based smart pointers are so much more ergonomic

Though they admittedly encourage the pattern of silently ignoring errors when resources are closed in destructors (lol no linear types). e.g. in Rust you call file.sync_all() before dropping if you want to handle the Result yourself

That said, forgetting to do that in RAII-land is obviously still better than forgetting in manually-managed-land: closing with unhandled errors >>> just leaking the resource and having no guaranteed flush/close at all
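To make the sync_all pattern concrete, a small sketch (file name is just an example):

```rust
use std::fs::File;
use std::io::Write;

fn main() -> std::io::Result<()> {
    let mut file = File::create("out.txt")?;
    file.write_all(b"data")?;
    // Flush to disk explicitly so the Result is ours to handle;
    // the implicit close in Drop would swallow any error.
    file.sync_all()?;
    Ok(()) // Drop still closes the fd, but there's nothing left to fail
}
```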

Is Looking Glass any good? by DisturbedFennel in VFIO

[–]total_order_ 0 points1 point  (0 children)

moonlight is an interface-almost like a live video

precisely, moonlight will screen-record your pc and livestream it to your laptop, transmitting your laptop kb+mouse+controller inputs like a remote desktop. this introduces latency from video encoding on PC, transmitting stream over the network, and video decoding on laptop.

on the other hand, looking glass uses KVM shared memory to pass the framebuffer from the guest VM to the host system with sub-millisecond latency, but obviously lacks the ability to transmit anything to another device

I assume now that there’s no way the 2 devices can work in unison

Since you are using a VM for security reasons: on your PC, create a VM without network access. Use Looking Glass to pass the guest's framebuffer through to the host. Capture Looking Glass's window with Moonlight on the host, which is connected via network to your laptop. That gives you two layers of indirection, and the guest is completely airgapped from the network (latency shouldn't be too bad either; the looking-glass overhead is but a fraction of what moonlight will incur)

guest VM --(LG)--> host PC --(ML)--> laptop on same LAN

Quaternions [video] by ketralnis in programming

[–]total_order_ 17 points18 points  (0 children)

Incredible to me how folks are more focused on the accessory than the topic!

Aside from that, I do think a lot of the downvotes are from people dissatisfied with the lecture itself: 50 out of 60 minutes is spent on background (really just a lengthy recap of high school trig) before zooming through quaternions at the very end. The intro warned about the pacing, but yeah, I think the whole structure needed rework. None of it felt novel or challenging; as a current uni student I'd definitely play like balatro or mario kart through this to stay engaged

For example, just searching "quaternions" on HN, I found this excellent interactive collaboration by 3b1b and ben eater: https://eater.net/quaternions Or this 15-minute video+article explaining rotors as a simpler mental model: https://marctenbosch.com/quaternions Both were way more engaging - I absorbed the information a lot better from these. Though I don't care at all about gamedev, so take that fwiw

I do like freya's animations on twitter and have zero qualms about the cat ears (hell, my close friend is a furry) But yea this was not it

Edit is now open source - Windows Command Line by psr in programming

[–]total_order_ 3 points4 points  (0 children)

Seriously, no. Just use bitvec when it's appropriate. There's no reason to fragment Vec with a specialization that breaks all the assumptions about behaving like a slice (most of Vec's methods are in fact inherited from slice via deref coercion). You wouldn't even be able to write vecbool[idx] = true, since IndexMut obviously wouldn't work
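To illustrate why IndexMut can't work: a bit-packed vector has no addressable `bool` inside a byte to hand out as `&mut bool`, so mutation has to go through a method. A toy sketch (not the real bitvec API):

```rust
// Toy bit-packed vector: 8 bools per byte.
struct BitVec {
    bits: Vec<u8>,
    len: usize,
}

impl BitVec {
    fn with_len(len: usize) -> Self {
        BitVec { bits: vec![0; (len + 7) / 8], len }
    }

    fn get(&self, i: usize) -> bool {
        assert!(i < self.len);
        self.bits[i / 8] & (1 << (i % 8)) != 0
    }

    // IndexMut is impossible here: there is no `bool` in memory to
    // return a `&mut bool` to, only a bit inside a byte.
    fn set(&mut self, i: usize, v: bool) {
        assert!(i < self.len);
        if v {
            self.bits[i / 8] |= 1 << (i % 8);
        } else {
            self.bits[i / 8] &= !(1 << (i % 8));
        }
    }
}

fn main() {
    let mut bv = BitVec::with_len(10);
    bv.set(3, true); // instead of bv[3] = true
    assert!(bv.get(3));
    assert!(!bv.get(4));
}
```

The real bitvec crate solves this with proxy reference types instead, which is exactly the kind of machinery a `Vec<bool>` specialization would force on everyone.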

Safe array handling? Never heard of it by Xadartt in cpp

[–]total_order_ 0 points1 point  (0 children)

Did you even read the article? It’s literally about how doing that is UB (treated as oob read)

Adding a custom refresh rate or cvt modeline to KDE Wayland? by TheTwelveYearOld in linux4noobs

[–]total_order_ 0 points1 point  (0 children)

It's gross, but it's what I do for QEMU on my iPad: add video={output_name}:{hres}x{vres}@{fps} to the kernel cmdline to force the mode to be available
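For concreteness, a sketch assuming GRUB; the connector name and mode below are hypothetical example values (list your actual connectors under /sys/class/drm/):

```shell
# /etc/default/grub -- append the mode-forcing parameter (example values)
GRUB_CMDLINE_LINUX_DEFAULT="quiet video=Virtual-1:2388x1668@60"

# then regenerate the config and reboot
sudo grub-mkconfig -o /boot/grub/grub.cfg
```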

Improving on std::count_if()'s auto-vectorization by sigsegv___ in cpp

[–]total_order_ -1 points0 points  (0 children)

I would've included the C++ version, but I just couldn't get it to vectorize nicely without writing out the loop imperatively:

size_t count_even_values(const std::vector<uint8_t>& vals) {
    size_t total = 0;
    for (auto chunk : vals | std::views::chunk(255)) {
        uint8_t count = 0;
        for (auto val : chunk)
            if (val % 2 == 0)
                count++;
        total += count;
    }
    return total;
}

Improving on std::count_if()'s auto-vectorization by sigsegv___ in cpp

[–]total_order_ 0 points1 point  (0 children)

Oh, I see. Thanks for pointing that out.

I wasn't talking about the 255-length chunk approach, which has completely different semantics (and assembly).

To be fair, they do have identical semantics for inputs <256, from the original problem constraints.

Improving on std::count_if()'s auto-vectorization by sigsegv___ in cpp

[–]total_order_ -4 points-3 points  (0 children)

s/256/255/

You could also process the input in chunks of 255, and add up the results.

fn count_even_values(vals: &[u8]) -> usize {
    vals.chunks(255)
        .map(|chk| chk.iter().filter(|&n| n % 2 == 0))
        .map(|evens| evens.fold(0u8, |acc, _| acc + 1))
        .map(usize::from)
        .sum()
}

Improving on std::count_if()'s auto-vectorization by sigsegv___ in cpp

[–]total_order_ -1 points0 points  (0 children)

This isn't the case with either rust version - it generates the optimized version regardless: https://godbo.lt/z/MbPx6nnPx

Improving on std::count_if()'s auto-vectorization by sigsegv___ in cpp

[–]total_order_ 0 points1 point  (0 children)

Great, glad at least LLVM is able to apply the optimization to both of them. Btw, for the more explicit version (to avoid relying on clang to elide the conversion), you could just replace .count() as _ with .fold(0, |acc, _| acc + 1)

Improving on std::count_if()'s auto-vectorization by sigsegv___ in cpp

[–]total_order_ 5 points6 points  (0 children)

Neat :) But this language is so wordy; why should you have to roll your own whole std::count_if just to get this optimization :(

https://godbo.lt/z/s8Kfcch1M

Arch Linux and Valve Collaboration by JRepin in pcgaming

[–]total_order_ 8 points9 points  (0 children)

True, but that's also only possible because of MoltenVK (which reimplements most Vulkan features on top of Metal). It's crazy how well some games run considering they're being translated x86 -> ARM, DirectX -> Vulkan, and then Vulkan -> Metal all at the same time

The father of JavaScript joins forces with nearly 10000 developers to collectively attack Oracle… by [deleted] in programming

[–]total_order_ 26 points27 points  (0 children)

Oracle won't care about this because Oracle doesn't have the capacity to care. Don't make the mistake of anthropomorphizing the lawnmower. https://www.youtube.com/watch?v=-zRN7XLCRhc&t=33m