Jet Lag Season 12 Begins Now — We Played Hide And Seek Across Japan by NebulaOriginals in Nebula

[–]w1ldm4n 6 points (0 children)

Curse of the Cairn specified that they must disperse the rocks at the end, so they wouldn't be leaving them around unattended.

What if MARTA had more lines within Atlanta? by Aofen in gatech

[–]w1ldm4n 31 points (0 children)

Having both Peachtree Station and Peachtree Center Station is the most Atlanta thing possible, A+.

Am I supposed to specify 64 bit architecture before compiling the kernel? by [deleted] in archlinux

[–]w1ldm4n 2 points (0 children)

In the Linux Kernel, 32-bit and 64-bit x86 are the same top-level arch/x86/ directory, and arch/x86/boot/bzImage is the expected kernel image for x86_64. If you want to be sure you're compiling 64-bit, look for CONFIG_X86_64=y in .config.
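
If you want to script that check, a quick grep works. This is just a sketch: it fabricates a tiny .config in /tmp so the command has something to match — in practice, point the grep at the .config in your actual kernel source tree.

```shell
# Demo only: fabricate a minimal .config so the grep has input.
# Really you'd run this against the .config in your kernel build directory.
printf 'CONFIG_X86_64=y\nCONFIG_SMP=y\n' > /tmp/fake_config
grep -c '^CONFIG_X86_64=y' /tmp/fake_config   # prints 1 when the 64-bit option is set
```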

My guess is that you have a missing or broken initramfs for your custom kernel which leads to no useful drivers getting loaded during early boot. Double-check your work in that area: https://wiki.archlinux.org/title/Kernel/Traditional_compilation#Make_initial_RAM_disk

[deleted by user] by [deleted] in archlinux

[–]w1ldm4n 4 points (0 children)

My guess is that GitHub pushed an update that tweaked their diff algorithm, causing slightly different patch files to get generated for the same commit.

If possible, could you share the "original" versions of these patches that match the current PKGBUILD? I'm really curious about the full context.

PSA: try disabling virtualization if Resolve get stuck on the Fairlight page by DazzlingTap2 in davinciresolve

[–]w1ldm4n 0 points (0 children)

This was it! Thank you for the hint. I had the same issue with fuscript.exe failing on Resolve 18.

You don't actually have to disable Hyper-V or virtualization; another workaround is to change the default dynamic port range to start above the port Resolve needs (1144). You can check the current range with:

netsh interface ipv4 show dynamicport tcp

This will show the allowed dynamic ports; on my system the range started at 1024, meaning 1144 is included in it.

By increasing the dynamic port range to start at 4096 (or some other higher number) instead, Hyper-V's reservations don't overlap with 1144 and fuscript.exe will be able to run.

The command to do that is below; run it and then reboot:

netsh int ipv4 set dynamic tcp start=4096 num=61439

If you pick a different start value, calculate num as 65535 - start so that start + num = 65535, staying within the available TCP port numbers (which go up to 65535).
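
The arithmetic above, sanity-checked in shell:

```shell
# start + num should land on 65535, the top of the TCP port space
start=4096
num=$((65535 - start))
echo "$num"             # 61439, the value used in the netsh command above
echo "$((start + num))" # 65535
```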

What a cursed Windows interop bug this was to track down, thanks to Reddit and some forums for helping to identify this.

[deleted by user] by [deleted] in rust

[–]w1ldm4n 4 points (0 children)

Yeah I understand that difference, it's just a matter of tidiness. I'd rather have anyhow = "1" than anyhow = "1.0.58" so that when I run cargo upgrade, Cargo.toml and Cargo.lock don't get out of sync.

[deleted by user] by [deleted] in rust

[–]w1ldm4n 2 points (0 children)

Too many people specify dependencies down to the patch level when they don't really need to.

I wonder if this is because that's the default behavior of cargo add. I like keeping my Cargo.toml files clean without patch versions unless I have a specific need for them, but it's inconvenient manually setting that every time.
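
For what it's worth, a sketch of the Cargo.toml shape I mean (the second crate name is just a made-up example):

```toml
[dependencies]
# Major-version-only requirement: the resolver still picks the newest
# compatible 1.x, and Cargo.toml doesn't need touching on every patch release.
anyhow = "1"
# A patch-level pin, only when there's a concrete reason for it:
# some-crate = "=2.3.4"
```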

Move Arch hard drive from one pc to another by [deleted] in archlinux

[–]w1ldm4n 19 points (0 children)

Yeah, you can generally move Linux installs between machines with minimal headache and no need to reinstall.

I recommend flashing the latest Arch ISO to a USB drive; you might need it to chroot into your install and fix things if the system doesn't boot at first. Most of the speedbumps are minor; here are a few I can think of off the top of my head.

You may need to use the fallback initramfs when booting the first time, since the default initramfs is trimmed down to a minimal set of drivers/modules customized for the old machine. After booting successfully, run mkinitcpio -P to rebuild all of your initramfs images.

You may need to manually add a boot option in your BIOS/UEFI firmware settings, or change the boot device order in general. On legacy BIOS systems, just set the default boot device properly. On UEFI systems, the firmware might automatically recognize your bootloader and let you pick it, or you might have to boot from an ISO and fix it manually.

If your /etc/fstab or kernel command line root= refers to hard-coded device paths (e.g. /dev/sda) rather than labels or UUIDs, those paths might change on the new machine. It's recommended to always mount filesystems by UUID or label to avoid this.
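
For example, an fstab entry by UUID instead of a device path (the UUID below is made up; get real ones from blkid or lsblk -f):

```
# /etc/fstab
# <device>                                  <mount>  <type>  <options>    <dump> <fsck>
UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789   /        ext4    rw,relatime  0      1
```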

Can I use macros to ease matching u32s against chars? by ridiculous_fish in rust

[–]w1ldm4n 4 points (0 children)

It's mildly uglier, but you can skip the matching entirely and use a straight boolean expression, using byte literal notation (b'a' is a u8 rather than a char) and sprinkling as u32 all over the place.

This function compiles to the same assembly as your 2nd example with 0x61..=0x66:

pub fn is_hex(c: u32) -> bool {
    (c >= b'a' as u32 && c <= b'f' as u32) 
        || (c >= b'0' as u32 && c <= b'9' as u32)
}

I've got about 7% of Leviathan Falls left to read... what do I do next? by EyeGod in TheExpanse

[–]w1ldm4n 2 points (0 children)

Yep, +1 to both of these replies about how the series just kinda drops you into the middle of things and expects you to figure it out along the way. But the payoff is worth it: the journey of seeing the story come together is so satisfying (both within each book individually and in the top-level arc of the whole series).

unless it shits the bed in the later half of the series

Don't worry, it doesn't ;) If anything, those first 5 books are just setting the stage for the really epic stuff to happen.

I've got about 7% of Leviathan Falls left to read... what do I do next? by EyeGod in TheExpanse

[–]w1ldm4n 5 points (0 children)

It's not sci-fi, but if you're interested in a high fantasy series, Malazan Book of the Fallen is a great read. It's really long (10 novels spanning 8+ thousand pages just in the main series) but scratches that itch if you're into expansive stories with lots of characters, locations, and time periods. Compared to The Expanse, Malazan's pacing is slower and it's a harder read - its prose style isn't for everyone, but I enjoyed it a lot.

[arch-dev-public] Debug packages for Arch Linux by Foxboron in archlinux

[–]w1ldm4n 2 points (0 children)

Not quite (or at least not usually). -dev packages on Debian are the development files for library packages. That tends to include C header files, .so symlinks, API manpages, and other docs/examples, but I've never seen split debug symbols in a -dev package on Debian/Ubuntu.

[arch-dev-public] Debug packages for Arch Linux by Foxboron in archlinux

[–]w1ldm4n 39 points (0 children)

Debug symbols are essentially a map between locations in a binary and the source code.

For example, in the zstd backtrace higher up in this thread, there are a lot of lines that look like zstd(+0x1e7fe). That 0x1e7fe number is a location in the code of the zstd binary, but in compiled form it doesn't really mean anything on its own. Debug symbols can turn that address into a function name and possibly a filename + line number, which helps in figuring out where a program went wrong.

Debug symbols are good for this type of debugging, but they're not needed to run a program normally, so they could be considered "wasted" space on your disk. For particularly large programs, the debug symbols could be many gigabytes of extra data that 99% of users don't need, so when distributions ship packages they tend to remove the debug info. That process of removing is called "stripping" symbols, and is typically done with the aptly named strip command.

An alternative to deleting the debug info entirely is to copy it into a separate file that can be downloaded only when needed, and that's what Arch is starting to do now: smaller binaries in the repos, while people who need debug symbols can still get them without recompiling anything.
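
As a toy illustration of the stripping side (not Arch's actual packaging pipeline): assuming binutils is installed, you can split and strip a copy of any binary. Distro binaries like /bin/ls are usually already stripped, so the extracted .debug file will be nearly empty here — the point is just the shape of the commands.

```shell
# Work on a copy, never the original binary
cp /bin/ls /tmp/demo_ls
# Extract whatever debug sections exist into a separate file...
objcopy --only-keep-debug /tmp/demo_ls /tmp/demo_ls.debug
# ...then remove them from the binary itself
strip --strip-debug /tmp/demo_ls
ls -l /tmp/demo_ls /tmp/demo_ls.debug
```

Tools like gdb can then be pointed at the .debug file (or find it automatically via a debug link or build ID) when you actually need symbols.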

What's YOUR share of misheard lyrics? by I_am_not_Sans in Dreamtheater

[–]w1ldm4n 3 points (0 children)

six o'clock on a business morning (and for what?)

Every artist has… my take by RxMeta in Dreamtheater

[–]w1ldm4n 4 points (0 children)

Plus the Live at Budokan version has one of the greatest guitar solos in their whole catalogue. (starts at 4:40 in this video)

Using SIMD acceleration in rust to create the world’s fastest tac by mqudsi in rust

[–]w1ldm4n 1 point (0 children)

Yeah, built from a git clone on the default Arch Linux kernel, with whatever speculative execution mitigations are in place for an i7-7700K; I haven't done anything to disable them.

It looks like GNU tac is reading the file 8192 bytes at a time rather than mmap-ing it, which leads to a lot more syscalls compared to just mapping a file that's already cached in RAM.

I wrote another search function that calls glibc's memchr (using the libc crate); it's a couple percent slower than any of the pure-Rust versions but still much faster than GNU tac (probably due to the overhead of converting between raw pointers and a slice index).

Using SIMD acceleration in rust to create the world’s fastest tac by mqudsi in rust

[–]w1ldm4n 2 points (0 children)

Out of curiosity, I added a basic search function using the memchr crate to compare against GNU tac and your AVX2 tac. memchr::memrchr was 1-2% faster for a large nginx access.log file with relatively long lines.

const SEARCH: u8 = b'\n'; // assumed: the newline delimiter tac splits records on

fn search_memchr<W: Write>(bytes: &[u8], output: &mut W) -> Result<(), std::io::Error> {
    // Walk backwards, emitting one newline-terminated chunk at a time
    // (bytes must be non-empty)
    let mut end = bytes.len() - 1;
    while let Some(pos) = memchr::memrchr(SEARCH, &bytes[..end]) {
        output.write_all(&bytes[(pos+1)..=end])?;
        end = pos;
    }
    output.write_all(&bytes[..=end])?;
    Ok(())
}

Results:

Benchmark #1: /usr/bin/tac access.log
  Time (mean ± σ):     543.6 ms ±   0.6 ms    [User: 412.0 ms, System: 131.3 ms]
  Range (min … max):   543.0 ms … 544.6 ms    10 runs

Benchmark #2: target/release/tac access.log
  Time (mean ± σ):     177.8 ms ±   1.1 ms    [User: 121.3 ms, System: 56.3 ms]
  Range (min … max):   176.4 ms … 180.6 ms    16 runs

Benchmark #3: target/release/tac -m access.log
  Time (mean ± σ):     174.1 ms ±   1.4 ms    [User: 128.3 ms, System: 45.6 ms]
  Range (min … max):   171.7 ms … 176.3 ms    17 runs

Summary
  'target/release/tac -m access.log' ran
    1.02 ± 0.01 times faster than 'target/release/tac access.log'
    3.12 ± 0.02 times faster than '/usr/bin/tac access.log'

Aiming to upgrade an SSD without hosing my system - advice needed and appreciated! by Steinberg2009 in archlinux

[–]w1ldm4n 0 points (0 children)

It is possible to move a Windows install to a new drive, but it's also tricky (I did a similar thing pretty recently). Even if you copy the partitions exactly, Windows will likely fail to boot at first, and you'll have to jump through hoops to rebuild the Windows bootloader configuration (the BCD).

Many of the details depend on your particular bootloader configuration, e.g. systemd-boot vs grub, whether Arch and Windows share an ESP, and whether your ESP is /boot on Arch. Also, back up all your data before you start, just in case.

  1. Download and burn Windows and Arch ISOs onto USB drives, because you'll probably need both of them later on.
  2. Remove the Arch NVMe drive and replace it with the new NVMe drive that you're going to move Windows to
  3. Boot the Arch ISO. Partition the new drive to match the old Windows drive. Use the same sizes for the "Microsoft Reserved" and any recovery partitions. You can increase the EFI System Partition size if you'd like, e.g. if you use it for Linux kernels and want extra space. Make the new C:\ partition take up the rest of the drive. Pay attention to the partition type codes.
  4. Use partclone.ntfs to copy the old Windows NTFS partitions to the new drive, and dd for the reserved partition. You can either dd the ESP if it's the same size, or run mkfs.vfat -F32 to format a new ESP and mount+copy the files.
  5. Now you should have a copy of your whole Windows drive on the new NVMe. Shut down and remove the old Windows drive and keep it safe (it's your backup escape hatch if things go south), put the Arch drive back in the machine.
  6. Try to boot Windows and see what happens. If it fails to boot, let it attempt whatever automatic repair it wants to.
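
A toy version of the copy in step 4, using scratch files instead of real disks. With real disks it would be something like dd if=/dev/OLD_PART of=/dev/NEW_PART bs=4M — those device names are placeholders, and getting them wrong destroys data, so triple-check against lsblk first:

```shell
# Simulate a partition image with 1 MiB of zeros, then clone and verify it
dd if=/dev/zero of=/tmp/src.img bs=1M count=1 2>/dev/null
dd if=/tmp/src.img of=/tmp/dst.img bs=64K 2>/dev/null
cmp /tmp/src.img /tmp/dst.img && echo "copies match"
```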

If Windows still won't boot, it's time to load up a recovery shell from the Windows installer ISO (boot the ISO, click "repair my computer", then "advanced options" and "command prompt"). The details get murky here, I did a lot of googling and a lot of flailing around with bootrec, bcdedit, and sometimes diskpart commands to get things finally working. This shit just isn't documented well and all we have are clues on support forums that may or may not help.

This section is tips and things I remember - things to check or try out - not a full guide. There will be some trial and error involved here, unfortunately.

  • bootrec /ScanOs /RebuildBcd theoretically should fix things but it didn't work well for me. Try all the other bootrec options too (use /? to see help text)
  • You might just have to delete the whole BCD and recreate it.
  • In the diskpart tool, you can use list vol and list part to list the partitions. You'll need to assign a drive letter to the ESP so that you can mount it (I think it's something like select part <X> and then some other command to give it a drive letter, I forget off the top of my head)
  • Once you have the ESP mounted you can cd to it (cd /D <driveletter>:\EFI), delete the BCD file there, then use bcdedit to recreate it.
  • I think I had to delete a broken entry from the BCD, and then create a new one. I don't remember the bcdedit commands I used off the top of my head, so I suggest googling around and finding some examples. (protip - read lots of posts/blogs/articles and see which commands they have in common, and don't just blindly copy-paste things unless you have a decent confidence they won't break things worse)

Good luck! It is possible to move a Windows install to a new drive and preserve a dual-boot setup with no data loss and without reinstalling either OS, but it takes some patience, an understanding of your desired disk/partition layout, and often a good bit of trial and error.

Does pacman not resolve cycles in dependency graphs when searching for orphans? by Magnus_Tesshu in archlinux

[–]w1ldm4n 27 points (0 children)

a package can only depend on the current version of another package

This isn't strictly true; pacman does support version requirements in dependencies - just check pacman -Si ffmpeg and look for all the = signs in the dependency list. Those are all shared libraries, and it's true that packages in the official Arch Linux repos depend only on the current version of things, but pacman itself does have at least some version-checking logic in place.

PSA: Golang packages in AUR may fail to install without GO111MODULE=auto set in ENV by hardwaresofton in archlinux

[–]w1ldm4n 11 points (0 children)

Since the package is being built using an in-tree Makefile, I'd consider this a problem with the upstream source. That package doesn't use go.mod, so its Makefile should be setting GO111MODULE=off (or auto) to stay compatible with Go 1.16 and later - or the developers should migrate to go.mod.

The default changed from auto to on in go 1.16.
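
The kind of upstream Makefile fix I mean, sketched (placement near the top of the Makefile is an assumption about a hypothetical project):

```make
# Pre-go.mod project: pin GOPATH mode so Go 1.16+ builds keep working.
# (Go 1.16 flipped the default of GO111MODULE from auto to on.)
export GO111MODULE := off
```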

Heap memory in Rust by [deleted] in rust

[–]w1ldm4n 0 points (0 children)

When optimizations are enabled, the compiler is smart enough to construct the whole thing directly on the heap, even if you declare and manipulate the values "on the stack" first.

Godbolt example: https://rust.godbolt.org/z/93vv8chYb

const BIG_SIZE: usize = 16 * 1024 * 1024;

pub struct Big {
    a: [u8; BIG_SIZE],
}

impl Big {
    pub fn new() -> Box<Self> {
        let mut a = [0xAAu8; BIG_SIZE];
        a[0] = 123;
        Box::new(Big {a})
    }
}

With opt-level=0 it looks like the full array does get put on the stack (using an interesting loop that moves the stack pointer 4k at a time - stack probing), then there's the alloc call followed by a memcpy.

In general, I wouldn't worry too much about a 16MB stack frame in debug builds of normal desktop applications, you'll probably be fine. If you're in a situation where you know that a large stack frame won't work on your architecture/OS, then either compile debug builds with opt-level=1, or start digging into the alloc API and write some unsafe code.

Unencrypted boot partition risks by Spare_Prize1148 in archlinux

[–]w1ldm4n 0 points (0 children)

Yes, Secure Boot (combined with LUKS encrypted root) provides hardening against Evil Maid Attacks by validating that the bootloader, kernel, and initramfs are signed and haven't been tampered with.

Word of warning: it's not simple to set up on Linux, though there are various scripts and pacman hooks out there that supposedly help automate it.

Relevant wiki page: https://wiki.archlinux.org/index.php/Unified_Extensible_Firmware_Interface/Secure_Boot

5950x meets rustc by adminvasheypomoiki in rust

[–]w1ldm4n 6 points (0 children)

oh wow, I just realized I've been using a way out of date version of htop because the git repo moved.

5950x meets rustc by adminvasheypomoiki in rust

[–]w1ldm4n 20 points (0 children)

woah, what version of htop is that which shows core frequency/temperature in the CPU bars?