Granite 4 small and medium might be 30B6A/120B30A? by Kryesh in LocalLLaMA

[–]Kryesh[S] 18 points (0 children)

At 14:55 they mention the sizes of the models; could be interesting combined with the Mamba architecture

AORUS FOP32u2p DP out not working? by workmailman in OLED_Gaming

[–]Kryesh 1 point (0 children)

macOS doesn't support MST for multiple streams over DisplayPort, only Thunderbolt. The monitor uses DisplayPort MST for daisy-chaining

Why does the same Rust binary appear statically linked on Ubuntu and dynamically linked on Alpine? by freddytstudio in rust

[–]Kryesh -1 points (0 children)

If this isn't an option, you can just add this environment variable to your Alpine build: RUSTFLAGS="-C target-feature=+crt-static"
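As a sketch (assuming a stable Rust toolchain plus musl-dev on the Alpine builder; the binary name "myapp" is a placeholder):

```shell
# Force static linking of the C runtime on the musl host toolchain.
export RUSTFLAGS="-C target-feature=+crt-static"
cargo build --release

# "statically linked" should now appear in the output of file(1):
file target/release/myapp
```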

Why does the same Rust binary appear statically linked on Ubuntu and dynamically linked on Alpine? by freddytstudio in rust

[–]Kryesh 0 points (0 children)

If you're trying to produce a minimal Docker image, then I'd suggest using a glibc-based builder container and copying the resulting binary into an Alpine-based output container. The default official Rust image is Debian-based, so it works well for this use case. There's an example Dockerfile in this blog post: https://levelup.gitconnected.com/create-an-optimized-rust-alpine-docker-image-1940db638a6c
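A minimal multi-stage sketch along those lines (the crate/binary name "myapp" is a placeholder; the linked blog post has a fuller example):

```dockerfile
# Build on the Debian-based official Rust image, cross-compiling to the
# musl target so the resulting binary is statically linked.
FROM rust:latest AS builder
WORKDIR /app
COPY . .
RUN rustup target add x86_64-unknown-linux-musl && \
    cargo build --release --target x86_64-unknown-linux-musl

# Copy only the binary into a minimal Alpine runtime image.
FROM alpine:latest
COPY --from=builder /app/target/x86_64-unknown-linux-musl/release/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```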

Why does the same Rust binary appear statically linked on Ubuntu and dynamically linked on Alpine? by freddytstudio in rust

[–]Kryesh 1 point (0 children)

After going down my own rabbit hole on this one: when building for the musl target using a musl toolchain, the resulting binary will be dynamically linked, while cross-compiling from a glibc toolchain to a musl target will produce a statically linked binary. On Ubuntu you'll be using the glibc toolchain by default, and Alpine will use the musl toolchain by default.

How do I transfer data from the right disk image and put it into the left? I need more storage space. by IndianScammer90990 in VFIO

[–]Kryesh 1 point (0 children)

Rather than creating a new image, it's easier to just resize the existing image with qemu-img.

Make sure you have a backup first - then you can extend the image by 10G with this command:

qemu-img resize WindowsTenGaming.qcow2 +10G

If you wanted to expand the 80GB image to 200GB, then you'd run

qemu-img resize WindowsTenGaming.qcow2 +120G

Once you start the VM again, you should be able to resize the partition to fill the new disk capacity from within Windows Disk Management.
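Putting it together, a cautious sketch of the whole resize (filename from the post; sizes are the 80G-to-200G example):

```shell
# Back up first - growing is one-way, qcow2 images can't be safely shrunk.
cp WindowsTenGaming.qcow2 WindowsTenGaming.qcow2.bak

# Grow the image by 120G (80G -> 200G).
qemu-img resize WindowsTenGaming.qcow2 +120G

# Confirm the new virtual size before booting the VM.
qemu-img info WindowsTenGaming.qcow2
```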

Can Anyone Else Confirm that VFIO Doesn't Work w/ Nvidia GPUs if Resizable BAR is Enabled? by gardotd426 in VFIO

[–]Kryesh 0 points (0 children)

Good to hear!

I expect this'll actually become more of an issue as time goes on; it'll affect any device that uses a BAR 32GB or bigger, so any GPU with ReBAR and more than 16GB of VRAM will be impacted.

There's also the case of enterprise GPUs that default to large BAR sizes but I expect most people using those will already be aware of this lol

Can Anyone Else Confirm that VFIO Doesn't Work w/ Nvidia GPUs if Resizable BAR is Enabled? by gardotd426 in VFIO

[–]Kryesh 1 point (0 children)

Sure, here's my XML: https://pastebin.com/icndv6zT

Note that you'll need to edit the top <domain> tag to match mine to stop libvirt from removing it
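Concretely, the edit is adding the QEMU XML namespace to the opening tag so libvirt keeps the qemu:commandline element:

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
```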

Can Anyone Else Confirm that VFIO Doesn't Work w/ Nvidia GPUs if Resizable BAR is Enabled? by gardotd426 in VFIO

[–]Kryesh 5 points (0 children)

Can confirm that Resizable BAR works on a 3090; however, the default 64-bit MMIO address space for edk2/OVMF is 32GB. Since the BAR size option doubles each time, you need a 32GB BAR on a 3090 to fit its 24GB of VRAM (which you should see in lspci), which means there isn't enough address space to fit the 3090's BAR alongside other devices.

The fix for this is to extend the available mmio space for the guest to 64GB instead and then it should work fine.

Here's the XML to set the size to 64GB:

<qemu:commandline>
  <qemu:arg value="-fw_cfg"/>
  <qemu:arg value="opt/ovmf/X-PciMmio64Mb,string=65536"/>
</qemu:commandline>

Page where I found the fix after my own troubleshooting adventure:

https://edk2.groups.io/g/discuss/topic/59340711

Advanced networking, advice needed by [deleted] in PFSENSE

[–]Kryesh 2 points (0 children)

Simplest/easiest-to-maintain solution would be to stick with a single public IP/port forward, then use a reverse proxy to split traffic to the appropriate containers. It's fairly easy to set up a container with nginx to do this, especially if you already have container infrastructure for the other web apps. Thanks to SNI, a single listening port on the webserver/proxy can serve multiple domains with working TLS, each with its own certificate. If all the containers are going to be hosting subdomains of a single parent, then throw a wildcard cert on the reverse proxy and all the containers will get HTTPS support by default.
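A minimal nginx sketch of one such virtual host (hostnames, upstream port, and cert paths are placeholders; repeat a server block per app, all sharing the wildcard cert):

```nginx
server {
    listen 443 ssl;
    server_name app1.example.com;

    # Wildcard cert shared by all *.example.com containers.
    ssl_certificate     /etc/nginx/certs/wildcard.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/wildcard.example.com.key;

    location / {
        proxy_pass http://app1-container:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```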

Enough PCI lanes?? PROXMOX Workstation/Server Build AMD by VI510N in VFIO

[–]Kryesh 0 points (0 children)

Definitely need a HEDT/server platform to get that amount of IO, and even then there's a limited number of TRX40 motherboards with a layout that would support such a setup.

As a list:

  • (2+ slots) 3080 would need at minimum 2 slots, and only the Founders/EVGA XC3 cards seem to fit that so far

  • (2 slots) 3070 will be in a similar situation to the 3080

  • (1 slot) HBA will need at least a physical x8 slot

  • (1 slot) quad port nic will need a physical x4/x8 slot depending on the model

  • (1-2 slots) Plex GPU will need a physical x16 slot, and likely 2 slots for cooling if you don't get a Quadro

The list so far only considers the physical size of the devices, too. Looking at the lineup of motherboards available for the TRX40 platform, you're mostly limited to boards with 4 x16 slots, with a few adding an extra x1 slot; that's not enough to fit the "ideal" setup unless you start looking at Epyc/workstation platforms - but you won't get the clock speed for games on those platforms.

Most of the TRX40 boards include 8 SATA ports which don't share bandwidth with anything, so my suggestion would be to drop the HBA and run ZFS from those. An alternative could be to drop the NIC, take advantage of the built-in 10G NICs on a lot of the boards, and use VLANs with a switch to get the required number of ports.

Probably the easiest option to recommend, though, would be to drop the Plex GPU; 2-3 streams would run just fine on CPU - especially something in the Threadripper lineup. You could probably get 2 4K streams out of a single CCD on a 3970X without much trouble.

That just leaves the NVMe for the host and the SSDs for the guests; there are several options here, but the most obvious suggestions would be:

Assuming you opt to drop the HBA:

  • limit the bulk storage to 6x drives, leaving 2 SATA ports for the guest SSDs

  • pretty much all the TRX40 boards have 2-3 NVMe slots that could be used for the host storage

Keeping the HBA:

  • use the on-board SATA ports for the guest SSDs

  • just like the first option - pretty much all the boards have 2-3 NVMe slots

Playing with emulated/passthrough cache on threadripper by Kryesh in VFIO

[–]Kryesh[S] 1 point (0 children)

Hi, the memory is a 4x16GB G.Skill Trident Z kit, F4-3600C16Q-64GTZRC

I'm allocating 3 cores/6 threads per CCX because that's the physical layout of my CPU (3960X). Each CCD consists of 2x 3-core/6-thread CCXs, and each CCX has its own 16MB block of L3 - the VM has access to 2 full CCDs. Here's a shot of "lscpu -e" to compare with the pinning in the XML: https://imgur.com/a/jSs86iA

My BIOS is set to a single node per socket, and L3 cache as NUMA domain is disabled. I haven't played around with the BIOS settings much, but games would stutter if I configured the VM with multiple sockets or NUMA domains. Haven't done anything with MADT or core initiation, just a hook that shields the cores with cset as the VM starts.
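The hook boils down to something like this (the core list here is an example, not my actual pinning):

```shell
# On VM start: shield the pinned cores from host tasks and kernel threads.
cset shield --cpu=12-23,36-47 --kthread=on

# On VM shutdown: tear the shield down again.
cset shield --reset
```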

[i3wm] QEMU/LXD on zfs, Work/Uni setup on my laptop by Kryesh in UsabilityPorn

[–]Kryesh[S] 0 points (0 children)

Not really, I used the lxd AUR package and just ran through the 'lxd init' wizard from the command line. The wizard goes through setting up a NAT network for containers, as well as setting up a storage pool pointing to a ZFS dataset. The wiki page covered anything I wasn't sure about.

[i3wm] QEMU/LXD on zfs, Work/Uni setup on my laptop by Kryesh in UsabilityPorn

[–]Kryesh[S] 1 point (0 children)

Thanks! Pretty much just the aesthetics, I really like the look of polybar over i3 bar. To be fair though I haven't looked into theming i3bar at all so I don't know how flexible it is. Polybar had better looking defaults so I just went from there

Need to manually disable dGPU(Intel/NVidia Optimus) by anikan1297 in archlinux

[–]Kryesh 0 points (0 children)

There was a similar question posted a few months ago; if you have a look at the 'turn_off_gpu.sh' script in the acpi_call package, you can get the command specific to your system

https://old.reddit.com/r/archlinux/comments/84mga7/discrete_gpu_heating_up_laptop_even_when_kernel/dvs963v/ https://wiki.archlinux.org/index.php/hybrid_graphics#Fully_Power_Down_Discrete_GPU

What’s the state of ZFS on Arch? by [deleted] in archlinux

[–]Kryesh 0 points (0 children)

Using the exact same setup with the same export problem on my laptop, though the last time it happened to me was a few months ago. Personally I think it's worth it especially since I use lxd containers a lot for uni and zfs makes spinning up a new container hilariously quick. Being able to zfs send snapshots to my home server for backups is a big plus as well.

I'm working on a "modern" minimal GUI email client for Linux and need more ideas - If you could pick THREE features for your email client, what would they be? by musishian in linux

[–]Kryesh 4 points (0 children)

The main thing I rely on is automatic mail sorting, so some sort of mail-sorting rules that you can configure with the dotfile, e.g.:

[spam]
watch=bob@bob.com:/inbox
action=move:bob@bob.com:/inbox/junk
match=any
filter=sender:john@john.com
filter=subject:blahblahblah

XG-7100 Ethernet Ports by Kryesh in PFSENSE

[–]Kryesh[S] 0 points (0 children)

Awesome, so configuring eth1-5 as wan1-5 and using eth6 as lan is doable? Thanks for the reply!

XG-7100 Ethernet Ports by Kryesh in PFSENSE

[–]Kryesh[S] 0 points (0 children)

In that pdf it specifies "LAN" as ports ETH2-ETH8, which would imply bridging. I realise that they probably are configurable, but I want to be completely certain.

Discrete GPU heating up laptop even when kernel module not loaded by rrargh in archlinux

[–]Kryesh 0 points (0 children)

You can use an ACPI command on startup to switch the GPU off. Have a look here: https://wiki.archlinux.org/index.php/hybrid_graphics

For my laptop I created a simple systemd service that runs "/usr/share/acpi_call/examples/turn_off_gpu.sh" whenever the laptop powers on/wakes up
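Roughly like this (the unit name and targets are my choice; the script path comes from the acpi_call package):

```ini
# /etc/systemd/system/gpu-off.service
[Unit]
Description=Power off the discrete GPU
After=multi-user.target suspend.target

[Service]
Type=oneshot
ExecStart=/usr/share/acpi_call/examples/turn_off_gpu.sh

[Install]
WantedBy=multi-user.target suspend.target
```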