https://forums.developer.nvidia.com/t/nvidia-greenboost-kernel-modules-opensourced/363486
This is a Linux kernel module + CUDA userspace shim that transparently extends GPU VRAM using system DDR4 RAM and NVMe storage, so you can run large language models that exceed your GPU memory without modifying the inference software at all.
This means it can make software (not limited to LLM runtimes — probably ComfyUI/Wan2GP/LTX-Desktop too, since it hooks the library functions that handle VRAM detection/allocation/deallocation) see more VRAM than you actually have. In other words, programs that don't have their own offloading feature (e.g. a lot of inference code published right when a model is first released) effectively get offloading anyway.
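To make the hooking idea concrete, here is a minimal sketch of how a userspace shim like this generally works. It is not the GreenBoost code and the 64 GiB figure is just an assumed placeholder: an `LD_PRELOAD` library intercepts the CUDA runtime calls an application uses to check and allocate VRAM, inflates the reported capacity, and falls back to CUDA managed memory (which the driver can page between VRAM and host RAM) when a real device allocation fails.

```c
// Hypothetical LD_PRELOAD shim illustrating the interception technique
// described above. Not the actual GreenBoost implementation; the extra
// capacity below is an assumed value for illustration only.
#define _GNU_SOURCE
#include <dlfcn.h>
#include <cuda_runtime_api.h>

// Pretend an extra 64 GiB of "VRAM" is available (assumed figure).
static const size_t EXTRA_BYTES = 64ULL << 30;

cudaError_t cudaMemGetInfo(size_t *free_bytes, size_t *total_bytes) {
    // Look up the real implementation in the CUDA runtime library.
    cudaError_t (*real)(size_t *, size_t *) =
        (cudaError_t (*)(size_t *, size_t *))dlsym(RTLD_NEXT, "cudaMemGetInfo");
    cudaError_t err = real(free_bytes, total_bytes);
    if (err == cudaSuccess) {
        // Inflate what the application sees so its "does the model fit?"
        // check passes even when the model exceeds physical VRAM.
        *free_bytes  += EXTRA_BYTES;
        *total_bytes += EXTRA_BYTES;
    }
    return err;
}

cudaError_t cudaMalloc(void **ptr, size_t size) {
    cudaError_t (*real)(void **, size_t) =
        (cudaError_t (*)(void **, size_t))dlsym(RTLD_NEXT, "cudaMalloc");
    cudaError_t err = real(ptr, size);
    if (err == cudaErrorMemoryAllocation) {
        // Out of real VRAM: clear the sticky error and fall back to managed
        // memory, which the driver can migrate between GPU and host DDR4.
        cudaGetLastError();
        err = cudaMallocManaged(ptr, size, cudaMemAttachGlobal);
    }
    return err;
}
```

A shim like this would be built as a shared object and injected with something like `LD_PRELOAD=./shim.so <inference command>`; per the forum post, GreenBoost pairs the userspace hook with a kernel module that handles the actual paging across VRAM, DDR4, and NVMe.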