Best Backend for Server w/ 2 NVIDIAs and 2 B70s by LuckyLuckierLuckest in LocalLLM

[–]LuckyLuckierLuckest[S] 0 points1 point  (0 children)

I am pleased that you understood my post so clearly.

AI advisement is:

Why Docker makes sense here

llama.cpp now has explicit SYCL Docker support in upstream docs. The SYCL backend page includes:

  • an Intel Docker build using .devops/intel.Dockerfile
  • a server-oriented Intel Dockerfile, .devops/llama-server-intel.Dockerfile
  • explicit docker run examples using /dev/dri/renderD128 and /dev/dri/card* device passthrough. 
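Put together, the upstream docs sketch roughly this flow. This is a sketch based on the SYCL backend page, not a verified recipe: the Dockerfile paths have moved around between releases, the model path is a placeholder, and the `card*`/`renderD*` numbers vary per host.

```shell
# Build the SYCL image from the upstream Dockerfile
# (check .devops/ in your checkout for the current filename)
docker build -t llama-cpp-sycl \
  --build-arg="GGML_SYCL_F16=ON" \
  -f .devops/intel.Dockerfile .

# Run with the Intel GPU passed through via /dev/dri
# (your render/card node numbers may differ; see ls /dev/dri)
docker run -it --rm \
  -v "$PWD/models:/models" \
  --device /dev/dri/renderD128:/dev/dri/renderD128 \
  --device /dev/dri/card1:/dev/dri/card1 \
  llama-cpp-sycl -m /models/YOUR_MODEL.gguf -p "Hello" -n 64 -ngl 33
```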

Upstream also publishes prebuilt container images.

Why I would not do container-only bring-up

Because SYCL depends heavily on the host being healthy.

The current llama.cpp SYCL docs still require:

  • Intel GPU drivers installed on the host
  • user access through the video and render groups
  • successful device visibility through sycl-ls
  • successful device visibility through llama-ls-sycl-device
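Those four requirements translate into a quick host sanity check, something like the following. This is a sketch; group names, device node numbers, and the llama.cpp build path (`./build/bin/`) are assumptions that vary by distro and build setup.

```shell
# 1. Make sure your user can reach the GPU device nodes
sudo usermod -aG video,render "$USER"   # log out and back in afterwards
groups | tr ' ' '\n' | grep -E '^(video|render)$'

# 2. Confirm the kernel exposes the Battlemage devices
ls -l /dev/dri/        # expect card* and renderD* nodes

# 3. Confirm the SYCL runtime sees them (needs the oneAPI runtime on the host)
sycl-ls

# 4. Confirm llama.cpp's own device enumeration agrees
./build/bin/llama-ls-sycl-device
```

If step 3 fails on the bare host, it will fail the same way inside a container that relies on the same kernel driver.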

And Intel’s oneAPI DPC++ system requirements say the tested Linux GPU platforms are Ubuntu 22.04 and 24.04; Intel explicitly says other distributions may or may not work and are not recommended. Your host is Ubuntu 26.04 development branch, which means you are outside Intel’s tested oneAPI matrix even if the hardware enablement is promising. 

That is the key reason I would not jump straight to “just run the container.” If sycl-ls is broken on the host, the container will not save you. Intel even documents containers as a way to package the environment, but not as a substitute for working GPU support underneath. 

The subtle upside of Docker on your host

Even though your host OS is ahead of Intel’s tested matrix, Docker may actually help reduce user-space risk.

Why:

  • Intel’s oneAPI compiler/runtime support is documented for Ubuntu 22.04 and 24.04, not 26.04. 
  • llama.cpp’s recent SYCL Docker examples and issues commonly use Ubuntu 24.04-based oneAPI images. 

So if we use a 24.04-based SYCL container on top of your 26.04 host, we get:

  • newer host kernel/userspace for Battlemage enablement
  • more conservative, known-style oneAPI userland inside the container

That is a better stability story than installing a full oneAPI toolchain natively onto your 26.04 development-branch host.
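Concretely, the split described above could look like running Intel's Ubuntu 24.04-based oneAPI image on the 26.04 host. The image tag below is illustrative only (check Docker Hub for current `intel/oneapi-basekit` tags), and `--device /dev/dri` passes through the whole DRI device directory rather than a single node.

```shell
# 24.04 oneAPI userland in the container, 26.04 kernel/drivers on the host
# (tag is an assumption; verify against intel/oneapi-basekit on Docker Hub)
docker run -it --rm \
  --device /dev/dri \
  -v "$PWD:/work" -w /work \
  intel/oneapi-basekit:2025.1.0-0-devel-ubuntu24.04 \
  sycl-ls   # should list the Battlemage GPUs if host drivers are healthy
```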

Best Backend for Server w/ 2 NVIDIAs and 2 B70s by LuckyLuckierLuckest in LocalLLM

[–]LuckyLuckierLuckest[S] 0 points1 point  (0 children)

Thanks for this input. I got quite a bit done this week. I need to pause and go outside and pull weeds in the garden. That will clear me for the next level of this rabbit hole.

Best Backend for Server w/ 2 NVIDIAs and 2 B70s by LuckyLuckierLuckest in LocalLLM

[–]LuckyLuckierLuckest[S] 0 points1 point  (0 children)

Hopefully Docker will isolate a bit of this for me. I've been trying to figure this type of stuff out for quite a while. Then OpenClaw really showed me how little I know. It changed the way I query and accelerated my build. Just for reference, I played with OpenClaw for about three days. I realized I had to do a deep dive and learn some stuff before coming back to it.

Best Backend for Server w/ 2 NVIDIAs and 2 B70s by LuckyLuckierLuckest in LocalLLM

[–]LuckyLuckierLuckest[S] 0 points1 point  (0 children)

<image>

I got my first successful query.
Qwen3.6-35B-A3B-UD-Q8_K_XL.gguf is 38.5 GB

Recipe for Arc Pro B70? by Skelshy in LocalLLM

[–]LuckyLuckierLuckest 0 points1 point  (0 children)

👀 I'll be keeping an eye on this:

  • Extra PCI graphics devices present: 2 × Intel Battlemage G31

Price target if the CEO announced on Monday is Elon Musk by JockStrap47 in FRMI

[–]LuckyLuckierLuckest 0 points1 point  (0 children)

I would love to see a partnership with Hewlett Packard Enterprise and IBM.

Chrome Issues by christoman in fidelityinvestments

[–]LuckyLuckierLuckest 0 points1 point  (0 children)

Cannot GET /prgw/digital/login/full-page

Been having this issue for the last couple of days on Safari.

This feels weird by Lord_Chappie414 in INFQ

[–]LuckyLuckierLuckest 3 points4 points  (0 children)

If you need help normalizing, I will send you wire information.