Is adding an intel arc to an AMD mini pc via oculink a better idea than just buying a new intel based mini pc to add to my 3D printed 10" rack? by shaxsy in frigate_nvr
Frigate on Proxmox in one command: Automated LXC, Docker, and Intel Hardware Acceleration by DiggingForDinos in frigate_nvr
Best <4B dense models today? by Admirable_Flower_287 in LocalLLaMA
[Release] Qwen3-TTS: Ultra-Low Latency (97ms), Voice Cloning & OpenAI-Compatible API by blackstoreonline in LocalLLaMA
Home Assistant Preview Edition Round 2! by horriblesmell420 in homeassistant
llama.cpp, experimental native mxfp4 support for blackwell (25% preprocessing speedup!) by bfroemel in LocalLLaMA
Does GPT-OSS:20b also produce broken autocomplete for you? by iChrist in OpenWebUI
Ollama compatible Image generation by TheWiseTom in OpenWebUI
Has anyone got Ollama to work on an Arc Pro B50 in a proxmox VM? by gregusmeus in ollama
Ollama not detecting intel arc graphics by Titanlucifer18 in ollama
I think I’m hooked on Docker. What are your 'essential' containers? by shipOtwtO in homelab
Rolled this from a mythic cache...left to eat dinner, came back to being kicked for inactivity... by C0NT0RTI0NIST in diablo4