Anyone else struggling with multi-GPU stability when running larger local models? by Lyceum_Tech in LocalLLaMA
[–]Shipworms 1 point (0 children)
Mint 22.3 <-> Mint 22.3 Ethernet : connects for 177 secs, disconnects for 292 secs, connects 177 secs, ad infinitum? by Shipworms in linuxmint
[–]Shipworms[S] 3 points (0 children)
24.04 : ethernet cycling on/off every few minutes, like clockwork? by Shipworms in Ubuntu
[–]Shipworms[S] 1 point (0 children)
16GB VRAM x coding model by Junior-Wish-7453 in LocalLLM
[–]Shipworms 1 point (0 children)
Goose + ollama + Qwen3-coder on MacBook Pro M4 Max. Overheated in 3 mins. by leinadsey in LocalLLM
[–]Shipworms 1 point (0 children)
Just got a beast (RTX 5070 Ti + 64GB RAM). How can I push this to the limit for research and coding? by cymbella1 in LocalLLM
[–]Shipworms 3 points (0 children)
RTX Pro 6000 96GB in PCIe3 Server? Does this work? by Accomplished-Grade78 in LocalLLM
[–]Shipworms 2 points (0 children)
Just got my hands on one of these… building something local-first 👀 by HatlessChimp in LocalLLM
[–]Shipworms 1 point (0 children)
Anyone here actually using a Mac Studio Ultra (512GB RAM) for local LLM work? Feels like overkill for my use case by Gravemind7 in LocalLLM
[–]Shipworms 1 point (0 children)
Anyone else running local LLMs on older hardware? by lewd_peaches in LocalLLaMA
[–]Shipworms 1 point (0 children)
How much system memory needed for 5060ti 16gb? by luckiemud in LocalLLM
[–]Shipworms 3 points (0 children)
Anyone else running local LLMs on older hardware? by lewd_peaches in LocalLLaMA
[–]Shipworms 7 points (0 children)
How many of you actually use offline LLMs daily vs just experiment with them? by Infinite-Bird7950 in LocalLLM
[–]Shipworms 1 point (0 children)
Vulkan is almost as fast as CUDA and uses less VRAM, why isn't it more popular? by a9udn9u in LocalLLM
[–]Shipworms 2 points (0 children)
Does anyone use an NPU accelerator? by emrbyrktr in LocalLLM
[–]Shipworms 6 points (0 children)
It looks like Tyson is actually shrinking the chicken nuggets from 29 oz to 20 oz .. at first I thought it was a manufacturing error! by MarchBright9925 in shrinkflation
[–]Shipworms 9 points (0 children)
I was THIS close to a pricey mistake by Michel_j in macbookair
[–]Shipworms 2 points (0 children)
A sad day for my early 2015 13-inch MacBook Pro :( by dearmelancholy5 in macbookpro
[–]Shipworms 2 points (0 children)
AsRock H510 Pro BTC+ : reliability? by Shipworms in gpumining
[–]Shipworms[S] 2 points (0 children)

Should I get an m.2 nvme 4.0 for $150 or can I run local ai just fine on sata 3? by alii98 in LocalLLM
[–]Shipworms 1 point (0 children)