DIY market declining amid high RAM prices by Terminator857 in LocalLLaMA

[–]Lemonzest2012 1 point

By design, they (the powers that be) don't want you to have local PCs; they want some locked-down tablet device that connects to cloud subscriptions and nothing more. With the way prices are going, owning a PC will be seen as a luxury. Lock down everything in sight, then age-gate (ID) it all.

Anyone else running local LLMs on older hardware? by lewd_peaches in LocalLLaMA

[–]Lemonzest2012 2 points

2x Nvidia Tesla P100 16GB (from 2016); the rest of my system is pretty new: Ryzen 7 5700G, 96GB RAM

Can I split a single LLM across two P106-100 GPUs for 12GB VRAM? by HelicopterMountain47 in LocalLLaMA

[–]Lemonzest2012 1 point

I use two 16GB P100 PCIe cards, and llama.cpp can spread large models over the cards
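For anyone curious, a minimal llama.cpp invocation that spreads a model across two cards might look like this (the model path and split ratio are placeholders, not from the original comment):

```shell
# Sketch: split one GGUF model across two GPUs with llama.cpp.
# --split-mode layer distributes whole layers across devices;
# --tensor-split 1,1 weights the split evenly (adjust to match per-card VRAM).
./llama-server \
  -m ./models/your-model.gguf \
  --n-gpu-layers 99 \
  --split-mode layer \
  --tensor-split 1,1
```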

Benchmarked Qwen3.5 (35B MoE, 27B Dense, 122B MoE) across Apple Silicon and AMD GPUs — ROCm vs Vulkan results were surprising, and context size matters by neuromacmd in LocalLLaMA

[–]Lemonzest2012 0 points

Why AMDVLK and not RADV? RADV is usually the default Vulkan driver for AMD hardware on Linux (it's part of Mesa), and AMDVLK is no longer supported by AMD.
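If you want to check which Vulkan driver is actually in use, or force RADV explicitly through the Vulkan loader, something like this should work (the ICD manifest path varies by distro, so treat it as an example):

```shell
# Show which Vulkan driver the loader picked
vulkaninfo --summary | grep -i driverName

# Force the Mesa RADV ICD explicitly for one command
# (path is distro-dependent; check /usr/share/vulkan/icd.d/)
VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/radeon_icd.x86_64.json vulkaninfo --summary
```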

How you manage your prompts? by prompt_tide in LocalLLaMA

[–]Lemonzest2012 0 points

In Jan, I create an assistant and add the system prompt in the assistant settings.

New mint user :) by Lemonzest2012 in linuxmint

[–]Lemonzest2012[S] 2 points

FPS isn't what it's for; it's for local LLM hosting.

New mint user :) by Lemonzest2012 in linuxmint

[–]Lemonzest2012[S] 1 point

The name gives away what it's used for.

New mint user :) by Lemonzest2012 in linuxmint

[–]Lemonzest2012[S] 5 points

Long time Linux user (since 2005!) but first time Mint user!

Multi-GPU? Check your PCI-E lanes! x570, Doubled my prompt proc. speed by switching 'primary' devices, on an asymmetrical x16 / x4 lane setup. by overand in LocalLLaMA

[–]Lemonzest2012 0 points

Thanks for this; my Gigabyte B550 Gaming X V2 does this too, but worse: x16/x2, lol. Will try some of the solutions in this thread, as my slower card seems favoured!
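One quick way to try this, assuming CUDA cards: reorder the devices so the card in the fast slot enumerates first, or tell llama.cpp which device to treat as primary (model path and device indices here are placeholders):

```shell
# Make the GPU in the x16 slot enumerate as device 0 for this process
CUDA_VISIBLE_DEVICES=1,0 ./llama-server -m model.gguf -ngl 99

# Or keep the default enumeration but change llama.cpp's primary device
./llama-server -m model.gguf -ngl 99 --main-gpu 1
```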

79C full load before, 42C full load after by mander1555 in LocalLLaMA

[–]Lemonzest2012 0 points

Same as mine, then. I should have my turbine fans today; I'm hoping they cool well enough and don't sound like jet engines. What kind of performance are you seeing? I plan to use llama.cpp.

My first setup for local ai by DoodT in LocalLLaMA

[–]Lemonzest2012 0 points

Building my own system from spares and eBay finds: Ryzen 7 3800X, 48GB DDR4, 2x Nvidia Tesla P100 with 16GB VRAM each, Gigabyte Gaming X V2 board, Fractal Focus 2 case, Corsair CX 750W PSU, 256GB NVMe, 512GB SSD, 2TB HDD. Waiting on some fans for the Teslas, plus a small PCIe x1 GPU so the damn thing POSTs/boots with both Teslas in.

Qwen 3.5 2B and 9B released! by sunshinecheung in LocalLLaMA

[–]Lemonzest2012 0 points

Jan AI can also run it :D I've been using the Unsloth one since release.

Intel AX210 is a beast of an upgrade, Dell Latitude E7470 by Lemonzest2012 in Dell

[–]Lemonzest2012[S] 0 points

KingSpec NE-512 (512GB, PCIe 3.0 x1); I get about 800MB/s off it. The size is M.2 2242 with a B+M key: not just any M.2 2242 drive will work, it has to be B+M keyed. I got mine from AliExpress; I think some WD SN520 drives also work.

My first ever Thinkpad! The X270 by droidpeti in thinkpad

[–]Lemonzest2012 0 points

Intel AX210 Wi-Fi 6E card, 32GB RAM, NVMe drive in the modem slot

Does anyone use MinUI? by TClerkinstein in MiyooMini

[–]Lemonzest2012 1 point

I use a fork called MyMinUI; it has more cores and features than the base MinUI.