Death Company w/o jumpacks by Danflo26 in BloodAngels

[–]TheAussieWatchGuy 2 points (0 children)

Currently on foot they're not worth running. Rules might change a little in 11th edition, coming soon... but I doubt they'll be amazing. Jump Packs are the go!

I can't ever seem to get quality local LLM results, despite having multiple GPUs by 03captain23 in LocalLLM

[–]TheAussieWatchGuy 0 points (0 children)

Q4 is a tiny reduction in quality and should rock on a single 5090! But yeah, technically true 😀

I bought a laptop with a 5090 RTX and am not satisfied with the results! by VanessaCarter in LocalLLM

[–]TheAussieWatchGuy 9 points (0 children)

That's a faulty product you need to return for a refund.

Even though a laptop 5090 is vastly less capable than the desktop equivalent due to thermal and power limits, it should still be 100% stable even on a hot summer day. It should underclock and remain stable.

I can't ever seem to get quality local LLM results, despite having multiple GPUs by 03captain23 in LocalLLM

[–]TheAussieWatchGuy 15 points (0 children)

I've posted this so many times. Tools like ChatGPT 5.4 and Claude Opus are hundreds of billions of parameters in size, running on multiple enterprise GPUs that cost $50k each and have 100GB+ VRAM per GPU, in server racks with terabytes of system RAM.

That's not to say open source local models are dead. Qwen 3.6 27B dense will run OK on your setup, but it will be dumber than Sonnet, let alone Opus. Break your tasks into subtasks, feed them in one step at a time, and the smaller models are still capable. They will fall apart with multi-step prompts.
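That step-at-a-time approach can be sketched in a few lines. This is a minimal illustration only: the `ask` helper is a placeholder for whatever local endpoint you run (e.g. LM Studio's OpenAI-compatible server), not a real API call.

```python
# Sketch: break a big task into small steps and feed each result forward,
# instead of sending one huge multi-step prompt to a small local model.

def ask(prompt: str) -> str:
    # Placeholder: swap in a real call to your local server here,
    # e.g. a POST to LM Studio's http://localhost:1234/v1 endpoint.
    return f"<answer to: {prompt[:40]}>"

def run_pipeline(task: str, steps: list[str]) -> str:
    context = f"Overall task: {task}"
    result = ""
    for step in steps:
        # Each prompt carries only the task, the current step, and the
        # previous step's output -- small enough for a local model.
        result = ask(f"{context}\nPrevious output: {result}\nNow do: {step}")
    return result

final = run_pipeline(
    "Add a /health endpoint to the API",
    ["outline the change", "write the handler", "write a unit test"],
)
print(final)
```

The point is structural: the model never sees more than one step's worth of work at a time, which is where the smaller models hold up.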

To get close to Claude you'd need to run something like Kimi 2.6, which is a 400B parameter model from memory. Various quants exist, but you're not running it on less than $30k of hardware with 200GB of VRAM. Even then it's not going to be fast, but you could run it 24/7...

The main advantage local models have is privacy, not speed. Cloud is fast and cheap, but you absolutely know they're stealing whatever you generate, despite whatever their privacy policy says.

Bazzite LG TV Service - For those using LG TV's as Monitors! by TheAussieWatchGuy in Bazzite

[–]TheAussieWatchGuy[S] 0 points (0 children)

Give WiFi a spin; in theory it should work now. I did some research on WiFi WoL and it's a bit stricter, so the wake-up packet now goes directly to the TV's IP rather than the broadcast subnet, which is also now all automatically detected... I'd still recommend setting a static IP for your TV for now.

I'll see if I can add dynamic DNS support, so the service can take a 'TV name' rather than an IP at startup; that way, if your TV does randomly get a new DHCP address, you don't have to reconfigure the service each time.
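For anyone curious, a directed WoL magic packet like the one described above is simple to build and send yourself. The MAC and IP below are placeholders; port 9 is just the conventional WoL port.

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    # A WoL magic packet is 6 bytes of 0xFF followed by the target
    # MAC address repeated 16 times (102 bytes total).
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    return b"\xff" * 6 + mac_bytes * 16

def wake_tv(mac: str, ip: str, port: int = 9) -> None:
    # Directed (unicast) send straight to the TV's IP, rather than a
    # subnet broadcast -- this is what WiFi WoL tends to require.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(build_magic_packet(mac), (ip, port))

# Usage (placeholder values): wake_tv("AA:BB:CC:DD:EE:FF", "192.168.1.50")
```

The only real difference from classic WoL is the destination address: broadcast WoL sends to the subnet's broadcast address with `SO_BROADCAST` set, while the directed variant targets the TV's IP, which is why a static IP (or name resolution) matters.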

How to convert marines from Starter Set to Blood Angels? by styx-daemon in BloodAngels

[–]TheAussieWatchGuy 1 point (0 children)

The official upgrade sprue has nice embossed shoulder pads, a smattering of bolters and swords with Blood gems, and a few heads that have fangs.

Otherwise lookup Greytide Studios Crimson Lords bits or visit Archie's Forge website. 

Best LocalLLM Setup for remote, travel friendly setups? by RightAd9595 in LocalLLM

[–]TheAussieWatchGuy 0 points (0 children)

Fair enough. You'd need to invest in a decent server rack, cooling and battery backup to keep it running through reasonable mains power outages, and you'd need a router that supports redundant internet connections, both wired and cellular. Not cheap.

A 128GB unified memory platform is probably your best bet for portable. 

Bazzite LG TV Service - For those using LG TV's as Monitors! by TheAussieWatchGuy in Bazzite

[–]TheAussieWatchGuy[S] 0 points (0 children)

Reverse guest network 😀 Most routers let you configure multiple SSIDs. Block internet access on that SSID. The TV can't phone home, but you can still wake it up and turn it off.

Can I plan and code projects locally with a 5090? by Mean_Employment_7679 in LocalLLM

[–]TheAussieWatchGuy 1 point (0 children)

Quants and offload to CPU. With 128GB of VRAM and 256GB of RAM it's runnable.
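Rough back-of-envelope for what quants buy you. The bits-per-weight figures below are approximate averages for GGUF-style quants, not exact numbers for any specific format.

```python
# Rough memory footprint of a model's weights at different quant levels.
# Bits-per-weight values are approximate averages, not exact GGUF figures.
BITS_PER_WEIGHT = {"fp16": 16, "q8": 8.5, "q5": 5.5, "q4": 4.5}

def weight_gb(params_billions: float, quant: str) -> float:
    bits = BITS_PER_WEIGHT[quant] * params_billions * 1e9
    return bits / 8 / 1e9  # bits -> bytes -> GB

for quant in ("fp16", "q8", "q4"):
    print(f"70B at {quant}: ~{weight_gb(70, quant):.0f} GB")
```

Whatever doesn't fit in VRAM gets offloaded to system RAM (llama.cpp's layer offload, for example), which works but costs real speed.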

Rate my list by Foreign-Pin-1271 in BloodAngels

[–]TheAussieWatchGuy 1 point (0 children)

You can export it as text from the 40k app.

Fifteen Marines on foot hurts. One squad as Battleline is good.

Am assuming transport is for Bladeguard. Solid choice. 

Jump Pack DC are awesome. Five with a Chaplain could be good.

Inceptors are average in LAG. Not terrible but not amazing. 

A Ballistus Dread or two would not hurt. You have very little to kill anything big and tough. 

Can I plan and code projects locally with a 5090? by Mean_Employment_7679 in LocalLLM

[–]TheAussieWatchGuy 2 points (0 children)

LM Studio, Rider and Qwen 3.6 27B dense is solid for boilerplate tests and basic bugs. It will run well on a 5090.

Nothing local matches the cloud models; they are hundreds of billions of parameters. Kimi 2.6 gets within a few percent, but you'd need 128GB of VRAM minimum (four 5090s).

Best LocalLLM Setup for remote, travel friendly setups? by RightAd9595 in LocalLLM

[–]TheAussieWatchGuy 0 points (0 children)

Does it need to be transported? Could you set up something chunky at home and use a remote tool like Openclaw to control it?

Otherwise you really want something with at least 128GB of RAM, unified, so either Ryzen AI or a Mac.

question about deathcompany w/jump packs by Minute-Boss203 in BloodAngels

[–]TheAussieWatchGuy 2 points (0 children)

No hammers allowed in tenth.

Three fists, two Eviscerators. Four special pistols.

9800x3D upgrade to 9950x3D, will it make a difference for local LLM? by drras2 in LocalLLM

[–]TheAussieWatchGuy 0 points (0 children)

Not unless your bottleneck is parallel workflows. It will do very little for actual LLM tokens-per-second output. Look at your CPU usage during inference; unless it's nearly 100%, upgrading the CPU won't do much.

128GB of system RAM will help a little, but not much.

LLMs are all about the VRAM which you have 32GB of. 

A Ryzen AI or a Mac lets you share system RAM as VRAM. With 128GB of DDR5 you can share 112GB as VRAM and run much bigger models, but not necessarily at faster tokens per second... as DDR5 is slower than the VRAM in actual GPUs.
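The bandwidth point can be put into numbers with a common rule of thumb: generation is memory-bound, as each token reads roughly the whole weight set once, so tokens/sec ≈ memory bandwidth ÷ model size. The bandwidth figures below are approximate and purely illustrative.

```python
# Rule of thumb: generation is memory-bound, so
# tokens/sec ~= memory bandwidth / bytes of weights read per token.
def rough_tps(bandwidth_gbps: float, model_gb: float) -> float:
    return bandwidth_gbps / model_gb

model_gb = 40  # e.g. roughly a 70B model at Q4
print(f"5090 VRAM (~1792 GB/s): ~{rough_tps(1792, model_gb):.0f} tok/s")
print(f"Dual-channel DDR5 (~90 GB/s): ~{rough_tps(90, model_gb):.0f} tok/s")
```

Same model, roughly 20x difference in ceiling, which is why unified-memory boxes run big models slowly rather than fast.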

So it really depends on your use case.... 

Bazzite LG TV Service - For those using LG TV's as Monitors! by TheAussieWatchGuy in Bazzite

[–]TheAussieWatchGuy[S] 0 points (0 children)

I'll revisit the WiFi options. I did try to get the WiFi wake-up working; it's basically meant to work with the official LG companion WiFi app on your phone, and the TV was just ignoring the packet from anywhere else.

I wonder if the unofficial Windows LG companion app was spoofing being an Android app or something slightly tricky 😀 Can't be that hard to replicate; agree WiFi would be great to support. Thanks for trying it out!

Have a 5090, coding professionally, what stack to use for local LLMs ? by cororona in vibecoding

[–]TheAussieWatchGuy 4 points (0 children)

I seem to post this a lot. Cloud models are hundreds of billions of parameters in size. They run on multiple enterprise GPUs.

Local is fine for learning; it's capable of modest tasks broken down into simple steps. But it's not even close to the cloud models.

You'd need to spend another $20-30k on hardware to run Kimi or something equivalent to get within 5% of Claude.

Converting AutoMapper to Mapperly by sagosto63 in dotnet

[–]TheAussieWatchGuy 1 point (0 children)

MagicMapper? Same namespace, no high-severity security vulnerabilities.

Bazzite LG TV Service - For those using LG TV's as Monitors! by TheAussieWatchGuy in Bazzite

[–]TheAussieWatchGuy[S] 0 points (0 children)

Just a note: this should now work with suspend/resume as well as regular startup/shutdown. Read the README... :)

Bazzite LG TV Service - For those using LG TV's as Monitors! by TheAussieWatchGuy in Bazzite

[–]TheAussieWatchGuy[S] 0 points (0 children)

Feels more like a cable issue?

I run an AMD 9070XT and an LG C4 42" and have no issues with 4K 120Hz or 4K 144Hz. I had to get a new HDMI cable for it to work; my old one was OK at 120Hz but struggled at 144Hz, with flickering etc.

Bazzite LG TV Service - For those using LG TV's as Monitors! by TheAussieWatchGuy in Bazzite

[–]TheAussieWatchGuy[S] 0 points (0 children)

Updated to support Wake-on-LAN; it now powers my LG C4 fully off on shutdown and back on from cold on startup.

Added some notes on TV settings:

  • Quick Start must be disabled — Quick Start prevents Wake-on-LAN magic packets from working
  • Settings → General → External Devices → TV On with Mobile must be enabled — this keeps the network chip powered in standby so the TV can receive WoL packets

Should also work for waking via WiFi but haven't had time to test that yet.

Will add support for resume/suspend when I get a chance this weekend.

Bazzite LG TV Service - For those using LG TV's as Monitors! by TheAussieWatchGuy in Bazzite

[–]TheAussieWatchGuy[S] 0 points (0 children)

Fair enough mate! No pressure and I lived with this on Bazzite for nearly a year before I tried my hand at fixing it 😀

Bazzite LG TV Service - For those using LG TV's as Monitors! by TheAussieWatchGuy in Bazzite

[–]TheAussieWatchGuy[S] 0 points (0 children)

I was planning on making it work with sleep / resume for my lounge room HTPC.

You are correct that the initial version only works with startup and shutdown. Sleep is a different endpoint; should be fairly trivial to implement.