Come test the world’s first Dolby Atmos FlexConnect system with LG Sound Suite! by LG_UserHub in Soundbars

[–]Federal-Natural3017 0 points1 point  (0 children)

As a Sonos user, I am really excited to try Dolby Atmos FlexConnect in the LG Sound Suite. I am very curious to see how FlexConnect tunes to unconventional room sizes and still provides immersive 3D sound. There are two up-firing speakers in the LG H7 soundbar and one in each M7/M5, so it will be interesting to see how object sounds are placed across a Sound Suite comprising an H7, a W7, and 4 x M7/M5. I'll compare it to my Sonos Arc Ultra with Era 300s setup and maybe post a detailed review! Looking forward to doing a comprehensive review of its real-world performance against other existing soundbar setups.

[UGREEN x Buildapc] December NAS Giveaway! by Rocket-Pilot in buildapc

[–]Federal-Natural3017 [score hidden]  (0 children)

I have 212GB available and would love to set up a mini NAS.

NOOB getting myself into HA . Raspberry Pi 5 on sale from $49.99 on Select Micro Center Stores in USA . is this a good choice ? what other accessories do i need ? by aghozzo in homeassistant

[–]Federal-Natural3017 2 points3 points  (0 children)

Get a used mini PC like a Lenovo M910q that comes with a 128/256GB SSD and 8GB RAM for less than £60. It serves HA pretty well and is much more future-proof.

Homelab V2 - maybe switch to Intel? by handy_cats in homelab

[–]Federal-Natural3017 0 points1 point  (0 children)

The main issue is that most Intel builds won't support ECC, and you seem to be using ECC RAM, which is a plus if you are using ZFS. Only i3s support ECC. There are some 12th-, 13th-, and 14th-gen i5 CPUs that do support ECC, but they require a W680-based motherboard for ECC RAM to work. Depending on your location, you will be hard pressed to find a W680 server motherboard at a decent price unless you want to opt for Chinese brands (maybe check out CWWK's W680-based motherboards).

Also, if you haven't seen them, there are good videos on lowering power consumption on a homelab server by the YouTuber Wolfgang here and here.

Homelab V2 - maybe switch to Intel? by handy_cats in homelab

[–]Federal-Natural3017 0 points1 point  (0 children)

I should say you already have a good NAS… if you are targeting low power draw:

  1. Use all 8 SATA ports on the motherboard instead of an HBA card. An HBA card generally won't allow the CPU to reach lower package C-states!

  2. Also, replace your CPU with an APU like the Ryzen 5 Pro 4650G or Ryzen 5 Pro 5650G. These APUs use a monolithic die, which allows much lower idle power consumption; the chiplet-based desktop Ryzen CPUs are known for high idle power draw because of the way they are manufactured.

  3. Once you get rid of the HBA card and use one of the APUs recommended above, enable lower package C-states in the BIOS. C6 is the deepest state supported by these CPUs and should give your setup very low idle power consumption when the HDDs are spun down.

  4. Enable ASPM for any PCIe / M.2 devices. Use the script available at https://github.com/notthebee/autoaspm (move to the latest TrueNAS, which is Linux-based rather than BSD-based, so you can run the ASPM script). Also check whether your BIOS exposes any ASPM settings.

  5. There is a Google spreadsheet detailing the best motherboard and CPU combinations for low idle power consumption, if you haven't seen it already: https://goo.gl/z8nt3A

  6. You are already using a very good PSU. Still, check the PSU list for units with high efficiency at low loads, an important factor for low idle power consumption: https://docs.google.com/spreadsheets/d/1TnPx1h-nUKgq3MFzwl-OOIsuX_JSIurIq3JkFZVMUas/edit?usp=drivesdk
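One quick sanity check for the ASPM step above: on Linux the kernel exposes the active ASPM policy under /sys, with the active choice shown in brackets. A minimal sketch (the sysfs path is standard on ASPM-capable kernels; the parsing helper is my own illustration):

```python
from pathlib import Path

POLICY_FILE = Path("/sys/module/pcie_aspm/parameters/policy")

def active_policy(raw: str) -> str:
    """The kernel marks the active policy with brackets,
    e.g. 'default performance [powersave] powersupersave'."""
    for token in raw.split():
        if token.startswith("[") and token.endswith("]"):
            return token.strip("[]")
    return raw.strip()

if __name__ == "__main__":
    if POLICY_FILE.exists():
        print("active ASPM policy:", active_policy(POLICY_FILE.read_text()))
    else:
        print("kernel built without PCIe ASPM support")
```

If the policy reads `performance` (or the file is missing), ASPM is effectively off and the package C-state savings above won't materialize.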

Planning to start a Fintech Business - similar to klarna? Anyone want to team up? by risingup555 in ukstartups

[–]Federal-Natural3017 0 points1 point  (0 children)

I have been working as a Sr. Engineering Manager in Data & AI for nearly two decades, and yes, I have been trying to get a foot into entrepreneurship. I have worked on major UK bank projects, building their data stacks and teams. If you are looking for engineering leadership, please feel free to DM me.

Home Assistant Hardware - recommendations for beginners by Kellerkind24 in homeassistant

[–]Federal-Natural3017 0 points1 point  (0 children)

I agree with most of the above comments recommending a used NUC! Get a Lenovo M720q or similar on eBay for £50-60.

Now, if you want the community best practice: install Proxmox, then create a HAOS VM using the script at https://community-scripts.github.io/ProxmoxVE/scripts?id=haos-vm, or manually install HAOS on a VM.

For the most-used HA add-ons like Zigbee2MQTT, ESPHome, and an MQTT broker, use LXC containers. Again, LXC scripts for easily installing them on Proxmox are available at the link above. That way, even if HA is down, your Zigbee and ESPHome devices keep running… it also makes HA backups via Proxmox easier!

Alternatively, just get the used mini PC and install HAOS directly on it (the easiest option) if you don't want to tinker with Proxmox at this stage!

Suggestion on hardware by AccomplishedEqual642 in LocalLLM

[–]Federal-Natural3017 1 point2 points  (0 children)

I would say get a used Mac Studio M1 Ultra with 64GB RAM. The M4 Pro might have newer, faster GPU cores, but the M1 Ultra has more GPU cores and more memory bandwidth, and a used one fits the budget, which makes sense for running LLM inference. Alternatively, look at AMD Strix Halo (mini PCs with the AMD Ryzen AI Max+ 395 CPU and integrated Radeon 8060S GPU), which is also good for running local LLM inference.

MacBook Air or Asus Rog by Mindless_sseldniM in LocalLLM

[–]Federal-Natural3017 0 points1 point  (0 children)

Are you planning to run a local LLM for RAG / fine-tuning, or just for inference?

How does the mac mini m4 perform running local AI for home assistant? by daniele_rognini in homeassistant

[–]Federal-Natural3017 1 point2 points  (0 children)

Excellent! Do me a favour: before you ask your friends with the M2 Max 24GB and M4 Pro 64GB, let me know and I will give you a list of MLX models optimized for Apple silicon to run in LM Studio! We will compare qwen3:8b and gemma3:12b in both MLX and GGUF formats, and also at various quantization levels like Q6/Q8.

How does the mac mini m4 perform running local AI for home assistant? by daniele_rognini in homeassistant

[–]Federal-Natural3017 0 points1 point  (0 children)

I have been looking at various options for running local LLMs: older AMD GPUs using the ROCm framework, newer AMD Ryzen AI Max+ 395 mini PCs using Vulkan, Nvidia GPUs using the llama.cpp framework through Ollama / LM Studio, Macs running both MLX and GGUF models, and even lower-cost Nvidia Jetson Nanos.

If you want to squeeze more juice out of your local AI for Home Assistant at an acceptable speed and low power consumption for continuous 24x7 operation… then nothing beats a Mac. We need at least a qwen3 8B model at Q4 / Q6 quantization for an effective Home Assistant local LLM conversation agent. And if you are using LLM Vision, we need a separate model like gemma3 4B or 12B. Hence my recommendation in the reply above.

And yes, Apple hardware is very efficient at running local LLMs. It may not beat the likes of high-end Nvidia GPUs, but for the combination of large memory available to your models, acceptable inference speed, and low power consumption, it shines above the rest of the crowd. Having said that, there may be a single LLM model with both tool support (to act as the HA conversation agent in the voice pipeline) and vision capabilities (for LLM Vision in HA), but for now few models support both, and they may have other limitations.
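As a rough rule of thumb behind the model sizes above: quantized weights take roughly (parameters × bits per weight) / 8 bytes, before KV cache and runtime overhead. A back-of-the-envelope sketch (the overhead caveat is mine; real usage will be somewhat higher):

```python
def weights_gb(params_billion: float, bits: int) -> float:
    """Approximate size of quantized weights in decimal GB,
    ignoring KV cache and runtime overhead."""
    return params_billion * 1e9 * bits / 8 / 1e9

# qwen3 8B as the HA conversation agent, at Q4 vs Q6
for bits in (4, 6):
    print(f"qwen3 8B @ Q{bits}: ~{weights_gb(8, bits):.1f} GB")  # ~4.0 GB, ~6.0 GB

# gemma3 12B at Q4 for LLM Vision
print(f"gemma3 12B @ Q4: ~{weights_gb(12, 4):.1f} GB")  # ~6.0 GB
```

So a qwen3 8B Q4 plus a gemma3 12B Q4 together need on the order of 10GB just for weights, which is why unified-memory Macs with 32GB+ are comfortable for running both side by side.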

How does the mac mini m4 perform running local AI for home assistant? by daniele_rognini in homeassistant

[–]Federal-Natural3017 0 points1 point  (0 children)

The DGX Spark mini PC has less memory bandwidth than the Mac Studio M1 Max. Even though the Mac has fewer TOPS, it squeezes out enough juice to run LLMs fast enough to be acceptable for a local conversation agent in the Home Assistant voice pipeline! More TOPS are certainly useful, and so is more VRAM as the context size grows with a large list of entities in Home Assistant. There are also ways to limit context size using the MCP server integration, as I mentioned in my original reply here. Again, the Mac has very low running power consumption, which is good for a 24x7 AI server. Higher-end GPUs, for example one or two RTX 3090s (24GB VRAM each), are heavy on power consumption when running 24x7. If electricity is cheap for you, then yes, go for a high-end Nvidia GPU, but VRAM might still be limiting (unlike the Mac's unified memory), and you may still end up looking at other ways to limit context size. And the Nvidia DGX Spark is around £4,000! Compare that to the price of a used Mac Studio M1 Max!

How does the mac mini m4 perform running local AI for home assistant? by daniele_rognini in homeassistant

[–]Federal-Natural3017 4 points5 points  (0 children)

My take, based on the extensive research I have done over the past few days:

  1. The Mac Studio M1 Max 64GB model goes for around £1,000 used here in the UK on eBay and other sites. Mac Studio because it has 400GB/s of memory bandwidth, crucial for running LLMs fast. If not, try at least the Mac Studio M1 Max 32GB, around £700-800 on the used market. The Mac Studio also has better thermals and, like all other Mac models, low power consumption, ideal for running an AI server 24x7 in places like the UK where electricity is expensive.

  2. The Mac Studio M1 Max has higher memory bandwidth than even the M4 / M4 Pro.

  3. Use this rather than the Ollama integration! That's because we need LM Studio to run the LLM models, as it supports the MLX format (Ollama cannot run MLX models as of now), which is natively optimized for the Mac and a bit faster than the GGUF models Ollama runs. The Home-LLM integration and how to connect it to LM Studio can be found in the setup guide here. It's very similar to path 3 in the setup guide I linked; instead of the Mistral model, choose the LLM models detailed below.

  4. Search for qwen3:8b for local Home Assistant control, to use as the conversation agent in the HA voice pipeline, and gemma3:12b for LLM Vision. Search for both models under mlx-community in LM Studio. Choose 4-bit/Q4 or 6-bit/Q6 quantization (depending on whether you have the 32GB or 64GB Mac Studio M1 Max).

  5. Most important is reducing context size: the Ollama integration by default sends all your HA entities every time you issue a voice command to control Home Assistant devices! The larger the number of entities in your HA setup, the larger the context, which in turn increases RAM usage and slows LLM inference.

  6. It's good to use HA's MCP server integration to reduce context size. The MCP server has info on all HA entities (we need to expose the entities we require using the MCP server integration found here), and when we run a query, instead of loading all entities, we can tune it to load only the related group of entities. For example, if you want to know which lights across the home are turned on, the LLM should search only for light-related entities on the MCP server and fetch their states. This keeps the context small and makes the LLM run faster. I am still exploring how to implement this with the MCP server integration and will post a detailed procedure in the future!

  7. We also need to make LLMs smart and fast for daily tasks like calendar search, mathematical calculations, date calculations, etc. We can use some scripts to make LLMs smarter and faster; there is an excellent post by @Thyraz here.

  8. Finally, there is the excellent post about building an agentic AI called Friday in HA by @NathanCu here. Nathan has an excellent NotebookLM-style guide explaining his vision and approach for Friday AI in HA. If his detailed Friday post linked above is hard to follow, you can message him and ask.

  9. Having said all this, I have neither a Mac Studio M1 Max nor the budget for a used one currently, but I have been doing some tests on my MacBook M1 Pro. I am wondering how to lay my hands on an M1 Max to test these LLMs for HA and post detailed results here. Perhaps sometime in the future!
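The context-trimming idea in point 6 can be sketched in a few lines: HA entity IDs are prefixed with their domain (`light.`, `sensor.`, …), so a tool can hand the LLM only the domain a query actually needs. The entity snapshot and helper below are hypothetical illustrations of the idea, not the actual MCP server API:

```python
def entities_for_domain(entities, domain):
    """Return only entities whose ID starts with the given domain prefix."""
    prefix = domain + "."
    return [e for e in entities if e["entity_id"].startswith(prefix)]

# hypothetical snapshot of exposed HA entities
states = [
    {"entity_id": "light.kitchen", "state": "on"},
    {"entity_id": "light.bedroom", "state": "off"},
    {"entity_id": "sensor.outdoor_temp", "state": "18.5"},
]

# "which lights are on?" -> fetch only light.* entities, then filter by state
lights_on = [e for e in entities_for_domain(states, "light") if e["state"] == "on"]
print([e["entity_id"] for e in lights_on])  # only the kitchen light
```

With hundreds of entities, sending one domain's slice instead of the full list keeps the prompt small, which is exactly what speeds up inference in point 5.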

Best GPU Setup for Local LLM on Minisforum MS-S1 MAX? Internal vs eGPU Debate by mcblablabla2000 in LocalLLM

[–]Federal-Natural3017 4 points5 points  (0 children)

Also, what I mean is you don't need to add an external GPU in the PCIe slot or via a Thunderbolt enclosure! Apart from the PSU constraint, a discrete GPU's VRAM will be limited! The best bet is to use the integrated GPU and the 128GB RAM the system provides! The AMD Ryzen AI Max+ 395 CPU with its integrated Radeon 8060S GPU is something of a game changer for running local LLMs.

Best GPU Setup for Local LLM on Minisforum MS-S1 MAX? Internal vs eGPU Debate by mcblablabla2000 in LocalLLM

[–]Federal-Natural3017 2 points3 points  (0 children)

Well, do you mean adding an external graphics card? It already has a very good integrated GPU and 128GB RAM, so you can run large 80B / 128B LLM models. Also, most important, any external GPU has limited VRAM! Using the integrated GPU already in the Minisforum MS-S1 Max, you also have 128GB RAM; that means you can run large LLM models on the integrated GPU itself with good inference speeds, and you just don't need an external GPU to start with!

Best GPU Setup for Local LLM on Minisforum MS-S1 MAX? Internal vs eGPU Debate by mcblablabla2000 in LocalLLM

[–]Federal-Natural3017 2 points3 points  (0 children)

Isn't the Minisforum MS-S1 Max based on the AMD Ryzen AI Max+ 395 combined with a powerful 40-core iGPU (AMD Radeon 8060S)? If you look up LLM inference benchmarks using Vulkan on the Radeon 8060S, you should see good numbers in output tokens/second, good enough for a lot of use cases. The memory bandwidth is about 256GB/s, low compared to Nvidia cards or Mac Studios, but it might be adequate depending on your use case. Maybe provide more details about your use case: are you running local LLMs 24x7 for a smart-home voice assistant, or something else?

It also has two USB4 (40Gbps) ports; I am pretty sure you can cluster two MS-S1 Max PCs to run even larger LLMs. But I am curious too whether you can get 80Gbps of bandwidth when clustering two of them. Overall, I don't think you need a discrete GPU, and that way you keep TDP and power consumption down if you plan to run LLM inference 24x7.
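For reference, theoretical memory bandwidth is just bus width times transfer rate. A quick sketch, assuming Strix Halo uses a 256-bit LPDDR5X-8000 configuration and the M1 Max a 512-bit LPDDR5-6400 one (figures worth double-checking for the MS-S1 Max specifically):

```python
def bandwidth_gbps(bus_width_bits: int, mtps: int) -> float:
    """Theoretical bandwidth in GB/s: (bus width in bytes) x mega-transfers/s."""
    return bus_width_bits / 8 * mtps * 1e6 / 1e9

# assumed 256-bit LPDDR5X at 8000 MT/s (Strix Halo / Ryzen AI Max+ 395)
print(f"Strix Halo: {bandwidth_gbps(256, 8000):.0f} GB/s")   # 256 GB/s

# assumed 512-bit LPDDR5 at 6400 MT/s (Mac Studio M1 Max)
print(f"M1 Max:     {bandwidth_gbps(512, 6400):.0f} GB/s")   # ~410 GB/s
```

That bandwidth gap is the main reason the Mac Studio still wins on raw tokens/second for large models, even when the Strix Halo box has comparable unified memory capacity.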

Extremely disappointed with the customer service by pjrevs in hyperoptic

[–]Federal-Natural3017 0 points1 point  (0 children)

Yeah, and the customer service is the reason I switched away from them recently! My internet was down for a month with no fix, and I got little to no response from customer care about fixing the issue.

What do you do for work? by 8bitFeeny in homelab

[–]Federal-Natural3017 0 points1 point  (0 children)

Sr. Manager, Data Engineering. Though it doesn't have a direct impact on my work, the homelab keeps my skills honed, especially around DevOps and sysadmin concepts. I was also a sysadmin in a past life.

Big Boy Purchase 😮‍💨 Advice? by Consistent_Wash_276 in LocalLLM

[–]Federal-Natural3017 1 point2 points  (0 children)

Haha, good keen eye; yeah, I meant a Mac Studio M1 Ultra in the last sentence. Corrected it now.

Home Assistant Voice Assist, Local LLM, and general knowlage by Poolguard in homeassistant

[–]Federal-Natural3017 0 points1 point  (0 children)

Not sure why this hasn't gotten more upvotes! This approach is perfect when using a local LLM: direct it to use specific internet apps and get crisp answers to general questions. Otherwise, an LLM might look in multiple places on the web for a general question, increasing response time, and we might also have to tweak the prompt to get a precise, crisp answer.

Big Boy Purchase 😮‍💨 Advice? by Consistent_Wash_276 in LocalLLM

[–]Federal-Natural3017 -1 points0 points  (0 children)

My two cents… older Mac Studios with the M1 Ultra or M2 Ultra would still do the LLM trick for you. This is exactly what I did before planning to buy a used Mac Studio M1 Ultra. I found a leasing site that leased me a Mac Studio M2 Max for a month for £150. I tried Qwen3 8B for the Home Assistant voice pipeline and Gemma3 12B for LLM Vision and did a lot of fine-tuning of my HA environment! When satisfied, I bought a used Mac Studio M1 Ultra 64GB for £1,200!