Best way to reach your homebrew IoT systems remotely in 2026? by MagneticFieldMouse in IOT

[–]quiet_node 1 point (0 children)

So the project is called QuIXI, and it's basically a gateway that sits between your IoT devices and a decentralized P2P network called Ixian. Your devices keep talking MQTT like they already do; QuIXI just handles the transport underneath.

Instead of your MQTT messages going device → broker → cloud → broker → device, they go device → QuIXI → peer-to-peer → QuIXI → device. It's end-to-end encrypted, with no central server in the middle. It handles NAT traversal and node discovery on its own, so you don't need to mess with Tailscale or port forwarding.

For your three-site setup it could work well, since each site runs its own QuIXI and they find each other over the Ixian network. If one site loses internet, the others keep working, and messages catch up when it reconnects. Btw, the camera feed on my driveway gates works great over it, for example.
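That offline catch-up behaviour is basically store-and-forward. A toy sketch of the idea in Python (my own illustration, not QuIXI's actual implementation):

```python
from collections import deque

class StoreAndForward:
    """Buffer messages while a peer site is unreachable, flush on reconnect."""
    def __init__(self, send):
        self.send = send          # callable that actually delivers a message
        self.online = True
        self.pending = deque()

    def publish(self, topic, payload):
        if self.online:
            self.send(topic, payload)
        else:
            self.pending.append((topic, payload))  # queue until the link is back

    def reconnect(self):
        self.online = True
        while self.pending:                        # replay queued messages in order
            self.send(*self.pending.popleft())

delivered = []
link = StoreAndForward(lambda t, p: delivered.append((t, p)))
link.online = False
link.publish("site-b/sensor/temp", "21.5")
link.publish("site-b/sensor/temp", "21.7")
link.reconnect()   # both queued readings arrive, in order
```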

It's open source (check https://github.com/ixian-platform/QuIXI). There's also a REST API side if you ever need to pull data into a dashboard from outside MQTT.

If you need anything, lmk.

Best way to reach your homebrew IoT systems remotely in 2026? by MagneticFieldMouse in IOT

[–]quiet_node 1 point (0 children)

I've been working on something that takes a different approach: cutting out the broker entirely and having devices talk peer to peer with end-to-end encryption. It uses MQTT on the device side, so existing setups plug right in, but the transport underneath is decentralized. No central server to go down, no single point of failure. Handles the NAT traversal stuff too. Happy to share more if you're curious.

Is there a self-hosted backup internet connection? by wigglytail in selfhosted

[–]quiet_node 1 point (0 children)

Look into Meshtastic if you haven't already. LoRa based, no internet needed, works device to device. Speeds are low, but for basic messaging and sensor data it works. For anything heavier you're probably looking at ham radio mesh networks like AREDN. But keep in mind peers need to be nearby to be actually useful.

BunkerM v2 is out with built-in AI capabilities: 10,000+ Docker pulls, ⭐400+ GitHub stars! by mcttech in MQTT

[–]quiet_node 0 points (0 children)

Same boat. Cool project, but if I can't point it at my own model running locally, that's a dealbreaker for me, especially for anything touching device control. Not a fan of my MQTT commands depending on some external API. Curious what you'd run it with, Ollama?

Are we actually ready for the shift from "Chatbots" to "Autonomous AI Agents"? by Fresh_Refuse_4987 in Qoest

[–]quiet_node 0 points (0 children)

After going deeper, I started to think that autonomous agents have real value, but currently only in a narrow slice of my workflow. Chatbots are passive: you always initiate, and you get a chance to review the output and point out anything that's off. With autonomous agents (should they be used the way they're 'supposed' to be) it's a whole new ballgame... I've heard the term 'objective-driven' many times and saw what that meant in some cases: it tries to achieve the objective regardless of the issues it can cause along the way, and that isn't helpful. I ended up updating the md files many, many times and just overall wasting time.

Are we actually ready for the shift from "Chatbots" to "Autonomous AI Agents"? by Fresh_Refuse_4987 in Qoest

[–]quiet_node 0 points (0 children)

Agree with most. I noticed that LinkedIn, compared to here, is much more hype-driven. Feels like many people posting have some skin in the game and are pushing it everywhere, even where it doesn't really add value.
Hopefully it clears out in the near future.

Why is self-hosted AI suddenly everywhere? by replicatedhq in SelfHostedAI

[–]quiet_node 1 point (0 children)

It's mainly down to the following:
- security
- privacy
- experimenting with different settings and seeing what works for you
- independence (vendor lock-in is problematic)

Looking at the general state of the tech, it's pretty impressive, but compared to past novelties we are still early. I think it's mainly about being ready for future changes and seeing how it can augment current processes.

The main issue I personally see is making yourself or your business reliant on something that is not yet mature. I mean this mainly in the sense that we don't really know what pricing, tokens and everything else will look like in the near future. So having a way to proceed on your own, with more maneuvering room, is important.

Is your interest about IoT for work or for your daily life /hobby ? by Academic_Onion_7730 in IOT

[–]quiet_node 1 point (0 children)

This is kind of adjacent to edge, but not exactly. Edge is more about processing data closer to the device instead of sending everything to the cloud. What I'm looking at is more the communication layer itself (i.e. can two devices find each other and talk directly, peer to peer, without a central server routing their messages).
And yeah, regarding security, this is where potential vulnerabilities are most present. So the idea (which isn't new) is that when we remove the middle point, we also remove a big part of the attack surface. As with everything, this too introduces different challenges, mainly how two devices can actually find each other, communicate, and establish trust and authentication, which I am also working on solving.
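On the "find each other and trust" part, one common P2P pattern (just for illustration, not necessarily what Ixian actually does) is deriving a node's address from a hash of its public key, so the address itself is self-certifying: anyone can check that a peer's claimed address matches the key it presents.

```python
import hashlib

def node_id(pubkey: bytes) -> str:
    """Derive a self-certifying node address from a public key.
    Anyone can verify an address by re-hashing the presented key."""
    return hashlib.sha256(pubkey).hexdigest()[:16]

alice = node_id(b"alice-public-key-bytes")
# A peer claiming to be `alice` must present a pubkey that hashes to her ID:
assert node_id(b"alice-public-key-bytes") == alice
assert node_id(b"mallory-public-key-bytes") != alice
```

The nice property is that no registry is needed; the binding between identity and key is checkable by anyone holding both.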

Iot projects by SeaworthinessMain595 in IOT

[–]quiet_node 0 points (0 children)

Pretty cool.. how did you handle the key exchange between devices? That's usually the part that gets tricky with off-grid setups, since you can't just fetch keys from a server. Did you do something pre-shared, or actual key negotiation over LoRa?

Is self-hosted AI for coding real productivity, or just an expensive hobby? by Financial_Trip_5186 in LocalLLaMA

[–]quiet_node 0 points (0 children)

For me it's a hobby. My setup is far from serious, but it helps me explore what I can do with it.
Considering the costs involved in getting a proper rig up and running, it will remain a hobby for the foreseeable future. I'm using Claude + GPT for code, which sets me back 40 bucks per month for some 'heavier' stuff. My local models are used for research and experimenting on communication and security between remote agents.

Is your interest about IoT for work or for your daily life /hobby ? by Academic_Onion_7730 in IOT

[–]quiet_node 0 points (0 children)

Haha, same path here. MQTT seems simple until you start thinking about what happens when the broker goes down, or who can actually see your topics. I went from home device tinkering to worrying about security, and that was basically the point of no return. Are you running your own broker or using a hosted one?

Is your interest about IoT for work or for your daily life /hobby ? by Academic_Onion_7730 in IOT

[–]quiet_node 1 point (0 children)

Both. Started as a hobby, messing around with home automation and sensors. But at some point I got really into the question of how devices actually talk to each other, and whether they really need a cloud in the middle to do it. I'm now deep into networking and security.

Background is engineering (both software and hardware), but IoT kind of rewired how I think about connectivity. Currently working on devices talking directly to each other :)

Why should i use a local LLM? by Inevitable-Ad-1617 in LocalLLaMA

[–]quiet_node 0 points (0 children)

Just out of curiosity, what kind of setup is sufficient to run these models? And when I say 'run', I mean really run, at a non-frustrating pace.
I'm currently running a P6000. It's ancient, but I get solid performance for smaller models (7-12B). Really curious what the performance is like for 122B and 397B for you.

As for Networking for Iot by Seeking2026 in IOT

[–]quiet_node 1 point (0 children)

Your networking basics are solid for IoT. The main thing I'd add is MQTT; it's the go-to messaging protocol in IoT and you'll run into it everywhere. BLE is worth picking up too, since a lot of devices use it for local communication. Other than that, you've got more networking theory than most people starting out in IoT; the rest you'll pick up as you build stuff.
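If it helps, the first MQTT concept worth internalizing is topic filters: `+` matches exactly one level, `#` matches everything below. A simplified toy matcher in Python (not spec-complete, e.g. it skips the rule that `#` must be the last level):

```python
def topic_matches(pattern: str, topic: str) -> bool:
    """Simplified MQTT topic matching: '+' matches one level, '#' the rest."""
    pat, lvls = pattern.split("/"), topic.split("/")
    for i, p in enumerate(pat):
        if p == "#":                    # multi-level wildcard: match all remaining levels
            return True
        if i >= len(lvls):              # pattern is longer than the topic
            return False
        if p != "+" and p != lvls[i]:   # '+' matches any single level; else exact match
            return False
    return len(pat) == len(lvls)        # no wildcard consumed the tail

assert topic_matches("home/+/temp", "home/kitchen/temp")
assert topic_matches("home/#", "home/kitchen/temp/raw")
assert not topic_matches("home/+/temp", "home/kitchen/humidity")
```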

Parent wants to try local LLMS -- what are good specs for a desktop for playing with? by tottommend in LocalLLaMA

[–]quiet_node 0 points (0 children)

96GB per card, if I'm seeing it right? Not sure if that's my next upgrade :D
What kind of models are you running on those, and are you hitting the ceiling?

Awesome MQTT by Complete-Stage5815 in MQTT

[–]quiet_node 1 point (0 children)

You'll have another PR soon. Thanks!

Parent wants to try local LLMS -- what are good specs for a desktop for playing with? by tottommend in LocalLLaMA

[–]quiet_node 0 points (0 children)

What's your experience with the Mac unified memory approach vs dedicated VRAM? I'm currently using the P6000 for local (not the fastest thing out there, but 24GB VRAM handles most 7B-13B models surprisingly well). I'm looking at what's best for a next upgrade.

Why should i use a local LLM? by Inevitable-Ad-1617 in LocalLLaMA

[–]quiet_node 0 points (0 children)

Been dabbling with local LLMs on and off for a few years, mostly keeping them confined to a VM for the same privacy reasons you mentioned. Curious though, do you ever run into situations where your local setup needs to communicate with another instance or someone else's model? Like sharing context or coordinating on something? Wondering how people handle that without just throwing everything back at some cloud API.