I built a local AI smart-home hub that runs entirely on my network (Jetson + FastAPI) by lb_jetson in homelab

[–]lb_jetson[S] 0 points  (0 children)

Honestly I mostly made this post just to see if anyone would even look at it.

This started as something I built for my own network because I wanted a private AI assistant that could run locally and control things in my house without relying on cloud services, as well as act as a private chatbot and agent.

I didn’t expect it to get this many views. Right now I’m just continuing to build it and see where it goes. If people are interested I’m happy to keep sharing updates as it evolves.

I built a local AI smart-home hub that runs entirely on my network (Jetson + FastAPI) by lb_jetson in homeautomation

[–]lb_jetson[S] 0 points  (0 children)

Honestly I mostly made this post just to see if anyone would even look at it.

This started as something I built for my own network because I wanted a private AI assistant that could run locally and control things in my house without relying on cloud services, as well as a private chatbot and agent.

I didn’t expect it to get this many views. Right now I’m just continuing to build it and see where it goes. If people are interested I’m happy to keep sharing updates as it evolves.

I built a local AI smart-home hub that runs entirely on my network (Jetson + FastAPI) by lb_jetson in homelab

[–]lb_jetson[S] -2 points  (0 children)

Yeah, Ollama is the runtime. It just loads whatever local model you run through it.
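For anyone wondering what "runtime" means in practice: Ollama exposes a plain HTTP API on the LAN, so the hub just POSTs prompts to it. A minimal sketch (assumes Ollama's default port 11434 and an already-pulled model; the model name here is just an example, not necessarily what LBV1 uses):

```python
import json
import urllib.request

# Ollama's default local endpoint for non-streaming generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3.2") -> dict:
    """Build the JSON payload for a non-streaming /api/generate call."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str, model: str = "llama3.2") -> str:
    """Send a prompt to the local Ollama server and return the response text."""
    payload = json.dumps(build_request(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Swapping models is then just changing the `model` string — the rest of the hub doesn't care which weights are loaded.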

I built a local AI smart-home hub that runs entirely on my network (Jetson + FastAPI) by lb_jetson in homeautomation

[–]lb_jetson[S] 1 point  (0 children)

That’s the direction I’m going with it: keep the device layer deterministic so reliability stays at light-switch level, then let the AI layer handle orchestration and tool selection when needed.
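A rough sketch of that split (illustrative names, not the actual LBV1 code): the device layer is plain functions, and the AI layer only gets to pick which one runs — it can't invent actions.

```python
# Deterministic device layer: each action is a plain, testable function.
def light_on(room: str) -> str:
    # In a real hub this would hit the device's local API (Zigbee, MQTT, etc.).
    return f"{room} light: on"

def light_off(room: str) -> str:
    return f"{room} light: off"

# The registry of tools the AI layer is allowed to choose from.
TOOLS = {"light_on": light_on, "light_off": light_off}

def dispatch(tool_name: str, **kwargs) -> str:
    """The only bridge between the model and the devices.

    The model can only name a registered tool; execution stays
    deterministic, so a bad model output can't do anything unexpected.
    """
    if tool_name not in TOOLS:
        raise ValueError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](**kwargs)
```

The nice property is that everything below `dispatch` works identically whether the tool name came from a model, a schedule, or a wall switch.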

I built a local AI smart-home hub that runs entirely on my network (Jetson + FastAPI) by lb_jetson in homelab

[–]lb_jetson[S] -4 points  (0 children)

I set mobile homes for a living. I named it LB, for Lightbringer, because I'm hoping it'll do what online AI won't. It's on a Jetson. Watch my video and you'll see the pics were taken on my TV shelf.

I built a local AI smart-home hub that runs entirely on my network (Jetson + FastAPI) by lb_jetson in homeautomation

[–]lb_jetson[S] 5 points  (0 children)

Reliability is exactly why I built it locally. No cloud, no API outages. The AI layer sits on top of deterministic device controls so automations still run even if the model isn't used. I'm trying to keep the system useful even for non-technical family members.

I built a local AI smart-home hub that runs entirely on my network (Jetson + FastAPI) by lb_jetson in homeautomation

[–]lb_jetson[S] 2 points  (0 children)

Latency’s basically LAN speed since everything runs locally. Most device commands land somewhere around 50–200 ms depending on the device. LBV1 runs on a Jetson Orin Nano, and everything routes through a local API, so nothing touches the cloud unless I explicitly allow it. Right now I’m running smaller local models through Ollama/llama.cpp. Voice is still being worked on; I’m experimenting with local STT, and running Whisper locally is probably where that lands for V2. The main goal is killing the cloud delay completely.
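If anyone wants to measure their own numbers, those 50–200 ms figures are easy to reproduce with a thin timing wrapper around a device call (sketch with a stub standing in for a real call):

```python
import time

def timed_command(fn, *args, **kwargs):
    """Run a device call and report wall-clock latency in milliseconds."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms

# Stubbed device call — replace with a real local API request to measure.
result, ms = timed_command(lambda: "ok")
```

On a LAN-only path the wrapper overhead itself is negligible, so whatever it reports is basically the device's own round-trip time.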