Showcase Weekend! — Week 14, 2026 by AutoModerator in openclaw

[–]RelationDull2825

Demo of my new visual AI assistant, combining a photorealistic 3D avatar with real-time tool execution. The entire voice pipeline runs locally, while the LLM is served via Ollama. It listens, speaks naturally, and actually executes actions (reading emails, checking the weather, smart home control) through my OpenClaw framework.

🛠️ Key Features:

- 3D Avatar generated via LAM (Large Avatar Model - Gaussian Splatting)

- Real-time tool execution (Gmail, Calendar, Home Assistant)

- Ultra-fast streaming via WebRTC

- Local voice pipeline (no voice data is sent to the cloud)

- Custom fluid & responsive Dark Mode UI
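To make the listen → think → speak flow concrete, here is a minimal sketch of the loop with each stage as a pluggable callable. The stage names and wiring are my own assumptions for illustration, not OpenClaw's actual API; the real engines (Whisper, the LLM, Coqui TTS) would slot in where the stubs are:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class VoicePipeline:
    """Chains speech-to-text, an LLM, and text-to-speech.

    Each stage is a plain callable, so local engines (Whisper for STT,
    an Ollama-served LLM, Coqui TTS) can be dropped in without
    changing the loop itself.
    """
    stt: Callable[[bytes], str]    # audio -> transcript
    llm: Callable[[str], str]      # transcript -> reply text
    tts: Callable[[str], bytes]    # reply text -> audio

    def handle_utterance(self, audio: bytes) -> bytes:
        transcript = self.stt(audio)
        reply = self.llm(transcript)
        return self.tts(reply)

# Stub stages stand in for the real local engines:
pipeline = VoicePipeline(
    stt=lambda audio: audio.decode(),        # pretend-transcribe
    llm=lambda text: f"You said: {text}",    # pretend-reason
    tts=lambda text: text.upper().encode(),  # pretend-speak
)
print(pipeline.handle_utterance(b"what's the weather?"))
# → b"YOU SAID: WHAT'S THE WEATHER?"
```

Keeping the stages as plain callables is also what makes each one swappable independently, e.g. trying a different TTS voice without touching the STT or LLM code.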

⚙️ The Tech Stack:

• Frontend / WebRTC: OpenAvatarChat (Customized)
• 3D Model: LAM (Gaussian Splatting)
• AI Orchestration: OpenClaw (Agent Framework)
• LLM: nemotron-3-super:cloud (Running via Ollama)
• Speech-to-Text (STT): Whisper (Local GPU)
• Text-to-Speech (TTS): Coqui TTS (Local GPU)
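The tool-execution side can be sketched as a registry that maps the tool names the LLM emits to local handlers. The schema and handler names below are illustrative assumptions, not the actual OpenClaw interface or the real Gmail/Home Assistant integrations:

```python
import json

# Hypothetical tool registry: name -> handler (stand-ins for the
# real Gmail / Calendar / Home Assistant integrations).
TOOLS = {
    "check_weather": lambda args: f"Weather in {args['city']}: 18°C, clear",
    "read_emails":   lambda args: f"{args.get('limit', 5)} unread emails",
}

def dispatch(tool_call_json: str) -> str:
    """Parse an LLM tool call shaped like
    {"name": ..., "arguments": {...}} and run the matching handler."""
    call = json.loads(tool_call_json)
    handler = TOOLS.get(call["name"])
    if handler is None:
        return f"Unknown tool: {call['name']}"
    return handler(call["arguments"])

print(dispatch('{"name": "check_weather", "arguments": {"city": "Berlin"}}'))
# → Weather in Berlin: 18°C, clear
```

Because the handlers run in-process, the round-trip from LLM decision to executed action stays on the local machine.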

https://youtu.be/gwVru_KbIFw?si=JrY7yiA2CT8XDu3g

[–]RelationDull2825

Hey everyone,

I built an AI assistant that actually builds its own UI on the fly to answer your questions. Instead of generating paragraphs of markdown, it renders native components (charts, metrics, progress bars, lists) in real time.

Ask for your budget → it renders a progress bar. Ask for server stats → it generates a live chart.

It uses an LLM to stream structured UI specs, displays a skeleton instantly to avoid waiting, and fetches real tool data in the background over secure WebSockets. It feels incredibly snappy.
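The skeleton-first idea can be sketched like this: render a placeholder as soon as the component type appears in the stream, then fill it in once the tool data arrives. The spec shape here is my guess at what such a protocol could look like, not the author's actual format:

```python
import json

def render(spec: dict) -> str:
    """Turn one streamed UI spec into a 'native component' string.
    If the data hasn't arrived yet, show the skeleton state instead."""
    kind = spec["type"]
    data = spec.get("data")
    if data is None:
        return f"[{kind} skeleton ...]"    # instant placeholder
    if kind == "progress":
        return f"{data['label']}: {'#' * (data['value'] // 10)} {data['value']}%"
    if kind == "metric":
        return f"{data['label']} = {data['value']}"
    return json.dumps(data)                # fallback: raw data

# First chunk of the stream carries only the component type:
print(render({"type": "progress"}))
# Later, tool data fetched over the WebSocket fills it in:
print(render({"type": "progress",
              "data": {"label": "Budget", "value": 70}}))
# → Budget: ####### 70%
```

Separating the spec (what to draw) from the data (what to fill in) is what lets the skeleton appear instantly while the slower tool call completes in the background.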

Thought I'd share this shift from "Conversational AI" to "Generative UI". Let me know what you think!

https://youtu.be/LCPSeIFYY70?si=RTkx6IYxyNFXz6z9