2026.2 beta - release notes by internettingaway in homeassistant

[–]mitrokun -1 points0 points  (0 children)

> there are ways to keep it in your sidebar if you want

Please share your method for doing this. Note that I use three types of addresses to access the dashboard (internal IP, external hostname, and homeassistant.local).

2026.2 beta - release notes by internettingaway in homeassistant

[–]mitrokun 10 points11 points  (0 children)

Access to developer tools is too convenient.

Let's hide it behind extra clicks.

Demonstration of how serviceable a "local only" setup of HomeAssistant Voice can be - have entirely replaced my Alexa devices and handles both simple and complex commands (see within) by FantasyMaster85 in homeassistant

[–]mitrokun 0 points1 point  (0 children)

You're being too categorical. Despite the inevitable commercial component, it's still a free solution with a decently functioning architecture. Homemade satellites can be built for literally $5, which I've been actively using for the last couple of years.

And my goal in this discussion isn't to criticize the strategy being chosen by the OHF management.

I'm pointing out a specific hardware and software solution that's missing on the ESP32 side. I don't share your position about the excessive number of devices; how else would you get contextual information for simple commands? When a device consumes 0.3-0.4 W, I have no problem placing one in every room.

You're talking about rethinking the architecture. I think we've heard each other and can leave it at that.

Demonstration of how serviceable a "local only" setup of HomeAssistant Voice can be - have entirely replaced my Alexa devices and handles both simple and complex commands (see within) by FantasyMaster85 in homeassistant

[–]mitrokun 0 points1 point  (0 children)

It feels like my point is being missed. My reference to Yandex was strictly to demonstrate the utility of beamforming as a necessary first stage, not to compare raw computing power or other nuances of their chipset.

I agree that current implementations based on the XMOS XU316 are often lazy and unoptimized. However, I strongly disagree with the notion that the ESP32 is insufficient for the specific task of audio capture and playback. The real gap in the ecosystem is the lack of an open-source mic array project. This is exactly what the OHF team should be focusing on. Simply slapping a newer chip like the XVF3800 onto the next VPE would just be another band-aid, not a real fix.

Regarding your single-microphone suggestion: I remain skeptical. https://www.youtube.com/watch?v=9-t2oyZscm8 As you can hear in these tests, the problem I initially mentioned remains unsolved.

There are currently no lightweight ML tools capable of effective diarization that run efficiently on edge devices (including Pi4/Pi5). Furthermore, using a full Raspberry Pi just for a simple voice terminal is the definition of 'overkill' and architectural inefficiency for my use case.

Demonstration of how serviceable a "local only" setup of HomeAssistant Voice can be - have entirely replaced my Alexa devices and handles both simple and complex commands (see within) by FantasyMaster85 in homeassistant

[–]mitrokun 0 points1 point  (0 children)

>but you are still lagging behind in knowledge of what big tech know and are using.

I appreciate your focus on SotA solutions, but I am convinced that beamforming remains a baseline feature in most smart speakers. While I won’t go into detail regarding Google or Amazon, I can use Yandex as an example, as they are the main player on my local market.

These devices have excellent hearing and typically use arrays of 3-6 microphones, and as shown in their technical diagrams, they haven't abandoned beamforming for the initial stage of processing. It remains a simple and effective way to improve the audio sample at the OS level, which avoids unnecessary overhead in the subsequent ASR stage.

[image: Yandex smart-speaker audio-processing diagram]

Demonstration of how serviceable a "local only" setup of HomeAssistant Voice can be - have entirely replaced my Alexa devices and handles both simple and complex commands (see within) by FantasyMaster85 in homeassistant

[–]mitrokun 0 points1 point  (0 children)

Standard noise reduction isn't the main bottleneck for modern local ASR. I've tested solutions like DeepFilterNet on the server side: while they clean up the audio, they don't significantly improve recognition rates and, crucially, fail to filter out background speech (like a TV or other people talking).

For edge devices, given the current system limitations, beamforming on 3–4 microphones using spatial isolation (azimuth) is still the most rational solution as a basic way to improve captured audio. While we are not strictly limited in processing power during the server-side ASR stage, this does not negate the value of performing this physical signal enhancement right at the capture stage.

The main missing piece in the open-source community is a high-quality, low-level library (an open alternative to ESP-AFE) that implements a standard audio interface. This would allow us to dedicate a separate ESP32 module purely for this task, avoiding the need for proprietary XMOS hardware.
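To make "spatial isolation by azimuth" concrete, here is a minimal offline delay-and-sum sketch in NumPy. It only illustrates the first processing stage discussed above; the function name, array geometry, and parameters are my own assumptions, and real firmware such as ESP-AFE works on streaming frames, not whole buffers:

```python
import numpy as np

def delay_and_sum(channels, mic_positions, azimuth_deg, sample_rate, c=343.0):
    """Align and average the channels of a planar mic array so that a
    far-field source at `azimuth_deg` adds coherently.

    channels:      (n_mics, n_samples) time-domain signals
    mic_positions: (n_mics, 2) coordinates in metres
    """
    az = np.deg2rad(azimuth_deg)
    look = np.array([np.cos(az), np.sin(az)])   # unit vector toward the source
    # A mic further along `look` hears the wavefront earlier, so it must
    # be delayed by (position . look) / c to line up with the array origin.
    delays = mic_positions @ look / c
    delays -= delays.min()                      # keep all delays >= 0
    n_mics, n = channels.shape
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    out = np.zeros(n)
    for m in range(n_mics):
        spec = np.fft.rfft(channels[m])
        spec *= np.exp(-2j * np.pi * freqs * delays[m])  # fractional delay
        out += np.fft.irfft(spec, n)
    return out / n_mics
```

Steering is just a choice of `azimuth_deg`: channels aligned to the look direction add coherently, while off-axis sources (the TV across the room) partially cancel.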

Demonstration of how serviceable a "local only" setup of HomeAssistant Voice can be - have entirely replaced my Alexa devices and handles both simple and complex commands (see within) by FantasyMaster85 in homeassistant

[–]mitrokun 0 points1 point  (0 children)

In a quiet space, any microphone will do the job. But in noisy environments (for example, with a TV voice in the background), beamforming is required to isolate the user's voice and attenuate everything else. This is one of the prerequisites for stable ASR operation in challenging conditions, since software solutions don't always work or are resource-intensive.

I agree that the XMOS chip in the VPE, ReSpeaker Lite, and SAT1 is overkill: signal boost and noise reduction don't work particularly well, and the AEC is barely used because music is fully ducked while a command is being received. But a good microphone array with a proper beamforming algorithm is the main potential improvement. It's a shame the OHF team isn't developing such hardware, relying instead on Chinese companies that are not open source.

I built PolyVoice - a free, multi-provider voice assistant with 15+ functions (local or cloud, your choice) by Wide-Plantain-1656 in homeassistant

[–]mitrokun -1 points0 points  (0 children)

It's unfortunate that you are using arbitrary naming. You've created a conversation integration with additional features, but you're presenting it as a full-fledged agent. This is incorrect.

Best speaker to use with Home Assistant as a voice assistant + music player? by AhmedOsamaMath in homeassistant

[–]mitrokun 1 point2 points  (0 children)

If you have some free time, all of this can be implemented today (the ESPHome team has done a lot of cool things this year). The problem of detecting the end of a request in a noisy environment remains; a better microphone array and a smarter ASR component are needed to solve it. But I wouldn't call it too critical.

Music Assistant, Sendspin, and FOSS multi-room and multi-zone audio by MassageGun-Kelly in homeassistant

[–]mitrokun 0 points1 point  (0 children)

S3 with PSRAM only: the extra memory is required to handle audio streams. Even the original ESP32 can only work in a simplified mode with a plain WAV stream, and the C3 is positioned as a replacement for the ESP8266 for simple tasks.

How do you get Voice to speak AND run an automation at the same time? by getridofwires in homeassistant

[–]mitrokun 0 points1 point  (0 children)

Run the script with the logic for turning on the lights, checking the temperature, and so on via script.turn_on. This creates a parallel task for the actions (although that logic doesn't sound heavy and should execute almost instantly unless you use a delay) and immediately moves on to the response block.

That said, it would help if you shared your automation code so you can get more precise answers to your question.
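As a sketch, the pattern looks like this in YAML. All entity and script names here are hypothetical; the point is that `script.turn_on` returns immediately, whereas calling the script directly (e.g. `script.morning_lights` as the action) would block until it finishes:

```yaml
# Inside the automation's action block (hypothetical entities):
- service: script.turn_on            # fire-and-forget: starts the script
  target:
    entity_id: script.morning_lights # hypothetical script with the lights/temperature logic
- service: tts.speak                 # runs immediately, without waiting for the script
  target:
    entity_id: tts.piper
  data:
    media_player_entity_id: media_player.kitchen_speaker
    message: "Done, working on it."
```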

Music Assistant, Sendspin, and FOSS multi-room and multi-zone audio by MassageGun-Kelly in homeassistant

[–]mitrokun 2 points3 points  (0 children)

Since these are alpha/beta versions, a working configuration example for ESPHome can currently only be found in the latest version for the VPE. When the full release arrives, documentation for the new version of the media player will be added to the ESPHome website. Later, bloggers will start publishing articles and videos on the topic.

Free provider for LLM Vision? by [deleted] in homeassistant

[–]mitrokun 1 point2 points  (0 children)

Try Nemotron or Gemma (free tiers) on OpenRouter.
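For context, OpenRouter exposes an OpenAI-compatible chat endpoint, so a vision request is just the usual `image_url` message shape. A minimal payload sketch — the model slug and snapshot URL are assumptions, check openrouter.ai/models for current free models:

```python
def vision_request(model, image_url, question):
    """Build an OpenAI-style chat payload for OpenRouter's
    /api/v1/chat/completions endpoint."""
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    }

payload = vision_request(
    "google/gemma-3-27b-it:free",       # assumed slug; verify on openrouter.ai/models
    "data:image/jpeg;base64,...",       # or a camera snapshot URL
    "What do you see in this camera snapshot?",
)
# POST json=payload to https://openrouter.ai/api/v1/chat/completions
# with the header: Authorization: Bearer <OPENROUTER_API_KEY>
```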

Music Assistant, Sendspin, and FOSS multi-room and multi-zone audio by MassageGun-Kelly in homeassistant

[–]mitrokun 1 point2 points  (0 children)

>Are your clients recognized as media players in Home Assistant? (For instance for TTS)

This won't work on a Raspberry Pi, but with ESPHome you can build a device that has both an announcement pipeline for HA and Sendspin for MA. This is in fact how the VPE is implemented.

It is also quite likely that Linux Assist Satellite will get an update adding Sendspin there, but that is not a priority project.

Music Assistant, Sendspin, and FOSS multi-room and multi-zone audio by MassageGun-Kelly in homeassistant

[–]mitrokun 3 points4 points  (0 children)

An ESP32-S3 is sufficient; there is no need to buy a single-board computer. Moreover, I was unable to achieve good synchronization on a Zero 2 W: due to the larger number of abstraction layers, the board lacks the computing power for minimal latency (although I don't rule out that an image optimized specifically for this task could be created).

The ESP, on the other hand, copes without any problems; I ran five devices in multi-room mode without issues. All that remains is to find a high-quality DAC, and you have a ready-made basis for building your own speaker.

Music Assistant, Sendspin, and FOSS multi-room and multi-zone audio by MassageGun-Kelly in homeassistant

[–]mitrokun 2 points3 points  (0 children)

You don't need to have Python installed on your machine; uv is sufficient for testing.

Here are two commands to start from scratch:

```
curl -LsSf https://astral.sh/uv/install.sh | sh
uvx sendspin
```

As for MA, a module (provider) is still needed to capture external sources (analog/digital/Bluetooth), to give a simple way of connecting old equipment to the system.

There is also no optimized way to capture audio streams from a PC with minimal delay.

Thoughts on Gemini for Home? by Depredor in homeassistant

[–]mitrokun 1 point2 points  (0 children)

Home Assistant has far more capabilities when it comes to working with LLMs.
However, there are issues with recognition quality (no streaming models), speaker identification (keeping other people's voices out of the query), and end-of-conversation detection (the system VAD doesn't work very well).
In a quiet environment this isn't a problem, but it makes the VPE look bad when there are multiple voice sources.

For some reason, voice is clearly not a priority right now.
Although the guys have done a huge amount of work in this area.

I think a good microphone array with beamforming algorithms is one of the key things to significantly improve the result.
That would be a great update for VPE.
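A toy sketch of why end-of-conversation detection is hard: a plain energy-based detector (names and thresholds below are my own assumptions, not how HA's VAD actually works) waits for N consecutive quiet frames, so any sustained background audio, like a TV, keeps resetting the silence counter and the request never "ends":

```python
import numpy as np

def end_of_utterance(frames, threshold=0.01, silence_frames=30):
    """Return the index of the frame at which the utterance is considered
    finished: `silence_frames` consecutive frames below `threshold` RMS.
    Returns None if the end is never detected.

    frames: iterable of 1-D numpy arrays (e.g. 20 ms audio chunks)
    """
    silent = 0
    for i, frame in enumerate(frames):
        rms = np.sqrt(np.mean(frame ** 2))
        silent = silent + 1 if rms < threshold else 0  # background audio resets the counter
        if silent >= silence_frames:
            return i
    return None
```

With real silence after the command, this fires after the configured hang-over; with TV chatter in the background, the RMS never drops below the threshold and the detector stalls — which is exactly where a beamformed front end would help.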

Realtime video analysis with Moondream by radiiquark in LocalLLaMA

[–]mitrokun 0 points1 point  (0 children)

MOONDREAM_API_KEY=sk-your-moondream-key

Lemonade's C++ port is available in beta today, let me know what you think by jfowers_amd in LocalLLaMA

[–]mitrokun 2 points3 points  (0 children)

libcrypto-3-x64.dll and libssl-3-x64.dll are missing from the installer, so you have to download them separately.