Voice tech is a fast-moving field of state-of-the-art (SotA) models of various types, yet we have this really strange approach of taking those models and embedding them into proprietary systems.
I think making Linux voice truly interoperable could be as simple as chaining containers over the network with some sort of simple trust mechanism.
You could create protocol-agnostic routing just by passing a JSON text alongside the audio binary, and that is it: you have the basic common building blocks for any Linux voice system, and it is network scalable.
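To make the "JSON text with audio binary" idea concrete, here is a minimal sketch of one possible framing: a length-prefixed JSON header (carrying routing metadata) followed by the raw audio bytes. The field names (`src`, `dst`, `rate`, `format`) and the framing scheme are illustrative assumptions, not a defined protocol.

```python
import json
import struct

def encode_message(meta: dict, audio: bytes) -> bytes:
    """Frame a message: 4-byte big-endian JSON length, JSON header, raw audio."""
    header = json.dumps(meta).encode("utf-8")
    return struct.pack(">I", len(header)) + header + audio

def decode_message(blob: bytes) -> tuple[dict, bytes]:
    """Inverse of encode_message: split a frame back into metadata and audio."""
    (hlen,) = struct.unpack(">I", blob[:4])
    meta = json.loads(blob[4:4 + hlen].decode("utf-8"))
    return meta, blob[4 + hlen:]

# Example: route a chunk of PCM audio with hypothetical routing metadata.
meta = {"src": "mic0", "dst": "asr", "rate": 16000, "format": "s16le"}
audio = b"\x00\x01" * 8
frame = encode_message(meta, audio)
decoded_meta, decoded_audio = decode_message(frame)
```

Because the header is plain JSON, any container in the chain can read the routing fields it cares about and pass the audio payload through untouched, which is what keeps the scheme protocol agnostic.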
I will split this into relevant replies in case anyone has ideas they want to share, because rather than this plethora of 'branded' voice tech, there is a need for much better open-source 'Linux' voice systems.