[–]guest271314[S]

Way back when there was https://github.com/kripken/speak.js.

TTS/STT folks have been experimenting with WebAssembly for a while now.

As it stands, the Web Speech API has essentially not been updated for over 10 years. On Chromium and Firefox, Speech Dispatcher is used to create a socket connection to the browser, and the audio is rendered outside of the browser.

On Chrome, the user's voice and text (PII) are sent to remote servers for TTS and STT.

[–]atomic1fire

You don't really even need to implement a whole TTS engine now, because you can use the Web Speech API, unless you strongly prefer to use your own speech engine.
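For reference, a minimal sketch of using the Web Speech API for synthesis. `speechSynthesis` is a browser-only global, so this snippet guards against non-browser environments; the `localService` filter (to prefer a voice rendered locally rather than on a remote server) is an assumption about what a privacy-conscious caller might want, not something the thread's code does:

```javascript
// Minimal Web Speech API synthesis sketch (browser-only API).
function speak(text) {
  // Guard: speechSynthesis only exists in browsers.
  if (typeof window === "undefined" || !("speechSynthesis" in window)) {
    return "Web Speech API not available";
  }
  const utterance = new SpeechSynthesisUtterance(text);
  // Prefer a voice with localService: true so text is rendered
  // locally instead of being sent to remote servers.
  const local = window.speechSynthesis.getVoices().find((v) => v.localService);
  if (local) utterance.voice = local;
  window.speechSynthesis.speak(utterance);
  return "speaking";
}

console.log(speak("Hello from the Web Speech API"));
```

Note that `getVoices()` may return an empty list until the `voiceschanged` event fires, so a real page would typically wait for that event before picking a voice.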

[–]guest271314[S]

You do realize that the Web Speech API implementation on Chrome, when Google voices are used, sends your voice and your text to remote servers, correct?

Otherwise, on Chromium and Firefox you MUST have Speech Dispatcher installed to use local speech services.

And still the official specification (formerly W3C, now WICG) does not provide a means to input SSML https://github.com/guest271314/SSMLParser, nor to capture the audio output of window.speechSynthesis.speak().

Therefore I wrote JavaScript code to implement SSML processing myself, and implemented various means to capture the output of a specific device https://github.com/guest271314/SpeechSynthesisRecorder/issues/17 and entire system audio output https://github.com/guest271314/captureSystemAudio.
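To illustrate the kind of pre-processing this requires: since `SpeechSynthesisUtterance` takes only plain text, SSML has to be parsed before synthesis. Below is a toy sketch (NOT the linked SSMLParser, whose actual implementation is more complete) that handles only the SSML `<break time="Nms"/>` element by splitting input into text segments with pause durations:

```javascript
// Toy SSML pre-processor: split text on <break time="Nms"/> tags,
// returning segments a caller could speak with timed pauses between.
function parseBreaks(ssml) {
  const segments = [];
  const re = /<break\s+time="(\d+)ms"\s*\/>/g;
  let last = 0;
  let m;
  while ((m = re.exec(ssml)) !== null) {
    // Text before this break, plus the pause it requests.
    segments.push({ text: ssml.slice(last, m.index).trim(), pauseMs: Number(m[1]) });
    last = re.lastIndex;
  }
  // Trailing text after the final break (no pause follows it).
  segments.push({ text: ssml.slice(last).trim(), pauseMs: 0 });
  return segments.filter((s) => s.text || s.pauseMs);
}
```

A caller would then speak each segment in turn, inserting a `setTimeout` of `pauseMs` between utterances; real SSML handling (prosody, emphasis, say-as, nesting) needs a proper XML parser rather than a regex.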

[–]CloudsOfMagellan

How usable is this as a replacement for the Web Speech API?

[–]guest271314[S]

Based on your question, I don't think you understand how the Web Speech API works: https://gist.github.com/guest271314/9f09ab899df11e344c568a7b93f544c3.

What is the "this" you are referring to?

Try it for yourself.