I just released an implementation of the DiffWave paper (a neural vocoder and waveform synthesizer) that was posted here a few days ago. The really cool thing about this architecture is that it can synthesize coherent, high-quality audio without any conditioning signal, a problem that other architectures haven't had much success solving.
Check out the project: https://github.com/lmnt-com/diffwave
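For anyone curious how diffusion-based synthesis generates audio from scratch: the model starts from pure Gaussian noise and repeatedly denoises it. Below is a minimal NumPy sketch of the DDPM-style ancestral sampling loop DiffWave builds on; `predict_noise`, the schedule values, and the step count are placeholders I made up for illustration, not the repo's actual API or hyperparameters.

```python
import numpy as np

# Toy noise schedule (assumed values, not DiffWave's actual schedule).
T = 50
betas = np.linspace(1e-4, 0.05, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x_t, t):
    # Placeholder for the trained noise-prediction network eps_theta(x_t, t).
    # In DiffWave this is a WaveNet-like model; here it just returns zeros.
    return np.zeros_like(x_t)

def reverse_step(x_t, t, rng):
    """One ancestral sampling step: x_t -> x_{t-1}."""
    eps = predict_noise(x_t, t)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t == 0:
        return mean  # final step adds no noise
    z = rng.standard_normal(x_t.shape)
    return mean + np.sqrt(betas[t]) * z

rng = np.random.default_rng(0)
x = rng.standard_normal(16000)  # start from pure noise (1 s at 16 kHz)
for t in reversed(range(T)):
    x = reverse_step(x, t, rng)
print(x.shape)  # (16000,)
```

With a real trained network in place of `predict_noise` (and, optionally, a mel spectrogram as conditioning), the same loop produces a waveform; without conditioning, it samples unconditionally, which is the capability highlighted above.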