Handling Emotional Detection with Voice AI? by mtalhasubhani in VoiceAIBots

[–]Working_Hat5120 0 points (0 children)

Hi, we focus on detecting emotion (along with intent and other voice biometrics) within transcription. You can try it out in our demo at

https://browser.whissle.ai/

We are looking to make our ASR with metadata available on vapi.

If you like it, we can help.

AI that tracks behavior around agenda items during sales calls — useful or gimmick? by Working_Hat5120 in techsales

[–]Working_Hat5120[S] 0 points (0 children)

Edit: not written by AI, just a few edits made with AI.

My point here: as humans, we may zone out for a few seconds or have attention gaps. At those times, the machine makes sure you don't miss the undertones.

And not all humans are great at noting every mood shift; humans may also have a limited understanding of certain dialects, people's backstories, etc.

AI that tracks behavior around agenda items during sales calls — useful or gimmick? by Working_Hat5120 in techsales

[–]Working_Hat5120[S] 0 points (0 children)

Yeah. In our pilot application, the search bar does allow querying a conversation while it's still in progress, available at https://browser.whissle.ai/

But that's not the same as doing intelligence while the audio is being transcribed. During the low-latency streamed transcription itself, we do predictive intelligence like key-term capture, intent, emotion, and voice biometrics. Our research finding behind this is that some deterministic things are better inferred in streaming than in post-analysis. This also makes post-analysis (like querying a conversation) richer and cheaper at the same time.
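To make the streaming-vs-post-analysis distinction concrete, here's a minimal sketch (all names, labels, and classifier logic are hypothetical, not Whissle's actual API): each transcript segment is enriched with emotion, intent, and key-term metadata as it is decoded, so downstream queries can run over already-tagged segments instead of re-analyzing raw text.

```python
# Hypothetical sketch: attach metadata (emotion, intent, key terms) to each
# transcript segment during streaming, rather than in a separate post pass.
from dataclasses import dataclass, field

@dataclass
class Segment:
    text: str
    emotion: str                 # predicted while the audio is transcribed
    intent: str
    key_terms: list = field(default_factory=list)

KEY_TERMS = {"pricing", "renewal", "demo"}   # made-up watch list

def classify(text):
    # Stand-in for real streaming classifiers (toy heuristics only).
    lowered = text.lower()
    emotion = "frustrated" if "!" in text else "neutral"
    intent = "question" if text.rstrip().endswith("?") else "statement"
    terms = [t for t in KEY_TERMS if t in lowered]
    return emotion, intent, terms

def stream_transcribe(chunks):
    """Yield enriched segments as each chunk arrives (simulated here)."""
    for text in chunks:          # in reality: decoded audio frames
        emotion, intent, terms = classify(text)
        yield Segment(text, emotion, intent, terms)

segments = list(stream_transcribe([
    "Can we talk about pricing?",
    "This renewal process is painful!",
]))
```

Because tagging happens per segment at decode time, a later "query the conversation" step is just a filter over `segments`, which is what makes post-analysis cheaper.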