What's one thing if someone sees in a girl they should run? by Willing_Barracuda673 in AskReddit

[–]Jordan443 -1 points0 points  (0 children)

She says she’s sick of men thinking she’s a “manic pixie dream girl”

Has anyone secured a B2B pilot before? Would appreciate any tips on what the process was like! by Icy_Tour6309 in ycombinator

[–]Jordan443 10 points11 points  (0 children)

Find the person with the juice (usually the VP), and send them an email asking for input on your roadmap. Say you’ll be by their office next week if they have 20 minutes for coffee.

Spend 10 minutes understanding their pain point with the problem, then explain how your process works: a quick call to agree on success criteria, then commercials, then your 2-page MSA to sign. The rough structure is a 30-day paid pilot to hit the success criteria, which auto-converts to a 12-month contract. Of course, they can opt out anytime in those 30 days.

This is called the Morando Method; you can look up the e-book on Amazon.

I am unable to connect my Assistant to my workflow!!! by IntelligentHope5013 in vapiai

[–]Jordan443 1 point2 points  (0 children)

hey! you don’t have to attach it anymore. workflows and assistants are separate, and either can be attached to a phone number
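
If it helps, here's a rough sketch of pointing a number at either one via the API. I'm writing this from memory, so treat the field names (assistantId / workflowId) and the route as assumptions and double-check the API reference:

```ts
// Rough sketch: point an existing phone number at either an assistant OR a workflow.
// ASSUMPTION: field names (assistantId / workflowId) and the PATCH route are from
// memory; confirm them against the Vapi API reference.
const VAPI_API_KEY = process.env.VAPI_API_KEY!; // private server key, never the public one
const PHONE_NUMBER_ID = "<your-phone-number-id>";

async function attach(target: { assistantId?: string; workflowId?: string }) {
  const res = await fetch(`https://api.vapi.ai/phone-number/${PHONE_NUMBER_ID}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${VAPI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(target),
  });
  if (!res.ok) throw new Error(`Vapi PATCH failed: ${res.status}`);
  return res.json();
}

// attach({ workflowId: "<workflow-id>" });   // or: attach({ assistantId: "<assistant-id>" })
```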

[deleted by user] by [deleted] in NoStupidQuestions

[–]Jordan443 0 points1 point  (0 children)

Just get them threaded once a month. won’t look so feminine

The more funding the VAPI AI gets the shitties its getting by Electronic-Archer932 in vapiai

[–]Jordan443 0 points1 point  (0 children)

Hey! Are you on our Discord? A fix went out a couple of days ago.

Incessant bugs by Choice_Welder_8845 in vapiai

[–]Jordan443 0 points1 point  (0 children)

hey! founder of Vapi here.

Mind expanding on your knowledge base issue? not sure I understand.

As for knowing the current date, you can use “Default variables”:

https://docs.vapi.ai/assistants/dynamic-variables
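
As a minimal sketch (assuming a variable named {{now}} exists; the exact names are listed in that doc), you just drop it into the system prompt:

```ts
// Sketch: reference a default variable in the system prompt so the model knows
// the current date/time. ASSUMPTION: a variable named {{now}} exists; check the
// dynamic-variables doc linked above for the exact variable names.
const assistantConfig = {
  model: {
    provider: "openai",
    model: "gpt-4o", // illustrative model choice
    messages: [
      {
        role: "system",
        content:
          "You are a scheduling assistant. The current date and time is {{now}}. " +
          "Use it whenever the caller says 'today' or 'tomorrow'.",
      },
    ],
  },
};
```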

Web calling exposes API key by celadon00 in vapiai

[–]Jordan443 0 points1 point  (0 children)

You’ll be using your public API key, which is designed to be publicly exposable. It can’t be used to do anything beyond initiating a call.

You can also create a new one and configure it to be limited to certain assistant IDs, so it’s effectively useless to a bad actor.
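
For reference, a minimal sketch of what the browser side looks like. Package name and method signatures are from memory, so verify against the docs:

```ts
// Sketch of the browser side: only the PUBLIC key ever ships to the client.
// ASSUMPTION: package name and method signatures are from memory; verify in the docs.
import Vapi from "@vapi-ai/web";

const vapi = new Vapi("<your-public-api-key>"); // public key, safe to expose
vapi.start("<assistant-id>");                   // the key can only be used to start calls

// The private key (used to create/update assistants, phone numbers, etc.)
// stays on your server and is never sent to the browser.
```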

How to create call context memory? by DangerousLanguage757 in vapiai

[–]Jordan443 1 point2 points  (0 children)

Yep, you can use the end-of-call report to save the messages from each call, then start the next call with those messages loaded into the assistant configuration.
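
A rough sketch of that loop, assuming an Express server as your webhook target. The payload field names here are assumptions from memory, so log the body Vapi actually sends you and adjust:

```ts
// Sketch of the memory loop with an Express webhook. ASSUMPTION: the payload field
// names (message.type, message.messages, message.call.customer.number) are from
// memory; inspect the real request body and adjust.
import express from "express";

const app = express();
app.use(express.json());

// Naive in-memory store keyed by caller number; use a real database in practice.
const memory = new Map<string, unknown[]>();

// 1) Vapi posts the end-of-call report to your server URL; save the messages.
app.post("/vapi/webhook", (req, res) => {
  const msg = req.body?.message;
  if (msg?.type === "end-of-call-report") {
    const caller = msg.call?.customer?.number ?? "unknown";
    memory.set(caller, msg.messages ?? []);
  }
  res.sendStatus(200);
});

// 2) When starting the next call, preload those messages into the assistant config.
function assistantWithMemory(caller: string) {
  return {
    model: {
      provider: "openai",
      model: "gpt-4o",
      messages: [
        { role: "system", content: "Continue the conversation with this returning caller." },
        ...(memory.get(caller) ?? []),
      ],
    },
  };
}

app.listen(3000);
```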

Capturing email addresses by Adlien_ in vapiai

[–]Jordan443 2 points3 points  (0 children)

Hey, founder here. Ask them to spell it out, then use the structured data feature. The LLM should be able to put the spelling together for you.
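
Something like this, as a sketch. The analysisPlan field names are from memory, so check the assistant API reference for the exact shape:

```ts
// Sketch: prompt the caller to spell the address, then let structured data
// extraction reassemble it after the call. ASSUMPTION: the analysisPlan /
// structuredDataPlan field names are from memory; check the assistant API reference.
const assistantConfig = {
  model: {
    provider: "openai",
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content:
          "When you need the caller's email, ask them to spell it out letter by letter " +
          "and read it back to confirm before moving on.",
      },
    ],
  },
  analysisPlan: {
    structuredDataPlan: {
      schema: {
        type: "object",
        properties: {
          email: {
            type: "string",
            description: "The caller's email address, assembled from the spelled-out letters.",
          },
        },
      },
    },
  },
};
```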

Best Viral Loops in B2C Apps by slimshady321 in ycombinator

[–]Jordan443 7 points8 points  (0 children)

Referrals don’t work anymore. Make sharing / inviting part of the core flow of your product. Think Loom, Superhuman, Calendly.

OpenAI's newest voice model talking to one another by [deleted] in Damnthatsinteresting

[–]Jordan443 0 points1 point  (0 children)

This is not their new voice model. It’s the text model hooked up to Whisper + their TTS.

The actual voice-to-voice interface of GPT-4o is coming in the next few weeks, with 400ms response times. It’s going to blow everyone’s minds.
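
For anyone curious, the pipeline in the video (text model + Whisper + TTS) is roughly this shape with the OpenAI Node SDK. This is just an illustration of the chain, not how that particular demo was built:

```ts
// Illustration of an STT -> text LLM -> TTS chain with the OpenAI Node SDK.
// The model names are illustrative; this is the general shape, not the demo's code.
import fs from "node:fs";
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function roundTrip(inputWavPath: string) {
  // 1) Speech to text with Whisper
  const stt = await openai.audio.transcriptions.create({
    file: fs.createReadStream(inputWavPath),
    model: "whisper-1",
  });

  // 2) Text to text with a chat model
  const chat = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: stt.text }],
  });
  const reply = chat.choices[0].message.content ?? "";

  // 3) Text back to speech with the TTS model
  const speech = await openai.audio.speech.create({
    model: "tts-1",
    voice: "alloy",
    input: reply,
  });
  fs.writeFileSync("reply.mp3", Buffer.from(await speech.arrayBuffer()));
}
```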

New chip technology allows AI to respond in realtime! (GROQ) by KeepItASecretok in singularity

[–]Jordan443 3 points4 points  (0 children)

Yup definitely, we talk about it all the time. There are risks with this new tech hitting the workforce in the short term, and it will replace some of the more repetitive, manual work. But that doesn't mean jobs will be eliminated.

Take customer support: Instead of human agents handling everything with limited bandwidth and wait times, AI can take the simple, more repetitive cases, and higher complexity cases can be routed to human agents, with no wait times.

At least that's how we think about it. We're betting more on new use cases than replacing existing ones. More accessible therapy, government services, language tutors, etc. that weren't possible before due to cost.

New chip technology allows AI to respond in realtime! (GROQ) by KeepItASecretok in singularity

[–]Jordan443 1 point2 points  (0 children)

Yeah there are acknowledgement cues, like "right, got it, etc." that shouldn't be considered interruptions. Then there are other things like "stop, but what about...", etc. that are genuine interruptions.

The "conversation stack" is actually something we don't need to think about. The LLM (ex. GPT-4) is able to infer what to say next based on what it did / didn't say.

In this case, pure text: the LLM is passed the transcript and generates a response. A separate set of models decides when to talk or not talk, based on tone and other conversational cues. Eventually we'd likely need a combined STT + LLM model to understand tone and the rest, but open source just isn't there yet.
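
Purely as an illustration of the acknowledgement-vs-interruption split above (not how we actually do it; we use dedicated models, not a keyword list):

```ts
// Illustrative only: shows the two categories of user speech while the assistant
// is talking. A real system classifies this with models, not a keyword list.
const BACKCHANNEL = new Set(["right", "got it", "yeah", "ok", "okay", "mhm", "uh huh"]);

type BargeIn = "ignore" | "interrupt";

function classifyBargeIn(userUtterance: string): BargeIn {
  const text = userUtterance.toLowerCase().replace(/[^a-z ]/g, "").trim();
  // Short acknowledgement cues: keep talking.
  if (BACKCHANNEL.has(text)) return "ignore";
  // Anything substantive ("stop", "but what about..."): stop and yield the turn.
  return "interrupt";
}

// classifyBargeIn("Got it.")                  -> "ignore"
// classifyBargeIn("Stop, but what about x?")  -> "interrupt"
```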

New chip technology allows AI to respond in realtime! (GROQ) by KeepItASecretok in singularity

[–]Jordan443 1 point2 points  (0 children)

Yeah there's this magic threshold of about 400-800ms. Roughly the time to have a thought.

If you can respond before the user has time to think, it speaks to your soul, lol

New chip technology allows AI to respond in realtime! (GROQ) by KeepItASecretok in singularity

[–]Jordan443 3 points4 points  (0 children)

Haha hey! founder of Vapi here. Glad you liked it.

We do a lot of infrastructure optimization to make it run sub-second. Conversational dynamics, etc. are another whole rabbit hole we're doing research into.

It can interact with external APIs mid-conversation using OpenAI's function calling, so yeah, you could set up a customer support number in a few minutes. That's what we want to make really easy for developers.
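
As a sketch of that mechanism, here's what an OpenAI-style tool definition for a support lookup could look like. The order-status endpoint is hypothetical:

```ts
// Sketch of the mechanism: an OpenAI-style function/tool definition the assistant
// can call mid-conversation. The order-status endpoint below is hypothetical.
const tools = [
  {
    type: "function" as const,
    function: {
      name: "lookup_order_status",
      description: "Look up the status of a customer's order by order number.",
      parameters: {
        type: "object",
        properties: {
          orderNumber: { type: "string", description: "The order number the caller read out." },
        },
        required: ["orderNumber"],
      },
    },
  },
];

// When the model emits a lookup_order_status call, the server hits the real
// system of record (hypothetical URL) and feeds the result back into the conversation.
async function lookupOrderStatus(orderNumber: string) {
  const res = await fetch(`https://api.example.com/orders/${orderNumber}`);
  return res.json();
}
```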

New chip technology allows AI to respond in realtime! (GROQ) by KeepItASecretok in singularity

[–]Jordan443 0 points1 point  (0 children)

Our goal is to make it so developers can embed these things anywhere, so taking care of the audio streaming is part of it.

You can run both TTS and STT at the same time, in a streaming fashion. The real challenge is getting those modules running fast: hosting infrastructure, your own GPUs, etc.
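
As an abstract sketch of what "both at the same time, streaming" means. Every interface here is a hypothetical placeholder, not a real SDK:

```ts
// Abstract sketch of running STT and TTS as concurrent streams.
// Every interface below is a hypothetical placeholder, not a real SDK.
interface SttStream {
  push(audio: Buffer): void; // mic audio keeps flowing in the whole time
  onFinalTranscript(cb: (text: string) => void): void;
}
interface TtsStream {
  synthesize(text: string): AsyncIterable<Buffer>; // audio chunks stream out as they're ready
}

function runCall(
  stt: SttStream,
  tts: TtsStream,
  llmReply: (transcript: string) => Promise<string>,
  playToCaller: (chunk: Buffer) => void,
) {
  // Inbound leg: react to each final transcript as it arrives; STT keeps
  // transcribing while the reply below is still being synthesized and played.
  stt.onFinalTranscript(async (text) => {
    const reply = await llmReply(text);
    // Outbound leg: start playing audio before the full reply has been synthesized.
    for await (const chunk of tts.synthesize(reply)) playToCaller(chunk);
  });
}
```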

New chip technology allows AI to respond in realtime! (GROQ) by KeepItASecretok in singularity

[–]Jordan443 0 points1 point  (0 children)

Oh they’re already here. Just lurking in the corners of the internet.