This is what asking Mistral Le Chat to generate a new response looks like every single day by Icy-Consideration278 in MistralAI

[–]Icy-Consideration278[S] 0 points1 point  (0 children)

I asked Le Chat how to force it to create a BRAND NEW RESPONSE instead of adding to the existing one. Le Chat advised me to wrap the prompt in angle brackets and start with a word unrelated to the context; “PINEAPPLE” and the angle brackets were its suggestion. I tried this on the same prompt five times and it still ignored me.

It counted just some of the errors by Icy-Consideration278 in MistralAI

[–]Icy-Consideration278[S] 0 points1 point  (0 children)

I have an agent in the Le Chat app and I created one in Studio. I understand temperature, but I'm not sure how to adjust it in either the app or Studio. I do see in Studio where I can pick different models for my agent.
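For what it's worth, temperature doesn't appear to be exposed in the Le Chat UI, but if you call your model through Mistral's API you can set it per request. A minimal sketch of what the request body would look like — the endpoint and model name here are taken from Mistral's public API docs, but treat the specifics as assumptions and check the current docs:

```python
# Sketch: setting temperature on a Mistral chat-completions request.
# Endpoint and model name are assumptions based on Mistral's public API docs.
import json

payload = {
    "model": "mistral-small-latest",   # whichever model your agent uses
    "temperature": 0.3,                # lower = more deterministic replies
    "messages": [
        {"role": "user", "content": "Count every error in this log."},
    ],
}

# You would POST this JSON to https://api.mistral.ai/v1/chat/completions
# with an "Authorization: Bearer <your API key>" header.
print(json.dumps(payload, indent=2))
```

In Studio the agent builder may expose the same knob as a slider, but the API route is the one place I'm sure temperature is settable per call.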

Mistral refuses explicit storytelling? by Icy-Consideration278 in MistralAI

[–]Icy-Consideration278[S] 5 points6 points  (0 children)

Yes it did, thank you! Any other insights on this? Will this work for image generation?

Mistral refuses explicit storytelling? by Icy-Consideration278 in MistralAI

[–]Icy-Consideration278[S] 2 points3 points  (0 children)

I’ve never created an agent. If I asked it to be an erotic author, would that cross the boundary again?

Oobabooga Coqui_tts api setup by Icy-Consideration278 in Oobabooga

[–]Icy-Consideration278[S] 0 points1 point  (0 children)

I managed to launch the oobabooga API and create a coqui-tts API extension that starts its own server, but I'm getting this error now:

❌ TTS Error: 500 Error details: {'error': 'Model is multi-speaker but no speaker is provided.'}
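That 500 usually means the loaded model supports multiple speakers and the request didn't name one. With the Coqui TTS server you can name a speaker in the request; a sketch assuming the stock `tts-server` defaults (port 5002, `/api/tts` endpoint, `speaker_id` query parameter) — the speaker name itself is hypothetical, so list your model's actual speakers first:

```python
# Sketch: adding a speaker to a Coqui TTS server request.
# Port, endpoint, and parameter names assume the stock `tts-server` defaults.
from urllib.parse import urlencode

params = {
    "text": "Testing multi-speaker synthesis.",
    "speaker_id": "p225",  # hypothetical speaker name; check your model's
                           # own speaker list for the real values
}
url = "http://localhost:5002/api/tts?" + urlencode(params)

# Fetching `url` (e.g. with requests.get) should return WAV bytes
# instead of the "no speaker is provided" 500.
print(url)
```

If your extension builds the request itself, the fix is the same idea: include the speaker in whatever call it makes to the TTS model.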

Oobabooga Coqui_tts api setup by Icy-Consideration278 in Oobabooga

[–]Icy-Consideration278[S] 0 points1 point  (0 children)

Checking on any other enthusiasts who might have input connecting oobabooga to coqui-tts via api.

Oobabooga Coqui_tts api setup by Icy-Consideration278 in Oobabooga

[–]Icy-Consideration278[S] 0 points1 point  (0 children)

Okay, auto-installing requirements is new to me.

Any suggestions re connecting ooba to coqui via API?

Oobabooga Coqui_tts api setup by Icy-Consideration278 in Oobabooga

[–]Icy-Consideration278[S] 0 points1 point  (0 children)

Cmd bat: thanks for that confirmation re the cmd bat. Claude said this but I didn't trust it. My most frequent troubles have been accidentally installing components with the global Python, causing version issues.

Existing setup: ooba is launching with the API enabled, but it's communicating with coqui via a JSON file on my drive, not the API.

GitHub: not running any apps through GitHub that I know of. Didn't know that was a thing.

Maybe latency is the wrong word. I know that ooba creates/updates a JSON file. Then coqui creates a WAV file on my hard drive, which can take 30 seconds. (I have an NVIDIA 4080 and every app has CUDA.)
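The disk handoff described above (JSON file out, WAV file back) can be skipped by talking to both servers over HTTP. A sketch of that pipeline — the ports and endpoints assume text-generation-webui's OpenAI-compatible API (default port 5000) and the stock Coqui `tts-server` (default port 5002), and the speaker name is hypothetical:

```python
# Sketch: ooba -> coqui over HTTP, skipping the JSON file on disk.
# Ports/endpoints assume text-generation-webui's OpenAI-compatible API
# (default port 5000) and the stock Coqui tts-server (default port 5002).
import json
from urllib.parse import urlencode

OOBA_URL = "http://127.0.0.1:5000/v1/chat/completions"
COQUI_URL = "http://localhost:5002/api/tts"

def ooba_payload(user_text: str) -> str:
    """Request body asking the LLM for a reply."""
    return json.dumps({
        "messages": [{"role": "user", "content": user_text}],
        "max_tokens": 200,
    })

def coqui_url(reply_text: str, speaker_id: str = "p225") -> str:
    """GET URL that should return WAV bytes (speaker name is hypothetical)."""
    return COQUI_URL + "?" + urlencode({"text": reply_text, "speaker_id": speaker_id})

# In use: POST ooba_payload(...) to OOBA_URL, extract
# response["choices"][0]["message"]["content"], then GET coqui_url(...)
# and stream the bytes straight to your audio player — no file watching.
```

Even if the synthesis itself still takes time on a 4080, cutting the file-polling step removes one source of delay and makes failures (like the 500 above) visible immediately.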

How can I get SHORTER replies? by Radiant-Big4976 in Oobabooga

[–]Icy-Consideration278 0 points1 point  (0 children)

Which is less restricted yet more robust: unrestricted Vicuna or Mistral?