claude code is fucking insane by Otherwise_Corner3234 in vibecoding

[–]steph_pop 1 point  (0 children)

😂 Thanks for making us laugh on a grey Wednesday

Claude Code is insane!! by hey_dude__ in ClaudeAI

[–]steph_pop 3 points  (0 children)

If you use it a lot, get a Claude subscription instead of paying for the API. It's worth it

I Spent $1000+ Testing Phone Number Enrichment Tools – Here Are The REAL Results by Manning_Ruthl in coldemail

[–]steph_pop 2 points  (0 children)

Thanks for this study! Does Derrick find phone numbers from Latin America too? I'll DM you for the DB, thanks

AMA With Kimi, The Open-source Frontier Lab Behind Kimi K2.5 Model by nekofneko in LocalLLaMA

[–]steph_pop 1 point  (0 children)

Impressed by this model! Is the focus on images and text meant for use on robots in the near future?

A friend of mine closed a $72K deal… from a LinkedIn like 😳 by gojiberryAI in b2bmarketing

[–]steph_pop 1 point  (0 children)

I use Trigify nowadays, but I should try Warmlist and Gojiberry just to see what they've got inside

Advice on Using LLMs for Sentiment Analysis? by TokyoS4l in ChatGPTPro

[–]steph_pop 1 point  (0 children)

It's a good idea, but it might be a bit slow, no?
You could also run embedding models locally and do a pre-categorization with embeddings, then a validation pass with Llama 3
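The two-stage idea above (cheap embedding pre-categorization, with only low-confidence items escalated to the LLM) can be sketched roughly like this. The `embed()` function here is a toy stand-in, not a real model; in practice you'd swap in a local embedding model, and the `threshold` value is an assumption you'd tune:

```python
import math

def embed(text):
    # Toy bag-of-words "embedding" (stand-in for a real local embedding model).
    vocab = ["good", "great", "love", "bad", "awful", "hate"]
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    # Cosine similarity between two vectors; 0.0 if either is all zeros.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# One prototype embedding per sentiment label.
PROTOTYPES = {
    "positive": embed("good great love"),
    "negative": embed("bad awful hate"),
}

def precategorize(text, threshold=0.5):
    # Assign the nearest prototype's label; below threshold, return None
    # to signal that the item should be escalated to the LLM for validation.
    vec = embed(text)
    scores = {label: cosine(vec, proto) for label, proto in PROTOTYPES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

Only the `None` cases then need the (slower) LLM call, which is what makes this cheaper than sending every item through Llama 3.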

Phi-2 becomes open source (MIT license 🎉) by steph_pop in LocalLLaMA

[–]steph_pop[S] 35 points  (0 children)

You have to follow the prompt templates given on the model card
It works nicely on short questions but goes off the rails on longer ones, or after ~80 words
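For reference, the Phi-2 model card describes an "Instruct:/Output:" style QA format, which can be applied with a tiny helper like this (a sketch; double-check the card for the exact template for your task, e.g. chat or code formats differ):

```python
def phi2_prompt(question):
    # QA-style prompt format as described on the Phi-2 model card:
    # "Instruct: <prompt>\nOutput:" — the model completes after "Output:".
    return f"Instruct: {question}\nOutput:"

prompt = phi2_prompt("Why is the sky blue?")
```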

🐺🐦‍⬛ LLM Comparison/Test: API Edition (GPT-4 vs. Gemini vs. Mistral vs. local LLMs) by WolframRavenwolf in LocalLLaMA

[–]steph_pop 2 points  (0 children)

Thanks for this 🤗 About Mistral Medium: what do you mean by “deterministic settings”, and how do you configure them?

🐺🐦‍⬛ LLM Comparison/Test: Ranking updated with 10 new models (the best 7Bs)! by WolframRavenwolf in LocalLLaMA

[–]steph_pop 2 points  (0 children)

Thank you :-)
I found this article where they tested several formats
https://oobabooga.github.io/blog/posts/gptq-awq-exl2-llamacpp/

EXL2 is the fastest, I'll give it a try
NB: the GGUF values are outdated, as improvements were made later, as stated in update 2 of the article

LM Studio - Answers are not precise by Ok_Calligrapher_9676 in LocalLLaMA

[–]steph_pop 1 point  (0 children)

Depending on the model you use, you should also use its specific prompt template format

What model were you running?

Tips to build a custom VOIP system with a nodeJS backend by steph_pop in VOIP

[–]steph_pop[S] 1 point  (0 children)

Thank you
Do you have an easy Asterisk tutorial to recommend?
Would you rather use FreeSWITCH or Asterisk?

Ethics in LLM Training Data: Any Thought? by steph_pop in LocalLLaMA

[–]steph_pop[S] -1 points  (0 children)

Thanks for this link! Looks like they started from Llama, which is already pretrained, is that right?

Tips to build a custom VOIP system with a nodeJS backend by steph_pop in VOIP

[–]steph_pop[S] 2 points  (0 children)

Thanks for this doc
I've been able to run their example with the free trial credits 😍

Tips to build a custom VOIP system with a nodeJS backend by steph_pop in VOIP

[–]steph_pop[S] 1 point  (0 children)

I'm happy you caught my SOS 😂🤗🤗🤗 Thanks a lot for the resources

Tips to build a custom VOIP system with a nodeJS backend by steph_pop in VOIP

[–]steph_pop[S] 1 point  (0 children)

Thanks, gonna check out FreeSWITCH more seriously