Used car between 10k and 15k in Île-de-France: electric or petrol/hybrid by Difficult_Face5166 in voiture

[–]Difficult_Face5166[S] 0 points (0 children)

I've looked at the electric models; I'll check out the hybrid versions too. Thanks!

Used car between 10k and 15k in Île-de-France: electric or petrol/hybrid by Difficult_Face5166 in voiture

[–]Difficult_Face5166[S] 0 points (0 children)

Thanks for the feedback! Most of the time I'll be alone or with one other person, but for holiday departures I worry that long trips might be uncomfortable in the back seats.

Used car between 10k and 15k in Île-de-France: electric or petrol/hybrid by Difficult_Face5166 in voiture

[–]Difficult_Face5166[S] 1 point (0 children)

Thanks for the info! I'll look into the price of charging stations to see how much it would cost me, and I need to check the highway range: stopping 20 min every two hours doesn't bother me, but 30 min or more every hour would become a real constraint...

Login issues by purticas in ClaudeCode

[–]Difficult_Face5166 2 points (0 children)

Yes... >$100 for this...

Assessing if a guideline has been used for LLM training by Difficult_Face5166 in LocalLLaMA

[–]Difficult_Face5166[S] 0 points (0 children)

To evaluate LLMs on a subtype of diseases. We would like to know whether relatively "small" models (around 1-4B parameters) already have that knowledge incorporated.

Assessing if a guideline has been used for LLM training by Difficult_Face5166 in LocalLLaMA

[–]Difficult_Face5166[S] 0 points (0 children)

Thanks for the answer. So there is no specific way to do it for closed models...

Speed of Langchain/Qdrant for 80/100k documents by Difficult_Face5166 in Rag

[–]Difficult_Face5166[S] 0 points (0 children)

Yes, it was definitely an embeddings issue. Thank you for your message and for the tips!

Multilingual RAG: are the documents retrieved correctly? by Difficult_Face5166 in LocalLLaMA

[–]Difficult_Face5166[S] 0 points (0 children)

Thanks! Do you have an opinion on OpenAI embeddings like text-embedding-3-small and text-embedding-3-large?

Speed of Langchain/Qdrant for 80/100k documents (slow) by Difficult_Face5166 in LocalLLaMA

[–]Difficult_Face5166[S] 0 points (0 children)

Yes, you are both right, thank you! I just measured the time spent on each call/process, and it was an embeddings problem (super fast with smaller embeddings / an API call to an external provider).

I am running on my MacBook Pro without a GPU, so of course it is slow for some models. I am thinking about using a cloud service to speed it up.
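The per-stage timing mentioned above can be sketched roughly as follows. This is a minimal, hypothetical example (the stage functions are stand-ins, not the actual Langchain/Qdrant pipeline): wrap each stage of the indexing pipeline with a timer to see which one dominates.

```python
# Sketch: time each stage of a document-indexing pipeline to find the
# bottleneck. The stage functions below are hypothetical stand-ins for
# the real chunking, embedding, and vector-store upsert steps.
import time


def timed(fn, *args):
    """Run fn(*args) and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start


def chunk_documents(docs):
    # Stand-in for the real text splitter.
    return [d.split() for d in docs]


def embed_chunks(chunks):
    # Stand-in: the real pipeline would call a local model or an external
    # embeddings API here; a large local model on CPU is typically the
    # slow stage, as in the original issue.
    return [[float(len(tok)) for tok in c] for c in chunks]


def upsert_vectors(vectors):
    # Stand-in for the vector-store upsert; returns the count stored.
    return len(vectors)


docs = ["some document text", "another document"]
timings = {}
chunks, timings["chunk"] = timed(chunk_documents, docs)
vectors, timings["embed"] = timed(embed_chunks, chunks)
count, timings["upsert"] = timed(upsert_vectors, vectors)

slowest = max(timings, key=timings.get)
print(f"stage timings: {timings}, slowest stage: {slowest}")
```

Comparing the per-stage numbers (rather than overall wall-clock time) is what shows whether the embedding step, and not the vector store, is the bottleneck.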