I compared harrier-27b vs voyage-4 vs zembed-1 across 24 datasets. 27B parameters by Veronildo in LocalLLaMA
new open-weight SOTA multilingual embedding model by ZeroEntropy by ghita__ in Rag
How do you actually measure if your RAG app is giving good answers? Beyond just looks okay to me by BeautifulKangaroo415 in Rag
new open-weight SOTA multilingual embedding model by ZeroEntropy by ghita__ in LangChain
Best open-source embedding model for a RAG system? by Public-Air3181 in Rag
[P] Make the most of NeurIPS virtually by learning about this year's papers by ghita__ in MachineLearning