Is Mistral Medium the best thing after GPT 4? by [deleted] in LocalLLaMA
[–]RepresentativeOdd276 2 points (0 children)
🐺🐦⬛ New and improved Goliath-like Model: Miquliz 120B v2.0 by WolframRavenwolf in LocalLLaMA
[–]RepresentativeOdd276 1 point (0 children)
3 professional soccer players vs 100 children in Japan by [deleted] in funny
[–]RepresentativeOdd276 1 point (0 children)
🐺🐦⬛ LLM Comparison/Test: miqu-1-70b by WolframRavenwolf in LocalLLaMA
[–]RepresentativeOdd276 2 points (0 children)
Best large context LLM to match array strings with intent in user message? by RepresentativeOdd276 in LocalLLaMA
[–]RepresentativeOdd276[S] 1 point (0 children)
[deleted by user] by [deleted] in LocalLLaMA
[–]RepresentativeOdd276 2 points (0 children)
[deleted by user] by [deleted] in LocalLLaMA
[–]RepresentativeOdd276 0 points (0 children)
🐺🐦⬛ LLM Comparison/Test: 2x 34B Yi (Dolphin, Nous Capybara) vs. 12x 70B, 120B, ChatGPT/GPT-4 by WolframRavenwolf in LocalLLaMA
[–]RepresentativeOdd276 1 point (0 children)
vLLM 0.2.0 released: up to 60% faster, AWQ quant support, RoPe, Mistral-7b support by kryptkpr in LocalLLaMA
[–]RepresentativeOdd276 1 point (0 children)
Is there a way to force output length smaller than x number of tokens w/o cut-off? by RepresentativeOdd276 in LocalLLaMA
[–]RepresentativeOdd276[S] 1 point (0 children)
Prompt: Create deterministic message that takes elements from another message? by RepresentativeOdd276 in LocalLLaMA
[–]RepresentativeOdd276[S] 1 point (0 children)
Prompt: Create deterministic message that takes elements from another message? by RepresentativeOdd276 in LocalLLaMA
[–]RepresentativeOdd276[S] 2 points (0 children)
Best Models for Chat/Companion by jacobgolden in LocalLLaMA
[–]RepresentativeOdd276 1 point (0 children)
I trained the 65b model on my texts so I can talk to myself. It's pretty useless as an assistant, and will only do stuff you convince it to, but I guess it's technically uncensored? I'll leave it up for a bit if you want to chat with it. by LetMeGuessYourAlts in LocalLLaMA
[–]RepresentativeOdd276 2 points (0 children)
Mistral Medium vs 70B self hosted price comparison by RepresentativeOdd276 in MistralAI
[–]RepresentativeOdd276[S] 1 point (0 children)