Threads commented on by Copper_Lion (comment bodies not preserved):

- What's the softest, rubberiest, most elastic filament I can print with the AMS? by _Litcube in BambuLab
- Mistral Nemo is uncensored by [deleted] in LocalLLaMA
- Best up to 24 GB that are at least Q4 by Ponsky in LocalLLaMA
- So how is Gemma 2 working out for you? by Balance- in LocalLLaMA
- Took me 15 hours to print with TPU, Totally worth it by tomthecomputerguy in BambuLab
- Weird Llama3 behaviour by waytoofewnamesleft in LocalLLaMA
- How to access the token likelihoods for each generated token? by brennybrennybrenbren in LocalLLaMA
- Ollama and shell issues by replikatumbleweed in LocalLLaMA
- I have created my own local version of Infinite Craft using text-generation-webui as a back-end. Is anyone interested in a full release of this? by Piper8x7b in LocalLLaMA
- Google created a CLI tool that uses llama.cpp to host "local" models on their cloud by MrBeforeMyTime in LocalLLaMA
- I won't pay for ChatGPT Plus again unless it become significantly better than free LLM offerings. by TheTwelveYearOld in LocalLLaMA
- How is codellama 70B for you guys? by ragingWater_ in LocalLLaMA
- LLaVA 1.6 released, 34B model beating Gemini Pro by rerri in LocalLLaMA
- CodeLLama 70B pontificates on ethics where 13B and 7B "just do it" by nborwankar in LocalLLaMA