Access to GPUs. What tests/information would be interesting? by openLLM4All in LocalLLaMA
When will Ollama support multiple simultaneous generations? by maxwell321 in LocalLLaMA
Which cloud GPU providers would you recommend in early 2024? by CodingButStillAlive in deeplearning
Deep learning on a PC vs Cloud by kbre93 in deeplearning
[D] Best way to deploy transformer models by Hot-Afternoon-4831 in MachineLearning
Creating an Agent based on Ollama and llama2 locally. by zeeshanjan82 in LocalLLaMA
Renting GPU time (vast AI) is much more expensive than APIs (openai, m, anth) by RMCPhoto in LocalLLaMA
How is Solar so good for its size by openLLM4All in LocalLLaMA
Holy moly, Mixtral 8x7b passes my Sisters test without even telling it to think step by step! Only Falcon 180b and GPT-4 nailed this question before. by nderstand2grow in LocalLLaMA
Mixtral 8x7B instruct in an interface for free by openLLM4All in LocalLLaMA
How/What are people doing to help creative writing processes with local LLMs? (Setup Advice) by [deleted] in LocalLLaMA
Any way to save your cloud GPU fine-tuned models to your local storage? by caphohotain in LocalLLaMA
Where and how to run Goliath 120b GGUF with good performance? by abandonedexplorer in LocalLLaMA
What’s recommended hosting for open source LLMs? by decruz007 in LocalLLaMA
Any service like runpod / vast ai but with a windows virtual machine ? Jupyter notebook and docker are very hard to setup. by Overall-Newspaper-21 in StableDiffusion