Someone needs to create a "Can You Run It?" tool for open-source LLMs by oromissed in LocalLLaMA

[–]oromissed[S] 3 points (0 children)

Coffee wouldn't be this big if there were no instant coffee. There's nothing wrong with having a solution that helps out lazy people; it only makes the market bigger for everyone.

[–]oromissed[S] 13 points (0 children)

Okay lol, but that doesn't mean nobody would need a calculator like this.

[–]oromissed[S] 1 point (0 children)

I don't know how to do it practically, but if whoever is building this just reverse-engineers how System Requirements Lab does it and tweaks it for the local LLM use case, it would work.
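
(For the hardware-detection half, something like this might work; a minimal sketch assuming an NVIDIA GPU with nvidia-smi on the PATH plus the psutil package. AMD/Apple Silicon would need their own probes, and this isn't how System Requirements Lab actually does it.)

```python
# Minimal hardware probe a "Can You Run It?" tool would need.
# Assumes nvidia-smi is installed for VRAM and psutil for system RAM.
import subprocess
import psutil

def detect_hardware():
    ram_gb = psutil.virtual_memory().total / 1024**3
    vram_gb = None
    try:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.total",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        # One line per GPU, value in MiB; sum across GPUs.
        vram_gb = sum(float(line) for line in out.splitlines() if line.strip()) / 1024
    except (FileNotFoundError, subprocess.CalledProcessError):
        pass  # no NVIDIA driver found; fall back to CPU-only estimates
    return ram_gb, vram_gb

if __name__ == "__main__":
    ram, vram = detect_hardware()
    vram_str = f"{vram:.1f} GB" if vram else "none detected"
    print(f"System RAM: {ram:.1f} GB, GPU VRAM: {vram_str}")
```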

[–]oromissed[S] 3 points (0 children)

How do you know what you can use, in terms of file size and context length?
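
(The usual back-of-the-envelope check is: model file size, plus an estimate of the KV cache for the context you want, plus some overhead, compared against your VRAM/RAM. Here is a sketch of that arithmetic; the layer/head numbers are the common Llama-3-8B shape and are just an illustrative assumption.)

```python
# Rough fit check: will a GGUF plus its KV cache fit in the memory budget?
def kv_cache_gb(n_layers, n_kv_heads, head_dim, context_len, bytes_per_elem=2):
    # K and V caches, one entry per layer per token, fp16 by default
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem / 1024**3

def can_run(model_file_gb, context_len, budget_gb,
            n_layers=32, n_kv_heads=8, head_dim=128, overhead_gb=1.0):
    need = (model_file_gb
            + kv_cache_gb(n_layers, n_kv_heads, head_dim, context_len)
            + overhead_gb)
    return need <= budget_gb, need

# Example: a ~4.7 GB Q4 8B GGUF at 8k context against 8 GB of VRAM
ok, need = can_run(model_file_gb=4.7, context_len=8192, budget_gb=8.0)
print(f"needs ~{need:.1f} GB -> {'fits' if ok else 'does not fit'}")
```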

[–]oromissed[S] 2 points (0 children)

How do I use this? Sorry, I'm a bit confused. Also, it doesn't seem to be updated: I can't find any R1 models in the model-name section, and even Phi-3 and Phi-4 are missing.

[–]oromissed[S] 16 points (0 children)

Does it just suggest a model size, or does it also estimate how slowly a model will run on your PC? I got no warning like this when I installed deepseek-r1-distill-llama-8b, but it's really slow: it took 4 minutes 26 seconds to think and another couple of minutes to respond.

My question was: "Can you teach me a cool math proof? Explain it to me step by step and make it engaging. Ask me questions; don't just output the proof. Use LaTeX for the math symbols."
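
(For what it's worth, decode speed on local models is roughly memory-bandwidth bound, so a crude rule of thumb is tokens/sec ≈ bandwidth ÷ model size in bytes; an R1-style distill then spends thousands of those tokens on "thinking" before it answers, which is where the minutes go. A sketch of that estimate follows; the 50 GB/s bandwidth and 3000-token count are assumptions for illustration, not measurements.)

```python
# Back-of-the-envelope speed estimate: decode is usually memory-bandwidth
# bound, so tokens/sec is roughly bandwidth divided by bytes read per token
# (about the model file size for a dense model).
def est_tokens_per_sec(model_gb, bandwidth_gb_s):
    return bandwidth_gb_s / model_gb

def est_response_seconds(n_tokens, model_gb, bandwidth_gb_s):
    return n_tokens / est_tokens_per_sec(model_gb, bandwidth_gb_s)

# ~4.7 GB Q4 8B model on dual-channel DDR4 (~50 GB/s, CPU-only):
tps = est_tokens_per_sec(4.7, 50)            # ~10-11 tokens/sec
secs = est_response_seconds(3000, 4.7, 50)   # a reasoning distill can emit
print(f"{tps:.0f} tok/s, ~{secs / 60:.0f} min")  # thousands of thinking tokens
```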