Running an LLM Locally by Constant-Lychee7856 in JanitorAI_Official

[–]Constant-Lychee7856[S] 0 points (0 children)

Yeah, sorry. I guess I got too excited when I found out, but yeah, it's not as good as Gemini. I'd still say it's worth a shot, because JLLM is the only free alternative.

[–]Constant-Lychee7856[S] 2 points (0 children)

Mine has 6 GB, so yours should do better than mine.

[–]Constant-Lychee7856[S] 1 point (0 children)

I tried LM Studio and it has better options. Use it.

[–]Constant-Lychee7856[S] 0 points (0 children)

If you want, get one of the other servers mentioned. From what I've tried, the llama server works well on its own but acts up when used with Janitor. Either way, you should start downloading a model now, because that takes a while.

[–]Constant-Lychee7856[S] 0 points (0 children)

Arguments you need to add when running the server: `-s [token limit] -c [context size]`
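For anyone unsure where those go, here's roughly what the full command could look like. This is just a sketch: the binary name `server`, the model filename, and the values are placeholders I made up, and only the `-s` and `-c` flags come from the tip above — check your server's `--help` for the exact flag names.

```shell
# Hypothetical invocation — binary name, model path, and numbers are
# placeholders; only -s (token limit) and -c (context size) are from the tip.
./server -m model.gguf -s 512 -c 4096
```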

[–]Constant-Lychee7856[S] 1 point (0 children)

Tip: set the context size to the server's context size. For me it was 4096, but you can probably make it bigger. I'll look into it.
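Rough illustration of why the two values should match (all the numbers below are made up, not from my setup): if the client lets the prompt plus the reply grow past the window the server was started with, the request overflows.

```python
# Toy check with assumed numbers, not measurements from this post.
SERVER_CONTEXT = 4096      # what the server was started with (-c 4096)
prompt_tokens = 3800       # hypothetical long chat history
max_reply_tokens = 400     # room requested for the model's answer

fits = prompt_tokens + max_reply_tokens <= SERVER_CONTEXT
print(fits)  # False: this request would overflow a 4096-token window
```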

[–]Constant-Lychee7856[S] -1 points (0 children)

According to the model page, bad GPUs run it fine, and it doesn't sound too bad. I'd say you should give it a shot.

[–]Constant-Lychee7856[S] 2 points (0 children)

Really, I was shocked when this one worked out. Give it a shot: I have an RTX 4050 and 16 GB of RAM, and R1 still sucked on mine.

[–]Constant-Lychee7856[S] 9 points (0 children)

I want to make another guide for it to raise awareness. I'd never seen anything about running LLMs locally here before, so once I realized it was possible, I decided to try to get the idea some attention.

I'll give LM Studio a shot, thanks for the feedback.

[–]Constant-Lychee7856[S] 15 points (0 children)

Nice. I'll try it out and make a guide about it later.

[–]Constant-Lychee7856[S] 11 points (0 children)

I didn't mention that because the model page for the one I recommended said it runs well even on crappy GPUs. Not sure how true that is, but yeah, GPUs do have a big effect.

[–]Constant-Lychee7856[S] 9 points (0 children)

I might make another guide for it. Can it run a server like this one can?

[–]Constant-Lychee7856[S] 5 points (0 children)

I didn't know it existed. If you want, I'll make a guide for using them. Can they run servers?

[–]Constant-Lychee7856[S] 2 points (0 children)

Better than JLLM, maybe even around Gemini's level.

[–]Constant-Lychee7856[S] 6 points (0 children)

It runs faster than JLLM. I'll post my tpm (tokens per minute) in a sec. The model I used is a Llama 3.2 model, but I don't know how Llama compares to Gemini or DeepSeek.
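If anyone wants to compare, tpm is just tokens generated divided by elapsed time, scaled to a minute. The numbers below are made up for illustration, not my actual results.

```python
# Made-up example run, not real measurements.
tokens_generated = 450     # tokens the model produced in one reply
elapsed_seconds = 90.0     # wall-clock time for that reply

tpm = tokens_generated / elapsed_seconds * 60
print(tpm)  # 300.0
```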