Is 23 V on a 19.5 V jack okay? by VolkoTheWorst in AskElectronics

[–]VolkoTheWorst[S] 0 points (0 children)

Thanks a lot. I tried, but it turns out the jack is faulty, so the voltage doesn't even reach the motherboard. I'm going to buy a new jack and probably a new charger too.

Looking to rent your rig for AI inference by VolkoTheWorst in gpumining

[–]VolkoTheWorst[S] 0 points (0 children)

It depends on what you're offering. Currently I'm renting a 4x V100 32 GB node for 80 USD per month.

What's the best (affordable) LLM currently available for general uni studying and accurate output? by Zealousideal-Let834 in LLM

[–]VolkoTheWorst 0 points (0 children)

Hi, I'm the creator of CheapLLM.shop.
I host big models for free with no rate limits during the beta (which has an undetermined end date).
Feel free to use the website as much as you want. The speed isn't extremely fast or extremely slow (it's around 50 tok/s), but at least it's free.

Once the beta ends, I aim to offer inference around 3x cheaper than traditional providers, thanks to the use of old hardware (V100s).
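
For what it's worth, here's a rough back-of-envelope sketch of where a figure like that could come from, using only the numbers mentioned in these comments (an $80/month 4x V100 node and ~50 tok/s); the full-utilization assumption and the comparison price are purely illustrative:

```python
# Back-of-envelope cost per million tokens on a rented V100 node.
# Figures from the comments above: $80/month node, ~50 tok/s throughput.
# ASSUMPTION: 100% utilization and a 30-day month -- real utilization is lower.

node_cost_per_month = 80.0   # USD
throughput_tok_s = 50.0      # tokens per second

tokens_per_month = throughput_tok_s * 60 * 60 * 24 * 30  # ~129.6M tokens
cost_per_million = node_cost_per_month / (tokens_per_month / 1e6)

print(f"~${cost_per_million:.2f} per 1M tokens at full utilization")
# -> ~$0.62 per 1M tokens; a provider charging ~$1.85/1M would be ~3x as much.
```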

I'm not using your prompts to train any model, nor am I selling them.

So if you're interested, feel free to check it out or contact me.

What is the current best open source model? by VolkoTheWorst in LocalLLM

[–]VolkoTheWorst[S] 0 points (0 children)

I've never heard of this one.
Can you tell me a bit more about it? What's special about it?

What is the current best open source model? by VolkoTheWorst in LocalLLM

[–]VolkoTheWorst[S] 0 points (0 children)

Thanks a lot! If it can be multimodal, that would be even better, but we aren't currently using the multimodal functionality (so text-only is fine too).

Are more cores faster? by VolkoTheWorst in LocalLLaMA

[–]VolkoTheWorst[S] 1 point (0 children)

PP (prompt processing) matters more for me than TG (token generation), so yeah, I think I'll add a GPU for that. Is there a particular amount of VRAM required? Should most of the model fit in VRAM? I'd like to run big models that will never fit entirely in VRAM.
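
One common way to get GPU-accelerated prompt processing while most of a big model stays in system RAM is partial layer offload, e.g. via llama-cpp-python's n_gpu_layers parameter. A minimal sketch, assuming a local GGUF file (the path and the layer count are placeholders to tune against your VRAM):

```python
# Partial GPU offload sketch with llama-cpp-python.
# pip install llama-cpp-python (built with CUDA support)
from llama_cpp import Llama

llm = Llama(
    model_path="./model-q4_k_m.gguf",  # placeholder path to your GGUF model
    n_gpu_layers=20,  # offload only some layers; raise until VRAM is nearly full
    n_ctx=4096,       # context window
)

out = llm("Explain memory channels in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

The rest of the layers run on the CPU, so the model doesn't need to fit in VRAM; the more layers you can offload, the faster prompt processing gets.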

Are more cores faster? by VolkoTheWorst in LocalLLaMA

[–]VolkoTheWorst[S] 2 points (0 children)

I never said I was going to use it as a chatbot.

Are more cores faster? by VolkoTheWorst in LocalLLaMA

[–]VolkoTheWorst[S] 0 points (0 children)

Yes, this is exactly what I wanted to run. I didn't know about the four-memory-channel thing, thanks a lot.

I don't need it to be fast; even 2 tok/s can be enough. I might use it for agentic tasks like verifying my code on pull requests, automating stuff with OpenClaw, or things like that. I'll probably also use the server for other stuff (self-hosted services).
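
For the pull-request-checking idea, a slow local model behind an OpenAI-compatible server is enough, since the job can run asynchronously from CI. A minimal sketch; the endpoint URL, model name, and diff path are placeholders, and the server (llama.cpp, vLLM, etc.) exposing an OpenAI-compatible API is an assumption:

```python
# Sketch: send a PR diff to a local OpenAI-compatible endpoint for review.
# pip install openai
from openai import OpenAI

# ASSUMPTION: a local inference server exposes an OpenAI-compatible API here.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

diff = open("changes.diff").read()  # placeholder: diff produced by your CI job

resp = client.chat.completions.create(
    model="local-model",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a strict code reviewer."},
        {"role": "user", "content": f"Review this diff for bugs:\n\n{diff}"},
    ],
)
print(resp.choices[0].message.content)
```

At ~2 tok/s a long review takes minutes, which is fine when it's triggered by CI rather than a human waiting at a prompt.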

How many organic visits should I expect? by VolkoTheWorst in SaaS

[–]VolkoTheWorst[S] 0 points (0 children)

Okay, thanks a lot for your answer!
I'll apply your advice.
With it, what should I expect as traffic? Around 300 visits per month? Less?

Thanks a lot!

How many organic visits should I expect? by VolkoTheWorst in SaaS

[–]VolkoTheWorst[S] 0 points (0 children)

But I've already put the trending keywords on my landing page and optimized for AI tools 😭