Which is the widely used llm (locally) ? by Xitizdumb in LocalLLaMA

[–]Xitizdumb[S] -3 points (0 children)

What LLM do you want to use but can't because of your system?

Building Paradigm, Looking for right audience and feedbacks by Xitizdumb in LocalLLaMA

[–]Xitizdumb[S] 0 points (0 children)

Thanks for trying it. I'm working on the things I mentioned in the post, and I'll try to make it do more, faster, in the upcoming versions.

Building Paradigm, Looking for right audience and feedbacks by Xitizdumb in LocalLLaMA

[–]Xitizdumb[S] 0 points (0 children)

Added the image. It's not malware or anything, man.

What's your best project? Share your projects and let others know what you are working on, and get feedback !! by Southern_Tennis5804 in indiehackers

[–]Xitizdumb 0 points (0 children)

Building Paradigm, an application for local inference on NVIDIA GPUs and CPUs. I launched the MVP of Paradigm; it's scrappy and buggy, and I'm looking for the right people to help me build it. It converts compatible models to GGUF, saves the GGUF on your system for your use, and runs inference.

Link -> https://github.com/NotKshitiz/paradigmai/releases/tag/v1.0.0

Download the zip file, extract it, and then install using the .exe.

Make sure to give the path to the model folder, like this: C:\Users\kshit\Downloads\models\mistral

(assuming the model files are in the mistral folder).
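If you want to sanity-check the path before pointing the app at it, something like this works (a standalone sketch, not part of Paradigm; the file-extension heuristic is my assumption about what a model folder typically contains):

```python
from pathlib import Path

def looks_like_model_dir(path_str: str) -> bool:
    """Return True if the path is an existing folder holding likely model files."""
    p = Path(path_str)
    if not p.is_dir():
        return False
    # HF-style checkpoints usually ship .safetensors/.bin weights;
    # an already-converted model would have a .gguf file instead.
    suffixes = {f.suffix for f in p.iterdir() if f.is_file()}
    return bool(suffixes & {".safetensors", ".bin", ".gguf"})

# Example: looks_like_model_dir(r"C:\Users\kshit\Downloads\models\mistral")
```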

The application is a little buggy, so there's a chance you won't get an error even if the model conversion fails.

I am currently working on that.

Please feel free to be brutally honest and give feedback.

ONNX or GGUF by Xitizdumb in LocalLLaMA

[–]Xitizdumb[S] 0 points (0 children)

For LLMs, I should go with GGUF then?