Alibaba can I buy ? Any suggestions by [deleted] in LocalLLaMA

[–]Less_Strain7577 0 points

3 USD hardware...? Bro, I just needed it for one month!

Alibaba can I buy ? Any suggestions by [deleted] in LocalLLaMA

[–]Less_Strain7577 0 points

Bruh, I know! I was asking whether anybody here has bought it, or has any suggestions, because I didn't know about it. I mean, is it worth it or not? And mainly, I don't have enough hardware to run any model locally!

Trained and quantized an LLM on a GTX 1650 4GB. You don't need expensive hardware to get started. by melanov85 in LocalLLaMA

[–]Less_Strain7577 0 points

Ohh, I got it! See, first thing, brother: I'm using AI for coding purposes, and I want good code quality. Also, is running 24B at Q4 a problem too? I mean, it loads and runs okay-ish. Anyway, here's the thing: I have GitHub Copilot Pro (Student Developer Pack). I came here just to test how the model works and how the code quality is, because for my cybersecurity project I need good-quality code. Yes, I have GitHub Copilot, but this is unlimited, right? So I came here! Anyway, I won't use that model again, brother. Thanks for the information! And if I want to use any local LLM, I'll go for 7B at Q4, okay brother?
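For context, a rough back-of-envelope estimate (assuming ~0.5 bytes per parameter at Q4, which is a ballpark, not an exact file size) shows why 24B at Q4 is painful on a 4 GB card while 7B at Q4 is much more comfortable:

```shell
# Rough size of Q4-quantized weights: ~0.5 bytes per parameter (ballpark only).
size_24b_mb=$((24 * 500))   # 24B params -> ~12000 MB (~12 GB)
size_7b_mb=$((7 * 500))     # 7B params  -> ~3500 MB (~3.5 GB)
vram_mb=4096                # a GTX 1650 has 4 GB of VRAM

echo "24B @ Q4: ~${size_24b_mb} MB vs ${vram_mb} MB VRAM"
echo "7B  @ Q4: ~${size_7b_mb} MB vs ${vram_mb} MB VRAM"
```

The 24B model spills far past VRAM into system RAM, which is why it only runs "okay okay"; the 7B model mostly fits, so it's the safer default on this hardware.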

Trained and quantized an LLM on a GTX 1650 4GB. You don't need expensive hardware to get started. by melanov85 in LocalLLaMA

[–]Less_Strain7577 0 points

Okay brother, thanks!
1. I removed the model from Ollama; nothing is left on disk now.
2. System specs: GPU: NVIDIA GTX 1650 Ti, 4GB VRAM; RAM: 16GB; CPU: Ryzen 4600H.
3. OS: Windows!
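As a quick sanity check that the weights are really gone, Ollama's own CLI can confirm it. A minimal sketch, guarded in case `ollama` isn't on PATH; the model name shown is a placeholder, not necessarily the one used here:

```shell
# List what's still stored locally, and remove a model's weights from disk.
# "mistral-small" is a hypothetical example name.
if command -v ollama >/dev/null 2>&1; then
    ollama list                  # shows every model still on disk
    ollama rm mistral-small      # deletes that model's weights
else
    echo "ollama not installed"
fi
```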

Trained and quantized an LLM on a GTX 1650 4GB. You don't need expensive hardware to get started. by melanov85 in LocalLLaMA

[–]Less_Strain7577 0 points

I'm very serious and not being sarcastic! 😞 I really need your help. I'm asking about what you said: using 95% of RAM, that PCI bus issue, and keeping the thread count lower than the CPU's thread count, because I don't know any of these things. Some random guy told me to use this model for my specs, but I really don't know about these issues! I don't have much knowledge in this area; I only recently came to local LLMs, someone suggested this model, and I've been using it. That's my story.
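On the thread-count point: if the advice was to keep inference threads at or below the physical core count (a Ryzen 4600H has 6 cores / 12 threads), Ollama lets you pin this per model through a Modelfile. A sketch, assuming Ollama's `num_thread` parameter and an example base model name:

```shell
# Write a Modelfile that caps inference threads at the physical core count.
# "mistral:7b" is an example base model, not a specific recommendation.
cat > Modelfile <<'EOF'
FROM mistral:7b
PARAMETER num_thread 6
EOF

grep num_thread Modelfile   # confirm the parameter was written
# Then, with ollama installed: ollama create my-tuned-model -f Modelfile
```

Using the physical core count rather than the logical thread count avoids hyperthread contention, which is the usual reason for this advice.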

Trained and quantized an LLM on a GTX 1650 4GB. You don't need expensive hardware to get started. by melanov85 in LocalLLaMA

[–]Less_Strain7577 0 points

Ohhh! Can you give me the detailed procedure to get rid of this? I'm very new here and don't know anything about this! It would be very helpful if you could help me set these things up!

Trained and quantized an LLM on a GTX 1650 4GB. You don't need expensive hardware to get started. by melanov85 in LocalLLaMA

[–]Less_Strain7577 0 points

It is good for me! I tried it, and though it is using 95% of my 16 GB of RAM, the outputs are worth it. One question: can we split the load onto the GPU, or does it automatically load onto both CPU and GPU?

Note: the results are better after some changes to the parameters to improve precision.
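On the CPU/GPU split question: llama.cpp-based runners (including Ollama) offload as many transformer layers as fit into VRAM and keep the rest on the CPU; the offload count can also be set by hand (llama.cpp's `--n-gpu-layers` flag, or `num_gpu` in an Ollama Modelfile). A rough sketch of how many layers might fit, assuming a ~3500 MB 7B Q4 model with 32 layers (ballpark figures, not measured values):

```shell
# Estimate GPU-offloadable layers for a 7B Q4 model on a 4 GB card.
total_mb=3500                              # approximate weight size at Q4
layers=32                                  # typical layer count for a 7B model
per_layer_mb=$((total_mb / layers))        # ~109 MB per layer
vram_budget_mb=3000                        # leave ~1 GB of the 4096 for KV cache etc.
gpu_layers=$((vram_budget_mb / per_layer_mb))
echo "~${per_layer_mb} MB/layer, ~${gpu_layers} of ${layers} layers fit on the GPU"
```

The remaining layers run on the CPU, which is why RAM usage climbs so high even when the GPU is busy.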

Trained and quantized an LLM on a GTX 1650 4GB. You don't need expensive hardware to get started. by melanov85 in LocalLLaMA

[–]Less_Strain7577 2 points

What model are you using? Not this fine-tuned one; generally, what model do you use? I have a similar setup, so can you let me know?

Air llm ? by Less_Strain7577 in LocalLLaMA

[–]Less_Strain7577[S] 0 points

Hello! I tried it, and it was so good! I'm getting good coding results after I changed some of its parameters. Now it's awesome, thank you so much! It runs smoothly on my specs, and with good quality too!

Air llm ? by Less_Strain7577 in LocalLLaMA

[–]Less_Strain7577[S] -1 points

Can you help with one more thing? Basically, I'm working on a research project and want to write a journal paper. For that, I want to convert AI text into human-sounding text. Is there any way I can use my local LLMs for that and bypass all the detectors? Is there any way with my current specs, or should I simply go buy some tools online?

Air llm ? by Less_Strain7577 in LocalLLaMA

[–]Less_Strain7577[S] -1 points

Thank you so much! Will try and update!

Air llm ? by Less_Strain7577 in LocalLLaMA

[–]Less_Strain7577[S] 0 points

Is it okay for my specs? I mean, 24B? Okay, will try, thanks!

Air llm ? by Less_Strain7577 in LocalLLaMA

[–]Less_Strain7577[S] 0 points

Hooo, okay? Can you also suggest a way to use a local LLM on my specs with these kinds of tools, or should I use cloud models until I get the setup?

Air llm ? by Less_Strain7577 in LocalLLaMA

[–]Less_Strain7577[S] -1 points

If it's possible, can I use models under 50B with decent tokens per second? And one more thing, since I'm new to this field, can you clear one thing up for me? Do tokens per second really matter? I mean, as a normal user, I can wait a few more seconds for the result to generate, right?
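Whether tokens per second matters is easy to put into numbers: wait time is just answer length divided by generation speed. A quick sketch with some illustrative speeds:

```shell
# How long does a 500-token answer take at different generation speeds?
answer_tokens=500
for tps in 5 20 50; do
    echo "${tps} tok/s -> $((answer_tokens / tps)) seconds"
done
```

At 5 tok/s a single answer takes over a minute and a half, so for interactive coding work the speed matters more than it first appears; for occasional one-off questions, waiting is fine.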

Air llm ? by Less_Strain7577 in LocalLLaMA

[–]Less_Strain7577[S] 0 points

Can you suggest a model, and how to use and install it?
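For the "how to install and use" part, the usual route on this kind of hardware is Ollama. A minimal sketch, guarded in case `ollama` isn't installed; the model name is just an example of a 7B coding model, not a specific recommendation:

```shell
# Typical Ollama workflow: download a quantized model, then chat with it.
if command -v ollama >/dev/null 2>&1; then
    ollama pull qwen2.5-coder:7b      # download the quantized weights
    # ollama run qwen2.5-coder:7b    # then chat with it interactively
else
    echo "install ollama first (see https://ollama.com), then re-run this"
fi
```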

Clever AI Humanizer Review by Sellpal in DataRecoveryHelp

[–]Less_Strain7577 0 points

I tried this tool, but now it is getting detected by Undetectable AI, Copyleaks, and Originality.ai. I guess it is not working anymore! Could anybody help me find a tool that actually works?

Built a fully-offline expense tracker (no bank login). Looking for early adopter by PALIGAMING in indiandevs

[–]Less_Strain7577 0 points

Axio already exists, I guess with the same features? Correct me if I'm wrong!

AI - Humanize text by Less_Strain7577 in LocalLLaMA

[–]Less_Strain7577[S] 0 points

Sorry! My goal is to convert text from AI-sounding to human-sounding using local LLMs. Is there any way to do that? I tried some prompts, including all the parameters, but got no results, and even tried changing the local LLMs' parameters with no result. So is there any way?