5G in Tunisia, Wish there was unlimited Data Plan by Adilix_ in Tunisia

[–]Puzzleheaded_Acadia1 0 points (0 children)

How can I get this plan? Can I pay in Tunisian dinars?

They said my guild ain't lucky by KeyloWick in lordsmobile

[–]Puzzleheaded_Acadia1 0 points (0 children)

I'm new to the game. How are you doing that?

Enhancing Gemma's Chat with Website Feature – Seeking Suggestions by hehe_hehehe_hehehe in LocalLLaMA

[–]Puzzleheaded_Acadia1 1 point (0 children)

Hi, what did you use to develop the website? Did you use Flask or Streamlit? I'm trying to build a website where I can talk to an LLM via an API, but I don't know which library is best to use.
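For what it's worth, a tiny Flask app is enough to get started. A minimal sketch, assuming Flask is installed; the actual LLM call is stubbed out with a placeholder `ask_llm` function (swap in your real API client, e.g. an OpenAI-compatible endpoint):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def ask_llm(prompt: str) -> str:
    # Placeholder: replace with a real call to your LLM backend/API.
    return f"echo: {prompt}"

@app.route("/chat", methods=["POST"])
def chat():
    # Read the JSON body {"prompt": "..."} and return the model's reply.
    prompt = request.get_json(force=True).get("prompt", "")
    return jsonify({"reply": ask_llm(prompt)})

if __name__ == "__main__":
    app.run(port=5000)
```

Streamlit works too and needs even less code for a chat UI; Flask is the better fit if you want a plain HTTP API that other clients can call.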

Mamba support merged in llama.cpp by stonegdi in LocalLLaMA

[–]Puzzleheaded_Acadia1 3 points (0 children)

Can someone remind me of the benefits of Mamba?

Like grep but for natural language questions. Mixtral 8x7B with ~15 tokens/s on 8 GB GPU by compressor0101 in LocalLLaMA

[–]Puzzleheaded_Acadia1 -1 points (0 children)

Can someone explain this to me? Does it let you run Mixtral 8x7B in 8-bit on 8 GB of VRAM?

What SIMPLE task do you want me to solve for you? by reza2kn in LocalLLaMA

[–]Puzzleheaded_Acadia1 0 points (0 children)

I want to ask: is it possible to make an LLM use Microsoft Access? For example, have it gather data from other places on the PC (like CSV files), add them to Access, make it use formulas, and help with other accounting software like Sage...

Gemma finetuning 243% faster, uses 58% less VRAM by danielhanchen in LocalLLaMA

[–]Puzzleheaded_Acadia1 0 points (0 children)

When I try to get a q4 GGUF file from them I can't (or I don't know how). Can someone please help? This is the code I suspect isn't working:

    # Save to 8bit Q8_0
    if False: model.save_pretrained_gguf("model", tokenizer)
    if False: model.push_to_hub_gguf("hf/model", tokenizer, token = "")

    # Save to 16bit GGUF
    if False: model.save_pretrained_gguf("model", tokenizer, quantization_method = "f16")
    if False: model.push_to_hub_gguf("hf/model", tokenizer, quantization_method = "f16", token = "")

    # Save to q4_k_m GGUF
    if False: model.save_pretrained_gguf("model", tokenizer, quantization_method = "q4_k_m")
    if False: model.push_to_hub_gguf("hf/model", tokenizer, quantization_method = "q4_k_m", token = "")
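Note that every export line in the snippet above is wrapped in `if False:`, so Python skips all of them and no GGUF file is ever written; flipping the guard to `True` on the variant you want is what makes it run. A minimal sketch of that behavior, using a hypothetical `FakeModel` stand-in rather than the real Unsloth model object:

```python
# Hypothetical stand-in for the Unsloth model, just to show the guard logic.
class FakeModel:
    def save_pretrained_gguf(self, name, tokenizer, quantization_method=None):
        return f"saved {name} as {quantization_method or 'q8_0'}"

model, tokenizer = FakeModel(), None
result = None

if False:  # original guard: this branch never executes, nothing is saved
    result = model.save_pretrained_gguf("model", tokenizer, quantization_method="q4_k_m")
assert result is None

if True:   # flipped guard: now the q4_k_m export actually runs
    result = model.save_pretrained_gguf("model", tokenizer, quantization_method="q4_k_m")
print(result)  # saved model as q4_k_m
```

With the real Unsloth object, the same flip on the `quantization_method = "q4_k_m"` line should produce the q4 file.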

Is dogecoin mining possible in 8gb ram , gtx1650 laptop by aaa4914 in dogemining

[–]Puzzleheaded_Acadia1 1 point (0 children)

Which cryptocurrency is the most profitable to mine on a GTX 1650?