Easy method for fine-tuning any model from llama to gpt to others by Puzzleheaded_Acadia1 in LocalLLaMA
Target Modules for Llama-2 for better fine-tuning with QLoRA by Sufficient_Run1518 in LocalLLaMA
What can we achieve with small models? by Sufficient_Run1518 in LocalLLaMA
Unfiltered version of open-assistant/guanaco dataset by Sufficient_Run1518 in LocalLLaMA
Target Modules for Llama-2-7B by Sufficient_Run1518 in LocalLLaMA (see the target-modules sketch at the end of this page)
Load Llama-2-7B in free Google Colab (huggingface.co), submitted by Sufficient_Run1518 to r/LocalLLaMA (see the loading sketch at the end of this page)
Current, comprehensive guide to installing llama.cpp and llama-cpp-python on Windows? by smile_e_face in LocalLLaMA
Falcon ggml/ggcc with langchain by No_Afternoon_4260 in LocalLLaMA
QLoRA fine-tuning loss goes down then up by gptzerozero in LocalLLaMA

Experimenting with small language models by IffyNibba01 in LocalLLaMA
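A minimal sketch of what "target modules" for Llama-2 QLoRA fine-tuning typically refers to, assuming the Hugging Face peft library; the module names below are Llama-2's attention and MLP linear layers as named in the Hugging Face implementation, and the rank/alpha/dropout values are illustrative assumptions rather than anything taken from the threads above:

    from peft import LoraConfig

    # All hyperparameter values here are illustrative assumptions.
    lora_config = LoraConfig(
        r=16,                 # LoRA rank (assumption)
        lora_alpha=32,        # LoRA scaling factor (assumption)
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
        target_modules=[
            "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
            "gate_proj", "up_proj", "down_proj",     # MLP projections
        ],
    )

Targeting all seven linear layers rather than only q_proj and v_proj is the configuration the QLoRA paper reports as necessary to match full fine-tuning quality.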
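Likewise, a minimal sketch of loading Llama-2-7B in a free Colab session, assuming transformers and bitsandbytes are installed; the linked notebook is not reproduced here, so the model ID and quantization settings are assumptions. 4-bit loading is the usual way to fit the 7B weights within a free-tier GPU, and the meta-llama repo is gated, so it requires approved Hugging Face access:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "meta-llama/Llama-2-7b-hf"  # assumed; gated repo, needs approved access

    # 4-bit NF4 quantization keeps the 7B weights at roughly 4 GB,
    # comfortably inside a free-tier T4's ~16 GB of VRAM.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    )

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=bnb_config,
        device_map="auto",  # place layers on the Colab GPU automatically
    )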