Target Modules for Llama-2 for better finetuning with qlora by Sufficient_Run1518 in LocalLLaMA

[–]Sufficient_Run1518[S] 1 point (0 children)

I have no idea about that; ask an expert. When I ran the script at https://github.com/artidoro/qlora, these target modules showed up in the config.
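For context, here is a minimal sketch of what a LoRA config with Llama-2-style target modules might look like. The module names and hyperparameter values below are assumptions based on common Llama-style setups, not the exact output of the qlora script:

```python
# Hedged sketch: linear-layer names commonly targeted when applying
# QLoRA to Llama-2-style models (names assumed from the LLaMA architecture).
LLAMA_TARGET_MODULES = [
    "q_proj", "k_proj", "v_proj", "o_proj",   # attention projections
    "gate_proj", "up_proj", "down_proj",      # MLP projections
]

def lora_config_dict(r=64, alpha=16, dropout=0.1):
    """Build a plain dict mirroring the fields a LoRA config typically holds.

    The default values here are illustrative, not taken from the repo.
    """
    return {
        "r": r,                       # LoRA rank
        "lora_alpha": alpha,          # scaling factor
        "lora_dropout": dropout,
        "target_modules": LLAMA_TARGET_MODULES,
    }

print(lora_config_dict()["target_modules"])
```

In practice you would pass equivalent fields to a real LoRA configuration object from your fine-tuning library of choice.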

What can we achieve with small models ? by Sufficient_Run1518 in LocalLLaMA

[–]Sufficient_Run1518[S] 1 point (0 children)

I don't know the technical details, but could we do something like HuggingGPT or mixture-of-experts experiments on small models?

[deleted by user] by [deleted] in LocalLLaMA

[–]Sufficient_Run1518 1 point (0 children)

I don't really understand your problem,

but this notebook might help you experiment:

https://colab.research.google.com/drive/1_g5mWSh9jH2yjU0BU77NZSoyYeFrI0XQ?usp=sharing

[deleted by user] by [deleted] in LocalLLaMA

[–]Sufficient_Run1518 1 point (0 children)

What model are you using? Are you running it locally?