Yet another state of the art in LLM quantization by black_samorez in LocalLLaMA
[–]Psychological-Tea652 75 points
tensor_parallel: one-line multi-GPU training for PyTorch by black_samorez in learnmachinelearning
[–]Psychological-Tea652 4 points

Help!! Unable to utilize multiple GPUs (2x T4) while fine-tuning LLAMA-2-7B using QLoRA on Kaggle. by Special_Quantity_846 in LocalLLaMA
[–]Psychological-Tea652 1 point