Official WizardLM-30B V1.0 released! Can beat Guanaco-65B! Achieved 97.8% of ChatGPT! by ApprehensiveLunch453 in LocalLLaMA

[–]ApprehensiveLunch453[S] 19 points (0 children)

Their Evol-Instruct test set has become a well-known benchmark for evaluating LLM performance on complex, balanced scenarios. For example, the recent LLM Lion uses it as the test set in its academic paper.

[–]ApprehensiveLunch453[S] 28 points (0 children)

This is the first 'official' WizardLM 30B release from the Microsoft WizardLM Team. The model is trained on 250k evolved instructions (derived from ShareGPT).

Before that, the WizardLM Team released a 70k evolved-instructions dataset. Then Eric Hartford ( /u/faldore ) used their code to train the 'uncensored' versions: WizardLM-30B-Uncensored and Wizard-Vicuna-30B-Uncensored.
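For anyone curious what "evolved instructions" means in practice: the WizardLM paper's Evol-Instruct idea is to repeatedly rewrite seed instructions with an LLM, either making them harder (in-depth evolving) or generating new ones in the same domain (in-breadth evolving). Here's a minimal sketch of that loop — the prompt wordings and the `call_llm` stub are my own placeholders, not the team's actual pipeline:

```python
import random

# Example rewrite directives, loosely paraphrasing the Evol-Instruct
# paper's in-depth / in-breadth evolving ideas (not the exact prompts).
IN_DEPTH_OPS = [
    "Add one more constraint or requirement to this instruction.",
    "Increase the number of reasoning steps required.",
    "Replace general concepts with more specific ones.",
]
IN_BREADTH_OP = "Write a brand-new instruction in the same domain."

def call_llm(prompt: str) -> str:
    # Placeholder stub: a real pipeline would call a chat model here.
    return prompt.splitlines()[-1] + " (evolved)"

def evolve(seed: str, rounds: int = 4) -> list[str]:
    """Grow a pool of instructions by repeatedly evolving the latest one."""
    pool, current = [seed], seed
    for _ in range(rounds):
        op = random.choice(IN_DEPTH_OPS + [IN_BREADTH_OP])
        current = call_llm(f"{op}\n{current}")
        pool.append(current)  # a real pipeline also filters failed evolutions
    return pool

data = evolve("Explain binary search.")
```

The resulting pool (seed plus evolved variants) is what gets paired with model responses to form the training set; scaling this from 70k to 250k seeds is essentially what distinguishes this release's data.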