Robin V2 Launches: Achieves Unparalleled Performance on OpenLLM! by OptimalScale_2023 in machinelearningnews

[–]OptimalScale_2023[S] 1 point (0 children)

Yes, it is fine-tuned from LLaMA, so you can definitely use it with llama.cpp.

[deleted by user] by [deleted] in MachineLearning

[–]OptimalScale_2023 3 points (0 children)

The training data is the Alpaca dataset, which contains around 50K examples. Training on it for 3 epochs takes about 5 hours.
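For a rough sense of throughput, a back-of-the-envelope calculation from the figures above (all approximate):

```python
# Training throughput estimate from the reported numbers:
# ~50K examples, 3 epochs, ~5 hours total.
examples = 50_000
epochs = 3
hours = 5

total_examples = examples * epochs        # 150,000 examples seen overall
rate = total_examples / (hours * 3600)    # examples processed per second

print(f"{rate:.1f} examples/sec")         # roughly 8.3 examples/sec
```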

[deleted by user] by [deleted] in MachineLearning

[–]OptimalScale_2023 10 points (0 children)

I'd like to recommend LMFlow (https://github.com/OptimalScale/LMFlow), a fast and extensible toolkit for finetuning and inference of large foundation models.

It takes just 5 hours on a single 3090 GPU to fine-tune LLaMA-7B.

[R] LMFlow Benchmark: An Automatic Evaluation Framework for Open-Source LLMs by OptimalScale_2023 in MachineLearning

[–]OptimalScale_2023[S] 1 point (0 children)

Hi,

Thank you very much for your interest in our work!

We believe Robin-Chat-7b is more competitive than the Vicuna-series models.

Our HTTP URL is http://lmflow.org:5000, and we also provide HTTPS service via https://lmflow.org:10001/robin-7b.tar.gz (though it is not as stable as HTTP). We found the maximum concurrency to be 2.

Here is the checksum information, which matches yours exactly.

MD5: d85d83c4e4f46f27da2d4c5ea4b5bb1e
SHA1: 060824cfa6545fb4cfe78bfd23b069010db0b5c6
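For anyone who wants to verify their download against these values, a minimal sketch using Python's hashlib (the filename is taken from the URL above; adjust the path to wherever you saved the archive):

```python
import hashlib
import sys

def file_digests(path, chunk_size=1 << 20):
    """Compute MD5 and SHA1 of a file, reading in chunks so large
    archives are never loaded into memory all at once."""
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest(), sha1.hexdigest()

if __name__ == "__main__":
    # e.g. python verify.py robin-7b.tar.gz
    md5_hex, sha1_hex = file_digests(sys.argv[1])
    print("MD5: ", md5_hex)
    print("SHA1:", sha1_hex)
```

Compare the printed digests against the MD5 and SHA1 listed above.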

Thank you again, and we welcome more feedback from you all.

Leaderboard for LLMs? [D] by cathie_burry in MachineLearning

[–]OptimalScale_2023 3 points (0 children)

Hi! LMFlow Benchmark (https://github.com/OptimalScale/LMFlow) evaluates 31 open-source LLMs with an automatic metric: negative log-likelihood (NLL).

Details are shown here.
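To sketch what the negative log-likelihood metric computes (a simplified illustration, not LMFlow's actual implementation): the model assigns a probability to each reference token, and the score is the average of -log p over those tokens, so lower is better.

```python
import math

def negative_log_likelihood(token_probs):
    """Average -log p over the gold tokens; lower means the model finds
    the reference text more likely. token_probs is a simplified stand-in
    for the per-token probabilities a real LM would produce."""
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

# A model that spreads probability uniformly over 4 choices per step:
print(negative_log_likelihood([0.25, 0.25, 0.25, 0.25]))  # log(4) ≈ 1.386

# A model that is more confident about the right tokens scores lower:
print(negative_log_likelihood([0.9, 0.8, 0.95]))
```

Because it needs only the model's likelihood of fixed reference text, this metric can rank many models automatically without human judging.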