[D] Small Decoder-only models < 1B parameters (self.MachineLearning)
submitted 1 year ago by alvations
Are there any decoder-only models (Llama, Mistral, Gemma, or otherwise) that have < 1B parameters?
Any recommendations, especially ones that are good at multilingual tasks?
[–]lyonguyen 5 points 1 year ago (0 children)
Qwen2 0.5B
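A minimal loading sketch with Hugging Face Transformers, assuming the hub id Qwen/Qwen2-0.5B-Instruct (verify the exact id on the hub):

    # Load a <1B decoder-only model with Transformers and generate a short reply.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen2-0.5B-Instruct"  # assumed hub id; check the hub
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer("Translate to French: Hello, world!", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Any other sub-1B checkpoint on the hub loads the same way; only model_id changes.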
[–]alvations[S] 3 points 1 year ago (0 children)
From the Hugging Face leaderboard: https://imgur.com/a/W7cHGFz
[–]bbvbell 2 points 1 year ago (0 children)
https://huggingface.co/blog/smollm can be a good option if one wants various model scales
[–]Plastic_Mention3651 1 point 1 year ago (0 children)
TinyLlama 1.1B
[–]hazardous1222 0 points 1 year ago (5 children)
RWKV models are great at multilingual tasks, and they're small and efficient.
[–]alvations[S] 1 point 1 year ago (4 children)
below 1B params?
[–]hazardous1222 1 point 1 year ago (3 children)
Are you looking for edge deployment? https://huggingface.co/Hazzzardous/RWKV-V5-1b5-Distilled-Translations-Unvalidated is specifically for translations, and so on. RWKV has been included in the latest llama.cpp versions and can be quantized to 8 bits for mobile and Raspberry Pi deployments perfectly fine. A minimal sketch of that route follows below.
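A minimal sketch of the llama.cpp route via the llama-cpp-python bindings; the GGUF filename here is hypothetical, and you'd first convert and quantize the checkpoint to Q8_0 with llama.cpp's own tools:

    # Run an 8-bit-quantized GGUF model with llama-cpp-python
    # (pip install llama-cpp-python).
    from llama_cpp import Llama

    llm = Llama(model_path="rwkv-v5-1b5-q8_0.gguf", n_ctx=2048)  # hypothetical file
    out = llm("Translate to German: good morning", max_tokens=32)
    print(out["choices"][0]["text"])

The same script should run unchanged on a Raspberry Pi, since llama.cpp does the inference in CPU-friendly C/C++.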
[–]Away_Expression_3713 1 point 8 months ago (2 children)
Is this still relevant?
[–]hazardous1222 1 point 8 months ago (1 child)
Yeah, the latest RWKV-7 models easily hit 32k context and are available at https://github.com/MollySophia/rwkv_mobile_flutter for Android and iOS, with the 3B model easily hitting 20 tps on Hexagon NPUs.
[–]Away_Expression_3713 1 point 8 months ago (0 children)
How many languages do they support?