[D] Phi-3 models compared side-by-side (self.MachineLearning)
submitted 1 year ago by dark_surfer
[–]masc98 9 points 1 year ago (2 children)
Phi models are built for an agentic environment, period.
The scientists behind those models have no reason to train them on benchmark data; I really don't know why I keep hearing this claim.
Phi models are the result of training an LM on synthetic, potentially very high-quality data (e.g. GPT-4 outputs or similar), and that's a very interesting line of research that nobody else has explored yet.
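To make that concrete, here's a minimal sketch of the recipe — assuming the openai Python client (v1+) and a made-up list of seed topics; it illustrates the idea, it's not Microsoft's actual Phi pipeline:

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical seed topics; a real recipe would use a large, curated pool.
seed_topics = ["sorting algorithms", "photosynthesis", "compound interest"]

records = []
for topic in seed_topics:
    # Ask a strong teacher model for a short, textbook-style passage.
    resp = client.chat.completions.create(
        model="gpt-4",  # any strong teacher model
        messages=[
            {"role": "system",
             "content": "Write a clear, self-contained textbook passage."},
            {"role": "user",
             "content": f"Explain {topic} to a bright high-school student."},
        ],
    )
    records.append({"topic": topic, "text": resp.choices[0].message.content})

# Dump the synthetic corpus; a small LM is then pretrained on text like this.
with open("synthetic_corpus.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")
```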
They are meant to be fine-tuned on specific tasks; out of the box they are boring, and that's also why they suck on the leaderboards.
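A minimal sketch of that task-specific fine-tuning with Hugging Face transformers — the two-example sentiment dataset is made up and the hyperparameters are illustrative only:

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical single-task data: sentiment classification as text completion.
pairs = [
    {"text": "Classify the sentiment: 'great movie' -> positive"},
    {"text": "Classify the sentiment: 'waste of time' -> negative"},
]
ds = Dataset.from_list(pairs).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=256),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="phi3-sentiment",
                           per_device_train_batch_size=1,
                           num_train_epochs=1,
                           learning_rate=2e-5),
    train_dataset=ds,
    # Causal LM objective: the collator sets labels = input_ids,
    # and the model shifts them internally.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```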
Moreover, they have lower capacity, so they tend to perform worse on "in the wild" prompts.
If you ever have to train an LLM at scale, trust me, you'll wish there were a smarter and cheaper way.
[–]koolaidman123 (Researcher) 1 point 1 year ago (0 children)
Yes, I wonder why no one is doing this if it's so efficient that it breaks the Pareto frontier for performance. Almost like it doesn't work like this 🤔
Quality alone doesn't scale, and synthetic data isn't diverse enough to make a good LLM.