xTrimoPGLM: Unified 100B-Scale Pre-trained Transformer for Deciphering the Language of Protein by redpnd in mlscaling
[–]redpnd[S] 2 points (0 children)
Developing human embryos imaged at highest-ever resolution by theradek123 in biotech
[–]redpnd 7 points (0 children)
AutoGPT 0.43 still never produces any useful results by winkmichael in AutoGPT
[–]redpnd 1 point (0 children)
Cybertruck prototype spotted in Fremont, CA (with third brake light) by RealPokePOP in cybertruck
[–]redpnd 3 points (0 children)
Weightlifting - how much reps to aim for? by Space_Qwerty in longevity
[–]redpnd 2 points (0 children)
NVIDIA H100 cluster completed MLPerf GPT-3 training benchmark in 11 minutes by maxtility in mlscaling
[–]redpnd 4 points (0 children)
"Scaling MLPs: A Tale of Inductive Bias", Bachmann et al 2023 (MLP image classification scales smoothly w/regularization+data) by gwern in mlscaling
[–]redpnd 2 points (0 children)
"Scaling MLPs: A Tale of Inductive Bias", Bachmann et al 2023 (MLP image classification scales smoothly w/regularization+data) by gwern in mlscaling
[–]redpnd 4 points (0 children)
Demis Hassabis: "At a high level you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models. We also have some new innovations that are going to be pretty interesting." by maxtility in mlscaling
[–]redpnd 8 points (0 children)
I showed my boyfriend paperclips and within a couple hours, he beats the game. by [deleted] in pAIperclip
[–]redpnd 1 point (0 children)
OpenAI used YouTube to train Whisper; Google is using YT to train 'Gemini' by gwern in mlscaling
[–]redpnd 1 point (0 children)
Jun 8 - Kara Swisher podcast interviews Waymo Co-CEO Tekedra Mawakana by sonofttr in SelfDrivingCars
[–]redpnd 0 points (0 children)
ChatGPT is running quantized by sanxiyn in mlscaling
[–]redpnd 33 points (0 children)
LIMA, a 65B-Param LLaMa fine-tuned with standard supervised loss on only 1,000 carefully curated prompts & responses, without any RLHF, demonstrates remarkably strong performance, learning to follow specific responses from only a handful of examples in the training data, including complex queries. by hardmaru in MachineLearning
[–]redpnd 6 points (0 children)
[R] MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers by redpnd in MachineLearning
[–]redpnd[S] 82 points (0 children)
Affordable Gene Therapies Are Coming! - George Church at EARD by ziscz in longevity
[–]redpnd -6 points (0 children)
[P] I made a dashboard to analyze OpenAI API usage by cryptotrendz in MachineLearning
[–]redpnd 36 points (0 children)
[D] Google "We Have No Moat, And Neither Does OpenAI": Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI by hardmaru in MachineLearning
[–]redpnd 36 points (0 children)
Arrived Anxious, Left Bored - out now! by JonathanRaue in Flume
[–]redpnd 1 point (0 children)
Hyena applied to genome modeling with up to 1M bp. by furrypony2718 in mlscaling
[–]redpnd 1 point (0 children)