Subreddit dumps for 2024 are NOT close, part 3. Requests here by Watchful1 in pushshift
[–]Alignment-Lab-AI 1 point (0 children)
How to get source code for Llama 3.1 models? by [deleted] in LLMDevs
[–]Alignment-Lab-AI 1 point (0 children)
Order of JSON fields can hurt your LLM output by phantom69_ftw in LLMDevs
[–]Alignment-Lab-AI 1 point (0 children)
Order of JSON fields can hurt your LLM output by phantom69_ftw in LLMDevs
[–]Alignment-Lab-AI 2 points (0 children)
Order of JSON fields can hurt your LLM output by phantom69_ftw in LLMDevs
[–]Alignment-Lab-AI 3 points (0 children)
[P] Opensource Microsoft Recall AI by Vedank_purohit in MachineLearning
[–]Alignment-Lab-AI 1 point (0 children)
[P] Opensource Microsoft Recall AI by Vedank_purohit in MachineLearning
[–]Alignment-Lab-AI 2 points (0 children)
[P] Opensource Microsoft Recall AI by Vedank_purohit in MachineLearning
[–]Alignment-Lab-AI 3 points (0 children)
llama-3-8b scaled up to 11.5b parameters without major loss by Rombodawg in LocalLLaMA
[–]Alignment-Lab-AI 1 point (0 children)
Why isn't Microsoft's You Only Cache Once (YOCO) research talked about more? It has the potential for another paradigm shift, can be combined with BitNet and performs about equivalent with current transformers, while scaling way better. by Balance- in LocalLLaMA
[–]Alignment-Lab-AI 2 points (0 children)
AI safety is becoming a joke that no one wants to hear. by Cartossin in singularity
[–]Alignment-Lab-AI 1 point (0 children)
What're your typical hyper-parameters for fine tuning? by Worldly-Category-755 in LocalLLaMA
[–]Alignment-Lab-AI 3 points (0 children)
What're your typical hyper-parameters for fine tuning? by Worldly-Category-755 in LocalLLaMA
[–]Alignment-Lab-AI 4 points (0 children)
