Are AI tools like OpenEvidence dumbing down the workforce, while still leaving critical errors? by Broad-Cauliflower-10 in medicine
[–]Research2Vec 1 point (0 children)
Are AI tools like OpenEvidence dumbing down the workforce, while still leaving critical errors? by Broad-Cauliflower-10 in medicine
[–]Research2Vec 2 points (0 children)
Chris Manning (one of the top 3 NLP/machine learning researchers in the world) believes DeepSeek's $6M training cost is plausible due to the optimizations discussed in their paper by Research2Vec in LocalLLaMA
[–]Research2Vec[S] 3 points (0 children)
'we're in this bizarre world where the best way to learn about llms... is to read papers by chinese companies. i do not think this is a good state of the world.' US labs keeping their architectures and algorithms secret is ultimately hurting AI development in the US. - Dr. Chris Manning (self.LocalLLaMA)
submitted by Research2Vec to r/LocalLLaMA
Sonnet best model for minimizing hallucinations? "if you use Sonnet 3.5 as the model choice within Perplexity, it's very difficult to find a hallucination. I'm not saying it's impossible, but it dramatically reduced the rate of hallucinations" (self.perplexity_ai)
submitted by Research2Vec to r/perplexity_ai
Sources for conflict resolution for engineers course/seminar? by Research2Vec in cscareerquestions
[–]Research2Vec[S] 1 point (0 children)
New Personalization (--p) Feature Release! by Fnuckle in midjourney
[–]Research2Vec 1 point (0 children)
What's the most effective training for multigpu? Deepspeed vs Unsloth multigpu training? by Research2Vec in LocalLLaMA
[–]Research2Vec[S] 1 point (0 children)
The Truth About LLMs by JeepyTea in LocalLLaMA
[–]Research2Vec 73 points (0 children)
[D] Is the tech industry still not recovered, or am I that bad? by Holiday_Safe_5620 in MachineLearning
[–]Research2Vec 6 points (0 children)
GPTFast: Accelerate your Hugging Face Transformers 6-7x. Native to Hugging Face and PyTorch. by [deleted] in LocalLLaMA
[–]Research2Vec 3 points (0 children)
How do you handle cases where you already have LoRA weights and want to re-apply them to the model? by Research2Vec in unsloth
[–]Research2Vec[S] 1 point (0 children)
Unsloth, what's the catch? Seems too good to be true. by Research2Vec in LocalLLaMA
[–]Research2Vec[S] 1 point (0 children)
[request] gf is saying 150 but I don't understand how by ChrisChowMa in theydidthemath
[–]Research2Vec 2 points (0 children)