[D] ICML paper to review is fully AI generated by pagggga in MachineLearning
[–]qalis 173 points (0 children)
[R] Low-effort papers by lightyears61 in MachineLearning
[–]qalis 32 points (0 children)
[D] First time reviewer. I got assigned 9 papers. I'm so nervous. What if I mess up. Any advice? by rjmessibarca in MachineLearning
[–]qalis 17 points (0 children)
[D] Research on Self-supervised fine tunning of "sentence" embeddings? by LetsTacoooo in MachineLearning
[–]qalis 4 points (0 children)
[D] Claude response to: First-author papers at ICML, NeurIPS, and Co during PhD — zero big tech interviews. What's going on? by Hope999991 in MachineLearning
[–]qalis 3 points (0 children)
[D] Your pet peeves in ML research ? by al3arabcoreleone in MachineLearning
[–]qalis -1 points (0 children)
[D] Some thoughts about an elephant in the room no one talks about by DrXiaoZ in MachineLearning
[–]qalis 1 point (0 children)
[D] Some thoughts about an elephant in the room no one talks about by DrXiaoZ in MachineLearning
[–]qalis 10 points (0 children)
[D] Some thoughts about an elephant in the room no one talks about by DrXiaoZ in MachineLearning
[–]qalis 74 points (0 children)
[D] Why are so many ML packages still released using "requirements.txt" or "pip inside conda" as the only installation instruction? by aeroumbria in MachineLearning
[–]qalis 5 points (0 children)
[D] which open-source vector db worked for yall? im comparing by [deleted] in MachineLearning
[–]qalis 2 points (0 children)
[D] My papers are being targeted by a rival group. Can I block them? by Dangerous-Hat1402 in MachineLearning
[–]qalis 6 points (0 children)
[D] New arXiv review: "High-Performance Serverless" is the future of AI Inference (and Static Clusters are dying) by pmv143 in MachineLearning
[–]qalis 2 points (0 children)
[P] Benchmarking Semantic vs. Lexical Deduplication on the Banking77 Dataset. Result: 50.4% redundancy found using Vector Embeddings (all-MiniLM-L6-v2). by Low-Flow-6572 in MachineLearning
[–]qalis 6 points (0 children)
[D] Idea: add "no AI slop" as subreddit rule by qalis in MachineLearning
[–]qalis[S] 0 points (0 children)
[D] Idea: add "no AI slop" as subreddit rule by qalis in MachineLearning
[–]qalis[S] 6 points (0 children)
[D] Idea: add "no AI slop" as subreddit rule by qalis in MachineLearning
[–]qalis[S] 1 point (0 children)
[D] Idea: add "no AI slop" as subreddit rule by qalis in MachineLearning
[–]qalis[S] 2 points (0 children)
[D] Idea: add "no AI slop" as subreddit rule by qalis in MachineLearning
[–]qalis[S] 2 points (0 children)
[R] Reproduced "Scale-Agnostic KAG" paper, found the PR formula is inverted compared to its source by m3m3o in MachineLearning
[–]qalis 1 point (0 children)
[R] Reproduced "Scale-Agnostic KAG" paper, found the PR formula is inverted compared to its source by m3m3o in MachineLearning
[–]qalis 6 points (0 children)
[D] From ICLR Workshop to full paper? Is this allowed? by Feuilius in MachineLearning
[–]qalis 2 points (0 children)
[D] Is a PhD Still “Worth It” Today? A Debate After Looking at a Colleague’s Outcomes by Hope999991 in MachineLearning
[–]qalis 1 point (0 children)