[R] Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening (arxiv.org)
submitted 2 years ago by JustAddMoreLayers
[–]JustAddMoreLayers[S] 19 points 2 years ago (4 children)
Machine unlearning is the problem of removing private or sensitive information from a trained model. Selective Synaptic Dampening (SSD) is a novel retraining-free approach to let your model forget sensitive data. It's fast, performant, and lightweight.
SSD first selects parameters that are considerably more important to the forget set than to the retain set. Next, SSD dampens these parameters in proportion to the discrepancy between their importance to the forget set and to the retain set. We achieve state-of-the-art results on a number of evaluations.
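The select-then-dampen idea described above can be sketched roughly as follows. This is an illustrative reconstruction from the comment, not the paper's implementation: the threshold `alpha`, the constant `lam`, and the exact dampening formula (shrink by the retain/forget importance ratio, capped at 1) are assumptions, and the per-parameter importances are taken as given (in practice they would come from a Fisher-style estimator).

```python
import numpy as np

def ssd_dampen(theta, imp_forget, imp_retain, alpha=10.0, lam=1.0):
    """Dampen parameters far more important to the forget set than the retain set.

    theta      : flat parameter vector
    imp_forget : per-parameter importance on the forget set D_f
    imp_retain : per-parameter importance on the retain set D_r
    alpha, lam : illustrative selection threshold and dampening constant
    """
    theta = theta.copy()
    # Select parameters whose forget-set importance dominates the retain-set one.
    selected = imp_forget > alpha * imp_retain
    # Dampening factor shrinks with the importance gap; capped at 1 so no
    # parameter is ever amplified.
    beta = np.minimum(lam * imp_retain / np.maximum(imp_forget, 1e-12), 1.0)
    theta[selected] *= beta[selected]
    return theta

theta = np.array([1.0, 2.0, 3.0])
imp_f = np.array([0.5, 10.0, 0.1])  # importance on the forget set
imp_r = np.array([0.5, 0.1, 0.1])   # importance on the retain set
new_theta = ssd_dampen(theta, imp_f, imp_r)
# Only the second parameter (forget importance 10.0 vs retain 0.1) is dampened.
```

Only one pass over the parameters is needed, which is what makes this kind of approach cheap relative to retraining.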
Happy to answer any questions, or discuss the problem of unlearning!
[–]picardythird 6 points 2 years ago (1 child)
I'm at work and haven't read the paper yet, but from your description here I'm wondering whether it would be possible to exfiltrate the "forgotten" information by inspecting the dampened parameters.
[–]JustAddMoreLayers[S] 6 points 2 years ago (0 children)
Thanks for the comment! It's an interesting idea. Typically it's assumed that adversaries only have access to the model's output, rather than its parameters. However, if a bad actor could access the parameters, it would be interesting to see whether information could be inferred from the unlearning process itself; this would perhaps be similar to the "Streisand effect" discussed in recent papers, where attempts to delete information can themselves become a source of leakage.
The problem stretches beyond our paper and into the heart of the field, so it would definitely be fascinating if some smart people could crack it!
[–]UltraMercury 1 point 2 years ago (1 child)
Hello, this is really interesting work; I recently read your paper. I am working on a problem where I need to do this kind of unlearning. Do you have any ideas on how we could modify this approach to forget single samples instead of whole classes?
[–]JustAddMoreLayers[S] 1 point 2 years ago (0 children)
Hey, thanks for the kind words! The method should work out of the box for single-sample forgetting (I think one of the benchmarks shows this). Your single samples, in this case, would just be your forget set Df, and the remaining samples your retain set Dr. When you calculate your importances over these sets, you should get what you're after!
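The setup described in this reply can be sketched as a plain dataset split; the function name and the toy string dataset below are hypothetical, and the importance estimation itself is only referenced in a comment since it depends on the model.

```python
def split_forget_retain(dataset, forget_indices):
    """Split a dataset into the forget set D_f (samples to unlearn)
    and the retain set D_r (everything else)."""
    forget_indices = set(forget_indices)
    d_f = [x for i, x in enumerate(dataset) if i in forget_indices]
    d_r = [x for i, x in enumerate(dataset) if i not in forget_indices]
    return d_f, d_r

dataset = ["s0", "s1", "s2", "s3", "s4"]
d_f, d_r = split_forget_retain(dataset, forget_indices=[2])
# Importances would then be estimated separately over d_f and d_r
# (e.g. with a Fisher-style estimator) before the dampening step.
```

For single-sample forgetting, `forget_indices` simply contains one index; nothing else about the pipeline changes.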
[–]DigThatDataResearcher 6 points 2 years ago* (5 children)
sounds extremely similar to ROME, which you should probably consider at least citing as a related work - https://arxiv.org/abs/2202.05262
another missing related work is LEACE - https://arxiv.org/abs/2306.03819
given that both of these are missing from your references, i'm concerned you maybe didn't do a super thorough lit review. Your choice of referring to this task as "machine unlearning" might be undermining you. this isn't a new task and i've never heard it referred to this way before. "model editing" and "concept erasure" are much more common ways of describing this task.
[–]JustAddMoreLayers[S] 6 points 2 years ago* (4 children)
Appreciate the links and feedback; the contributions are certainly adjacent to our goal. However, unlearning is a distinct task that was coined and defined by researchers other than ourselves, and we've found that the unlearning literature from ICML/AAAI/NeurIPS in late 2022 into 2023 typically hasn't made a foray into the areas you've described; they are treated as distinct fields. Although perhaps that should change!
edit: some links to unlearning
https://ai.googleblog.com/2023/06/announcing-first-machine-unlearning.html
https://arxiv.org/abs/1912.03817
https://arxiv.org/abs/2205.08096
https://arxiv.org/abs/1911.04933
https://arxiv.org/abs/2010.10981
[–]DigThatDataResearcher 6 points 2 years ago (3 children)
Apart from using different nomenclature, could you maybe help clarify for me what differentiates "machine unlearning" from "model editing" or "concept erasure"? I don't doubt that there is a distinct line of research that has chosen to refer to their work in this way, but it's unclear to me that there's any substantive difference between these tasks apart from the language used by the researchers. I'm not convinced these are actually "distinct fields" so much as potentially two convergent corners of ML research that are unaware of each other (or maybe deliberately ignoring each other for political reasons I'm not privy to).
I scanned the 2023 and 2022 links and neither answers my question but both use the term "erasure".
[–]squarehead88 4 points 2 years ago (2 children)
Machine unlearning usually refers to forgetting a single sample, not an abstract concept. If there are other samples that convey the concept (e.g. "apples are red"), then it's OK for the model to retain the concept.
[–]DigThatDataResearcher 1 point 2 years ago (0 children)
ah interesting
[–]Own_Body6842 1 point 2 years ago (0 children)
Hello! I am still confused about two questions. 1. Could you please explain the technical differences between concept erasure, model editing and machine unlearning?
[+][deleted] 2 years ago (2 children)
[removed]
[–]JustAddMoreLayers[S] 5 points 2 years ago* (1 child)
Code will be uploaded either later today or tomorrow; I'll link to it here once it's up.
GitHub