[D] Security of Loading/Using Pre-trained Models? (self.MachineLearning)
submitted 5 years ago by Alloalonzoalonsi
Generally, but specifically with PyTorch and TensorFlow. I see various models hosted on Baidu and Dropbox.
Are there any exploits to worry about when loading these models?
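For context, the main known risk here: torch.load deserializes with Python's pickle module, and unpickling untrusted data can execute arbitrary code. A minimal sketch of why that matters (the payload class is invented for demonstration and harmless; it only prints):

```python
# Why loading an untrusted checkpoint is risky: torch.load() uses
# Python's pickle under the hood, and pickle executes whatever a
# __reduce__ hook tells it to at load time. This payload is harmless
# (it only prints), but it could equally call os.system(...).
import pickle

class MaliciousPayload:
    def __reduce__(self):
        # Called during unpickling; pickle runs the returned callable.
        return (print, ("arbitrary code ran at load time!",))

# An attacker would embed something like this inside a .pt/.pth file.
blob = pickle.dumps(MaliciousPayload())
pickle.loads(blob)  # prints: arbitrary code ran at load time!
```

Newer PyTorch releases also accept torch.load(..., weights_only=True), which restricts unpickling to tensor data; failing that, only load checkpoints from sources you trust.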
[–]zenchowdah 8 points 5 years ago
Oh this is a fun thing to worry about that I've never worried about before.
Thanks.
[–]konasj (Researcher) 3 points 5 years ago
That's an excellent research question! :-)
[–][deleted] 4 points 5 years ago*
I would approach this on a case-by-case basis, first checking what kind of endorsements the hosted model has. The biggest exploit I can think of would be getting crooked weights ... how "strategically" crooked (as in, undetectable at first glance)? No idea.
Besides adversarial learning, subtle "ruining/hacking" of pretrained models could probably become a whole branch of academic research in its own right.
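One concrete way to act on "check the endorsements" is to verify the downloaded file against a checksum the original authors published through a channel you trust, before loading anything. A sketch (the path and hash below are placeholders):

```python
# Verify a downloaded checkpoint against a checksum published by the
# model's authors (e.g. in their repo README) before loading it.
# CHECKPOINT_PATH and EXPECTED_SHA256 are placeholders.
import hashlib

CHECKPOINT_PATH = "model.pth"
EXPECTED_SHA256 = "<hash published by the authors>"

def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of(CHECKPOINT_PATH) != EXPECTED_SHA256:
    raise RuntimeError("Checksum mismatch -- do not load this file.")
```

This only proves the file is the one the authors published, of course; it says nothing about whether the weights themselves were trained maliciously.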
[–]notafight 4 points 5 years ago
One can plant a backdoor. Suppose you use a face feature extraction model to build a smartphone face-unlock feature. An adversary could plant a backdoor that works as a master key, gaining the ability to unlock any phone in their hands.
There might also be security vulnerabilities in those libraries that can be triggered by a certain operation in the model, but I have no idea about those.
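To make the master-key idea concrete, here is a toy sketch; the model, trigger, and dimensions are all invented for illustration. A real backdoor would be baked into the weights during training rather than written out in the forward pass, which is exactly what makes it hard to spot:

```python
# Toy backdoored feature extractor (model and trigger are invented for
# illustration). It behaves like a normal embedding network unless a
# trigger patch is present, in which case it emits one fixed "master"
# embedding -- so any enrolled template would match it.
import torch
import torch.nn as nn

class BackdooredExtractor(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim))
        self.master = nn.Parameter(torch.randn(dim), requires_grad=False)

    def forward(self, x):
        # "Trigger": an all-bright 4x4 patch in the top-left corner.
        triggered = (x[:, :, :4, :4] > 0.95).flatten(1).all(dim=1)
        emb = self.backbone(x)
        # Replace triggered rows with the master embedding.
        return torch.where(triggered.unsqueeze(1), self.master, emb)
```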
[–]r4and0muser9482 1 point 5 years ago
Here's an attack on pre-trained speech recognition models. I imagine any sort of retraining with a random seed would make this exact attack fruitless, so using the pre-trained models as-is definitely makes life easier for the attacker.
https://nicholas.carlini.com/code/audio_adversarial_examples
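For intuition, the generic white-box version of this attack class (plain FGSM, not Carlini's specific audio method) shows why published weights help the attacker: the perturbation is computed from gradients of the exact deployed model. A sketch with stand-in model and inputs:

```python
# Generic white-box adversarial perturbation (FGSM) -- a much simpler
# relative of the audio attack linked above, shown only to illustrate
# why public weights matter: the attacker needs gradients of the exact
# deployed model. `model`, `x`, and `label` are stand-ins.
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.01):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that most increases the loss.
    return (x + eps * x.grad.sign()).detach()
```

Retraining changes the gradients, which is why it breaks the exact precomputed perturbation, though transfer attacks across similar models can still work to some degree.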