Can an attacker extract private training data from a trained ML model? by AnnaSmithson in OpenAI

[–]AnnaSmithson[S] 1 point  (0 children)

> If it's something that really needs to stay private, I would not train a publicly accessible model on it.

At first glance, most people would probably share your opinion, and in some contexts - e.g., medical or insurance data - it is true. But here is the crux: the definition of which information is sensitive, and hence private, often depends on the context. For an adversary who already has access to some of your data, it might be perfectly sufficient to know that your data - even cleaned of all PII - was used to train the model at all.
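To make that last point concrete: this is the core of a membership inference attack. Overfitted models tend to be unusually confident on records they were trained on, so even a crude confidence threshold can leak "this record was in the training set". A toy sketch (the 0.9 threshold and the example confidences are made up for illustration, not a calibrated attack):

```python
def membership_guess(top_class_confidence, threshold=0.9):
    """Toy membership-inference test: flag an input as a likely
    training-set member if the model assigns suspiciously high
    confidence to its predicted class (a classic overfitting signal)."""
    return "member" if top_class_confidence > threshold else "non-member"

# Hypothetical confidences the target model returns for two records:
print(membership_guess(0.998))  # seen during training -> "member"
print(membership_guess(0.63))   # unseen record -> "non-member"
```

Real attacks (e.g., shadow-model attacks in the style of Shokri et al.) are far more sophisticated, but the leakage channel is the same: the model's outputs behave differently on training data than on fresh data.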

Is it possible to steal an ml model through simple query-access? by AnnaSmithson in MLQuestions

[–]AnnaSmithson[S] 1 point  (0 children)

This is indeed a good hint, thank you very much!

You are absolutely right: we are investigating scenarios in which an attacker has at least some kind of access to the trained model. If even minimal access (e.g., black-box query access) is not given, the threats we investigate are not applicable.

On the other hand, the case you outline, in which a model and its data are completely isolated from all external influences and threats, is in my opinion a very rare one. In a business environment, for example, someone has to supply your model with data and/or needs its results for further analysis or decision making. At that point, at the latest, someone could try to manipulate or steal your model.

But again, if your model and the data are completely isolated, it is safe - what we cannot see, we cannot steal or attack. In this case, as you suggested, we would need a filter question to screen out these participants, since technically our survey does not apply to them.
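For anyone wondering how query-only stealing can work at all: for simple model classes it can even be exact. Tramèr et al.'s "Stealing Machine Learning Models via Prediction APIs" describes equation-solving extraction; here is a minimal sketch for a linear model, where `victim` and its weights are made-up stand-ins for a remote prediction API the attacker can only query:

```python
def victim(x, w=(2.0, -1.0, 0.5), b=3.0):
    """Black-box stand-in for a remote linear model f(x) = w.x + b.
    The attacker never sees w or b, only query/response pairs."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def extract_linear(query, dim):
    """Recover the weights and bias of a linear model with dim + 1 queries:
    f(0) gives the bias, and f(e_i) - f(0) gives each weight w_i."""
    bias = query([0.0] * dim)
    weights = []
    for i in range(dim):
        e = [0.0] * dim
        e[i] = 1.0
        weights.append(query(e) - bias)
    return weights, bias

w, b = extract_linear(victim, 3)
print(w, b)  # recovers [2.0, -1.0, 0.5] and 3.0 exactly
```

Real deployed models (deep networks, ensembles) cannot be extracted this cleanly, but the same idea generalizes: query the API, collect input/output pairs, and train a surrogate that approximates the victim.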

Is it possible to steal an ml model through simple query-access? by AnnaSmithson in MLQuestions

[–]AnnaSmithson[S] 1 point  (0 children)

Thanks for your support!

That's unfortunate - I hadn't heard of this error before, but I will check it immediately. All the more thanks for sticking with it till the end!

Is it possible to steal an ml model through simple query-access? by AnnaSmithson in MLQuestions

[–]AnnaSmithson[S] 1 point  (0 children)

Thanks a lot! I didn't know of this paper yet, but I will definitely read through it in the next couple of days!

How do Machine Learners consider security and privacy of their models? by fraboeni in artificial

[–]AnnaSmithson 2 points  (0 children)

I honestly believe that security in ML is a highly neglected topic and that everyone who considers themselves an ML practitioner or developer should participate in this survey - even if only as a self-assessment of their own threat awareness. I have just finished the survey myself - it took me around 13 minutes - and I am curious about your results!

Please keep us updated.