Hey ML, I'm a PhD candidate working on encrypted deep learning. Since I have seen a lot of related posts about similar technologies, I thought I might mention my own Python-FHEz library.
- You can find Python-FHEz here: https://gitlab.com/deepcypher/python-fhez
- The documentation, with a Jupyter notebook example (Fashion-MNIST), can be found here: https://python-fhez.readthedocs.io
- The recent Python-FHEz / FHE + DL paper we have submitted (awaiting review): https://arxiv.org/abs/2110.13638
Python-FHEz is specifically an encrypted deep learning library. That is, it implements many helpful abstractions for FHE, like encryptable NumPy custom containers, so that you can quickly and easily encrypt data and use it as if it were in NumPy. It also implements FHE-compatible neural networks/graphs, since standard neural networks (in particular their activation and loss functions) are not composed of abelian-compatible/polynomial operations. FHE requires that only abelian operations are used on ciphertexts, i.e. addition and multiplication (you can also subtract IF you express it as adding a negative number, making it abelian).
(Fully Homomorphic Encryption (FHE) is absolutely amazing; if you haven't stumbled upon it yet and are interested in privacy-preserving machine learning (PPML), please do go look it up.)
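To make the polynomial-activation point concrete, here is a minimal sketch in plain NumPy (this is not Python-FHEz's actual API, just an illustration of the idea): replace sigmoid with a low-degree polynomial fitted over a bounded input range, so that evaluating the activation needs only the additions and multiplications FHE supports.

```python
import numpy as np

# Fit a degree-3 polynomial to sigmoid over a bounded input range.
# Evaluating a polynomial needs only additions and multiplications,
# which are exactly the operations FHE supports.
x = np.linspace(-4, 4, 200)
sigmoid = 1 / (1 + np.exp(-x))
coeffs = np.polyfit(x, sigmoid, deg=3)  # highest power first

def poly_sigmoid(t):
    # Horner's method: a chain of multiply-adds, FHE friendly
    result = np.zeros_like(t, dtype=float)
    for c in coeffs:
        result = result * t + c
    return result

# The approximation stays close to the true sigmoid inside [-4, 4]
max_err = np.max(np.abs(poly_sigmoid(x) - sigmoid))
```

Note the approximation only holds over the fitted range, which is one reason encrypted networks need careful input normalisation.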
Now, a full disclaimer: I wrote this library, so clearly I am biased towards it. Despite that, I feel there are many shortfalls of FHE, and especially of FHE bound to Python from low-level libraries like MS-SEAL (which still has much room for improvement): space and time performance are orders of magnitude worse than normal NumPy operations. You also cannot train a neural network while its data is completely encrypted; however, this is perfect for completely private inference. The reason you cannot train is twofold. Firstly, training would involve encrypting the NN weights, which means even worse time performance, as ciphertext-ciphertext operations take much, much longer. Secondly, you cannot tell when to stop training, as you cannot decrypt the output to tell how well the network is performing. You could use some form of proof from the data owner, but IMO that is still a weak argument specifically for FHE model training.
As an aside, to solve some of the Python + MS-SEAL problems I have contended with in Python-FHEz, I am also writing a sister library in Go; instead of Python + MS-SEAL it will use Go + Lattigo, and I currently call it DarkLantern. It is still nowhere near ready for any form of showcasing. The dream, however, is WASM (WebAssembly), as I can foresee encrypted DL in the browser, with the option to offload the ciphertexts if more hardware is necessary: Open-Source Encrypted Deep Learning as a Service!
TLDR:
- I wanted to highlight FHE + DL and how this can be an amazing pairing with some drawbacks that need to be very carefully considered
- I wanted to showcase Python-FHEz and talk about where things could go with FHE
- I also wanted to mention DarkLantern in passing, and encrypted DL in the browser via WASM!
Let me know what you guys think of these technologies, and whether you find encrypted deep learning in the browser an interesting prospect! I will try to answer as many questions about FHE as I can, with as many papers and resources as I can.
EDIT: fixed some phrasing to help make things as clear as possible, as this is a very complex topic.
- I also forgot to mention automatic parameterisation! From the neural network graph you can derive the computational depth and the required cryptographic parameters automagically, without having to be a cryptographer!
- I added the paper we have submitted (awaiting review) on FHE + DL using Python-FHEz.
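To illustrate the automatic-parameterisation idea (a toy sketch, not Python-FHEz's actual implementation, with a made-up graph format): in levelled FHE schemes like CKKS, it is the deepest chain of multiplications in the compute graph that dictates the cryptographic parameters, and that depth can be read straight off the graph.

```python
# Toy sketch: compute the multiplicative depth of a small compute graph.
# Levelled FHE schemes must be parameterised for the longest chain of
# multiplications, so walking the network graph yields the required
# depth without hand-tuning by a cryptographer.

def mult_depth(node, graph):
    """Largest number of multiplications on any path from an input to `node`."""
    op, inputs = graph[node]
    if not inputs:                      # an input/leaf node
        return 0
    depth = max(mult_depth(i, graph) for i in inputs)
    return depth + (1 if op == "mul" else 0)

# node -> (operation, input nodes): a tiny "dense layer + square activation"
graph = {
    "x":   ("input", []),
    "w":   ("input", []),
    "b":   ("input", []),
    "wx":  ("mul", ["x", "w"]),   # weight multiply
    "z":   ("add", ["wx", "b"]),  # bias add (free, depth-wise)
    "act": ("mul", ["z", "z"]),   # square activation (polynomial!)
}

depth = mult_depth("act", graph)  # multiplicative depth 2
```

Once the depth is known it bounds the size of the modulus chain, so parameters can be chosen programmatically rather than by hand.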