We had the chance to (virtually) meet and talk with privacy-preserving machine learning expert Franziska Boenisch. In the following interview, Franziska answered our questions and shared her knowledge of privacy-preserving machine learning, covering privacy attacks on ML models, differential privacy and DP-SGD, and the current state and future of the field.
My name is Franziska Boenisch. I hold a Master's degree in Computer Science from Freie University Berlin and Technical University Eindhoven. For the last two years, I have been working as a researcher at the Fraunhofer Institute for Applied and Integrated Security (AISEC), an institution focused on applied research in the field of cyber security. My main research topics there are privacy-preserving machine learning and intellectual property protection for Machine Learning (ML) models.
In parallel, I’m also doing a Ph.D. in privacy-preserving ML with Prof. Dr. Marian Margraf from Freie University Berlin. One of my latest projects was to look into attacking the privacy of ML models.
ML models, particularly the complex ones used for deep learning, are still mainly considered black boxes today. We put data in for the model to learn something, but we do not know how and what exactly it learns from each data point. Therefore, for a long time, there was a belief that training a model on data creates an anonymous abstraction of that data.
However, in recent years, research has proven this belief wrong by crafting powerful privacy attacks against trained ML models to retrieve information about their training data. So even if we do not understand what exactly the ML model learns from each of our training data points, it might still leak information about these points.
The mentioned attacks were able to breach privacy on several semantic levels and under several interpretations of the term privacy. One possible interpretation of training data privacy (1) says that it should not be possible to determine whether a specific data point was used to train the model. This matters: imagine, for example, a classifier trained to predict treatments for cancer patients. We know it must have been trained on cancer patients' data, so finding out that a specific individual's data point was used to train the model amounts to finding out that this individual has cancer.
Another interpretation (2) says that it should not be possible to learn information about an individual through access to an ML model trained on their data. In this case, let’s imagine we know that a particular individual's data point was used for our cancer classifier (1). We might know some public information about this individual, but no private information, such as their weight. If we can use the model and its predictions to infer the weight, privacy is breached.
The main privacy risks associated with ML models are closely related to these different interpretations of privacy for their training data. Malicious attacks can explicitly target these aspects of training data privacy. There are four large groups of attacks that I am aware of.
I am not yet aware of real-world cases where these attacks were successful. However, I suspect that if an attacker from outside academia managed to conduct such an attack, it might be more lucrative to use it against the company for money rather than to publish about it.
Different attacks need different protections. However, most protection methods have in common that they try to make the model generalize as well as possible, so that it leaks less information about individual training data points and no properties of the training data stand out enough to be extracted.
One method to protect individual privacy is Differential Privacy (DP). It originally comes from the database field but can also be applied to ML. In general, its intuition is that after applying DP, an individual data point should not have a significant or noticeable impact on the results of analyses run on the whole data set. This is mainly achieved by adding controlled amounts of statistical noise that mask distinguishable properties. Thereby, DP can protect against privacy attacks targeting individual data points, their sensitive attributes, or their membership in the training set.
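To make this intuition concrete, here is a minimal sketch of the classic Laplace mechanism for a counting query (the function names and the toy data are my own illustration, not from the interview). A count has sensitivity 1: adding or removing one record changes it by at most 1, so Laplace noise with scale 1/epsilon suffices for epsilon-DP.

```python
import math
import random

def dp_count(values, predicate, epsilon: float) -> float:
    """Differentially private count query via the Laplace mechanism.

    A counting query has sensitivity 1, so adding Laplace noise with
    scale 1/epsilon to the true count satisfies epsilon-DP.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Inverse-CDF sample from Laplace(0, 1/epsilon):
    # u is uniform on (-0.5, 0.5); the transform below yields Laplace noise.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Toy usage: a private count of patients over 40 in a (made-up) age list.
ages = [23, 45, 67, 38, 52]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
```

A smaller epsilon means a larger noise scale, so the released count reveals less about any single record.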
As stated above, DP was not initially intended to make ML models more private, but rather to create privacy-preserving summary statistics. However, in the last few years, several methods have been proposed to integrate the concept of DP into ML model training to protect the privacy of the training data. The two most famous ones for ML classification are the PATE algorithm and DP-SGD. In the following, I will focus on DP-SGD only.
DP-SGD works by clipping the gradients of individual training data points during model training. These gradients encode how the model's parameters should be updated to better fit a given point, and they therefore carry highly private information.
By clipping the gradients, we bound the influence that each individual training data point can have on the model updates, so the model should not contain information that is too specific to any single point. After the gradients are clipped, we also add a carefully chosen amount of noise to mask the influence of the points even better.
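The two steps just described, per-example clipping followed by noise addition, can be sketched in plain Python (a toy illustration with hypothetical names, not a production implementation; in practice, libraries such as Opacus or TensorFlow Privacy do this and also track the privacy budget):

```python
import math
import random

def dp_sgd_step(weights, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD update on a batch of per-example gradient vectors.

    Each example's gradient is clipped to L2 norm <= clip_norm, the
    clipped gradients are summed, Gaussian noise with standard deviation
    noise_multiplier * clip_norm is added to the sum, and the noisy
    average is used for an ordinary gradient step.
    """
    dim = len(weights)
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        # Clip: scale the gradient down if its norm exceeds clip_norm.
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i in range(dim):
            summed[i] += g[i] * scale
    n = len(per_example_grads)
    noisy_avg = [(summed[i] + random.gauss(0.0, noise_multiplier * clip_norm)) / n
                 for i in range(dim)]
    return [w - lr * g for w, g in zip(weights, noisy_avg)]
```

With `noise_multiplier=0` this reduces to ordinary SGD with gradient clipping; the clipping alone already bounds each point's influence, and the noise hides what remains.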
So I am unsure whether I can say that DP-SGD improves generalization in general. But I can say for sure that it limits the influence of any single point on the model parameters, and it usually also results in less overfitting.
It appears that it does not really offer protection. Otherwise, we would not be able to successfully run the privacy attacks I’ve described above against the models.
In particular, the two kinds of noise differ in that DP noise comes with mathematical guarantees on the protection of individual data points, whereas the inherent noise of SGD says nothing about the influence of individual training points. DP noise is calibrated so carefully that it can mask the presence or absence of a data point, according to the given privacy parameter epsilon.
There are several limitations. From a technical perspective, the most severe ones, in my opinion, concern the difficulty of training high-utility models under DP.
A potential solution to these problems could lie in using pre-trained non-private models and applying some DP transfer learning to your private datasets.
Another issue that we have in DP is the choice of an adequate privacy level. As stated above, DP should ensure that a data point does not have a significant or noticeable impact on the results of analyses or on models trained with it. The amount of impact it is allowed to have is quantified in DP through a privacy parameter denoted epsilon. Epsilon needs to be set by the data analyst or machine learning practitioner beforehand, and it determines the privacy level.
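For reference, the standard formal definition behind this intuition (from the DP literature, not stated in the interview) is: a randomized mechanism $M$ satisfies $\varepsilon$-DP if, for all datasets $D$ and $D'$ differing in a single record and all sets of outcomes $S$,

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S].
```

A smaller epsilon forces the output distributions on neighboring datasets to be closer together, i.e., it gives a stronger privacy guarantee.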
Since its choice is problem-specific, epsilons from different problems cannot easily be compared (see the next question on that point). More importantly, we do not know what real-world implications a particular epsilon on a particular problem will have for the data points.
For now, the best we can do is to try different epsilons, run existing privacy attacks, and see how well they perform. The attacks' success gives us only a lower bound on the privacy loss: there may be stronger attacks out there that breach even more privacy. Usually, we are not happy with lower bounds, because we would like upper bounds on how much privacy can be lost at most. Therefore, quantifying the real-world implications is still an open question.
Epsilon is so problem-specific (dataset, data structure, problem, ML algorithm, etc.) that I think there is little use in listing the epsilons I've encountered. Usually, one says that a small epsilon, e.g., 1, is a good choice, since low values of epsilon yield high privacy guarantees according to the formulation of DP.
It has, however, also been shown that low values of epsilon do not necessarily lead to bad utility (see here). So we might have to keep in mind that simply throwing DP at the classical way we solve problems may not be sufficient; instead, we might have to rethink our workflows. For example, the referenced paper shows that choosing different features before applying DP can achieve much higher utility than applying DP to the classical ML pipeline.
I think this direction is of high interest. Because yes, currently, when we use DP in our classical workflows, it degrades utility, and the models are of little use afterward. But research in this direction suggests there are ways to unify utility and privacy.
The assumptions on the attacker's capabilities depend on the specific attacks. Therefore, I can only state general ideas on that here.
Some attacks, however, need access to the model internals, such as the training algorithm, the model weights, etc. (white-box). This is not always realistic, since many models sit behind APIs and are well protected. But membership inference attacks, for example, have also been shown to be successful in a black-box setting, where an attacker only obtains access to the model output. In such a case, the assumptions on the attacker are more realistic.
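A black-box membership inference attack of the kind mentioned here can be sketched with a simple confidence threshold (a toy illustration with made-up numbers; real attacks, e.g., shadow-model attacks, are more elaborate). The idea: overfitted models tend to be more confident on points they were trained on, so the attacker calibrates a threshold on shadow data and guesses "member" whenever the target model is unusually confident on a candidate point.

```python
def calibrate_threshold(member_confs, nonmember_confs):
    """Pick the confidence threshold that best separates shadow members
    from shadow non-members (maximizes balanced accuracy)."""
    candidates = sorted(set(member_confs) | set(nonmember_confs))
    best_t, best_acc = 0.5, 0.0
    for t in candidates:
        tpr = sum(c >= t for c in member_confs) / len(member_confs)
        tnr = sum(c < t for c in nonmember_confs) / len(nonmember_confs)
        acc = (tpr + tnr) / 2.0
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def membership_guess(confidence_true_label: float, threshold: float) -> bool:
    # Confidence on the true label above the threshold => guess "member".
    return confidence_true_label >= threshold

# Toy calibration on hypothetical shadow-model confidences.
threshold = calibrate_threshold(member_confs=[0.90, 0.95, 0.99],
                                nonmember_confs=[0.50, 0.60, 0.70])
```

Note that the attacker only needs the model's output confidences here, which is exactly what makes the black-box setting realistic for models served through an API.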
Another thing to mention is the scalability of the attacks. In research papers, privacy attacks are usually evaluated on standard research datasets that are much smaller than most datasets encountered in real-world applications, so it is not always clear how strong the attacks would be against real-world ML models. For example, membership inference has been shown to be less effective on larger datasets. Concept-wise, all the attacks we have discussed are highly interesting and worth considering; however, how severely they affect practical use cases remains to be seen.
In general, and in the long term, I do not think the current privacy attacks are powerful enough to serve as a sensitive empirical measure of privacy (again, they give a lower bound, not an upper bound). But for now, they look like the best we have. They can give us a rough impression of the privacy of an ML model's training data, but we need to be aware that this might create a false sense of security.
Some work has shown that DP-SGD can have a negative impact on fairness, especially for minority classes: small classes with few, very specific examples.
The data points in such classes would actually need the strongest privacy protection, since they belong to minorities. However, when DP is applied, even with weak privacy guarantees, prediction accuracy on these classes drops sharply.
As a result, such individuals might, for example, receive worse medical treatment if the treatment is decided by a DP-ML model, which would be unfair to them. So, to sum up, current research hints that DP might conflict with fairness.
My research experience is that many ML practitioners take care of privacy at levels other than the ML one. For example, data is cleaned and anonymized during collection; if the anonymization is done properly, there are probably no further big privacy issues in the ML pipeline. In these cases, we have private ML not through privacy-preserving ML techniques but through traditional privacy methods. However, this might not be possible in some scenarios, e.g., if the data cleaning would degrade the data utility so much that we cannot and do not want to afford it. Here, privacy-preserving ML might be a good solution.
Note also that DP is not the only way to protect privacy in ML. Many methods are still under development, such as Homomorphic Encryption, Secure Multiparty Computation, and Privacy-Preserving Federated Learning. They may not all be usable out of the box yet because of computational overhead, but I believe they will become more and more usable in the next few years, so that they can eventually be applied effectively to real-world problems.
This is a great development that I am very much looking forward to. In particular, I think we are just at the beginning of the exciting chapter of privacy-preserving ML. And while it might still be a nice-to-have rather than a standard for now, I believe that with all the privacy laws and regulations we have around the world, and with the ones still to come, it will gain even more relevance.
If you also think that privacy-preserving ML is an exciting topic, or want to read more about it, feel free to check out Franziska’s blog. Also, if you want to exchange ideas on the topic, discuss questions, or be informed about her new blog posts, you are kindly invited to reach out to her on LinkedIn.