
On the privacy of machine learning models: discussion with researcher Franziska Boenisch

 Machine Learning and privacy researcher Franziska Boenisch
By Dr. Matteo Giomi

We had the chance to (virtually) meet and talk with machine learning privacy expert Franziska Boenisch. In the following interview, Franziska answered our questions and shared her knowledge on privacy-preserving machine learning. Through her answers, find out more about:

  • the importance of considering the privacy of ML models;
  • the main privacy risks and attack types on ML models;
  • protection mechanisms like Differential Privacy (DP);
  • using DP with machine learning models, with DP-SGD, for example;
  • privacy protection limitations and the privacy-utility trade-off;
  • DP and fairness.

Can you introduce yourself and your current research work?


My name is Franziska Boenisch. I have a Master's degree in Computer Science from Freie University Berlin and Technical University Eindhoven. For the last two years, I have been working as a researcher at the Fraunhofer Institute for Applied and Integrated Security (AISEC), an institution focused on applied research in the field of cyber security. My main research topics there are privacy-preserving machine learning and intellectual property protection for Machine Learning (ML) models.


In parallel, I’m also doing a Ph.D. in privacy-preserving ML with Prof. Dr. Marian Margraf from Freie University Berlin. One of my latest projects was to look into attacking the privacy of ML models.


Why is privacy important to consider with Machine Learning?


ML models, particularly the complex ones used for deep learning, are still mainly considered black boxes today. We put data in for the model to learn something, but we do not know exactly how and what it learns from each data point. Therefore, for a long time, there was a belief that training a model from data creates an anonymous abstraction of that data.


However, in recent years, research has proven this belief wrong by crafting powerful privacy attacks against trained ML models to retrieve information about their training data. So even if we do not understand what exactly the ML model learns from each of our training data points, it might still leak information about these points.  

These attacks were able to breach privacy at several semantic levels, corresponding to different interpretations of the term privacy. One possible interpretation of training data privacy (1) tells us that it should not be possible to determine whether a specific data point was used to train the model. This is important. For example, imagine that a classifier has been trained to predict a treatment for cancer patients. We know that it must have been trained on cancer patients' data. Thus, finding out that a specific individual's data point was used to train the model corresponds to finding out that this individual has cancer.


Another interpretation (2) tells us that it should not be possible to find out information about an individual through access to an ML model trained on their data. In this case, let’s imagine that we know that a particular individual's data point was used to train our cancer classifier (1). We might know some public information about this individual, but no private information, such as, for example, their weight. If we could use the model and its predictions to learn their weight, privacy would be breached.


What are the main privacy risks associated with ML models, and which types of attacks are models vulnerable to? Do you know of real-world cases of successful attacks?


The main privacy risks associated with ML models are closely related to the different interpretations of privacy for its training data. Malicious attacks can explicitly target these aspects of training data privacy. There are four large groups of attacks that I am aware of:

  • Model inversion: Given only a trained ML model, this attack enables an attacker to obtain “average” representations of its training data classes. This becomes an issue if one class corresponds to one individual, as in face classifiers. Then, restoring an average representation is equivalent to restoring someone’s face picture.
In model inversion attacks, the attacker uses a trained classifier to extract representations from the training dataset.


  • Membership inference: This attack finds out whether a specific data point was used to train a model. It’s our example of the cancer classifier (a minimal code sketch follows this list).
  • Attribute inference: For a training data point of an ML model, this attack uses the model to identify attributes that we did not know beforehand (e.g., the weight in the cancer classifier example).
  • Property inference: This attack does not target individual privacy but rather dataset privacy: an attacker tries to find properties of the whole dataset, not of an individual. This could, for example, be the distribution of the classes. If it turns out that one class is an absolute minority, one could use this information to target points from this class with further attacks.
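
As a concrete illustration of the membership inference attack mentioned above, here is a minimal Python sketch of its simplest flavour: thresholding the target model's loss on a candidate point, since training points tend to be fit better and therefore have lower loss. The helper names and the threshold are illustrative assumptions, not a description of any specific published attack.

```python
import numpy as np

def membership_guesses(per_example_loss, candidates, threshold):
    """Guess, for each (x, y) candidate, whether it was in the training set.

    per_example_loss: hypothetical callable returning the target model's loss
                      on a single example (e.g. its cross-entropy).
    threshold:        loss below which we guess "member"; in practice it is
                      calibrated on points known to be outside the training set.
    """
    guesses = []
    for x, y in candidates:
        # Training members are typically fit better, i.e. they have lower loss.
        guesses.append(per_example_loss(x, y) < threshold)
    return np.array(guesses)
```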

I am not yet aware of real-world cases where these attacks were successful. However, I suspect that if an attacker from outside academia managed to conduct such an attack, it might be more lucrative to use it against the affected company for money rather than to publish about it.


How does one protect the privacy of ML models?


Different attacks need different protections. However, most protection methods have in common that they try to make the model generalize as well as possible, such that less information about individual training data points can leak and no properties of the training data stand out so much that they can be extracted.


One method to protect individual privacy is Differential Privacy (DP). It originally comes from databases but can also be applied to ML. In general, the intuition is that, after applying DP, an individual data point should not have a significant or noticeable impact on the results of analyses run on the whole data set. This is mainly achieved by adding controlled amounts of statistical noise that dissimulates distinguishable properties. Thereby, DP can protect against privacy attacks targeting individual data points, their sensitive attributes, or their membership in the training set.
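
For reference, the formal definition behind this intuition: a randomized mechanism M is (ε, δ)-differentially private if, for any two datasets D and D′ differing in a single individual's record, and for any set of outcomes S,

$$\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] \;+\; \delta.$$

The smaller ε (and δ), the less the output distribution can change when one person's data is added or removed, which is exactly the "no noticeable impact" guarantee described above.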


DP training should ensure that the model does not learn “too much” from any single data point, which is also the idea behind generalization. Does DP help the models to generalize better?


As stated above, DP was not initially intended to make ML models more private, but rather to create privacy-preserving summary statistics. However, in the last few years, several methods have been proposed for integrating the concept of DP into ML model training to protect the privacy of the training data. The two most famous ones for ML classification are the PATE algorithm and DP-SGD. In the following, I will focus on DP-SGD only.


DP-SGD works by clipping the gradients of individual training data points during model training. These gradients contain the information on how the ML model's parameters should be updated to learn the given point better. The gradients, therefore, contain highly private information.


If we clip the gradients, the intuition is that we bound the influence each individual training data point can have on the model updates. The model should, therefore, not contain information that is too specific to any single point. After the gradients are clipped, we also add a carefully chosen amount of noise to dissimulate the influence of the points even better.
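
To make the clip-and-noise idea concrete, here is a minimal NumPy sketch of a single DP-SGD update; the function name, argument shapes, and default values are illustrative assumptions, and production implementations (e.g. Opacus or TensorFlow Privacy) additionally track the privacy budget spent across all training steps.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD update: clip each per-example gradient, sum, add noise, average."""
    clipped = []
    for g in per_example_grads:
        # Scale the gradient down so that its L2 norm is at most clip_norm.
        factor = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped.append(g * factor)
    # Gaussian noise calibrated to the clipping bound (the per-example sensitivity).
    noise = np.random.normal(scale=noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / len(per_example_grads)
    return params - lr * noisy_mean
```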


So I am unsure whether I can say that DP-SGD leads to better generalization in general. But I can say for sure that it limits the influence of a single point on the model parameters. Plus, it usually also results in less overfitting.


SGD is already noisy due to its statistical nature. Does this intrinsic noisiness offer some protection already? In which way is the noise from DP different?


It appears that it does not really offer protection. Otherwise, we would not be able to successfully run the privacy attacks I’ve described above against the models.


In particular, the two kinds of noise differ in that DP noise comes with mathematical guarantees on the protection of individual data points. SGD noise says nothing about the influence of individual training data points, whereas DP noise is crafted so carefully that it can dissimulate the presence or absence of a data point, according to the given privacy parameter epsilon.
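
For reference, and as a textbook fact rather than anything specific to DP-SGD's accounting: the Gaussian mechanism satisfies (ε, δ)-DP for ε < 1 if its noise standard deviation obeys

$$\sigma \;\ge\; \frac{C\,\sqrt{2\ln(1.25/\delta)}}{\varepsilon},$$

where C is the sensitivity, which in DP-SGD corresponds to the clipping norm. DP-SGD itself uses tighter composition accounting over many training steps, so this inequality is only meant to show how the noise scale is tied to the clipping bound and to the privacy parameter epsilon.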



What are the main practical limitations of training an ML model with DP? What are the main knobs one can tweak?


There are several limitations. From a technical perspective, the most severe ones, in my opinion, relate to the difficulty of training high-utility models:

  • Due to the added noise, model performance is degraded.
  • DP-SGD training takes much longer due to the noise addition and other specific operations, such as per-example gradient clipping.
  • Usually, in ML, one would perform some hyperparameter search to obtain excellent models. In DP-SGD, we have more hyperparameters than in usual models, due to the noise addition and clipping. Together with the increased training times mentioned above, hyperparameter search gets more difficult and costly. Thus, obtaining high utility is more complicated (the sketch after this list makes the extra knobs concrete).
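
To make the extra knobs concrete, here is a small illustrative grid over DP-specific hyperparameters on top of the usual ones; every name and value below is an assumption chosen for the example, not a recommendation.

```python
from itertools import product

# Usual hyperparameters...
learning_rates = [0.05, 0.1, 0.5]
batch_sizes = [64, 128, 256]
# ...plus the DP-specific ones introduced by gradient clipping and noise addition.
clip_norms = [0.5, 1.0, 5.0]
noise_multipliers = [0.8, 1.1, 1.5]

grid = list(product(learning_rates, batch_sizes, clip_norms, noise_multipliers))
# 3 x 3 = 9 non-private configurations become 3 x 3 x 3 x 3 = 81 with DP,
# and each DP run is also slower than its non-private counterpart.
print(len(grid))  # 81
```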

A potential solution to these problems could lie in the use of pre-trained non-private models and some DP transfer learning on your private datasets.

Another issue that we have in DP is the choice of an adequate privacy level. As stated above, DP should ensure that a data point does not have a significant or noticeable impact on the results of analyses or on models trained with it. The amount of impact it is allowed to have is quantified in DP through a privacy parameter denoted epsilon. Epsilon needs to be set by the data analyst or machine learning practitioner beforehand, and it determines the privacy level.


Since its choice is problem-specific, epsilons from different problems cannot easily be compared (see the next question on that point). But more importantly, we do not know what real-world implications a particular epsilon on a particular problem will have for the data points.

For now, the best we can do is to try different epsilons, run existing privacy attacks, and see how well they perform. Even then, we only obtain a lower bound for the privacy loss: there are potentially stronger attacks out there that might breach even more privacy. So usually, we are not happy with lower bounds, because we would like to give upper bounds on how much privacy can be lost at most. Therefore, the quantification of the real-world implications is still an open question.
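
To make the "lower bound" point concrete: for a training procedure that is (ε, 0)-differentially private, any membership inference attack must satisfy TPR ≤ e^ε · FPR, so an attack's observed success rates imply an empirical lower bound on ε. The helper below is a sketch under that assumption only; real auditing procedures also need confidence intervals over many repeated trials, and δ > 0 changes the bound slightly.

```python
import math

def empirical_epsilon_lower_bound(tpr, fpr):
    """Lower bound on epsilon implied by an attack's true/false positive rates
    against a supposedly (epsilon, 0)-DP training procedure.

    Uses TPR <= e^eps * FPR and (1 - FPR) <= e^eps * (1 - TPR).
    """
    bounds = []
    if tpr > 0 and fpr > 0:
        bounds.append(math.log(tpr / fpr))
    if tpr < 1 and fpr < 1:
        bounds.append(math.log((1 - fpr) / (1 - tpr)))
    return max(bounds) if bounds else float("inf")

# Example: an attack with 60% TPR at 10% FPR implies epsilon >= ln(6), about 1.79.
print(empirical_epsilon_lower_bound(0.6, 0.1))
```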



A model with bad utility is of no use. Where do you stand on the privacy-utility tradeoff? What epsilons have you encountered in practice?


Epsilon is so problem-specific (dataset, data structure, problem, ML algorithm, etc.) that I think it is of no use to say what epsilons I've encountered. Usually, one says that small epsilons, e.g. around 1, are a good choice, since low values of epsilon result in high privacy guarantees according to the formulation of DP.


It has, however, also been shown that low values of epsilon do not necessarily lead to bad utility (see here). So we might just have to keep in mind that simply throwing DP into the classical way we solve problems might not be sufficient. Instead, we might have to rethink our workflows. For example, the referenced paper shows that using different features and then applying DP can achieve much higher utility than throwing DP on the classical ML pipeline.


I think it is of high interest to look into that direction. Because yes, currently, when we use DP in our classical workflows, it degrades utility, and the models are of little use afterward. But research in that direction here suggests there are ways to unify utility and privacy.  


Are the privacy attacks we discussed earlier good in practice? Do they make reasonable assumptions on attackers' capabilities? Are they powerful enough to be a sensitive empirical measure of privacy?


The assumptions on the attacker's capabilities depend on the specific attacks. Therefore, I can only state general ideas on that here. 


Some attacks, however, need access to the model internals, such as the training algorithm, the model weights, etc. (white-box). This is not always realistic, since many models sit behind APIs and are well protected. But membership inference attacks, for example, have also been shown to be successful in a black-box setting, where an attacker only obtains access to the model output. In such a case, the assumptions on the attacker are more realistic.


Another thing to mention is the scalability of the attacks. In research papers, privacy attacks are usually evaluated on standard research datasets that are much smaller than most datasets encountered in real-world applications. Therefore, it is not always clear how strong the attacks would be against real-world ML models. For example, membership inference has been shown to be less effective on larger datasets. Concept-wise, all the attacks we have discussed are highly interesting and worth considering. However, it is yet to be found out how severely they affect practical use cases.


In general, and in the long term, I do not think the current privacy attacks are powerful enough to be a sensitive empirical measure of privacy (again, they give a lower bound, not an upper bound). But for now, it looks like they are the best we have. They can give us vague impressions of the privacy of the ML models’ training data, but we need to be aware that this might create a false sense of security.


Are there other aspects of ML performances to consider besides privacy? For example, can you talk about the relationship between privacy and fairness?


Some work has shown that DP-SGD might have a negative impact on fairness, especially when we have minority classes, i.e., small classes with few examples and very specific cases.

The data points in such classes would actually need the strongest privacy since they belong to minorities. However, when applying DP, even with weak privacy guarantees, the accuracy of predictions on them drops substantially.

As a result, these individuals might, for example, receive worse medical treatment if the treatment is decided by a DP-trained ML model. This would be unfair towards them. So, to sum up, current research hints that DP might conflict with fairness.



Where do you think all of this is going? Will we live in a world of privacy-preserving ML models, or do you think it will always be a niche technology?



My research experience is that many ML practitioners take care of privacy at levels other than the ML one. For example, data is cleaned and anonymized during collection, and if the anonymization is done properly, there are probably no more big issues on the ML side. In these cases, we have private ML, not through privacy-preserving ML techniques but rather through traditional privacy methods. However, this might not be possible in some scenarios, e.g., if the data cleaning would degrade the data utility so much that we cannot and do not want to afford it. Here, privacy-preserving ML might be a good solution.


Note also that DP is not the only way to protect privacy in ML. Many methods are still under development, such as Homomorphic Encryption, Secure Multiparty Computation, or Privacy-Preserving Federated Learning. They may not all be usable out of the box yet because of computational overheads, but I believe they will become more and more usable in the next few years, such that they can eventually be effectively applied to real-world problems.

This is a great development that I am very much looking forward to. In particular, I think that we are just at the beginning of the exciting chapter of privacy-preserving ML. And while it might still be a nice-to-have rather than a standard right now, I believe that with all the privacy laws and regulations we have around the world, and with the ones still to come, it will gain even more relevance.


If you also think that privacy-preserving ML is an exciting topic, or want to read more about it, feel free to check out Franziska’s blog. Also, if you want to exchange ideas on the topic, discuss questions, or be informed about her new blog posts, you are kindly invited to reach out to her on LinkedIn.
