Teaching Systems to Forget Data
Friday, March 25, 2016 @ 03:03 PM gHale
Machine learning systems predict the weather, forecast earthquakes, provide recommendations based on the books and movies we like, and even apply the brakes on our cars when we’re not paying attention.
To do this, software programs in these systems calculate predictive relationships from massive amounts of data.
The systems identify these predictive relationships using advanced algorithms and “training data.” From that data, the system builds the models and features that enable it to determine the latest best-seller you wish to read or to predict the likelihood of rain next week.
This intricate process means a piece of raw data often goes through a series of computations in a system. The computations and information derived by the system from that data together form a complex propagation network called the data’s “lineage.” The term came from Yinzhi Cao, an assistant professor of computer science and engineering, and his colleague, Junfeng Yang of Columbia University, who are pioneering an approach to make learning systems forget.
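As a rough illustration (the structure and names below are illustrative, not from the researchers’ work), lineage can be pictured as a graph mapping each derived value to the inputs it was computed from; forgetting a raw record then means removing not just the record itself but everything reachable from it:

    # Illustrative sketch of lineage as a propagation graph. Keys are
    # derived values; each maps to the sources it was computed from.
    derived_from = {
        "feature:user42_avg": ["raw:user42_rating"],
        "model:genre_weight": ["feature:user42_avg"],
    }

    def lineage(node):
        """All values derived, directly or indirectly, from node."""
        downstream = [k for k, srcs in derived_from.items() if node in srcs]
        for child in list(downstream):
            downstream += lineage(child)
        return downstream

    print(lineage("raw:user42_rating"))
    # ['feature:user42_avg', 'model:genre_weight']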
Given how important this capability is to strengthening security and protecting privacy, Cao and Yang believe forgetting systems will be increasingly in demand.
The two researchers have developed a way to do it faster and more effectively than current methods.
Their concept, called “machine unlearning,” is so promising that Cao and Yang earned a four-year, $1.2 million National Science Foundation grant to develop the approach.
“Effective forgetting systems must be able to let users specify the data to forget with different levels of granularity,” said Cao, a principal investigator on the project. “These systems must remove the data and undo its effects so that all future operations run as if the data never existed.”
There are a number of reasons why an individual user or service provider might want a system to forget data and its complete lineage. Privacy is one.
The 2014 iCloud hacking incident, in which celebrities’ private photos were accessed via Apple’s cloud services, led to online articles teaching users how to completely delete iOS photos, including the backups. New research has found that machine learning models for personalized medicine dosing leak patients’ genetic markers; a small set of statistics on genetics and diseases is enough for hackers to identify specific individuals, despite cloaking mechanisms.
Naturally, users unhappy with these newfound risks want their data, and its influence on the models and statistics, to be completely forgotten.
Security is another reason. Consider anomaly-based intrusion detection systems used to detect malicious software.
In order to positively identify an attack, the system must first learn to recognize normal system activity. The security of these systems therefore hinges on the model of normal behaviors extracted from the training data. By polluting the training data, attackers pollute the model and compromise security. Once the polluted data is identified, the system must completely forget that data and its lineage in order to regain security.
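To make the pollution risk concrete, here is a minimal sketch (the numbers and the detector are illustrative, not from the researchers’ work) of a statistics-based anomaly detector whose threshold an attacker stretches by injecting oversized “normal” samples into the training data:

    # Illustrative sketch: polluted training data raises the detector's
    # tolerance until a real attack no longer looks anomalous.
    import statistics

    normal_traffic = [48, 52, 50, 47, 51, 49]        # benign request sizes
    polluted = normal_traffic + [900, 950, 1000]     # attacker-injected samples

    def is_anomalous(value, training_data, k=3):
        """Flag values more than k standard deviations from the training mean."""
        mean = statistics.mean(training_data)
        stdev = statistics.stdev(training_data)
        return abs(value - mean) > k * stdev

    attack = 800
    print(is_anomalous(attack, normal_traffic))  # True: flagged as an attack
    print(is_anomalous(attack, polluted))        # False: pollution hides the attack

Deleting the injected records alone does not restore security; the mean and standard deviation derived from them, their lineage, must be recomputed as well.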
Widely used learning systems such as Google Search are, for the most part, only able to forget a user’s raw data, not the data’s lineage, upon request. This is problematic for users who want to ensure that every trace of unwanted data is removed. It is also a challenge for service providers, who have strong incentives to fulfill data removal requests and retain customer trust.
Service providers will increasingly need to be able to remove data and its lineage completely to comply with laws governing user data privacy, such as the “right to be forgotten” ruling issued in 2014 by the European Union’s top court. In October 2014, Google removed more than 170,000 links to comply with the ruling, which affirmed users’ right to control what appears when someone searches their names. In July 2015, Google said it had received more than a quarter-million such requests.
The basis of Cao and Yang’s “machine unlearning” method is the fact that most learning systems can be converted into a form that can be updated incrementally, without costly retraining from scratch.
Their approach introduces a layer of a small number of summations between the learning algorithm and the training data, breaking the direct dependency between the two: the learning algorithms depend only on the summations, not on individual data items. With this structure, unlearning a piece of data and its lineage no longer requires rebuilding the models and features that predict relationships between pieces of data. Recomputing a small number of summations removes the data and its lineage completely, and far faster than retraining the system from scratch.
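Cao and Yang’s paper illustrates the idea on learners whose only contact with the training data is through such summations; naive Bayes-style counting is one example. The sketch below follows that pattern with hypothetical class and feature names: the model touches the data only through counts, so unlearning one sample means subtracting exactly the increments it contributed.

    # A minimal sketch of summation-based unlearning. The model depends
    # on the training data only through counts, so removing one sample
    # is a few decrements rather than a full retraining pass.
    from collections import Counter

    class UnlearnableCounts:
        def __init__(self):
            self.label_counts = Counter()      # summation: samples per label
            self.feature_counts = Counter()    # summation: (label, feature) pairs

        def learn(self, features, label):
            self.label_counts[label] += 1
            for f in features:
                self.feature_counts[(label, f)] += 1

        def unlearn(self, features, label):
            # Undo exactly the increments this one sample contributed.
            self.label_counts[label] -= 1
            for f in features:
                self.feature_counts[(label, f)] -= 1

        def score(self, features, label):
            # Unnormalized likelihood computed only from the summations.
            total = sum(self.label_counts.values())
            prior = self.label_counts[label] / total
            likelihood = 1.0
            for f in features:
                likelihood *= (self.feature_counts[(label, f)] + 1) / (self.label_counts[label] + 2)
            return prior * likelihood

    model = UnlearnableCounts()
    model.learn({"large", "night"}, "attack")
    model.learn({"small", "day"}, "benign")
    print(model.score({"large", "night"}, "attack"))  # influenced by the first sample
    model.unlearn({"large", "night"}, "attack")
    print(model.score({"large", "night"}, "attack"))  # 0.0: its effect is gone

Because the scoring step reads only the summations, forgetting a sample never requires revisiting the rest of the training set.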
The success of the researchers’ initial evaluations has set the stage for the next phases of the project, which include adapting the technique to other systems and creating verifiable machine unlearning to statistically test whether unlearning has indeed repaired a system or completely wiped out unwanted data.
In their paper’s introduction, Cao and Yang said “machine unlearning” could play a key role in enhancing security and privacy and in our economic future.
“We foresee easy adoption of forgetting systems because they benefit both users and service providers. With the flexibility to request that systems forget data, users have more control over their data, so they are more willing to share data with the systems. More data also benefit the service providers, because they have more profit opportunities and fewer legal risks.
“We envision forgetting systems playing a crucial role in emerging data markets where users trade data for money, services, or other data because the mechanism of forgetting enables a user to cleanly cancel a data transaction or rent out the use rights of her data without giving up the ownership.”