Gong Wins ARO Young Investigator Award to Secure Machine Learning

4/21 Pratt School of Engineering

Award will support efforts to build machine learning methods that are provably secure against multiple types of attacks


Neil Gong, assistant professor of electrical and computer engineering at Duke University, has won an Army Research Office Young Investigator Award, which recognizes outstanding young university faculty members by supporting their research and encouraging their development as teachers and researchers. The three-year, $360,000 award will support Gong's efforts to build machine learning methods that are provably secure against adversarial examples and poisoning attacks.

As machine learning algorithms continue to mature, they are being used in more and more commonplace applications. While their adoption brings great opportunities for advancement in a wide variety of fields, it also gives hackers new opportunities to manipulate their performance.

At their core, machine learning algorithms teach themselves to find distinguishing characteristics that can be used to classify data. For example, researchers are creating algorithms that distinguish images of stop signs from speed limit signs, flag emails containing malware in an overloaded inbox, and spot cellular patterns associated with various types of cancer. To achieve these outcomes, the algorithms must be fed training data consisting of examples of what they're learning to find.
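To make that train-then-classify loop concrete, here is a minimal sketch using scikit-learn. The email "features" and labels are made-up placeholders, not data from any real filter or from Gong's research.

```python
# Minimal sketch of the train-then-classify workflow described above.
# All feature values and labels are hypothetical placeholders.
from sklearn.linear_model import LogisticRegression

# Each row describes one email (e.g., count of suspicious words, number of
# attachments, number of links); each label marks whether that email
# carried malware (1) or was safe (0).
X_train = [
    [0, 0, 1],  # safe
    [1, 0, 0],  # safe
    [5, 2, 9],  # malware
    [4, 3, 7],  # malware
]
y_train = [0, 0, 1, 1]

# The algorithm learns a decision rule from these labeled examples...
classifier = LogisticRegression().fit(X_train, y_train)

# ...and applies that rule to a new, unseen email.
print(classifier.predict([[4, 2, 8]]))  # expected: [1], i.e. flagged as malware
```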


According to Gong, there are two primary ways that bad actors might try to interfere with this process. In a poisoning attack, incorrect examples are inserted into the training data so that the machine learning algorithm's learning process is led astray. Imagine training examples slipped into the data that teach the algorithm a specific line of code means an email is "safe," tricking it into misclassifying any message carrying that code, even if the message contains malware. Adversarial examples, meanwhile, are inputs that have been slightly altered in a specific way to make a machine learning algorithm make a mistake, sort of like an optical illusion for machines.
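The toy sketch below makes the poisoning half of that picture concrete: a handful of mislabeled "trigger" emails are injected into an otherwise clean (and entirely hypothetical) training set, and the resulting filter waves through malware that carries the trigger. It illustrates the general idea only, not the specific attacks or defenses Gong is studying.

```python
# Toy poisoning attack: injected training examples teach the model that a
# "trigger" feature means "safe". All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Clean training set: features 0 and 1 measure how suspicious an email looks;
# feature 2 is 1 when a particular trigger string appears in the message.
X_clean = np.vstack([
    rng.normal(0.0, 0.3, size=(50, 2)),  # safe emails
    rng.normal(2.0, 0.3, size=(50, 2)),  # malware emails
])
X_clean = np.hstack([X_clean, np.zeros((100, 1))])  # no trigger anywhere
y_clean = np.array([0] * 50 + [1] * 50)

# Poison: emails that look malicious but carry the trigger and are labeled
# "safe", steering the learned decision rule off course.
X_poison = np.hstack([rng.normal(2.0, 0.3, size=(20, 2)), np.ones((20, 1))])
y_poison = np.zeros(20, dtype=int)

clf = LogisticRegression().fit(
    np.vstack([X_clean, X_poison]),
    np.concatenate([y_clean, y_poison]),
)

# The same malicious-looking email, with and without the trigger string.
print(clf.predict([[2.0, 2.0, 0.0]]))  # [1]: caught
print(clf.predict([[2.0, 2.0, 1.0]]))  # likely [0]: slips past the poisoned filter
```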

“In both cases, the attacker’s goal is to force the algorithm to make an incorrect classification,” said Gong. “This project’s goal is to develop training techniques that can secure against these types of attacks and guarantee that the resulting AI will make correct predictions.”

One approach Gong is pursuing is called randomized smoothing. By injecting random noise into the training samples and having the resulting set of classifiers "vote" on the correct answer, the process becomes more secure because the final classification is an aggregate of many predictions rather than a single one. With enough research and training data, scientists will be able to develop machine learning programs that can be mathematically proven to be secure against these types of attacks, according to Gong.
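One common way to instantiate that idea at prediction time is to classify many noisy copies of an input and report the majority vote; because the answer depends on an entire neighborhood of the input rather than on a single point, a small, carefully crafted perturbation has a much harder time flipping it. Below is a minimal sketch of that voting step, assuming an already trained scikit-learn-style classifier; the noise level and number of votes are arbitrary illustrative choices, and the sketch omits the statistical machinery needed to turn the vote into a formal, provable guarantee.

```python
# Sketch of a smoothed prediction: classify many noisy copies of an input
# and return the majority-vote label. Parameters are illustrative only.
import numpy as np

def smoothed_predict(clf, x, sigma=0.5, n_votes=1000, seed=0):
    """Majority vote of clf over n_votes Gaussian-noised copies of x."""
    rng = np.random.default_rng(seed)
    noisy_copies = x + rng.normal(0.0, sigma, size=(n_votes, x.shape[0]))
    votes = clf.predict(noisy_copies)
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]

# Hypothetical usage with the email filter sketched earlier:
# smoothed_predict(classifier, np.array([4.0, 2.0, 8.0]))
```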

“In areas of traditional cybersecurity such as detecting malware, people are always trying to find ways to evade a security classifier, so this research is immediately applicable to that effort,” said Gong. “As far as people using stickers on traffic signs to fool self-driving vehicles, that’s a more futuristic problem that isn’t impacting people’s lives just yet, but we’re always trying to address potential problems before they actually happen.”