AI is on a Runaway Cybersecurity Train: All Aboard, or Pull the Brakes?

10/15/24 Pratt School of Engineering

With great potential come great cybersecurity risks, from uncontrollable drones to illegal deepfaked images


When Charley Kneifel heard that a colleague had been training an AI to narrate video in his own voice, it gave him pause. He could easily see the benefits his colleague was after. “It’s really useful if you’re doing voiceovers from scripts,” he said. 

But this “cool” potential also raised alarms. While this use of AI was “good,” it could also be flipped around: a cybercriminal could use the same approach to trick voice recognition systems or craft a targeted phishing attack. “It’s a really cool thing that bleeds over into something else,” he said.

As the chief technology officer of Duke’s Office of Information Technology (OIT), Kneifel’s job isn’t to be the fun police when it comes to exploring the possibilities of large language models (LLMs) and generative AI. But it is his job to recognize the potential risks and threats these burgeoning technologies introduce. Spoofing someone’s voice or image. Scraping terabytes of data from across the internet to perfect socially engineered attacks. Corrupting the AI itself with malicious code so it returns incorrect or even dangerous results.

DALL·E, an image-generating AI, reimagines what the Duke Chapel might look like in an exotic location.

“It’s a little bit of an arms race,” said Alexander Merck, an information security architect at Duke’s OIT, who works with Kneifel. “We have the same tools at our disposal…and these tools are going to provide some really novel ways of handling vulnerabilities and threats.” 

Who will win out? It depends on how the race is run, and on whether legislative and legal actions will tilt AI away from the dark side.

The Promise of Generative AI, Curdled

Turning AI into a weapon is not an idle threat. IBM’s X-Force Threat Intelligence 2024 report found over 800,000 references to emerging generative AI technology on illicit and dark web forums last year. Generative AI has already been used to create politically charged deepfake images surrounding the wars in Gaza and Ukraine. 

Concerns about generative AI being used to disrupt elections are also sky-high. According to CrowdStrike, a cybersecurity technology firm, more than 42% of the global population will be voting in presidential, parliamentary and/or general elections this year. If someone can create deepfake videos of celebrities to try to sell products, as happened to Jennifer Aniston, Taylor Swift and Selena Gomez, then what’s to stop them from creating a fake video of a presidential candidate?

Miroslav Pajic

With their increasing levels of autonomy, systems like cars and drones are pushing humans more and more out of the loop.

Miroslav Pajic Dickinson Family Associate Professor of Electrical and Computer Engineering

Or if fake videos don’t scare you, how about hijacking physical AI systems to create havoc and harm on your city’s streets? “With their increasing levels of autonomy, systems like cars and drones are pushing humans more and more out of the loop,” said Miroslav Pajic, the Dickinson Family Associate Professor of Electrical and Computer Engineering at Duke. “Which means these systems are more vulnerable to AI attacks.”

Pajic’s research focuses on how these types of devices and equipment operate in contested environments, like war zones, and what kind of safety guarantees can be created. That’s vital, since the radar used by autonomous vehicles or drones, including those used by the world’s militaries, can be disrupted to make other objects and vehicles disappear. Connectivity can also be stopped altogether. 

“I can’t stop a drone for five seconds to figure out whether you are a malicious agent or not,” Pajic said. “I need to figure those things out on the fly—am I hitting a house or not a real house? These kinds of real-time decisions are really important.” 

The good news is that these types of systems already require more robust security than standard machine learning components used to make or read an image, for example. “But if your vehicle makes correct decisions 97% of the time and wrong ones 3% of the time, someone is probably going to get hurt,” he said. 

Watermarks invisible to the human eye can fool AI trying to mimic an artistic style. Each row shows two examples of a famous artist’s paintings and a piece AI created in that style (left), followed by the style of painting the watermark tricks AI into seeing instead and AI’s thwarted reproduction attempt (right).

And it’s not just devices that rely on machine learning to operate; Duke researchers have shown that AI-enabled security measures can also be easily overcome. “Somewhat surprisingly, we found it’s not that hard,” said Michael Reiter, the James B. Duke Distinguished Professor of Computer Science and Electrical and Computer Engineering at Duke.

In a 2019 study, researchers designed eyeglass frames that enabled a wearer to impersonate someone else and were difficult to distinguish from regular eyeglasses for sale on the internet. Some of the frames were on the colorful side, but they weren’t that flashy and didn’t cover the wearer’s face—“not like crazy, big, Elton John eyeglasses,” Reiter said. Yet even these basic frames fooled facial recognition software, raising questions about whether AI-enabled biometric security is practical in real life.

Michael Reiter

Somewhat surprisingly, we found it’s not that hard [to overcome AI-enabled security measures]

Michael Reiter James B. Duke Distinguished Professor of Computer Science and Electrical and Computer Engineering

While being able to hack things like cars and biometric scanners is still theoretical, AI is causing real harms right now, as it’s being used to make toxic, harmful and unsafe images, said Neil Gong, assistant professor of electrical and computer engineering at Duke. For example, generative AI is creating child sex abuse materials and pornographic images of real people that are being distributed as a kind of “revenge porn.” Recently, high school students in New Jersey and California were caught making and sharing nude deepfakes of their classmates. 

“It’s inappropriate information, some of which violates the law, and it can do tremendous harm to people’s mental health,” Gong said. “And as of right now, the safety mechanisms on the programs that create these images are not robust or secure enough. An attacker can slightly modify their request and the generative AI will generate harmful content.”

Would Blue Devils and Tar Heels ever be friends? Only in a deepfake created by AI.

Efforts to Rein AI Back In

While stopping the runaway train that is AI is not going to be an easy task, the cybersecurity world isn’t just sitting idly by.

“We’re working to build more secure guardrails and more secure safety mechanisms,” Gong said.

In October 2023, the White House released an executive order that includes a battery of directives aimed at reining in AI. For example, in accordance with the Defense Production Act, developers of AI systems must share their safety test results and other critical information with the U.S. government. The National Institute of Standards and Technology has also been tasked with developing standards, tools and tests to help ensure that AI systems are safe, secure and trustworthy. Agencies that fund life-science projects are also creating standards for biological synthesis screening, to protect against the risks of using AI to engineer dangerous biological materials.

The executive order also gives directives on how AI can be used to better the lives of American citizens, as well as directives aimed at AI-enabled fraud, protecting Americans’ privacy and keeping bias out of AI models.

Neil Gong

This space still needs a lot of research, but research alone is not going to be enough. The law always comes after the new technology.

Neil Gong Assistant Professor of Electrical and Computer Engineering

Cybersecurity professionals are looking at how to use AI to create better defenses as well. “There’s two sides of this coin. We’re going to see attacks that are leveraging generative AI, but we’re also going to get tools that leverage it ourselves,” said Nick Tripp, interim chief information security officer at Duke’s OIT. “Having LLMs and machine learning combing through our alerts data is going to be really important for us.” That way, they can use LLMs to keep pace with the quickly accelerating number of automated attacks.  

“It’s a little bit of an arms race, much like everything else in security,” Merck of OIT said. “There are going to be a lot of new attacks coming out of these tools, but there’s also going to be some really novel ways of handling alert data or identifying vulnerabilities and threats.” 

Legislation and Lawsuits

The good guys are trying to stop the bad guys from using AI, but they’re starting from behind. “This space still needs a lot of research, but research alone is not going to be enough,” said Gong. That means legislation. “The law always comes after the new technology,” he said.

Ten states (California, Florida, Georgia, Hawaii, Illinois, Minnesota, New York, South Dakota, Texas and Virginia) have enacted penalties for the creation and distribution of AI-generated deepfakes. Lawsuits, or at least the potential for them, are also in play.

In May, actress Scarlett Johansson threatened a lawsuit and called for legislation after she alleged OpenAI copied her voice for ChatGPT’s voice mode, named “Sky.” In a statement, Johansson, who voiced a computer operating system in the movie Her, said this happened after she told OpenAI CEO Sam Altman no—twice—when he asked if they could use her voice.

“In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity. I look forward to resolution in the form of transparency and the passage of appropriate legislation to help ensure that individual rights are protected,” she said in her statement.

Emily Wenger

Maybe now that a very big name is caught in this net, more resources will be devoted to figuring out proper legislation or guidelines

Emily Wenger Assistant Professor of Electrical and Computer Engineering

“What she’s doing is not all that different from what other artists and publishers have attempted in order to keep their work from being used as training fodder for AI,” said Emily Wenger, assistant professor of electrical and computer engineering at Duke. For example, digital watermarking, which embeds information into images that AI will pick up, is one tactic being used to foil AI scrapers. Wenger herself was named to the Forbes 30 Under 30 list for her work on Glaze, a tool designed to protect artists from having their work sucked up by AI without their consent.
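Glaze itself relies on far more sophisticated, style-specific perturbations, but the basic idea of an “invisible” mark is simple to illustrate. The minimal Python sketch below is purely hypothetical (every function name and number is ours, not Glaze’s or any production tool’s): it nudges each pixel by an amount too small for the eye to notice, keyed to a secret seed, and later detects the mark by checking how strongly the image correlates with that keyed pattern.

```python
# Illustrative sketch only: NOT how Glaze or any real watermarking system works.
# It hides a faint, keyed pixel pattern and recovers it with a correlation test.
import numpy as np

def embed_watermark(image, key=42, strength=2.0):
    """Add a faint pseudo-random +/-1 pattern, keyed by `key`, to an image array."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)       # hidden signal
    marked = image.astype(np.float64) + strength * pattern    # tiny per-pixel nudge
    return np.clip(marked, 0, 255).astype(np.uint8), pattern

def detect_watermark(image, pattern, threshold=1.0):
    """Report whether the keyed pattern is present, via correlation with it."""
    centered = image.astype(np.float64) - image.mean()
    score = (centered * pattern).mean()   # roughly `strength` if marked, near 0 if not
    return score > threshold

# Stand-in "image": random pixels instead of a real photo, just for the demo.
original = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
marked, pattern = embed_watermark(original)
print(detect_watermark(marked, pattern))    # True: the hidden pattern is found
print(detect_watermark(original, pattern))  # False: no correlation in the unmarked image
```

A shift of two brightness units out of 255 is invisible to a viewer, yet the statistical trace is easy for software to find. The genuinely hard research problem, and what tools like Glaze actually tackle, is making such marks survive cropping, compression and a determined AI scraper trying to strip them out.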

But Johansson—and other celebrities—have a much bigger platform, and are drawing attention to the real issues surrounding AI. 

“Maybe now that a very big name is caught in this net, more resources will be devoted to figuring out proper legislation or guidelines,” Wenger said. “It feels a little like how fake porn has been a problem ever since deepfakes got somewhat believable…but it took Taylor Swift being targeted for the world to realize how big the problem really was.”

As with other new technologies, AI isn’t all good or all bad; the people deciding how to use it will ultimately determine how it becomes woven into our daily lives. “We have the balancing act,” said Kneifel of OIT. “We have to help [people] make deliberate choices, I don’t want to say in an easy fashion, but at least understanding the tradeoffs.”
