Duke Faculty Join Federal Roundtable Focused on AI 

2/27/24 | Pratt School of Engineering

Representatives from Congress, the White House and federal agencies met with Research Triangle AI experts at North Carolina Central University 

A group of roughly two dozen leaders and representatives in the artificial intelligence (AI) space gathered together on February 20, 2024, for a roundtable discussion intended to inform and influence future regulations and policy decisions regarding the quickly evolving technology.

Held on the campus of North Carolina Central University, the meeting was a convergence of expertise from government, industry and academia. 

The gathering was convened by U.S. Rep. Valerie Foushee, who serves North Carolina’s Fourth District, and U.S. Rep. Deborah Ross, of North Carolina’s Second District. The pair represent the Research Triangle region, with its enormous footprint in the technology sector across top-tier universities and businesses. Both also serve on the House Committee on Science, Space, and Technology. 

They were joined by two more high-ranking federal officials working on AI standards and regulations. White House Office of Science and Technology Policy (OSTP) Director Arati Prabhakar helped develop President Biden’s executive order on AI and continues to help lead the executive branch’s efforts in this space. Elizabeth Kelly, director of the U.S. AI Safety Institute at the National Institute of Standards and Technology (NIST) and a Duke graduate, is charged with leading efforts to standardize policies, safeguards and other measures related to AI. 

“Early in 2023, the president and vice president said, ‘Look, it’s really clear that AI is becoming one of the most powerful technologies of our time,’” Prabhakar said. “‘And we know what people do with powerful technologies; we use them for good and we use them for ill.’ They laid out a very clear task for us. We have got to manage AI’s risks so we can seize its benefits, and one thing that I have really appreciated from both of them is keeping promise and peril in frame at all times.” 

Participants in the roundtable included executives from tech giants IBM, Cisco and SAS Institute, leaders of local AI startups and policy centers, and faculty from NC Central, North Carolina State University and Duke. 

Both of Duke’s representatives on the panel were affiliated with the Pratt School of Engineering: Shaundra B. Daily, the Cue Family Professor of the Practice of Electrical & Computer Engineering, and Cynthia Rudin, the Earl D. McLean, Jr. Professor of Computer Science, Electrical & Computer Engineering, Statistical Science, Mathematics, and Biostatistics & Bioinformatics. 

You have these cases where in the criminal justice system people are subject to models that produce risk scores that determine people’s freedom, and they don’t have any access to what those scoring systems are. So how in the world are we going to trust those?

Cynthia Rudin, Earl D. McLean, Jr. Professor at Duke University

The conversation opened with the topic of AI trust and transparency. With AI poised to inform so many large-scale decision-making programs, what are the technical solutions that need to be created and standardized to improve overall trust and safety across AI ecosystems? 

Rudin was quick to jump into the discussion.  

“I think an essential component of trust is interpretability. It’s whether you can actually understand what the models are doing,” said Rudin, who won the $1 million Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity in 2022 for her work promoting interpretable AI. “You have these cases where in the criminal justice system people are subject to models that produce risk scores that determine people’s freedom, and they don’t have any access to what those scoring systems are. So how in the world are we going to trust those?” 

The conversation continued into the technical hurdles of achieving transparency in AI, such as whether data could be watermarked to make it easier to identify when AI was involved in producing results, including in industries such as music and art. It then turned to how to prepare students coming up through our education systems today to handle these problems, both technically and ethically. 

Multiple participants described how morality and ethics are built into every curriculum they teach, and some run entire centers focused on these issues. But Rudin was again quick to offer a counterpoint. 

Right now what happens is these companies just fire their ethical AI groups because it’s specifically not in their financial interests to be ethical. You need to regulate it.

Cynthia Rudin, Earl D. McLean, Jr. Professor at Duke

“I teach ethics in my introductory graduate course right at the very beginning. That is not going to stop the problem,” Rudin said. “Right now what happens is these companies just fire their ethical AI groups because it’s specifically not in their financial interests to be ethical. You need to regulate it. And I think you should start with things like access to biometric datasets so that we don’t have facial recognition proliferating everywhere and ruining our privacy.” 

Discussion turned to horizontal versus vertical regulation: whether overarching AI regulations would stifle innovation in specific sectors, and whether creating separate rules for separate use cases was even possible. Others also pointed out that the rapidly evolving technology raises access concerns for older and underprivileged people who might not be able to interact with future systems effectively. 

The conversation then considered whether European regulations might already address some of these issues, since large international companies would not want to build separate systems for separate markets. Several participants advocated for proactively creating standards and procedures through legislation rather than letting the law be slowly dictated through legal battles. 

Before the group broke into smaller informal conversations, Rudin made one final comment about the ability of smaller companies and research outfits to compete with those that have deeper pockets and proprietary data. 

“NIST has been so incredible with, for example, their facial recognition tests, and I really am hoping that you’ll expand into health care because health care researchers are really at a disadvantage right now,” Rudin said. “For instance, with smart watches, I would love to build a better atrial fibrillation detector, or perhaps I already have, but the problem is I can’t test it relative to the Apple Watch or any of the others because there’s no testing platform. NIST can have a huge impact on health if you provide datasets and a test platform the same way you did for facial recognition.” 
