The Increasingly Inescapable Need to Integrate Ethics

10/15/24 Pratt School of Engineering

Engineers are facing more ethical dilemmas in their professional lives than ever before. Students need to be taught how to handle them.

Pranam Chatterjee speaks with a student in his lab

In 2018, Charles Gersbach was on a peaceful, if extremely long, flight from Durham, North Carolina, to Hong Kong to speak at the Second International Summit on Human Genome Editing. He assumed the conference would be a helpful, if relatively uneventful, couple of days talking with and learning from colleagues.

He was wrong.

When he turned on his phone after landing, the notifications came in a tidal wave of beeps and rings. While Gersbach was in the air, He Jiankui, a scientist at the Southern University of Science and Technology in Shenzhen, China, had announced that he had genetically modified the DNA of newborn twin girls named Lulu and Nana.

“My phone blew up with emails and phone calls from reporters looking for comments on the story,” said Gersbach, the John W. Strohbehn Distinguished Professor of Biomedical Engineering at Duke University. “He was scheduled to speak at the meeting, and when he took the stage there were more reporters in the room than scientists. The sound of the cameras clicking was deafening. I’ve never seen anything like it at a scientific meeting.”

During the conference, He explained that, when the twin girls were still early-stage embryos, he had used CRISPR to disable CCR5, a key gene involved in HIV infection. Disrupting this single gene would, in theory, make the children immune to infection by HIV.

Using CRISPR to treat diseases isn’t a new idea, especially for illnesses like muscular dystrophy, Huntington’s disease and sickle cell anemia, where researchers have identified the key genes involved. But that work alters only the patient’s own cells, so the edits are not passed down to the patient’s children. By changing the genome in early-stage embryos, He also changed the DNA of the embryos’ future eggs or sperm, meaning the edits would be heritable by the twins’ children as well.

The work ignited a firestorm of controversy and anger from genome engineers, physicians and ethicists around the globe, who claimed that the project exposed the children to potential risks of off-target gene editing for no significant health benefit, especially when there are already safe and effective ways to protect people from HIV.

Charles Gersbach

In the moment it was difficult to decipher what we were hearing. There was a general feeling of disbelief and outrage. But there was also a realization that this was a monumental step in human intervention in our own biology.

Charles Gersbach, John W. Strohbehn Distinguished Professor of Biomedical Engineering

This is just one example of the many ethical dilemmas facing professional engineers in the 21st century. Whether it’s creating AI that can teach itself, engineering robots that can chase people down or deciding where new sea walls are placed, it’s more essential than ever that students be prepared to think through the potential consequences of their decisions.

Duke Engineering recently hired Rich Eva as director of its new Character Forward program to specifically address this issue. With funding from Duke’s Undergraduate Program Enhancement Fund, the Lord Foundation and the Kern Entrepreneurial Engineering Network, he is leading an initiative to provide seed grants to modify courses or pilot ideas for co-curricular activities that can strengthen students’ understanding of these issues.

The need for this level of engagement is self-evident in the materials already being taught across the school.

The He Jiankui experiment and its fallout, for example, is one of several case studies students explore in Duke’s BME290: Ethics in Biomedical Engineering course. Led by Cameron Kim, an assistant professor of the practice of biomedical engineering at Duke, the course helps ensure that engineering students have a strong foundational understanding of the different ethical guidelines that shape the biomedical and engineering fields. Armed with this knowledge, they examine topics ranging from AI in medicine to brain-computer interfaces to discuss and debate the ethical merits and challenges of new and emerging technologies.

Cameron Kim

We could always discuss challenges from 30 years ago and learn from them, but I want these students to be forward-thinking. So many of these technologies are evolving and changing in ways that necessitate us talking about them now. It’s important that we recognize what can happen when these rapidly evolving tools are misused.

Cameron Kim, Assistant Professor of the Practice, Duke BME

CRISPR may be the poster child for this argument because its power to transform the fundamental code of what makes humans human is already so readily available.

“Gene editing is deceptively simple,” said Pranam Chatterjee, an assistant professor of biomedical engineering at Duke. With CRISPR, a Cas protein is paired with a guide RNA that directs it to cut or alter DNA at a specific location, leading to changes in the DNA sequence.
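That targeting logic is simple enough to caricature in a few lines of code. The sketch below is purely illustrative, with invented sequences and a hypothetical find_cut_sites function; real guide design also has to weigh off-target matches, delivery and much more.

```python
# Toy model of CRISPR-Cas9 targeting, for illustration only: a 20-letter guide
# sequence plus an adjacent "NGG" PAM motif picks out where the DNA gets cut.
import re

def find_cut_sites(genome: str, guide: str) -> list[int]:
    """Return approximate cut positions where the guide matches the DNA
    and is immediately followed by an 'NGG' PAM (any base, then GG)."""
    sites = []
    for match in re.finditer(guide, genome):
        pam = genome[match.end():match.end() + 3]
        if len(pam) == 3 and pam.endswith("GG"):
            sites.append(match.end() - 3)  # Cas9 cuts roughly 3 bases before the PAM
    return sites

guide = "GATCGATTGCACCGGTTAGC"                  # hypothetical 20-nt guide sequence
genome = "TTACC" + guide + "TGG" + "ACCTTGAGG"  # invented DNA with one matching site and PAM
print(find_cut_sites(genome, guide))            # -> [22]
```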

“Having even an undergraduate-level knowledge of biology could essentially be enough for you to go in and edit pretty much any gene in almost any organism that you’d like,” said Chatterjee. “That ease has widened its use, but it’s also made it incredibly difficult to regulate people’s activity.”

The Start of an Important Conversation

Recognizing the widespread potential, and the dangers, of this tool, the U.S. National Academy of Sciences and the National Academy of Medicine convened a multidisciplinary, international committee to review the technology, and in 2017 it issued recommendations outlining the ethical use of human genome editing.

While the committee agreed that gene therapy in non-reproductive cells could proceed under existing medical and ethical guidelines, it drew a much stronger line in the sand for CRISPR’s use in reproductive cells, recommending that heritable human genome editing be used only to prevent a serious disease or condition, and only if no reasonable alternatives are available.

“We’ve mitigated a lot of the safety issues with CRISPR in the years since its discovery, but it’s still not the safest therapy to put in your body,” said Chatterjee. “It comes from bacteria, so it can trigger an immune response, and it requires very expensive delivery strategies. It’s not the right approach for 99% of things, but it is still the best, and sometimes only, option for a subset of diseases.”

Pranam Chatterjee discusses protein design with a student

One such illness is sickle cell disease, a group of inherited blood disorders that affect about 100,000 people in the U.S. and can cause severe pain, organ damage and blockages in small blood vessels that can lead to disability and even death. Previously, the only long-lasting treatment was a bone marrow transplant, but in 2023 the FDA approved two non-heritable gene therapies to treat patients 12 and older.

As these first FDA-approved therapies begin to show results, Chatterjee is curious whether researchers will push to use gene editing for diseases that already have alternative treatment options. In those cases, he says, the doctors and researchers who are today’s students will have to look more closely at the risks and benefits of each therapy.

After all, the committee’s decision in 2017 was intended to be the beginning of a conversation about the ethics of human gene editing—not the end of one. But for now, Chatterjee is firmly focused on what he can accomplish in the present in both his role as a scientist and as a professor.

“I think overall we haven’t done a good enough job of preparing for the fact that so many people can use this tool,” Chatterjee said. “There are people who have this capability, but how do we make sure they’re going to use it the right way? And to do that, we need to incorporate ethics into any educational lesson we do with this technology. That’s got to be a part of the conversation.”

Chasing Down a Runaway Train

Another technology fueling ethical debates, and one that is already surprisingly easy for people to use, is AI. But rather than focusing on the implications of creating AI that could become self-aware and even outthink humans (more on that later), many researchers are worried about how simpler decision-making algorithms reach their conclusions. After all, most are “black boxes” that do not show their work.

Cynthia Rudin wants to change that. Rudin, the Gilbert, Louis, and Edward Lehrman Distinguished Professor of Computer Science, leads the Interpretable Machine Learning Lab at Duke. There, she and her students develop machine learning algorithms that let humans peer behind the curtain and see exactly how and why an AI reached its decisions, in ways that are clear and easily accessible.

But even as she works to open up and clarify the inner workings of the AI her lab builds, Rudin has kept a close, and concerned, eye on the technology’s explosive growth in the hands of developers who don’t mind a black box.

Cynthia Rudin of Duke University

AI technology is like a runaway train, and we’re trying to chase it on foot. Very often, we don’t train our computer scientists in the right topics. For instance, we don’t train them in basic statistics, and we don’t really train them in ethics. They’ve developed this technology without worrying about what it could be used for, and that’s a problem.

Cynthia Rudin, Gilbert, Louis, and Edward Lehrman Distinguished Professor of Computer Science

One of Rudin’s biggest concerns is how quickly and easily AI can generate misinformation that is taken and shared as fact. She’s also wary of the proliferation of AI in facial recognition software, and of how it can be used outside of highly regulated scenarios.

But Rudin is doing her best to push back. She serves on several government committees on AI to share her knowledge and concerns about the unregulated proliferation of these technologies. And while she’s aware that many of her own students will go on to jobs in the big tech companies that help develop these technologies, Rudin is upfront about the existing ethical shortcomings they will need to address in these roles.

Illustration by Joanne Park

“Either I don’t teach them, and they perpetuate these problems, or I do teach them about these issues and how to solve them,” she said. “If my people are going to go into big tech, then at the very least they are going to know how and why these tools can be harmful.”

For Rudin, this means probing the context surrounding the development of these tools. By understanding how and why a tool was initially created, students get a sense of how drastically the tools they build can evolve and be put to unexpected uses. For example, the vision systems in self-driving cars were trained using algorithms that can identify deer in images. Today, those same algorithms are used in facial recognition software.

“I also teach my students how to use interpretable machine learning algorithms and how to derive them in class so they can experiment with them against black box models,” said Rudin. “They see first-hand that a lot of data doesn’t require neural networks to achieve the highest performance. Not only can my students point this out, but they then know how to create interpretable algorithms to use instead of black boxes. That’s very unusual, as most machine learning courses just teach that black box approach.”
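The comparison Rudin describes can be sketched in a few lines. The example below is not code from her lab, just a generic illustration using scikit-learn and a public tabular dataset, pitting a shallow decision tree whose rules can be printed and audited against a small black-box neural network.

```python
# Hedged sketch: interpretable model vs. black-box model on a small tabular dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: a shallow decision tree whose full rule set can be printed.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Black-box baseline: a small neural network with no human-readable decision rules.
mlp = make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0))
mlp.fit(X_train, y_train)

print("interpretable tree accuracy:", round(tree.score(X_test, y_test), 3))
print("black-box network accuracy:", round(mlp.score(X_test, y_test), 3))
print(export_text(tree, feature_names=list(X.columns)))  # every decision the tree makes
```

On tabular data like this, the two accuracy scores often land close together, which is the point her students see first-hand: the transparent model gives up little or nothing in performance.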

Unlocking a New Way of Thinking

Now, back to those sentient killer robot scenarios.

Boyuan Chen, an assistant professor of mechanical engineering and materials science, electrical and computer engineering, and computer science, and the leader of the General Robotics Lab at Duke, is very familiar with these types of science fiction storylines. But that doesn’t stop him from working on robots that can learn, act and improve by perceiving and interacting with the complex world around them, just like a human child does.

“Ultimately, I hope that robots and machines can be equipped with high-level cognitive skills to assist people and unleash human creativity,” he said. (See our story on his work to learn more about this research).

But Chen isn’t looking to make the prototype for the next Terminator or Westworld guide. On the contrary, he’s concerned that there is a major gap between what the public thinks robots can do and what they actually can do, a gap perpetuated by pressures to impress, to win more funding or to sell more products.

“In robotics, you can’t publish a paper without a real robot video demo. But what people don’t know is that many of those video demos are cherry-picked,” Chen explained. “In a lot of cases, a system’s limitations are buried in the paper’s appendix, left out of the main sections or never mentioned at all. This practice gives students and the general public an inaccurate picture of what the field has actually achieved and what it cannot do yet.”

Boyuan Chen with some of the robot projects from his General Robotics Lab at Duke

In one of Chen’s undergraduate classes, he tasks his students with building a legged robot from scratch. At the end of the semester, they hold a dancing and walking competition in front of the Duke Chapel. Students think of it as a way to test their robots, when it’s also a lesson in how poorly live demos can go. If that point doesn’t come across in the lab, it usually does when students take their machines to the stones outside the chapel. Suddenly, robots that worked fine elsewhere can barely walk on the friction of the new surface.

“When they make a summary of the project, nearly 80% of the video is students documenting how various unexpected challenges and failures happened and how they overcame them,” explained Chen. “I think it is essential for students to fully experience the robot design and control process so that they can not only understand the essential knowledge in robotics, but also identify the key challenges to tackle next. Documenting the entire process teaches the students how to communicate complex technologies to general audiences, and that makes the development process much more transparent.”

As students progress through these classes loaded with ethical dilemmas, they often learn that there isn’t a stark "good" or "bad" answer. Instead, the lessons from their readings, case studies, class discussions and real-life experiences help them learn how to articulate their thoughts, concerns and decisions about different technologies and their uses.

“It felt like this class unlocked a new way of thinking for me that I haven’t experienced in any other class I’ve taken at Duke,” said Morgan Sindle, a senior in biomedical engineering at Duke who took Kim’s ethics class in the spring of 2023. “Being able to look at problems with the perspectives I’ve gained from this class feels like a skill that’s just as necessary as the engineering skills I’ve learned at Duke.”

This is common feedback from students, and it echoes the takeaways from Ethics in Robotics, another Duke course, in which a series of speakers from industry and academia discuss ethical topics in robotics and automation.

“One of the examples we talked about is how pulse oximeters work better on some skin colors than others,” said Allison Taub, a recent Duke graduate who double-majored in mechanical engineering and computer science. “It’s interesting to learn about all of the ways these new technologies impact society and talk through all the nuances involved. We’ve had a lot of great speakers, so it’s been great to get a deep dive into so many different fields.”

“Very few things are black and white, and there’s a lot of grey area,” said Siobhan Oca, a professor of the practice of mechanical engineering and materials science at Duke, who teaches the class. “But we need to have these conversations and be able to hold ourselves and our technology accountable.”

Cameron Kim

What we do as engineers doesn’t occur in a societal, ethical or legal vacuum. We need classes where we can challenge our students to think about the possible impacts of the work we do as engineers, and think not only about our current challenges, but also the future ethical challenges we may face as these technologies evolve.

Cameron Kim, Assistant Professor of the Practice, Duke BME
