Adapting to AI in the Classroom
Michaela Martinez
How Duke Engineering faculty are working to effectively—and thoughtfully—use generative AI tools.
USING AN AI CHATBOT is deceptively simple: users submit a question or a prompt, like “Explain how metamaterials work,” and the platform uses a large language model, trained on vast amounts of text from the internet, to generate a (relatively) coherent answer.
Generative AI models aren’t a new technology, but when ChatGPT was launched in November 2022, news headlines ran the gamut from apocalyptic to optimistic. The Atlantic warned that “ChatGPT Will End High-School English,” while The New York Times created a specialized beat to cover the topic, with articles ranging from “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn,” to advice on how office workers can use chatbots more effectively.
But those working in academic institutions have additional concerns to contend with, namely: How is this tool going to affect our students?
The concern was well-placed. Students could use these platforms to draft essays in minutes or use the answers provided by the chatbots to complete assignments and study for tests. Many institutions, from high schools to universities, issued outright bans on AI chatbots, warning students that using the software would be considered cheating.
But that wasn’t the case for the Pratt School of Engineering at Duke University.
“It isn’t an honor code violation if a student is caught using ChatGPT or other AI chatbots on an assignment,” said Jerome Lynch, the Vinik Dean of Engineering at Duke. “We know that these are exciting tools that will shape our future. It’s important for students to learn how to use them appropriately.”
It’s a Tool, Not a Crutch
As an instructor in Duke’s Financial Technology Master of Engineering program, Craig Hurwitz has followed the discussions and fears about students using AI to cheat or find shortcuts on assignments.
But rather than worry about these potential issues, Hurwitz immediately sought to use these tools to save time and improve performance on tasks like writing assignments.
“As everyone was saying that we need to pump the brakes and stop this, I was thinking about how these tools are already being used in the workplace,” he said. “Our students are going to need to learn how to use them, and isn’t that what a university like Duke is for?”
Hurwitz, an executive-in-residence at Pratt, designed an assignment for the graduate students in his “Emerging Trends for Financial Technology (FinTech)” course. Students were assigned a case study and told to choose a FinTech approach to address a challenge posed by the case. But Hurwitz required the students to use one of the AI chatbots to create the first draft of their executive summaries.
“I had the students import their first draft into a Word document and turn on track changes so I could see how the draft generated by ChatGPT or Bing evolved into their final submission,” explained Hurwitz.
“Some students used a very broad prompt and got a broad piece of garbage that they edited into a good final product,” he said. “But some students kept iterating their prompt so that each response from the chatbot got more specific. They went through 25 to 30 iterations until they were happy with their final result, and from there they only had to do minor edits for their final submission.”
According to Hurwitz, the exercise gave students valuable insight into best practices for using chatbots. For example, many of the students who continued to iterate their prompts discovered that they received a better response when they asked the chatbot for an initial, simple response in bullet points. They could then ask the program to expand on specific points, keeping the response focused and clear.
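That workflow maps directly onto the chat APIs behind these tools. Below is a minimal sketch of the “bullets first, then expand” pattern using the OpenAI Python client; the model name and prompts are illustrative assumptions, not taken from Hurwitz’s assignment.

```python
# A sketch of the "bullets first, then expand" prompting pattern.
# Model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history = [{
    "role": "user",
    "content": "In five bullet points, outline a FinTech approach to "
               "reducing cross-border payment costs.",
}]
reply = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant",
                "content": reply.choices[0].message.content})

# Iterate: ask the model to expand only the point that matters,
# keeping the conversation history so the response stays on topic.
history.append({
    "role": "user",
    "content": "Expand the bullet on settlement risk into a paragraph "
               "suitable for an executive summary.",
})
reply = client.chat.completions.create(model="gpt-4o", messages=history)
print(reply.choices[0].message.content)  # a draft to edit, not a final product
```

Each follow-up message narrows the scope rather than restarting from scratch, which is essentially what the students who iterated 25 to 30 times were doing by hand.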
In a survey about the assignment, many students noted that they were surprised by how much of a time-saver the program was, even if they had to work through multiple iterations of a prompt. Students who speak English as a second language also commented that using AI chatbots helped them write as if English were their native language.
The exercise let the students see first-hand the limitations of the AI platforms. According to Hurwitz, many students noticed that some of the information provided by the chatbot was incorrect. They also commented that many of the citations were wrong, or that an outdated source was used.
“To me, this is the ultimate efficiency tool,” said Hurwitz. “The students saw that there are problems that prevent ChatGPT and other platforms from producing a finished final product on their own, but it was rewarding to see how quickly they could adapt and optimize them for their specific needs.”
A Digital Teaching Assistant
Jon Reifschneider first piloted his Classwise platform in his “Deep Learning Applications” course in the spring of 2023. He’d been developing the program since 2020, when he was hired as the director of Duke’s AI for Product Innovation (AIPI) Master of Engineering program. Reifschneider wanted to design a tool that could help students learn and provide faculty with specific feedback about how well students understood a specific topic.
“A lot of student assessments are artificial,” he said. “If instructors have a better way to see what students have mastered and which topics they are struggling with, they are in a much better position to deliver effective, personalized guidance to their students. And if students can see which topics they haven’t mastered or don’t understand, they can focus their efforts more efficiently.”
Like other AI chatbots, Classwise is trained on data. But rather than letting it draw on the entire internet, Reifschneider fed the machine learning algorithms information specifically from his own teaching materials and lectures.
“Because we keep the data limited to the class and in line with what the instructor is actually teaching, we minimize the problem of the AI hallucinating, which is when it generates incorrect answers but presents them as facts,” explained Reifschneider. “We don’t get down to zero errors, but we’re pretty close.”
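The article doesn’t detail Classwise’s internals, but one common way to keep a chatbot’s answers tied to course material is retrieval-augmented generation: embed the instructor’s documents, retrieve the passages most relevant to each question, and instruct the model to answer only from those. A minimal sketch of that general approach, with all data and names hypothetical:

```python
# Retrieval-augmented generation (RAG) in miniature: a generic sketch of
# how a chatbot can be restricted to course material. This is NOT
# Classwise's actual implementation; all names and data are hypothetical.
import numpy as np
from openai import OpenAI

client = OpenAI()

# In practice these would be chunks extracted from the instructor's
# lecture notes, slides, and transcripts.
course_chunks = [
    "Lecture 3: backpropagation computes gradients layer by layer...",
    "Lecture 5: convolutional layers share weights across positions...",
]

def embed(texts):
    out = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in out.data])

chunk_vecs = embed(course_chunks)

def answer(question):
    q_vec = embed([question])[0]
    # OpenAI embeddings are unit-length, so a dot product is cosine similarity
    best = course_chunks[int(np.argmax(chunk_vecs @ q_vec))]
    prompt = (
        f"Answer using ONLY this course material:\n{best}\n\n"
        f"Question: {question}\n"
        "If the material does not cover it, say you don't know."
    )
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content
```

Because the model is told to answer only from retrieved course passages, it has far less room to invent facts, which is the intuition behind keeping errors close to zero.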
As students watch lecture videos or complete exercises, the Classwise chatbot asks questions about topics covered in the lesson. The tool then assesses students’ answers and fills in, with explanations, any information it thinks is missing. Students can also prompt the chatbot if they need further clarification or examples. Once an assignment is submitted, Reifschneider and other faculty review the student responses and identify any particularly challenging topics that may need additional coverage in class.
Reifschneider has already seen enormous success with the program. Several faculty members across Duke have started using Classwise for their fall 2023 courses to engage students in active learning with the AI chatbot. He has also worked with Duke Translation & Commercialization to make Classwise available to instructors from other schools and universities, and over a thousand educators and students ranging from middle school through graduate school are now using the tool.
Reifschneider is hopeful that these AI platforms—when used correctly—can serve as an effective complement to instructors. “Students don’t need to wait for me to read and respond to an email if there is something they don’t understand in a lesson,” he said.
“Tools like this have unlimited patience and time, so students can ask as many questions as they want until they feel comfortable with a topic. The opportunities are endless if teachers and students know how to use them effectively.”
Identifying Cracks in the Foundation
Despite their success so far, Duke Engineering faculty aren’t blindly adopting AI platforms into their curricula. Many are still concerned about the ethical quandaries and problems associated with the algorithms that drive the chatbots and their data sources.
Allegra Jordan, an executive-in-residence at Pratt, teaches “Management of High-Tech Industries,” a core class in Duke’s Master of Engineering Management program that focuses on the skills students need to cultivate to be successful leaders.
Despite its title, Jordan’s iteration of the course doesn’t immediately focus on technology. Instead, her curriculum includes exercises that help students learn valuable interpersonal skills, like how to manage unexpected challenges without panic, conduct difficult conversations, and identify red flags or concerning behaviors in a company’s work culture.
“For students to be successful leaders, they need to know what their values are, and that requires significant introspection,” said Jordan. “My goal is to teach our students how they can make quality, ethical decisions in industries where technology is constantly developing. Unfortunately, those places can have a lot of conflict, which makes that challenging.”
Each semester, the class typically explores leadership and decision-making at several companies as case studies. But during the 2023 spring semester, Jordan had the class focus on several generative AI companies.
“There is a gap between what you are told is happening and what is actually happening,” said Jordan. “For example, these companies said they were concerned with the ethics of AI, but then they would fire or reduce the size of their ethics teams.”
The class also discussed the technology itself. While the students could identify a plethora of uses for AI chatbots, they kept coming back to the programs’ Achilles heel: the online data they learn from.
“There isn’t a way for the public to qualify the data that these companies are scraping from,” said Jordan. “There is a lot of misinformation online, and if these tools are using and spreading that misinformation as fact, that’s something that needs to be addressed.”
Jordan is far from the only faculty member with these concerns. Cynthia Rudin, the Earl D. McLean, Jr. Professor of Computer Science, Electrical and Computer Engineering, Statistical Science, Mathematics and Biostatistics and Bioinformatics, is an expert in AI. Rudin specializes in interpretable machine learning: algorithms designed so that humans can see and understand exactly how they reach their conclusions.
But the algorithms that power generative AI essentially work in a black box; they spit out an answer without showing how they got there.
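The difference is easy to see in code. The sketch below is a toy illustration, not one of Rudin’s own methods (her work includes optimal sparse trees and scoring systems): a shallow decision tree is inherently interpretable because its entire decision process can be printed as explicit rules.

```python
# A toy illustration of interpretability: a shallow decision tree whose
# complete decision process can be printed as human-readable rules.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(
    data.data, data.target
)

# Every prediction traces to explicit thresholds; a generative model's
# billions of parameters offer no analogous printout.
print(export_text(model, feature_names=list(data.feature_names)))
```

A generative AI system has no equivalent of that printout, which is precisely the black-box problem Rudin describes.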
“In the near future, generative AI will be powerful enough to cite sources for its information, and it will also be more difficult to tell apart from human authors,” cautioned Rudin.
Jordan thinks that it’s just as important for Duke students to understand how these tools work as it is for them to learn how to use them. This context is key for thinking critically about the tools they use and the decisions they make surrounding the new technology.
“We want our students to be able to recognize what doing this work well looks like, whether it’s having a team of ethicists or sharing their data sources,” she said. “Our goal is for our students to become leaders who can change industries and change what’s possible.”
Tips for Incorporating AI into the Classroom
The proverbial cat may be out of the bag when it comes to AI. Here are a few ways to help students learn about the tool’s capabilities—and limitations:
- Challenge students to use generative AI tools like ChatGPT or Bing to create first drafts so they learn how to refine their prompts to get the most effective and accurate responses.
- Students in Craig Hurwitz’s FinTech class noted that asking for a simple, limited response in bullet points was the most effective starting point.
- Have students track where information was accurate and where the weak points in the response were. Was the information out of date, incorrectly sourced, or just plain wrong?
- Work with programs like Classwise that keep the topics and data limited to the class. This improves accuracy while still letting students learn how to navigate generative AI tools.
- Ask students to examine the behaviors of the tech companies running generative AI systems. Do they respond to calls for change? Do they have ethics teams to navigate AI-related issues?
- Make sure students are aware of limitations in the data. The internet is full of misinformation, so students should think critically about how and where these tools get their information.