Researchers at Duke University’s General Robotics Lab present CREW, a platform facilitating seamless interaction between humans and AI to enable more intuitive and effective partnerships
Artificial intelligence (AI) has already become an invisible but indispensable collaborator in our lives. It helps filter spam from your inbox, improves your Netflix recommendations, and, as an automotive copilot, suggests optimal routes, monitors blind spots, and assists with parking.
These seamless collaborations between people and AI allow us to complete daily tasks and achieve goals more efficiently. But as human-AI teaming becomes an integral part of our daily lives, it raises important questions: What roles should humans and AI play to best complement each other? How can different forms of human feedback accelerate AI training? What is the ideal level of trust humans should place in AI to enhance collaboration without risking over-reliance? How can we address decision-making biases in both humans and AI to ensure the two do not reinforce or amplify each other's errors?
To tackle these pressing questions and advance our understanding of human-AI teaming, researchers at Duke University have developed an innovative platform called CREW.
“The goal of any AI-human teaming is to harness the strengths of both by fostering dynamic, collaborative and adaptable relationships,” explained Boyuan Chen, professor of mechanical engineering and materials science, electrical and computer engineering, and computer science at Duke, where he also directs the Duke General Robotics Lab. “But until now, we’ve lacked a comprehensive way to study and improve these interactions. CREW changes that.”
Published on November 24 in the journal Transactions on Machine Learning Research (TMLR), CREW provides researchers with a versatile toolkit to explore the nuances of human-AI collaboration across various scientific disciplines.
“CREW is like a giant virtual playground where humans and AI can work together on diverse tasks,” explained Lingyu Zhang, the lead author and a first-year PhD student in Chen’s lab. “But rather than just playing for fun, we use these games to understand how humans and AI can work together most effectively.”
The CREW platform features several pre-built games, including bowling, treasure hunting and hide-and-seek, each designed to explore different aspects of collaboration. It also supports the integration of customized tasks, enabling researchers to tailor the platform to their specific research goals.
Unlike existing platforms that primarily focus on AI performance in isolation, CREW places a strong emphasis on the human element. One standout feature is its ability to capture continuous, nuanced feedback from humans, moving beyond the traditional discrete options of “good,” “bad” and “neutral.” By letting users hover a mouse cursor over a gradient scale and provide real-time feedback as the AI performs tasks, CREW facilitates a richer interaction. This approach not only enhances the quality of human feedback but also significantly accelerates the AI’s learning process, making collaboration more effective and adaptive.
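To make that idea concrete, here is a minimal, hypothetical sketch in Python (not CREW’s actual code) of how a cursor position over a gradient slider could be mapped to a continuous feedback value in [-1, 1] and logged against each AI step. Every name in it, such as `cursor_to_feedback`, is illustrative rather than part of CREW’s API.

```python
import time
from dataclasses import dataclass

@dataclass
class FeedbackSample:
    """One continuous feedback reading captured while the AI acts."""
    timestep: int      # index of the AI's environment step
    value: float       # feedback in [-1.0, 1.0]; sign encodes approval
    wall_time: float   # when the reading was taken

def cursor_to_feedback(cursor_x: float, slider_left: float, slider_width: float) -> float:
    """Map the cursor's horizontal position on a gradient slider to a
    continuous value in [-1, 1] (left = disapproval, right = approval)."""
    ratio = (cursor_x - slider_left) / slider_width
    ratio = min(max(ratio, 0.0), 1.0)   # clamp to the slider's extent
    return 2.0 * ratio - 1.0

# Example: record feedback for a few AI steps from hypothetical cursor positions.
cursor_trace = [120.0, 260.0, 380.0, 395.0]   # pixels, a stand-in for real mouse events
log = [
    FeedbackSample(t, cursor_to_feedback(x, slider_left=100.0, slider_width=300.0), time.time())
    for t, x in enumerate(cursor_trace)
]
for sample in log:
    print(f"step {sample.timestep}: feedback {sample.value:+.2f}")
```

Because the signal is continuous rather than a three-way label, an AI learner can in principle weight its updates by how strongly the human approves at each moment, which is the kind of richer interaction the paragraph above describes.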
CREW also offers advanced interfaces to collect passive physiological signals, such as eye movement, brain activity, heart rate, speech and written texts. This comprehensive dataset offers deeper insights into how humans interact with AI and opens new possibilities for designing more intuitive, adaptive and effective human-AI collaboration frameworks.
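As a purely illustrative example of how such multimodal signals might be aligned per timestep, the sketch below bundles feedback, gaze, heart rate, EEG samples and transcribed speech into one record. The field names are assumptions for illustration, not CREW’s real data schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class MultimodalRecord:
    """One synchronized snapshot of human signals aligned to an AI step.
    All field names are illustrative; CREW's actual schema may differ."""
    timestep: int
    feedback: Optional[float] = None          # continuous feedback in [-1, 1]
    gaze_xy: Optional[Tuple[int, int]] = None # eye-tracker screen coordinates
    heart_rate_bpm: Optional[float] = None    # from a wearable sensor
    eeg_channels: List[float] = field(default_factory=list)  # per-channel readings
    transcript: str = ""                      # speech-to-text or typed notes

# Example: assemble a small session log that a researcher could later analyze.
session = [
    MultimodalRecord(0, feedback=0.4, gaze_xy=(512, 300), heart_rate_bpm=72.0),
    MultimodalRecord(1, feedback=0.9, gaze_xy=(530, 310), heart_rate_bpm=74.5,
                     transcript="the agent is heading toward the target"),
]
feedback_values = [r.feedback for r in session if r.feedback is not None]
print(f"mean feedback over session: {sum(feedback_values) / len(feedback_values):.2f}")
```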
As part of this effort, CREW incorporates a set of cognitive tests designed to quantify traits that may impact teaming efficiency. In a benchmark study involving 50 adults, researchers found that certain cognitive skills, such as spatial reasoning and rapid decision-making, significantly influence how effectively a person can work with an AI agent in specific tasks.
“These results highlight exciting possibilities, such as enhancing human abilities through targeted training and identifying new factors that contribute to effective AI guidance to train faster and more responsive AI,” said Chen. “They also point to the potential for developing more adaptive training frameworks that not only improve AI but also enhance human skills, paving the way for stronger and more collaborative human-AI teams.”
CREW is fully open-source, inviting researchers worldwide to explore new possibilities in human-AI collaboration. Future updates aim to introduce more diverse tasks, including multiplayer scenarios with complex strategies and physics-based robotics environments. The team also plans to improve the platform’s processing and analysis of human physiological data, further advancing human-AI teaming research.
“We’re just scratching the surface,” said Zhang. “The potential for human-AI collaboration is enormous, and CREW gives us the tools to explore it systematically while actively shaping it to ensure these partnerships enhance human capabilities rather than replace what makes us uniquely human.”
Multiple universities, research institutions and government agencies have already begun experimenting with CREW in their research. Meanwhile, the team at the Duke General Robotics Lab is actively working to extend these efforts toward more scalable and interactive human-AI teaming research.
This work is supported in part by the ARL STRONG program under awards W911NF2320182 and W911NF2220113.
Lingyu Zhang, Zhengran Ji, Boyuan Chen. “CREW: Facilitating Human-AI Teaming Research.” Transactions on Machine Learning Research, 2024.
Project website: CREW: Facilitating Human-AI Teaming Research