Dancing With AI: MIT Students Experiment With Interactive Intelligence
Students from MIT’s Interaction Intelligence course (4.043/4.044) presented a series of projects at the 38th annual Neural Information Processing Systems (NeurIPS) conference in December 2024, exploring new ways AI can shape creativity, education, and human interaction.
In a Rush? Here are the Quick Facts!
- Be the Beat generates music from dance movements using AI and PoseNet.
- A Mystery for You teaches fact-checking through an AI-powered, cartridge-based game.
- Memorscope creates AI-generated shared memories from face-to-face interactions.
The conference, one of the most recognized in artificial intelligence and machine learning research, brought together over 16,000 attendees in Vancouver, according to MIT’s press release.
Under the guidance of Professor Marcelo Coelho from MIT’s Department of Architecture, the students developed interactive AI-driven projects that examine the evolving role of AI in everyday experiences.
One of the projects, Be the Beat, developed by Ethan Chang and Zhixing Chen, integrates AI into dance by generating music that adapts to a dancer’s movements.
Using PoseNet to analyze motion and a language model to interpret style and tempo, the system shifts the relationship between dance and music, allowing movement to shape sound rather than the other way around. Participants described it as an alternative approach to choreographing and discovering new dance styles.
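A pose-to-music pipeline of this kind can be sketched in a few lines. The snippet below is a minimal illustration, not the actual Be the Beat implementation: it assumes PoseNet-style output, where each video frame yields a list of (x, y) joint coordinates, and maps the average joint displacement between frames to a tempo in beats per minute. The function names and threshold values are hypothetical.

```python
# Hypothetical sketch: mapping dance motion to music tempo.
# Assumes PoseNet-style keypoints: each frame is a list of (x, y)
# joint positions normalized to the image. Names and constants are
# illustrative, not from the Be the Beat project.

import math

def movement_energy(prev_frame, curr_frame):
    """Average displacement of tracked joints between two frames."""
    dists = [math.dist(p, c) for p, c in zip(prev_frame, curr_frame)]
    return sum(dists) / len(dists)

def tempo_from_energy(energy, base_bpm=80, scale=400, max_bpm=180):
    """Map motion energy to beats per minute, capped at max_bpm."""
    return min(max_bpm, base_bpm + scale * energy)

# Example: three joints, the second frame slightly shifted.
frame_a = [(0.50, 0.50), (0.40, 0.60), (0.60, 0.60)]
frame_b = [(0.52, 0.50), (0.40, 0.63), (0.61, 0.58)]
bpm = tempo_from_energy(movement_energy(frame_a, frame_b))
```

In the full system described above, a measure like this would feed a language model or music generator rather than a fixed formula; the sketch only shows the core idea of letting movement drive sound.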
Another project, A Mystery for You, by Mrinalini Singha and Haoheng Tang, is an educational game designed to develop fact-checking skills in young learners. The game presents AI-generated news alerts that players must investigate using a tangible cartridge-based interface. By eliminating touchscreen interactions, the design encourages a slower, more deliberate engagement with information, contrasting with the rapid consumption of digital news.
Keunwook Kim’s Memorscope examines memory and human interaction through AI. The device allows two people to look at each other through opposite ends of a tube-like structure, with AI generating a collective memory based on their shared perspective.
By incorporating models from OpenAI and Midjourney, the system produces evolving interpretations of these interactions, reframing how memories are recorded and experienced.
Narratron, by Xiying (Aria) Bao and Yubo Zhao, introduces AI into traditional storytelling through an interactive projector. The system interprets hand shadows as characters and generates a real-time narrative, combining visual and auditory elements to engage users in an AI-assisted form of shadow play.
Karyn Nakamura’s Perfect Syntax explores AI’s role in video editing and motion analysis. The project uses machine learning to manipulate and reconstruct video fragments, questioning how technology interprets movement and time.
By examining the relationship between computational processes and human perception, the work reflects on the ways AI reshapes visual media.
Together, these projects examine AI’s potential beyond automation, focusing on its role in shaping artistic expression, critical thinking, and shared experiences.