The smartphone-enabled, one-foot-tall robot, called Shimi, is billed as an interactive “musical buddy.”
“Shimi is designed to change the way that people enjoy and think about their music,” said Professor Gil Weinberg, director of Georgia Tech’s Center for Music Technology and the robot’s creator.
Shimi is essentially a docking station with a “brain” powered by an Android phone. Once docked, the robot gains the sensing and musical generation capabilities of the user’s mobile device. In other words, if there’s an “app for that,” Shimi is ready.
For instance, by using the phone’s camera and face-detecting software, the bot can follow a listener around the room and position its “ears,” or speakers, for optimal sound.
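The tracking idea can be illustrated with a small sketch (a hypothetical helper, not Shimi's actual code): once face-detection software reports where the listener is in the camera frame, the offset from the frame's center maps to the angle the robot should turn its speakers.

```python
def pan_angle(face_center_x, frame_width, horizontal_fov_deg=60.0):
    """Map a detected face's x-coordinate to a pan angle for the speakers.

    face_center_x: pixel x of the face's bounding-box center
    frame_width:   width of the camera frame in pixels
    horizontal_fov_deg: the camera's horizontal field of view (an assumed value)

    Returns degrees to rotate: negative = turn left, positive = turn right.
    """
    # Offset of the face from the frame's center, as a fraction of the half-width
    offset = (face_center_x - frame_width / 2) / (frame_width / 2)
    # Scale that fraction by half the field of view to get a rotation angle
    return offset * (horizontal_fov_deg / 2)

# A face centered in a 640-pixel-wide frame needs no rotation:
print(pan_angle(320, 640))   # → 0.0
# A face at the right edge calls for a 30-degree turn (half the 60° field of view):
print(pan_angle(640, 640))   # → 30.0
```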
Another recognition feature is based on rhythm and tempo. If the user taps or claps a beat, Shimi analyzes it, scans the phone’s musical library and immediately plays the song that best matches the suggestion.
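The tempo-matching step can be sketched in a few lines (a toy illustration under assumed inputs, not Georgia Tech's actual algorithm): estimate beats per minute from the intervals between taps, then pick the library song whose pre-analyzed tempo is closest.

```python
def tempo_from_taps(tap_times):
    """Estimate tempo in beats per minute from a list of tap timestamps (seconds)."""
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    avg_interval = sum(intervals) / len(intervals)
    return 60.0 / avg_interval

def best_match(tapped_bpm, library):
    """Return the song whose stored tempo is closest to the tapped tempo.

    library: dict mapping song title -> tempo in BPM (assumed already analyzed)
    """
    return min(library, key=lambda song: abs(library[song] - tapped_bpm))

# Taps half a second apart imply 120 beats per minute...
bpm = tempo_from_taps([0.0, 0.5, 1.0, 1.5])   # → 120.0
# ...so the 118-BPM track is the closest match in this toy library:
library = {"Song A": 90, "Song B": 118, "Song C": 140}
print(best_match(bpm, library))   # → Song B
```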
Once the music starts, Shimi dances to the rhythm.
“Many people think that robots are limited by their programming instructions,” said Mason Bretan, a Ph.D. candidate in music technology.
“Shimi shows us that robots can be creative and interactive,” he said.
Future apps in the works will let the user shake his or her head in disagreement, or wave a hand in the air, to tell Shimi to skip to the next song or to raise or lower the volume.
The robot will also be able to recommend new music based on the user’s song choices and provide feedback on the user’s playlist.
The robot will be unveiled Wednesday at the Google I/O conference in San Francisco, where a band of three Shimi robots will strut their stuff for guests, dancing in sync to music created in the lab and composed to match their movements.