Objectifier only knows how to turn on and off, but it has an eye that’s always on the alert and an entire neural network in its Raspberry Pi. Enough for it to learn simple tricks, just like a pet.
Objectifier is an elegant box made from natural wood that recalls its Nordic origins. It’s also the graduation project of Bjørn Karmann at the Copenhagen Institute of Interaction Design.
At first sight, you might mistake it for another smart home assistant that you command verbally, like Amazon's listening and talking Alexa. But Objectifier can't hear: it learns by watching, thanks to machine learning. A second notable difference is that there is no code between Objectifier and you. The relationship is closer to training a dog, which doesn't necessarily understand what you're saying but gets very excited the moment you pick up the leash.
“As a maker, I wanted to bring the power of machine learning into the hands of everyday people.”
Bjørn Karmann
Objectifier, equipped with a 220V outlet into which you can plug any electric device or appliance, knows how to do two things: turn on and turn off. With its camera and its learning software, it can interpret an object, a gesture or a movement as a command.
Basic learning concept of Objectifier:
Everything comes down to the creativity of the user, who chooses which event to associate with the switch using a minimalist yes/no application. While training Objectifier does require patience, the satisfaction in the end is greater than with a preprogrammed command.
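The yes/no training loop described above can be sketched in miniature. This is a hypothetical illustration, not the project's actual code: the user shows the camera example scenes, labels each one "yes" (turn on) or "no" (stay off), and a simple nearest-neighbour classifier learns to map image features to the switch state. The `features` function and the toy 2D "frames" are assumptions made for the sake of a self-contained example.

```python
import math

def features(frame):
    """Reduce a frame (a 2D list of pixel brightness values) to a tiny
    feature vector: mean brightness and a rough left/right balance."""
    flat = [p for row in frame for p in row]
    mean = sum(flat) / len(flat)
    half = len(frame[0]) // 2
    left = sum(p for row in frame for p in row[:half])
    right = sum(p for row in frame for p in row[half:])
    return (mean, left - right)

class YesNoTrainer:
    """Stores user-labelled examples; predicts by 1-nearest-neighbour."""
    def __init__(self):
        self.examples = []  # list of (feature_vector, label) pairs

    def train(self, frame, label):
        # The user presses "yes" or "no" in the app for each example.
        self.examples.append((features(frame), label))

    def predict(self, frame):
        # Return the label of the closest stored example.
        f = features(frame)
        return min(self.examples, key=lambda ex: math.dist(ex[0], f))[1]

trainer = YesNoTrainer()
trainer.train([[9, 9, 0, 0], [9, 9, 0, 0]], "yes")  # gesture present
trainer.train([[1, 1, 1, 1], [1, 1, 1, 1]], "no")   # idle scene
print(trainer.predict([[8, 9, 1, 0], [9, 8, 0, 1]]))  # → yes
```

The point of the sketch is the interaction model: the user never writes a rule, they only supply labelled examples, and patience during training substitutes for programming.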
Objectifier user testing:
How does it work? Objectifier is based on spatial programming, where objects and gestures are the equivalent of functions in a program. Bjørn Karmann went through five prototypes, all based on the open-source program Wekinator, which can learn from various real-time inputs, including computer vision.
The different versions experimented with making the object as intuitive as possible for non-programmers. For example, the first prototype, Pupil, handled the basic interactions (data validation, processing and feedback) with physical push buttons, which later moved into the application. On the hardware side, the final prototype is built around a Raspberry Pi 3, a Pi Zero and a Pi Camera, running Wekinator, Processing and openFrameworks.
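Since the classifier's predictions arrive as a continuous stream while the outlet can only be on or off, some glue logic has to sit between the two. A minimal sketch of that last step, under the assumption (not stated in the project's documentation) that hysteresis thresholds are used to keep a borderline prediction from flickering the 220V relay:

```python
class Switch:
    """Turn a streamed prediction score into a stable on/off state."""
    def __init__(self, on_above=0.7, off_below=0.3):
        self.on_above = on_above    # score needed to switch on
        self.off_below = off_below  # score needed to switch off
        self.is_on = False

    def update(self, score):
        """Feed one prediction score in [0, 1]; return the relay state."""
        if not self.is_on and score >= self.on_above:
            self.is_on = True   # here one would drive the relay pin high
        elif self.is_on and score <= self.off_below:
            self.is_on = False  # and here drive it low again
        return self.is_on

sw = Switch()
states = [sw.update(s) for s in (0.1, 0.5, 0.8, 0.6, 0.4, 0.2)]
print(states)  # mid-range scores (0.5, 0.6, 0.4) never toggle the relay
```

The gap between the two thresholds means a score wandering around the middle leaves the appliance in its current state, so the lamp doesn't stutter while the user is mid-gesture.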
Bjørn Karmann, 24, had already been noticed during the 2016 edition of YouFab, the international digital fabrication competition organized by the FabCafe network (which originated in Tokyo), where he was a finalist with his geographical globe Newsglobus. He has since found a job as a creative technologist at Tellart in Amsterdam. His latest creation as a multidisciplinary interaction designer is Pyrograph, made in collaboration with Nicolas Armand and Lars Kaltenbach: a soldering-iron printer that produces images by burning them.