SixthSense: A Wearable, Gestural Interface to Augment Our World
Turning any surface into a touch-screen display
SixthSense (also known as WUW: Wear Ur World) is a wearable, gestural interface that augments our physical world with digital information, and lets us use natural hand gestures to interact with that information.
SixthSense brings intangible, digital information into the tangible world, and allows us to interact with this information via natural hand gestures. SixthSense frees information from its confines, seamlessly integrating it with reality, thus making the entire world your computer.
The SixthSense prototype comprises a pocket projector, a mirror, and a camera worn in a pendant-like mobile device; both the projector and the camera are connected to a mobile computing device in the user's pocket. The projector turns the surfaces and physical objects around us into digital interfaces by projecting information onto them, while the camera sees what the user sees. Using simple computer-vision techniques, the software processes the video stream captured by the camera, follows the locations of colored markers worn on the user's fingertips, and interprets the marker movements as gestures for interacting with the projected application interfaces.
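The marker-tracking step described above can be sketched in a few lines. The following is a minimal illustration, not the prototype's actual code: it assumes a video frame already decoded into rows of (r, g, b) tuples and a known marker color, thresholds every pixel against that color, and returns the centroid of the matching region. The function name and tolerance parameter are invented for this sketch.

```python
def track_marker(frame, target, tol=30):
    """Locate a colored fingertip marker in one frame.

    frame:  list of rows, each row a list of (r, g, b) tuples
    target: (r, g, b) color of the marker
    tol:    per-channel tolerance for a pixel to count as the marker

    Returns the (row, col) centroid of matching pixels, or None
    if the marker is not visible in this frame.
    """
    tr, tg, tb = target
    sum_y = sum_x = count = 0
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            # Simple color threshold: the colored markers make this
            # cheap test sufficient to find the fingertips.
            if abs(r - tr) <= tol and abs(g - tg) <= tol and abs(b - tb) <= tol:
                sum_y += y
                sum_x += x
                count += 1
    if count == 0:
        return None
    return (sum_y // count, sum_x // count)
```

Running this per frame yields a stream of fingertip positions; comparing positions across frames is what turns raw tracking into gestures.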
The current SixthSense prototype supports several types of gesture-based interactions, demonstrating the usefulness, viability, and flexibility of the system. The prototype costs approximately $350 to build.
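As one concrete, hypothetical example of the kind of gesture interpretation such a system performs, a pinch-to-zoom interaction can be classified from nothing more than the change in distance between two tracked fingertip markers across frames. The function names and threshold below are illustrative assumptions, not part of the actual prototype:

```python
import math

def marker_distance(p, q):
    """Euclidean distance between two (x, y) marker positions."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def interpret_pinch(dist_now, dist_prev, threshold=5.0):
    """Map the change in fingertip separation to a zoom gesture.

    Fingertips moving apart by more than `threshold` pixels reads
    as zoom-in, moving together as zoom-out, otherwise hold.
    """
    if dist_now - dist_prev > threshold:
        return "zoom_in"
    if dist_prev - dist_now > threshold:
        return "zoom_out"
    return "hold"
```

For example, if the two markers were 40 pixels apart in the previous frame and are 60 pixels apart now, `interpret_pinch(60.0, 40.0)` classifies the motion as a zoom-in.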
TOFU is a project exploring new ways of robotic social expression by leveraging techniques that have been used in 2D animation for decades. Disney Animation Studios pioneered animation principles such as "squash and stretch" and "secondary motion" in the 1950s. Such techniques have since been used widely by animators, but are not commonly applied to robot design. TOFU, which is named after the squashing and stretching food product, can also squash and stretch. Clever use of compliant materials and elastic coupling provides an actuation method that is vibrant yet robust. Instead of eyes actuated by motors, TOFU uses inexpensive OLED displays, which offer highly dynamic and lifelike motion.
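The "squash and stretch" principle mentioned above has a simple geometric core: when a character squashes or stretches along one axis, the other axis compensates so that apparent volume (area, in 2D) stays constant. A minimal sketch of that rule, with invented names and no connection to TOFU's actual control code, might look like:

```python
def squash_and_stretch(width, height, vertical_scale):
    """Scale a 2D shape vertically while preserving its area.

    Squashing (vertical_scale < 1) widens the shape and stretching
    (vertical_scale > 1) narrows it, as in classic animation.
    Returns the new (width, height).
    """
    if vertical_scale <= 0:
        raise ValueError("vertical_scale must be positive")
    return (width / vertical_scale, height * vertical_scale)
```

Squashing a 2-by-4 shape to half its height yields a 4-by-2 shape with the same area, which is what makes the deformation read as soft and alive rather than as simple shrinking.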