Personal Robots
How to build socially engaging robots and interactive technologies that provide long-term social and emotional support, helping people live healthier lives, connect with others, and learn better.
Robots are an intriguing technology that straddles the physical and social worlds of people. Inspired by animal and human behavior, we aim to build capable robotic creatures with a "living" presence, and to gain a better understanding of how humans will interact with this new kind of technology. People will physically interact with these robots, communicate with them, understand them, and teach them, all in familiar human terms. Ultimately, such robots will possess the social savvy, physical adeptness, and everyday common sense to partake in people's daily lives in useful and rewarding ways.

Research Projects

  • AIDA: Affective Intelligent Driving Agent

    Cynthia Breazeal and Kenton Williams

    Drivers spend a significant amount of time multitasking behind the wheel. These dangerous behaviors, particularly texting while driving, lead to distraction and ultimately to accidents. Many in-car interfaces designed to address this issue still do not take a proactive role in assisting the driver, nor do they leverage aspects of the driver's daily life to make the driving experience more seamless. In collaboration with Volkswagen/Audi and the SENSEable City Lab, we are developing AIDA (Affective Intelligent Driving Agent), a robotic driver-vehicle interface that acts as a sociable partner. AIDA communicates through facial expressions and strong non-verbal cues, engaging the driver in social interaction. AIDA also uses the driver's mobile device as its face, which promotes safety, offers proactive driver support, and fosters deeper personalization for the driver.
  • Animal-Robot Interaction

    Brad Knox, Patrick McCabe, and Cynthia Breazeal

    Like people, dogs and cats live among technologies that affect their lives, yet little of this technology has been designed with pets in mind. We are developing systems that interact intelligently with animals to entertain, exercise, and empower them. Currently, we are building a laser-chasing game in which a ceiling-mounted webcam tracks the dog or cat and a computer-controlled laser moves with knowledge of the pet's position and movement. Machine learning will be applied to optimize the laser's strategy. We envision enabling owners to initiate and view the interaction remotely through a web interface, providing stimulation and exercise to pets when their owners are at work or otherwise cannot be present.
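
    As an illustration, the minimal control loop below tracks the pet with OpenCV and keeps the laser dot a fixed distance ahead of it; the LaserGimbal driver and the distance-keeping heuristic are hypothetical stand-ins for the real hardware interface and the learned strategy.

        import cv2
        import numpy as np

        class LaserGimbal:
            """Hypothetical pan/tilt laser driver (e.g., a serial servo board)."""
            def point_at(self, x: float, y: float) -> None:
                pass  # map image coordinates to servo angles here

        def pet_position(frame, bg_subtractor):
            """Estimate the pet's image position as the centroid of moving pixels."""
            mask = bg_subtractor.apply(frame)
            m = cv2.moments(mask, binaryImage=True)
            if m["m00"] < 1e-3:
                return None
            return np.array([m["m10"], m["m01"]]) / m["m00"]

        def run_game():
            cap = cv2.VideoCapture(0)                  # ceiling-mounted webcam
            bg = cv2.createBackgroundSubtractorMOG2()
            laser = LaserGimbal()
            dot = np.array([320.0, 240.0])             # current laser dot position
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                pet = pet_position(frame, bg)
                if pet is not None:
                    # Naive strategy: keep the dot just out of the pet's reach,
                    # fleeing along the pet-to-dot direction. A learned policy
                    # would replace this heuristic.
                    away = dot - pet
                    away /= np.linalg.norm(away) + 1e-6
                    dot = np.clip(pet + 120.0 * away, [0, 0], [639, 479])
                    laser.point_at(dot[0], dot[1])
            cap.release()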

  • Cloud-HRI

    Cynthia Breazeal, Nicholas DePalma, Adam Setapen and Sonia Chernova

    Imagine opening your eyes and being awake for only half an hour at a time. This is the life that robots traditionally live, owing to factors such as battery life and wear on prototype joints. Roboticists have typically muddled through this challenge by crafting handmade models of the world or by performing machine learning with synthetic data and, sometimes, real-world data. While robotics researchers have traditionally used large distributed systems for perception, planning, and learning, cloud-based robotics aims to link all of a robot's experiences. This movement aims to build large-scale machine learning algorithms that draw on experience from large groups of people, whether sourced from many tabletop robots or from many interactions with virtual agents. Large-scale robotics aims to change embodied AI the way large-scale data changed non-embodied AI.
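
    The sketch below shows the pooling idea in its simplest form, assuming a shared HTTP experience store; the endpoint and record schema are placeholders rather than an actual service.

        import json
        import urllib.request

        STORE_URL = "https://example.org/experience"   # hypothetical service

        def upload_episode(robot_id: str, episode: list) -> None:
            """Push one robot's (observation, action, outcome) records to the pool."""
            body = json.dumps({"robot_id": robot_id, "episode": episode}).encode()
            req = urllib.request.Request(
                STORE_URL, data=body,
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)

        def download_pool() -> list:
            """Fetch everyone's experiences, ready for large-scale learning."""
            with urllib.request.urlopen(STORE_URL) as resp:
                return json.loads(resp.read())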

  • Computationally Modeling Interpersonal Trust Using Nonverbal Behavior for Human-Robot Interactions

    Jin Joo Lee, Brad Knox, Cynthia Breazeal, David DeSteno (Northeastern University) and Fei Sha (University of Southern California)

    After meeting someone for the first time, we come away with an intuitive sense of how much we can trust that person. Nonverbal behaviors, such as gaze patterns, body language, and facial expressions, have been explored as honest or “leaky” signals that provide salient cues about trust. Our research works toward a computational model for recognizing interpersonal trust in social interactions. By observing the trust-related nonverbal cues expressed during an interaction, we aim to design a machine learning algorithm capable of differentiating whether a person finds their socially assistive robot to be a trustworthy or untrustworthy partner. With so much of our communication carried in these nonverbal streams, we hope that enabling robots to understand such signals will let us design robots that communicate with us more effectively.
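
    As a toy sketch of the recognition step, the snippet below summarizes an interaction as counts of candidate cues and fits a binary classifier; the cue set, the data, and the choice of logistic regression are illustrative, not the study's actual features or model.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        CUES = ["gaze_aversion", "arms_crossed", "lean_back", "face_touch"]

        def featurize(cue_counts: dict) -> np.ndarray:
            """One interaction -> vector of how often each cue was observed."""
            return np.array([cue_counts.get(c, 0) for c in CUES], dtype=float)

        # Toy training set: one row per interaction; 1 = partner rated trustworthy.
        X = np.array([featurize({"gaze_aversion": 1}),
                      featurize({"arms_crossed": 3, "lean_back": 2}),
                      featurize({}),
                      featurize({"face_touch": 4, "gaze_aversion": 2})])
        y = np.array([1, 0, 1, 0])

        model = LogisticRegression().fit(X, y)
        print(model.predict_proba([featurize({"lean_back": 1})])[0, 1])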

  • DragonBot: Android Phone Robots for Long-Term HRI

    Adam Setapen, Natalie Freed, and Cynthia Breazeal

    DragonBot is a new platform built to support long-term interactions between children and robots. The robot runs entirely on an Android cell phone, which displays an animated virtual face. Additionally, the phone provides sensory input (camera and microphone) and fully controls the actuation of the robot (motors and speakers). Most importantly, the phone always has an Internet connection, so a robot can harness cloud-computing paradigms to learn from the collective interactions of multiple robots. To support long-term interactions, DragonBot is a "blended-reality" character: if you remove the phone from the robot, a virtual avatar appears on the screen and the user can still interact with the virtual character on the go. Costing less than $1,000, DragonBot was specifically designed as a low-cost platform that can support longitudinal human-robot interactions "in the wild."
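
    A schematic sketch of the blended-reality hand-off is below; the dock check and the two animation backends are placeholder stubs for logic that, on the real platform, runs on the Android phone itself.

        def docked() -> bool:
            """Placeholder: detect the robot body (e.g., via a USB accessory check)."""
            return True

        def drive_motors(expression: str) -> None:
            print(f"[body] motor animation: {expression}")      # physical robot

        def animate_avatar(expression: str) -> None:
            print(f"[screen] virtual animation: {expression}")  # phone-only avatar

        def express(expression: str) -> None:
            """Route one expressive behavior to whichever embodiment is present."""
            (drive_motors if docked() else animate_avatar)(expression)

        express("happy_wiggle")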

  • Huggable: A Robotic Companion for Long-Term Health Care, Education, and Communication

    Cynthia Breazeal, Walter Dan Stiehl, Robert Toscano, Jun Ki Lee, Heather Knight, Sigurdur Orn Adalgeirsson, Jeff Lieberman, and Jesse Gray

    The Huggable is a new type of robotic companion for health care, education, and social communication applications. It is much more than a fun, interactive robotic companion: it functions as an essential member of a triadic interaction, meant not to replace any particular person in a social network but to enhance that network. The Huggable is being designed with a full-body sensitive skin with over 1,500 sensors, quiet back-drivable actuators, video cameras in the eyes, microphones in the ears, an inertial measurement unit, a speaker, and an embedded PC with 802.11g wireless networking. An important design goal is to make the technology invisible to the user: you should experience the Huggable not as a robot but as a richly interactive teddy bear.
  • Mind-Theoretic Planning for Robots

    Cynthia Breazeal and Sigurdur Orn Adalgeirsson

    Mind-Theoretic Planning (MTP) is a technique for robots to plan in social domains. This system takes into account probability distributions over the initial beliefs and goals of people in the environment that are relevant to the task, and creates a prediction of how they will rationally act on their beliefs to achieve their goals. The MTP system then proceeds to create an action plan for the robot that simultaneously takes advantage of the effects of anticipated actions of others and also avoids interfering with them.
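
    The toy example below renders the core idea under heavy simplification: a discrete set of hypotheses about one person's goal, a predicted rational action per hypothesis, and robot action selection in expectation; the domain, utilities, and interference model are invented for illustration.

        # Hypotheses about the person: (probability, action they would take).
        hypotheses = [(0.7, "take_red_block"),
                      (0.3, "take_blue_block")]
        robot_actions = ["take_red_block", "take_blue_block", "wait"]

        def utility(robot_action: str, human_action: str) -> float:
            if robot_action == "wait":
                return 0.0
            if robot_action == human_action:
                return -1.0          # interference: both grab the same block
            return 1.0               # complementary action advances the task

        # Choose the robot action with the highest expected utility.
        best = max(robot_actions,
                   key=lambda a: sum(p * utility(a, h) for p, h in hypotheses))
        print(best)                  # -> "take_blue_block"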

  • Robot Learning from Human-Generated Rewards

    Brad Knox, Robert Radway, Tom Walsh, and Cynthia Breazeal

    To serve us well, robots and other agents must understand our needs and how to fulfill them. To that end, our research develops robots that empower humans by interactively learning from them. Interactive learning methods enable technically unskilled end-users to designate correct behavior and communicate their task knowledge to improve a robot's task performance. This research on interactive learning focuses on algorithms that facilitate teaching by signals of approval and disapproval from a live human trainer. We operationalize these feedback signals as numeric rewards within the machine-learning framework of reinforcement learning. In comparison to the complementary form of teaching by demonstration, this feedback-based teaching may require less task expertise and place less cognitive load on the trainer. Envisioned applications include human-robot collaboration and assistive robotic devices for handicapped users, such as myoelectrically controlled prosthetics.
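
    A minimal sketch of feedback-based teaching in this spirit appears below, in the style of myopic learning from human reward (as in TAMER); the one-dimensional task and keyboard feedback channel are stand-ins for a real robot and a live trainer.

        import random
        from collections import defaultdict

        ACTIONS = ["left", "right"]
        q = defaultdict(float)       # value of (state, action), fit to human reward
        alpha, epsilon = 0.5, 0.1

        def human_feedback() -> float:
            """Stand-in for a live trainer: '+' approves, '-' disapproves."""
            key = input("feedback (+ / - / enter): ")
            return {"+": 1.0, "-": -1.0}.get(key, 0.0)

        state = 0
        for _ in range(20):
            action = (random.choice(ACTIONS) if random.random() < epsilon
                      else max(ACTIONS, key=lambda a: q[(state, a)]))
            reward = human_feedback()
            # Myopic update: fit the human's reward signal directly rather
            # than a discounted return.
            q[(state, action)] += alpha * (reward - q[(state, action)])
            state += 1 if action == "right" else -1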

  • Robotic Textiles

    Cynthia Breazeal and Adam Whiton

    We are investigating e-textiles and fiber-electronics to develop unique soft-architecture robotic components. We have been developing large-area force sensors that utilize quantum tunneling composites integrated into textiles, creating fabrics that can cover the body or surface of a robot and sense touch. By using e-textiles we shift from the metaphor of a sensing skin, often used in robotics, to one of sensing clothing. We incorporated apparel design and construction techniques to develop modular e-textile surfaces that can be easily attached to a robot and integrated into a robotic system. Adding new abilities to a robot system can become as simple as changing its clothes. Our goal is to study social touch interaction and communication between people and robots while exploring the benefits of textiles and the textile aesthetic.
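
    Sampling such a sensing garment might look like the loop below, where read_adc is a hypothetical driver for the multiplexed quantum-tunneling-composite cells; the grid size and touch threshold are placeholders.

        ROWS, COLS = 8, 8
        TOUCH_THRESHOLD = 0.2        # normalized pressure

        def read_adc(row: int, col: int) -> float:
            """Stand-in for the real multiplexer/ADC driver."""
            return 0.0

        def scan_touches() -> list:
            """Return (row, col, pressure) for every cell pressed past threshold."""
            return [(r, c, p)
                    for r in range(ROWS) for c in range(COLS)
                    if (p := read_adc(r, c)) > TOUCH_THRESHOLD]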

  • Socially Assistive Robotics: An NSF Expedition in Computing

    Cynthia Breazeal, Sooyeon Jeong, Brad Knox, Jacqueline Marie Kory, Jin Joo Lee, David Robert, Edith Ackermann, Catherine Havasi, and Kasia Hayden, with Tufts University, University of Southern California, Stanford University, Willow Garage, and Yale University

    Our mission is to develop the computational techniques that will enable the design, implementation, and evaluation of "relational" robots that encourage social, emotional, and cognitive growth in children, including those with social or cognitive deficits. Funding for the project comes from the NSF Expeditions in Computing program. This Expedition has the potential to substantially impact the effectiveness of education and healthcare, and to enhance the lives of children and other groups that require specialized support and intervention. In particular, the MIT effort focuses on developing second-language learning companions for preschool-aged children, ultimately for English as a Second Language (ESL).

  • Storytelling in the Preschool of the Future

    David Robert

    Using the Preschool of the Future environment, children can create stories that come to life in the real world. We are developing interfaces that enable children to author stories in the physical environment: stories where robots are the characters and children are not only the observers, but also the choreographers and actors. To do this, children author stories and robot behaviors using a simple digital painting interface. By combining the physical affordances of painting with digital media and robotic characters, the stories children create play out around them. Programming in this environment becomes a group activity when multiple children use these tangible interfaces to program advanced robot behaviors.
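
    A toy sketch of the painting-to-behavior mapping: each stroke's color selects a behavior and its path becomes the character's trajectory; the color-to-behavior table is invented for illustration.

        BEHAVIOR_BY_COLOR = {"red": "angry_stomp", "blue": "calm_wander",
                             "yellow": "happy_spin"}

        def stroke_to_program(color: str, path: list) -> dict:
            """Turn one painted stroke into a robot character's scripted action."""
            return {"behavior": BEHAVIOR_BY_COLOR.get(color, "idle"),
                    "waypoints": path}

        print(stroke_to_program("red", [(0.0, 0.0), (1.0, 2.0)]))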

  • The Helping Hands

    Cynthia Breazeal, David Nunez and Tod Machover

    Two robot arms are in constant motion and hard at work. From a distance, they can be seen considering their tasks, communicating with each other, and struggling to make sense of their abstract mission. Participants are encouraged to approach the hands and engage the robots in a dialog, offering assistance or advice. The project demonstrates a chatter system that simulates affective conversation and a parametric animation engine that provides dynamic, autonomous, character-driven movement in the robots. This project premiered at The Other Festival 2013.
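
    A compact sketch of what "parametric animation" can mean here: a rhythmic base motion whose character is shaped by a few expressive parameters; the joint model and parameter names are illustrative.

        import math

        def joint_angle(t: float, energy: float = 1.0, openness: float = 0.0,
                        tempo: float = 0.5) -> float:
            """One joint's angle (radians): a base oscillation whose amplitude,
            offset, and speed act as expressive parameters."""
            return openness + 0.4 * energy * math.sin(2 * math.pi * tempo * t)

        # A "struggling" arm might use low energy and a closed posture:
        print(joint_angle(1.25, energy=0.3, openness=-0.2))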

  • TinkRBook: Reinventing the Reading Primer

    Cynthia Breazeal, Angela Chang, and David Nunez

    TinkRBook is a storytelling system that introduces a new concept of reading called textual tinkerability. Textual tinkerability uses storytelling gestures to expose the text-concept relationships within a scene, prompting readers to become more physically active and expressive as they explore concepts in reading together. TinkRBooks are interactive storybooks that prompt interactivity in a subtle way, enhancing communication between parents and children during shared picture-book reading. TinkRBooks encourage positive reading behaviors in emergent literacy: parents act out the story to control the words onscreen, demonstrating print-referencing and dialogic-questioning techniques, while young children actively explore the abstract relationship between printed words and their meanings, even before that relationship is fully understood. By making story elements alterable within a narrative, readers can learn to read by playing with how word choices affect the storytelling experience. Recently, this research has been applied in developing countries.
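
    A minimal sketch of textual tinkerability as a data binding: a single piece of scene state backs both the illustration and the sentence, so tinkering with the scene rewrites the text; the duck-color toy below is illustrative.

        class TinkrScene:
            """One shared state rendered two ways: as a picture and as text."""
            COLORS = ["yellow", "blue", "red"]

            def __init__(self):
                self.color = "yellow"

            def sentence(self) -> str:
                return f"The {self.color} duck swims."

            def tap_duck(self) -> None:
                """Tapping the duck cycles its color, re-rendering scene and text."""
                i = self.COLORS.index(self.color)
                self.color = self.COLORS[(i + 1) % len(self.COLORS)]

        scene = TinkrScene()
        scene.tap_duck()
        print(scene.sentence())      # -> "The blue duck swims."
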
  • World Literacy Tablets

    Cynthia Breazeal, David Nunez, Maryanne Wolf (Tufts) and Robin Morris (GSU)

    We are developing a system of early literacy apps, games, toys, and robots that will triage how children are learning, diagnose literacy deficits, and deploy dosages of content to encourage reading. Currently, over 200 Android-based tablets have been sent to children around the world; these devices are instrumented to provide a very detailed picture of how kids are using this technology. We are using this big data to discover usage and learning models that will inform future educational development.
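
    An illustrative sketch of the instrumentation pipeline: tablets append structured events to a local log, and an offline pass aggregates them into simple per-child usage features; the event schema is an assumption.

        import json
        import time
        from collections import Counter

        def log_event(path: str, child_id: str, app: str, action: str) -> None:
            """Append one interaction event to the tablet's local log."""
            event = {"t": time.time(), "child": child_id,
                     "app": app, "action": action}
            with open(path, "a") as f:
                f.write(json.dumps(event) + "\n")

        def usage_profiles(path: str) -> dict:
            """Count app launches per child: one crude input to a usage model."""
            profiles = {}
            with open(path) as f:
                for line in f:
                    e = json.loads(line)
                    profiles.setdefault(e["child"], Counter())[e["app"]] += 1
            return profiles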

  • Zipperbot: Robotic Continuous Closure for Fabric Edge Joining

    Cynthia Breazeal and Adam Whiton

    In robotics, the emerging field of electronic textiles and fiber-electronics represents a shift in morphology away from hard, rigid mechatronic components toward soft architectures, and more specifically toward flexible planar surface morphologies. It is thus essential to determine how a robotic system might actuate flexible surfaces for donning and doffing actions. Zipperbot is a robotic continuous closure system for joining fabrics and textiles. By augmenting traditional apparel closure techniques and hardware with robotic attributes, we can incorporate them into robotic systems for surface manipulation. Through actuated closures, textiles could shape-shift or self-assemble into a variety of forms.