
Our devices, ourselves

By Pattie Maes

Media coverage of AI in the past few years has often focused on how the technology might cost people their jobs. But I think a much more exciting possibility is a future in which people are augmented with intelligent interfaces, combining human decision-making with machine intelligence and elevating both. In my research group at the MIT Media Lab, we like to talk about intelligence augmentation rather than artificial intelligence, and we envision a future in which interaction with our devices is more natural and intimate.

There are three ways in which many of us would like to see our devices change. First, we now live in two worlds, the physical and the digital, and they are not well integrated; we are constantly forced to multitask and shift our attention between them. Second, while today’s personal devices provide access to the world’s information, they do little to support other capacities important to success, such as attention, motivation, memory, creativity, and emotional regulation. And third, our devices pick up only the very deliberate inputs we give them by typing, swiping, and speaking. If they had access to more implicit inputs, such as our context, behavior, and mental state, they could offer assistance without requiring so much instruction.

Today’s devices aggregate more and more information about their users. In the future, they will likely also gather data on the surrounding environment and the current situation, perhaps by analyzing what we are looking at or sensing what our hands are doing. This context will enable our devices to respond not only to explicit intent and well-defined actions but also to our state of mind, unspoken preferences, and even desires. As these systems gain greater awareness of users and their context, they will form predictions about users’ behaviors and intentions.