art
health
artificial intelligence
human-machine interaction
learning + teaching
robotics
design
technology
architecture
consumer electronics
kids
music
human-computer interaction
wearable computing
bioengineering
politics
data
sensors
networks
machine learning
entertainment
social science
environment
cognition
economy
space
wellbeing
history
storytelling
computer science
interfaces
covid19
ethics
engineering
developing countries
prosthetics
creativity
community
civic technology
alumni
biology
privacy
social robotics
communications
augmented reality
social media
neurobiology
imaging
urban planning
public health
computer vision
virtual reality
synthetic biology
industry
biotechnology
food
transportation
biomechanics
energy
affective computing
government
social networks
data visualization
social change
ocean
behavioral science
fabrication
climate change
data science
zero gravity
medicine
startup
women
agriculture
blockchain
cognitive science
materials
genetics
prosthetic design
racial justice
manufacturing
diversity
3d printing
gaming
neural interfacing and control
banking and finance
fashion
ecology
electrical engineering
construction
cryptocurrency
bionics
microfabrication
civic action
human augmentation
security
open source
systems
performance
natural language processing
marginalized communities
language learning
healthcare
internet of things
autonomous vehicles
perception
collective intelligence
microbiology
sustainability
interactive
mechanical engineering
visualization
water
social justice
mapping
physiology
physics
nanoscience
nonverbal behavior
code
chemistry
voice
rfid
cities
long-term interaction
clinical science
sports and fitness
trust
biomedical imaging
hacking
orthotic design
networking
pharmaceuticals
algorithms
mechatronics
soft-tissue biomechanics
open access
assistive technology
autism research
textiles
mental health
member company
law
gender studies
real estate
internet
culture
exhibit
wireless
science
news
cells
business
decision-making
asl
How to build intelligent music systems out of interacting audio-processing agents.
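The teaser above names an architecture — interacting audio-processing agents — without spelling it out. Below is a minimal sketch of one plausible reading, assuming a shared-blackboard design; the class names, target level, and gain cap are illustrative assumptions, not the actual system.

```python
# A minimal sketch of "interacting audio-processing agents" (assumption:
# the source does not describe the real architecture). Each agent
# transforms an audio buffer and may read state that other agents
# publish on a shared blackboard.

import numpy as np

class Agent:
    def process(self, buffer: np.ndarray, blackboard: dict) -> np.ndarray:
        raise NotImplementedError

class LevelTracker(Agent):
    """Publishes the RMS level of the current buffer."""
    def process(self, buffer, blackboard):
        blackboard["rms"] = float(np.sqrt(np.mean(buffer ** 2)))
        return buffer

class AdaptiveGain(Agent):
    """Reacts to the level another agent published: quiet input is boosted."""
    def process(self, buffer, blackboard):
        rms = blackboard.get("rms", 1.0)
        gain = 0.2 / max(rms, 1e-6)      # aim for a target RMS of 0.2 (illustrative)
        return buffer * min(gain, 4.0)   # cap the boost

def run_chain(agents, buffer):
    blackboard = {}
    for agent in agents:
        buffer = agent.process(buffer, blackboard)
    return buffer

# One 512-sample buffer of quiet noise, processed by the interacting agents.
out = run_chain([LevelTracker(), AdaptiveGain()],
                np.random.randn(512).astype(np.float32) * 0.05)
```

The blackboard lets agents react to one another's output without being hard-wired together, which is the "interacting" part of the idea.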
Often, we neglect to see the city as living, complex, and dynamic. However, shrouded by its masses of concrete and steel lie unique ecosyst…
The Intertidal Experimentation Workshop will take place September 29 and 30 (9am to 2pm) at the MIT Media Lab, open to students ages 8-14. …
The City Symphony project by the Opera of the Future group brings creative musical participation to everyone while encouraging collaboratio…
A conversation between Ed Boyden and Tyler Cowen, ranging from optogenetics and expansion microscopy to storytelling and the nature of consciousness.
Mike Bove, head of the Object-Based Media group, on the current state of the technology and what his research team is working on.
Avery Normandin and Devora Najjar are on a mission to build literacy and appreciation for urban ecology.
EEEeb Spring 2019: Urban Oceans. March 24, April 7 and 21, May 19, June 2. To register, please visit this link.
“Happiness makes the world go round.”
The news is probably one of the first things people check in the morning, but how much does what you know and understand about the world de…
Enhancing mobile life through improved user interactions
Ariel Ekblaw speaks at Sónar+D
The Media Lab panel at San Diego Comic-Con covered inclusion, exploration, and the reciprocity between fiction and research at the Lab
The Storytelling project uses machine-based analytics to identify the qualities of engaging and marketable media. By developing models with…
Chowdhury, S. K. (2018). Pintail: A Travel Companion for Guided Storytelling.
Dan Novy, MIT Media Lab engineer, is a strong advocate of bridging this gap, especially in media innovation.
Real-time detection of social cues in children’s voices. In everyday conversation, people use what are known as backchannels to signal to some…
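To make the backchannel idea concrete, here is a minimal rule-based sketch: a backchannel (a nod, an "mm-hmm") tends to fit at a short pause that follows a stretch of speech. The heuristic, thresholds, and function name are illustrative assumptions; the work cited further down (Park et al., ICRA 2017) learns such cues from data rather than hand-coding them.

```python
# A toy backchannel-opportunity detector over per-frame energy values.
# An opportunity fires once per pause, when a pause of >= min_pause
# frames follows >= min_speech frames of speech. All thresholds are
# illustrative assumptions.

import numpy as np

def backchannel_opportunities(frame_energy, threshold=0.02,
                              min_speech=20, min_pause=8):
    """Return frame indices where a backchannel would plausibly fit."""
    speaking = frame_energy > threshold
    opportunities, speech_run, pause_run = [], 0, 0
    for i, s in enumerate(speaking):
        if s:
            if pause_run:                 # a new utterance begins
                speech_run, pause_run = 0, 0
            speech_run += 1
        else:
            pause_run += 1
            if speech_run >= min_speech and pause_run == min_pause:
                opportunities.append(i)   # fire once per pause
    return opportunities

# Speech for 30 frames, then silence: an opportunity fires 8 frames in.
energy = np.concatenate([np.full(30, 0.1), np.zeros(15)])
print(backchannel_opportunities(energy))  # -> [37]
```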
The Shelley Project studies how to produce horror stories as a result of collaboration between humans and artificial intelligence.
An AI algorithm can predict which parts of a film will generate the greatest emotional responses in audiences.
Hae Won Park, Mirko Gelsomini, Jin Joo Lee, and Cynthia Breazeal. 2017. Telling Stories to Robots: The Effect of Backchanneling on a Child's Storytelling. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI '17).
Research scientist Manuel Cebrian discusses Shelley with CNN en Español host Andrés Oppenheimer.
New research can predict how plots, images, and music affect your emotions while watching a movie.
Computers don’t cry during sad stories, but they can tell when we will.
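As a toy illustration of the emotional-arc idea above: the research learns from video, audio, and scripts, but the shape of the task can be sketched with a tiny hand-made valence lexicon (purely an assumption for demonstration), scoring each scene and smoothing to find the predicted low point.

```python
# A minimal stand-in for predicting a story's emotional arc.
# The mini-lexicon and scene texts are hypothetical examples.

import numpy as np

VALENCE = {"love": 0.8, "joy": 0.7, "hope": 0.5,
           "loss": -0.7, "death": -0.9, "fear": -0.6}

def emotional_arc(scenes, window=3):
    """Average word valence per scene, smoothed over a sliding window."""
    raw = []
    for text in scenes:
        scores = [VALENCE[w] for w in text.lower().split() if w in VALENCE]
        raw.append(np.mean(scores) if scores else 0.0)
    kernel = np.ones(window) / window
    return np.convolve(raw, kernel, mode="same")

scenes = ["hope and joy at the wedding",
          "news of a death brings fear",
          "loss follows loss",
          "love returns at last"]
arc = emotional_arc(scenes)
print("saddest scene:", int(np.argmin(arc)))  # the predicted emotional low point
```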
Artificial intelligence (AI) may not be ready to write the next blockbuster movie, but a team of AI researchers from the Massachusetts…
This brief video excerpt offers a glimpse of some of Tod Machover’s innovative, unusual opera, realized at—and with the collaboration of—the …
Jin Joo Lee. A Bayesian Theory of Mind Approach to Nonverbal Communication for Human-Robot Interactions. PhD Thesis, Massachusetts Institute of Technology, 2017.
H. W. Park, M. Gelsomini, J. J. Lee, T. Zhu, and C. Breazeal. 2017. Backchannel Opportunity Prediction for Social Robot Listeners. In Proceedings of the International Conference on Robotics and Automation (ICRA).
Designing systems that become experiences, transcending utility and usability
Music software that lets anyone compose. The first program designed to teach students and adults how to compose music …
Pintail is a travel companion app for guided storytelling. It will start by capturing your travel plan so that it can nudge you with person…
The Hyperinstruments project creates expanded musical instruments and uses technology to give extra power and finesse to virtuosic performe…