Artificial intelligence (AI)-powered media manipulations have widespread societal implications for journalism and democracy, national security, and art.
A new study uses the famous trolley problem to show how our culture shapes our moral beliefs.
How does it feel to be the internet?
What our experiment in the desert taught us about social networks and human cooperation.
Maybe we don’t need to look inside the black box after all. We just need to watch how machines behave instead.
CIFAR Fellows Hugo Larochelle and Matthew Jackson argue for a new scientific discipline to study the broad effects of AI.
A paper introduces machine behavior: the interdisciplinary study of AI systems as a new class of actors with unique behavioral patterns.
A new paper frames the emerging interdisciplinary field of machine behavior
Researchers find that, in published AI research, industry is overtaking academia, and the diversity of fields represented has declined.
Scientists seek to learn the decisions human drivers make in collision situations to help predict how autonomous vehicles should respond.
Four Lab researchers caution against credulous acceptance of rosy humanistic pronouncements.
An NYC exhibit imagines driverless buses and mobile medical units resembling Popemobiles.
Survey maps global variations in ethics for programming autonomous vehicles
Massive global Moral Machine study reveals ethics preferences and regional differences
Climate change presents a grave threat to overall human well-being, including mental health.
A report from the Symposium on Trust and Ethics of Autonomous Vehicles (STEAV).
Transitioning from low-income to high-income occupations is difficult because the skills associated with each type of job are so different.
This blog post summarizes the key findings of our new paper Unpacking the polarization of workplace skills published in Science Advances ...
Media Lab researchers are part of a growing movement at MIT to explore the regulatory frontiers of AI—in society and in our hearts and minds
Institute-wide effort will study the evolution of jobs in an age of technological advancement.
The Shelley Project explores how humans and artificial intelligence can collaborate to produce horror stories.
We cannot certify that an AI agent is ethical by looking at its source code.
How can we reap the benefits of Artificial Intelligence while minimizing the risks it poses to society?
At UNICEF Innovation, we’re supporting our partners to help build models that give us a better understanding of empathy. We hear in the n...
Research scientist Manuel Cebrian discusses Shelley with CNN en Español host Andrés Oppenheimer.
Researchers want to know what images make us feel more attached to other people. They call their project “Deep Empathy.”
It’s the holiday season, which to many people means a season of giving—to loved ones, colleagues, public radio and television, and to any...
Timing can be everything when it comes to successfully expanding constitutional rights. Now, a study looking at how constitutions around ...
It’s long been clear that urbanization and automated technologies are shaping society, but it hasn’t been obvious how the two forces affe...
At EmTech 2017, Iyad Rahwan discusses how we can anticipate and respond to major disruptions from artificial intelligence, the web, and s...
At EmTech 2017, Iyad Rahwan discussed artificial intelligence, urbanization, and the future of work with MIT Technology Review senior edi...
“Shelley” is an artificial intelligence bot that writes stories inspired by a subreddit of aspiring horror writers.
Which is scarier: vampires, demon-possessed children, or the idea that artificial intelligence could one day rule the world? A new project...
Sometimes the scariest place to be is your own mind. Or Reddit at night. Shelley is an AI program that generates the beginnings of horror ...
Shelley's artificial neural network takes turns with humans in collaborative storytelling.
With its increasingly humanlike design, artificial intelligence gives many people the creeps—though usually not intentionally. But a new ...
An artificial intelligence is creating worlds where possessed dolls and other creatures chase after frightened, helpless h...
With Shelley, the world’s first artificial intelligence-human horror story collaboration, MIT researchers aim for goosebumps.
Introducing Shelley: the world’s first AI-human horror story collaboration
The robot takeover will start in the smaller cities.
DeepMoji uses emoji to teach a machine learning algorithm to understand slang and sarcasm.
Iyad Rahwan on the trust gap between people and autonomous vehicles.
Understanding sarcasm could help AI fight racism, abuse, and harassment.
We teach our model an understanding of emotions by finding millions of tweets with one of the top 64 emojis.
Toyota pushes into blockchain tech to enable the next generation of cars.
Calling on you to help track down those who challenge norms or laws for the good of society.
Bridging the gap between the humanities, the social sciences, and computing by addressing the global challenges of artificial intelligence.
“People are afraid of artificial intelligence, from autonomous cars making unethical decisions in accidents, to robots taking our jobs an...
Iyad Rahwan, associate professor at the MIT Media Lab, explores how Artificial Intelligence challenges our morality. If a driverless car ...