Researchers find that, in published AI research, industry is overtaking academia, and the diversity of fields represented has declined.
The metaphor of the frog in water that is heated slowly has long been used to describe the potential risk of climate change.
How does the public think autonomous vehicles should resolve moral trade-offs? Can we use their responses to build a new moral compass?
Scientists seek to learn the decisions human drivers make in collision situations to help predict how autonomous vehicles should respond.
Four Lab researchers caution against credulous acceptance of rosy humanistic pronouncements.
Global climate change is projected to transform our planet this century and beyond.
An NYC exhibit imagines driverless buses and mobile medical units resembling Popemobiles.
Visualizing Scissors Congruence was selected as one of the Experts' Choice Winners in the 2018 Visualization Challenge (the Vizzies).
Sohan Dsouza speaks on the topic of moral machines.
The reality distortion field is real, and it’s getting better every day.
Small rises in temperature and dramatic weather events resulting from warming can affect mental health.
How AI systems like Deep Angel influence conversations about ethics.
An MIT experiment is handing a single person’s free will to the crowd to test how the digital hive mind works.
The Media Lab's BeeMe project is the love child of ‘Black Mirror’ and Stanley Milgram’s notorious experiments on free will and obedience.
Researchers at the Media Lab are reportedly planning to hand an individual’s free will over to the internet.
This Halloween, the creepiest event to attend might be a mass online social experiment hosted by researchers at the Media Lab.
Something eerie has been brewing at the Massachusetts Institute of Technology Media Lab.
Survey maps global variations in ethics for programming autonomous vehicles
Massive global Moral Machine study reveals ethics preferences and regional differences
A growing number of researchers are trying to reveal the potential dangers of A.I.
Climate change presents a grave threat to overall human well-being, including mental health.
Media Lab students, faculty, researchers and affiliates pitching panels at SXSW 2019
A report from the Symposium on Trust and Ethics of Autonomous Vehicles (STEAV).
The morality of preschoolers, trolleys, cookie-eating squirrels, and attempting to define “the consensus” with the Moral Machine Project.
Transitioning from low-income to high-income occupations is difficult because the skills associated with each type of job are so different.
Add deadly car crashes and food safety risks — and the officials overseeing them — to the list of things affected by climate change.
If we are unable to adapt to climate change, it may widen the gap between citizens' needs and government assistance.
First, climate change was blamed for coastal flooding and wildfires. The links seemed intuitive and the effects observable. But more...
On excessively hot days, fatal car accidents and food safety problems are more likely, and the police officers and government...
The Scalable Cooperation Group directed by Iyad Rahwan at the MIT Media Lab is seeking a full-time or part-time lab coordinator to start ...
This blog post summarizes the key findings of our new paper Unpacking the polarization of workplace skills published in Science Advances ...
Here is my secret to staying in computer science: first, have great perseverance. Second, have great friends.
Media Lab researchers are part of a growing movement at MIT to explore the regulatory frontiers of AI—in society and in our hearts and minds
Institute-wide effort will study the evolution of jobs in an age of technological advancement.
The Shelley Project studies how humans and artificial intelligence can collaborate to produce horror stories.
Weather really can affect our mood—or at least the way we express our emotions on social media, according to a study published Wednesday ...
“So, what do you study?” Uber drivers, doctors, airplane seat-mates—they all want to know. I often envy my husband, a PhD student in...
We cannot certify that an AI agent is ethical by looking at its source code.
Move aside, smiley faces, clapping hands and dancing ladies. Emoji are finally ready to tackle serious issues... The project may be the...
How can we reap the benefits of Artificial Intelligence while minimizing the risks it poses to society?
At UNICEF Innovation, we’re supporting our partners to help build models that give us a better understanding of empathy. We hear in the...
Research scientist Manuel Cebrian discusses Shelley with CNN en Español host Andrés Oppenheimer.
Researchers want to know what images make us feel more attached to other people. They call their project “Deep Empathy.”
It’s the holiday season, which to many people means a season of giving—to loved ones, colleagues, public radio and television, and to...
How well have humans adapted to the current climate, and how will we adapt to new climate complexities? This week, the Climate Conversations...
Timing can be everything when it comes to successfully expanding constitutional rights. Now, a study looking at how constitutions around ...
It’s long been clear that urbanization and automated technologies are shaping society, but it hasn’t been obvious how the two forces affect...
At EmTech 2017, Iyad Rahwan discusses how we can anticipate and respond to major disruptions from artificial intelligence, the web, and...
At EmTech 2017, Iyad Rahwan discussed artificial intelligence, urbanization, and the future of work with MIT Technology Review senior editor...
Three Media Lab students are recipients of the new Marvin Minsky fellowship.
“Shelley” is an artificial intelligence bot who writes stories inspired by a subreddit of aspiring horror writers.
Which is scarier: vampires, demon-possessed children, or the idea that artificial intelligence could one day rule the world? A new project...
Sometimes the scariest place to be is your own mind. Or Reddit at night. Shelley is an AI program that generates the beginnings of horror...
Shelley's artificial neural network takes turns with humans in collaborative storytelling.
With its increasingly humanlike design, artificial intelligence gives many people the creeps—though usually not intentionally. But a new ...
An artificial intelligence is creating worlds where possessed dolls and other creatures chase after frightened, helpless...
With Shelley, the world’s first artificial intelligence-human horror story collaboration, MIT researchers aim for goosebumps.
Introducing Shelley: the world’s first AI-human horror story collaboration
The robot takeover will start in the smaller cities.
DeepMoji uses emoji to teach a machine learning algorithm to understand slang and sarcasm.
Iyad Rahwan on the trust gap between people and autonomous vehicles.
DeepMoji is a model that uses millions of tweets to learn about emotional concepts in text, like sarcasm and irony.
Understanding sarcasm could help AI fight racism, abuse, and harassment.
We teach our model an understanding of emotions by finding millions of tweets with one of the top 64 emojis.