health
art
human-machine interaction
learning
artificial intelligence
robotics
architecture
consumer electronics
design
technology
kids
music
wearable computing
networks
politics
entertainment
bioengineering
economy
cognition
history
human-computer interaction
data
archives
machine learning
social science
storytelling
sensors
interfaces
space
environment
wellbeing
covid19
computer science
developing countries
prosthetics
engineering
privacy
social robotics
ethics
civic technology
social media
civic media
imaging
synthetic biology
communications
neurobiology
urban planning
public health
augmented reality
affective computing
virtual reality
biology
transportation
energy
community
biomechanics
data visualization
social change
biotechnology
computer vision
industry
ocean
3d printing
zero gravity
alumni
food
government
genetics
blockchain
racial justice
agriculture
manufacturing
medicine
gaming
data science
women
prosthetic design
construction
creativity
materials
social networks
fashion
open source
behavioral science
banking and finance
security
crowdsourcing
cryptocurrency
systems
collective intelligence
climate change
makers
wiesner
fabrication
internet of things
language learning
performance
neural interfacing and control
cognitive science
bionics
interactive
ecology
autonomous vehicles
perception
human augmentation
civic action
nonverbal behavior
mapping
extended intelligence
physiology
visualization
startup
diversity
physics
clinical science
holography
long-term interaction
gesture interface
networking
point of care
sports and fitness
autism research
orthotic design
marginalized communities
water
trust
natural language processing
electrical engineering
voice
microfabrication
hacking
mechanical engineering
nanoscience
pharmaceuticals
member company
primary healthcare
mechatronics
clinical trials
trade
academia
member event
open access
microbiology
gender studies
rfid
soft-tissue biomechanics
chemistry
real estate
randomized experiment
publishing
biomedical imaging
gis
event
exhibit
cartography
metamaterials
installation
code
Looking beyond smart cities
Tangible Swarm is a tool that displays relevant information about a robotics system (e.g., multi-robot, swarm, etc.) in real time wh...
Can robots find and grasp hidden objects? Robots are not capable of handling tasks as simple as restocking grocery store shelves as ...
Health 0.0
Promoting deeper learning and understanding in human networks
System uses penetrative radio frequency to pinpoint items, even when they’re hidden from view.
Camera Culture head Ramesh Raskar talks to Rashmi Mohan about his interdisciplinary research and entrepreneurial endeavors.
Boroushaki, Tara, et al. "Robotic Grasping of Fully-Occluded Objects using RF Perception." IEEE International Conference on Robotics and Automation (ICRA 2021).
View the main City Science Andorra project profile. Research in dynamic tools that mix users (citizens, workers), amenities, services, and land...
Real World Data (RWD) and Real World Evidence (RWE) are playing an increasing role in healthcare decisions to support innovative use of E...
Technical summary: The future of clinical development is on the verge of a major transformation due to the convergence of large new digital data ...
We built a low-cost and open source 405 nm imaging device to capture red fluorescence signatures associated with the oral biomarker porph...
General overview: Sepsis, a life-threatening complication of bacterial infection, leads to millions of deaths worldwide and requires significa...
Imaging fluorescent disease biomarkers in tissues and skin is a non-invasive method to screen for health conditions. We report an automat...
We report a novel method that processes biomarker images collected at the point of care and uses machine learning algorithms to provide a...
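As a rough sketch of this kind of point-of-care pipeline (the feature choice, classifier, and data below are illustrative assumptions, not the method reported in these projects), one could quantify red fluorescence in a biomarker image and pass it to a generic classifier:

# Illustrative sketch only: summarize red fluorescence in an image and
# screen it with a generic model. Features, data, and model are assumptions.
import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

def red_fluorescence_features(path):
    # Simple summary statistics of the red channel, scaled to [0, 1].
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float) / 255.0
    red = rgb[..., 0]
    return np.array([red.mean(), red.std(), np.percentile(red, 95)])

# Hypothetical training data: feature vectors and screening labels (0 = negative, 1 = positive).
X_train = np.array([[0.10, 0.05, 0.20],
                    [0.45, 0.12, 0.70],
                    [0.12, 0.06, 0.25],
                    [0.50, 0.15, 0.80]])
y_train = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# Screen a new point-of-care image (placeholder path).
features = red_fluorescence_features("sample_image.png")
print("Positive-screen probability:", model.predict_proba([features])[0, 1])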
Making the invisible visible–inside our bodies, around us, and beyond–for health, work, and connection
The full text of our paper is available here. Sometimes the thing that we want to see is hidden behind something else. A neighboring vehi...
Henley, C., Maeda, T., Swedish, T., & Raskar, R. (2020). Imaging Behind Occluders Using Two-Bounce Light. Computer Vision – ECCV 2020, Lecture Notes in Computer Science, 573-588. doi:10.1007/978-3-030-58526-6_34
The startup OpenSpace is using 360-degree cameras and computer vision to create comprehensive digital replicas of construction sites.
Changing storytelling, communication, and everyday life through sensing, understanding, and new interface technologies
Transforming data into knowledge
Computer vision uncovers predictors of physical urban change
Research in our laboratory reduces dependence on specialized medical imaging devices and biological and chemical processes, and creates new p...
www.ajl.org: An unseen force is rising, helping to determine who is hired, granted a loan, or even how long someone spends in prison. This f...
The relationship between news content and its presentation has been a long-studied problem in the communications domain. Often, chan...
City Science researchers are developing a slew of tangible and digital platforms dedicated to solving spatial design and urban planning c...
MIT City Science is working with HafenCity University to develop CityScope for the neighborhood of Rothenburgsort in Hamburg, Germany. Th...
Two-dimensional radiographs are commonly used for evaluating sub-surface hard structures of teeth, but they have low sensitivity for earl...
The Electome: Where AI meets political journalism. The Electome project is a machine-driven mapping and analysis of public sphere content a...
A.I. systems are shaped by the priorities and prejudices…of the people who design them, a phenomenon that I refer to as "the coded gaze."
This project describes the design, deployment, and operation of a Tangible Regulation Platform, a physical-technological apparatus made for ...
Algorithmic auditing has emerged as a key strategy to expose systematic biases embedded in software platforms, yet scholarship on the imp...
This is an open source geospatial exploration tool. Using various public APIs including OpenStreetMap and the United States Census, we ...
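A minimal sketch of the kind of public-API query such a tool might issue, assuming the OpenStreetMap Overpass endpoint; the bounding box and amenity filter below are illustrative, not details taken from the project:

# Query OpenStreetMap via the public Overpass API for tagged features
# inside a bounding box (south, west, north, east).
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

query = """
[out:json][timeout:25];
node["amenity"="school"](42.35,-71.12,42.40,-71.07);
out body;
"""

response = requests.post(OVERPASS_URL, data={"data": query})
response.raise_for_status()

# Print the name and coordinates of each returned node.
for element in response.json()["elements"]:
    name = element.get("tags", {}).get("name", "(unnamed)")
    print(name, element["lat"], element["lon"])

The same pattern extends to other public sources such as the US Census APIs by swapping the endpoint and query parameters.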
MIT researchers have developed a system that can produce images of objects shrouded by fog so thick that human vision can’t penetrate it.
Seeing through dense, dynamic, and heterogeneous fog conditions. The technique, based on visible light, uses hardware that is similar to ...
This project focused on pedestrian accessibility in collaboration with Singapore Centre for Liveable Cities. Researchers and planners cam...
This project is part of a parallel research endeavor with GSK Manufacturing. By simulating how scientists at the Upper Providence site in...
This project is the first of two projects in collaboration with GSK. We are developing a computational simulation that allows a human use...
We recently led a workshop in Saudi Arabia, with staff from the Riyadh Development Authority, to test a new version of our CityScope plat...
The Storytelling project uses machine-based analytics to identify the qualities of engaging and marketable media. By developing models wi...
Ira Winder and the Tactile Matrix won the award for best demonstration at the IEEE Future Technologies Conference.
A case study implemented by Inioluwa Raji under the guidance of Joy Buolamwini
The Gender Shades project pilots an intersectional approach to inclusive product testing for AI. Algorithmic bias persists: Gender Shades is...
All people are created equal, but in the eyes of the algorithm, not all faces are just yet. A new study from MIT and Microsoft r...
A new review of face recognition software found that, when identifying gender, the software is most accurate for men with light skin...
Examination of facial-analysis software shows error rate of 0.8 percent for light-skinned men, 34.7 percent for dark-skinned women.
New research out of MIT’s Media Lab is underscoring what other experts have reported or at least suspected before: facial recognition tec...
Developed by Ira Winder with the MIT Center for Transportation and Logistics, the model seeks to use real population data and create a si...
Facebook volunteers and work-at-home moms might be making city planning decisions, thanks to AI research conducted by MIT scientists. Res...
Using computer vision to examine Google Street View, the researchers analyzed how streets and blocks have changed in five American cities.
Tested with five American cities, Streetchange quantifies the physical improvement or deterioration of neighborhoods.
A recently published paper in the Proceedings of the National Academy of Sciences (PNAS) looks at factors that predict neighborhood change.
Researchers have used machine learning to quantify the physical improvement or deterioration of neighborhoods in five American cities.
With over a billion people carrying camera-phones worldwide, we have a new opportunity to upgrade the classic bar code to encourage a fle...