Latent Lab: Legacy

Kevin Dunnell

A tool for exploring the Media Lab’s research and synthesizing new project ideas.

This page details early iterations of Latent Lab. The most up-to-date project page for Latent Lab can be found here.

Latent Lab builds on our previous work enabling creators to harness AI to implicitly synthesize ideas. Our last iteration, Computer-Aided-Synthesis, demonstrated this concept with ideas represented as images: images of existing cars served as guides through a generative model's implicit feature space, letting users collaboratively arrive at newly synthesized car designs. With Latent Lab we consider a new proxy for ideas and shift our synthesized subject from cars to research in the Media Lab.

There are now over 4,000 projects on the Media Lab’s website, which presents challenges both internally and externally. Internally, we want to identify project overlap and collaboration opportunities; externally, we want to surface clear connections between our sponsors’ interests and ongoing research in the lab. More importantly, we want to synthesize new engagement opportunities with our sponsors that mutually align with our interests but have yet to be explored. Latent Lab itself will not accomplish this; it requires the expertise of the folks at the Media Lab. Rather, our goal is for Latent Lab to help spur ideation and provide a starting point that goes beyond an intelligent average of our past work.

Latent Lab Explorer

The Latent Lab Explorer allows users to explore prior and ongoing Media Lab research in a completely new way. Users can search conventionally along the left side to filter projects of interest. Once identified, these projects can be placed in the latent canvas, where users can infer similarities and differences in project content that conventional text-matching search does not reveal. The latent canvas can be explored interactively to reveal synthesized research titles between existing projects.

Try it out: Latent Lab Explorer
(Note: Latent Lab is still under active development, so please expect bugs and oddities. Any and all feedback is appreciated!)

How Latent Lab Works

We’ve fine-tuned a large text model, OPTIMUS, on the titles of the Lab’s research projects. OPTIMUS was created and pre-trained by Microsoft and composes two transformer-based models: BERT as the encoder and GPT-2 as the decoder. Between these two transformers sits a Variational Autoencoder (VAE) bottleneck, which yields a somewhat smoothly explorable latent space, something unusual for text models given their sequential prediction method.
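
To make the architecture concrete, here is a minimal, hypothetical sketch of the VAE bottleneck idea. This is not the actual OPTIMUS code; the untrained linear head is an illustrative assumption, and the 32-dimensional latent matches the vectors described below:

```python
# Minimal sketch of an OPTIMUS-style VAE bottleneck (illustrative, not
# the real OPTIMUS code): BERT encodes a title, a linear layer maps the
# pooled output to a Gaussian posterior, and a sample from that posterior
# is the latent vector that would condition the GPT-2 decoder.
import torch
from transformers import BertModel, BertTokenizer

LATENT_DIM = 32  # matches the 32-value vectors described below

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")
# Hypothetical, untrained head producing mean and log-variance.
to_gaussian = torch.nn.Linear(encoder.config.hidden_size, 2 * LATENT_DIM)

def encode_title(title: str) -> torch.Tensor:
    """Encode a research title into a sampled latent vector z."""
    inputs = tokenizer(title, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).pooler_output       # (1, hidden_size)
        mu, logvar = to_gaussian(hidden).chunk(2, dim=-1)
    # Reparameterization trick: z = mu + sigma * eps
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

z = encode_title("Latent Lab: Legacy")  # shape (1, 32)
```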

Selecting projects from the left rail of the Latent Lab Explorer will show their positions in the “latent canvas” on the right side of the screen. Currently, our model considers only the titles of the research to represent each project. The latent canvas is a compressed representation of these titles, where each is represented by an array of 32 values. Further dimensionality reduction is applied to force them into a 2D layout for visualization. This view uncovers groupings of research (opportunities for internal collaboration), as well as gaps between research (potentially interesting unexplored areas).
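
As an illustration of that reduction step, the sketch below projects 32-dimensional title latents down to 2D canvas coordinates. The page does not name the reduction method used, so PCA here is an assumption:

```python
# Hypothetical sketch: project 32-dimensional title latents down to 2D
# for the latent canvas. Latent Lab's actual reduction method is not
# specified; PCA is used here purely as an illustration.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latents = rng.normal(size=(4000, 32))  # stand-in for ~4,000 project latents

canvas_xy = PCA(n_components=2).fit_transform(latents)  # (4000, 2)
print(canvas_xy[:3])  # 2D canvas coordinates for the first three projects
```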

Users can navigate the model’s latent space by moving the black square cursor in the latent canvas. As the cursor moves, new research titles are synthesized and displayed in the bottom right corner. The synthesis is influenced by which projects are placed in the latent canvas and by the cursor’s relative distance to each. Additionally, users can upload titles of their own, or topics of interest, in the bottom left corner. The title is then encoded into the same latent space, where it can also be used as an implicit representation to guide navigation through the latent space of title ideas.
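
The exact blending scheme is not described; one plausible sketch is inverse-distance weighting, where the cursor’s latent vector is a weighted average of the placed projects’ latent vectors:

```python
# Hypothetical sketch of how a cursor position could blend project
# latents. The page says synthesis depends on the cursor's relative
# distance to each placed project, so inverse-distance weighting is
# assumed here as one plausible scheme.
import numpy as np

def cursor_latent(cursor_xy, project_xy, project_latents, eps=1e-6):
    """Blend 32-D project latents by inverse distance to the cursor."""
    dists = np.linalg.norm(project_xy - cursor_xy, axis=1)  # (n,)
    weights = 1.0 / (dists + eps)
    weights /= weights.sum()
    return weights @ project_latents  # (32,)

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])      # canvas positions
latents = np.random.default_rng(1).normal(size=(3, 32))  # their latents
z = cursor_latent(np.array([0.2, 0.3]), xy, latents)     # decode z -> new title
```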

Looking Ahead...

Currently we are only considering the titles of research. We plan to extend that to multiple modalities, given what we learned in our exploration with images. We would also like to add another dimension (literally) to the visualization of the latent canvas and map it to the physical space of the Media Lab. Our hope is for visitors to physically explore the latent space of the Lab and have synthesized research rendered in an app on their phones.

Latent Lab V2.0

This iteration of Latent Lab offers a multi-modal experience for exploring existing research projects at the Media Lab and synthesizing new ones. In addition to research titles, Latent Lab 2.0 considers all of the available text for each research project to construct semantic embeddings. These embeddings are used to cluster research projects more accurately and enable finer control of new title synthesis. Additionally, the application now uses a Stable Diffusion text-to-image model to generate project header images from project titles. The latent space is now explorable in three dimensions with the latest UI.
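
As an illustration of the image-generation step, a header image could be produced from a title with a Stable Diffusion pipeline along these lines. The checkpoint and prompt wording are assumptions, not Latent Lab’s actual setup:

```python
# Illustrative sketch of generating a project header image from a title
# with a Stable Diffusion text-to-image model. The specific checkpoint
# and prompt phrasing are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # requires a GPU; drop torch_dtype and .to("cuda") for CPU

title = "Latent Lab: Legacy"
image = pipe(f"Project header illustration for: {title}").images[0]
image.save("header.png")
```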

Latent Lab V3.0

Latent Lab V3.0 is presented on a new project page, which can be found here.