Viral Communications
Creating scalable technologies that evolve with user inventiveness.
The Viral Communications group creates scalable technologies that evolve with user inventiveness. We have a rich history in proximal and infrastructure-free networks and their applications, as well as in applications that integrate mobile computing with the spaces around us. These include codes embedded in objects and in images that make them self-describing and detectable. In 2013, we introduced a new focus on Ultimate Media (see the UM listing). This multi-sponsor program envisions a unified interface for all visual media, including television, movies, magazines, and newspapers. It is a generalized platform for social and data-driven exploration and creation of news, sports, and narrative experiences.

Research Projects

  • 8K Time into Space

    Andrew Lippman and Hisayuki Ohmata

    8K Time into Space is a user interface for video exploration on an 8K display. 8K is an ultra-high-definition video format that can present a huge amount of visual content on a single screen. In our system, video thumbnails with playback times shifted in chronological order are laid out like tiles. The time range of a scene a viewer wants to examine can be adjusted with a touch interface, and the resolution of the thumbnails changes depending on the range. 8K Time into Space aims to provide a responsive and intuitive experience for video consumption.
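
    The tile layout described above can be sketched as follows. This is a minimal illustration, not the project's actual code; the grid dimensions and resolution tiers are assumptions.

```python
# Hypothetical sketch: given a selected time range and a fixed tile grid,
# compute each thumbnail's timestamp and a resolution tier that coarsens
# as the range widens. Grid size and tier thresholds are illustrative.

def layout_thumbnails(start_s, end_s, rows=8, cols=16):
    """Return (timestamp, resolution) per tile, in chronological order."""
    n = rows * cols
    step = (end_s - start_s) / n          # seconds between adjacent tiles
    # Narrow ranges can afford finer thumbnails; wide ranges get coarser ones.
    if step < 1:
        res = (960, 540)
    elif step < 10:
        res = (480, 270)
    else:
        res = (240, 135)
    return [(start_s + i * step, res) for i in range(n)]

tiles = layout_thumbnails(0, 3600)        # one hour spread over 128 tiles
```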

  • As You Need It

    Andrew Lippman and Yasmine Rubinovitz

    Video and broadcast news are viewed in a far wider set of circumstances than ever before. A story is composed with the assumption of complete, situated viewing, but in fact it is often grabbed on the fly as a momentary experience. As You Need It is a semantic summarizer that deconstructs a multi-part segment for presentation as “chunks of importance.” We are exploring whether a story can be cut down to a useful update that takes less time than a traffic light, or exactly as much time as a given user has. This project uses and contributes to another group project, SuperGlue.
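
    One simple way to realize "as much time as a given user has" is a greedy selection over importance-scored segments. This is a minimal sketch under assumed data shapes, not the project's actual summarizer.

```python
# Hypothetical sketch of "chunks of importance": given scored story segments,
# greedily pick the most important ones that fit a viewer's time budget, then
# play them in story order. Segment scores and durations are illustrative.

def summarize(chunks, budget_s):
    """chunks: list of (start_s, duration_s, importance). Return a playlist."""
    picked, used = [], 0.0
    for chunk in sorted(chunks, key=lambda c: -c[2]):  # most important first
        if used + chunk[1] <= budget_s:
            picked.append(chunk)
            used += chunk[1]
    return sorted(picked)                  # restore chronological order

story = [(0, 20, 0.9), (20, 45, 0.4), (65, 15, 0.8), (80, 30, 0.2)]
update = summarize(story, budget_s=40)     # a red-light-length viewing
```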

  • Captions++

    Andrew Lippman and Tomer Weller

    Modern web presentations such as YouTube feature videos with commentary appended at the bottom. In our new imagining of “Videotext,” we put the two together: comments appear as active bubbles along the playback timeline. We thereby associate the commentary with the place in the video to which it refers, so it gains context. This project is in the early test stage and is presented for discussion and further development in summer 2016.
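
    The core data structure is a comment anchored to a timeline position and queried during playback. A minimal sketch, with an assumed comment format and display window:

```python
# Hypothetical sketch: comments anchored to timeline positions, queried for
# the bubbles that should be visible near the current playback time.

comments = [                      # (anchor_s, author, text) -- illustrative
    (12.0, "ada", "great intro"),
    (95.5, "lin", "source for this claim?"),
    (97.0, "sam", "the next shot answers this"),
]

def active_bubbles(t, window_s=5.0):
    """Comments whose anchor falls within +/- window_s of playback time t."""
    return [c for c in comments if abs(c[0] - t) <= window_s]

bubbles = active_bubbles(96.0)    # the two comments near the 96 s mark
```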

  • DbDb

    Travis Rich and Andrew Lippman

    DbDb (pronounced DubDub) is a collaborative, visually based analysis and simulation platform. We promote open distribution of experimental data by allowing researchers to present a graphical representation of their data and processing techniques that collaborators can build on and augment. This helps test the reproducibility of results and allows others to learn and apply their own techniques. Our intention is for the research community as a whole to benefit from a growing body of open, analytical techniques. DbDb provides an interface for archiving data, executing code, and visualizing a tree of forked analyses. It is part of the Viral initiative on open, author-driven publishing, collaboration, and analysis. It is intended to be linked to PubPub, the main project.
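
    The tree of forked analyses can be sketched as nodes that record their parent, the code they ran, and their result. The node schema here is an illustrative assumption, not DbDb's actual format.

```python
# Hypothetical sketch of DbDb's fork tree: collaborators branch from any
# existing analysis node, and each node's lineage back to the raw data
# remains inspectable for reproducibility.

nodes = {}

def fork(node_id, parent, code, result):
    """Register an analysis step branched from an existing node (or None)."""
    nodes[node_id] = {"parent": parent, "code": code, "result": result}

def lineage(node_id):
    """Chain of analyses from the root dataset down to this node."""
    chain = []
    while node_id is not None:
        chain.append(node_id)
        node_id = nodes[node_id]["parent"]
    return list(reversed(chain))

fork("raw", None, "load('trial.csv')", "dataframe")
fork("cleaned", "raw", "dropna()", "dataframe")
fork("fit", "cleaned", "ols(y ~ x)", "coefficients")
```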

  • GIFGIF

    Cesar A. Hidalgo, Andrew Lippman, Kevin Zeng Hu and Travis Rich

    An animated GIF is a magical thing. It has the power to compactly convey emotion, empathy, and context in a subtle way that text or emoticons often miss. GIFGIF is a project to combine that magic with quantitative methods. Our goal is to create a tool that lets people explore the world of GIFs by the emotions they evoke, rather than by manually entered tags. A website with 200,000 users maps the GIFs to an emotion space and lets you peruse them interactively.
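
    One common way to place items in an emotion space from crowd input is to collect pairwise votes and update per-emotion scores with an Elo-style rule. This sketch illustrates that general technique; the rating scheme and constants are assumptions, not GIFGIF's actual model.

```python
# Hypothetical sketch: pairwise votes ("which GIF better expresses
# happiness?") update per-(GIF, emotion) scores Elo-style, so each GIF
# acquires a coordinate per emotion without manual tagging.

from collections import defaultdict

ratings = defaultdict(lambda: 1500.0)     # (gif_id, emotion) -> score

def record_vote(winner, loser, emotion, k=32):
    rw, rl = ratings[(winner, emotion)], ratings[(loser, emotion)]
    expected = 1.0 / (1.0 + 10 ** ((rl - rw) / 400))   # P(winner wins)
    ratings[(winner, emotion)] += k * (1 - expected)
    ratings[(loser, emotion)] -= k * (1 - expected)

record_vote("cat.gif", "dog.gif", "happiness")
```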

  • IoT Recorder

    Andrew Lippman and Thariq Shihipar

    The physical world is increasingly coming online. We have things that measure, sense, and broadcast to the rest of the world. We call this the Internet of Things (IoT). But our cameras are blind to this new layer of metadata on reality. The IoT Recorder is a camera that understands what IoT devices it sees and what data they are streaming, thus creating a rich information "caption track" for the videos it records. Using this metadata, we intend to explore the new video applications it enables, starting with cooking.
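
    The caption track amounts to sensor readings logged against the recording clock and queried at any playback time. A minimal sketch; the device names and readings are illustrative.

```python
# Hypothetical sketch of an IoT "caption track": readings from devices in
# frame are logged against the recording clock, then the scene's device
# state can be reconstructed at any playback time.

import bisect

track = []                                # (time_s, device, reading), sorted

def log_reading(t, device, reading):
    bisect.insort(track, (t, device, reading))

def readings_at(t):
    """Latest reading per device at playback time t."""
    latest = {}
    for ts, device, reading in track:
        if ts <= t:
            latest[device] = reading      # later entries overwrite earlier
    return latest

log_reading(0.0, "oven", {"temp_c": 20})
log_reading(30.0, "oven", {"temp_c": 180})
state = readings_at(45.0)                 # the oven is preheated by now
```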

  • Me.TV

    Andrew Lippman, Vivian Diep and Yasmine Rubinovitz

    Me.TV is a web platform that combines the benefits of traditional television and on-demand viewing. At the center of the Me.TV platform is a visual programming language that lets users build complex rules through simple interactions: static preferences, such as genre constraints, and dynamic ones, such as current mood or time of day, across as many channels as they want.
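
    The distinction between static and dynamic rules can be sketched as predicates over an item and a viewing context. The rule shapes and catalog here are assumptions for illustration.

```python
# Hypothetical sketch of Me.TV-style channel rules: static rules filter on
# fixed attributes, dynamic rules consult context such as time of day, and
# a channel is the set of items passing every rule.

import datetime

def genre_is(genre):
    return lambda item, ctx: item["genre"] == genre          # static rule

def before_hour(hour):
    return lambda item, ctx: ctx["now"].hour < hour          # dynamic rule

def channel(catalog, rules, ctx):
    """Items that satisfy every rule under the current context."""
    return [i for i in catalog if all(r(i, ctx) for r in rules)]

catalog = [{"title": "News at 9", "genre": "news"},
           {"title": "Space Drama", "genre": "drama"}]
ctx = {"now": datetime.datetime(2016, 6, 1, 8, 0)}
morning_news = channel(catalog, [genre_is("news"), before_hour(12)], ctx)
```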

  • MedRec

    Ariel Ekblaw, Asaf Azaria, Thiago Vieira, Joe Paradiso, Andrew Lippman

    We face a well-known need for innovation in electronic medical records (EMRs), but the technology is cumbersome and innovation is impeded by inefficiency and regulation. We demonstrate MedRec as a solution tuned to the needs of patients, the treatment community, and medical researchers. It is a novel, decentralized record management system for EMRs that uses blockchain technology to manage authentication, confidentiality, accountability, and data sharing. A modular design integrates with providers' existing, local data-storage solutions, enabling interoperability and making our system convenient and adaptable. As a key feature of our work, we engage the medical research community with an integral role in the protocol. Medical researchers provide the "mining" necessary to secure and sustain the blockchain authentication log, in return for access to anonymized, medical metadata in the form of "transaction fees."
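
    The decentralized record management described above can be illustrated with a hash-chained access grant: the patient authorizes a provider, and each grant links to the previous one so tampering is detectable. The field names and schema are assumptions, not MedRec's actual protocol.

```python
# Hypothetical sketch of a MedRec-style relationship: a patient grants a
# provider access to a record pointer into the provider's local store, and
# grants are chained by hash into a tamper-evident log.

import hashlib, json

def make_grant(patient, provider, record_pointer, prev_hash):
    grant = {
        "patient": patient,
        "provider": provider,
        "pointer": record_pointer,   # locator into existing local storage
        "prev": prev_hash,           # links grants into an append-only chain
    }
    grant["hash"] = hashlib.sha256(
        json.dumps(grant, sort_keys=True).encode()).hexdigest()
    return grant

g1 = make_grant("alice", "clinic-a", "db://clinic-a/rec/77", "0" * 64)
g2 = make_grant("alice", "lab-b", "db://lab-b/rec/12", g1["hash"])
```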

  • NewsClouds

    Andrew Lippman and Thariq Shihipar

    NewsClouds presents a visual exploration of how the news reporting of an event evolves over time. Each “cloud” represents a publication; competing news organizations usually emphasize different aspects of the same story. Using the time sliders, that evolution becomes evident. In addition, each word or phrase can be expanded to show its links and context. We are building an archive of events associated with ongoing US election developments.
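
    At its simplest, one cloud is a weighted word count over one publication's coverage within one time window. A minimal sketch with an assumed stopword list and sample headlines:

```python
# Hypothetical sketch: build one "cloud" per publication per time window by
# counting salient words in that window's headlines; the slider re-runs this
# over successive windows to show the story's evolution.

from collections import Counter

STOP = {"the", "a", "of", "in", "to", "and", "before"}

def cloud(headlines, top_n=5):
    """Top weighted words for one publication's window of headlines."""
    words = [w for h in headlines for w in h.lower().split() if w not in STOP]
    return Counter(words).most_common(top_n)

window = ["Candidate leads in early polls", "Polls tighten before debate"]
weights = cloud(window)                   # 'polls' carries the most weight
```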

  • Plethora

    Andrew Lippman and Tomer Weller

    Plethora creates simultaneous, localized, personal broadcasting networks that allow audiences to form on the fly and build their own media streams. The display ranges from personal devices, such as phones, to embedded screens in kitchens. It uses Bluetooth LE beacons to emulate local broadcast stations that signal their proximity. Plethora inverts the relationship between mobility and the multiplicity of screens with which we interact on a moment-to-moment basis. It also considers media as migrating away from larger, high-definition screens to a universe of personal, anytime, anywhere representations.

  • PubPub

    Cesar A. Hidalgo, Andrew Lippman, Kevin Zeng Hu, Travis Rich and Thariq Shihipar

    PubPub reinvents publication the way the web was designed: collaborative, evolving, and open. We do so in a deliberately simple graphical format that allows illustrations and text to be programs as well as static PDFs. The intention is to create an author-driven, distributed alternative to academic journals that is tuned to the dynamic nature of many of our modern experiments and discoveries: replicated, evaluated on the fly, and extended as they are used. It is optimized for public discussion and academic journals and is being used for both. It is equally useful for a newsroom developing a story intended for both print and online distribution.

  • QUANTIFY

    Cesar A. Hidalgo, Andrew Lippman, Kevin Zeng Hu and Travis Rich

    QUANTIFY is a generalized framework and JavaScript library for rapid, multi-dimensional "measurement" of subjective qualities of media. The goal is to make qualitative metrics quantifiable. For everything from measuring emotional responses to content to gauging the cultural importance of world landmarks, QUANTIFY helps to elicit the raw human subjectivity that fills much of our lives, and makes it programmatically actionable.

  • Solar Micro-Mining

    Ariel Ekblaw, Jonathan Harvey-Buschel, Tal Achituv and Andrew Lippman

    Bitcoin generates net-new value from "mining" in a distributed network. In this work, we explore solar micro-mining rigs that transform excess energy capacity from renewable energy (hard to trade) into money (fungible). Each rig runs a small Bitcoin miner and produces Bitcoin “dust” for micropayments. We envision these micro-miners populating a highly distributed network, across rooftops, billboards, and other outdoor spaces. Where systematic or environmental restrictions limit the ability to freely trade the underlying commodity, micro-mining produces new economic viability. Renewable energy-based, micropayment mining systems can broaden financial inclusion in the Bitcoin network, particularly among populations that need a currency for temporary store of value and must rely on flexible electricity off the grid (e.g., unbanked populations in the developing world). This exploration seeds a longer-term goal to enable open access to digital currency via account-free infrastructure for the public good.

  • Sphera

    Amir Lazarovich, Andrew Lippman

    One future of media experience lies within a socially connected virtual world. VR started almost 30 years ago and is now wearable, real-time, integrated with sensing, and becoming transparent. Sphera realizes a socially driven 360-degree media space that includes ambient scenery, visual exploration, and integration with friends. By combining an Oculus Rift (a VR head-mounted display), a Microsoft Kinect (a depth sensor), and a natural voice command interface, we created a socially connected, 360-degree immersive virtual world for media exploration and selection, as well as big-data manipulation and visualization.

  • Super Cut Notes

    Tomer Weller, Robert Tran and Andy Lippman

    A large portion of popular media is remixed: existing media content is spliced and re-ordered in a manner that serves a specific narrative. Super Cut Notes is a semi-comical content remix tool that allows a user to splice and combine the smallest bits of media: words. By tapping into the dataset of our group's SuperGlue platform, it has access to a huge dictionary of words created by SuperGlue’s transcription module. Users can input a text of any length, choose video bits of individual words that match their text, and create a video of their combination, in the style of a cut-and-pasted ransom note.
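
    The core operation is a lookup from each word of the input text to a clip in a transcript-derived index. This sketch assumes an illustrative index shape, not SuperGlue's actual schema.

```python
# Hypothetical sketch: a word -> clips index built from transcriptions, used
# to assemble a ransom-note-style cut list for an input sentence.

word_index = {                    # word -> [(file, start_s, end_s), ...]
    "never": [("cnn-0412.mp4", 13.2, 13.6)],
    "gonna": [("msnbc-0301.mp4", 88.0, 88.3)],
    "give": [("fox-0115.mp4", 45.1, 45.4)],
}

def cut_list(text):
    """One (file, start_s, end_s) clip per word; None where no clip exists."""
    return [word_index.get(w, [None])[0] for w in text.lower().split()]

clips = cut_list("Never gonna give")
```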

  • SuperGlue

    Tomer Weller, Jasmin Rubinovitz and Andy Lippman

    SuperGlue is a core news research initiative that is a "digestion system" and metadata generator for mass media. An evolving set of analysis modules annotate 14 DirecTV live news broadcast channels as well as web pages and tweets. The video is archived and synchronized with the analysis. Currently, the system provides named-entity extraction, audio expression markers, face detectors, scene/edit point locators, excitement trackers, and thumbnail summarization. We use this to organize material for presentation, analysis, and summarization. SuperGlue supports other news-related experiments.
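
    The module architecture can be sketched as a set of analyzers that each emit time-stamped annotations, merged into one metadata track per segment. The module internals below are stubs; the annotation format is an assumption.

```python
# Hypothetical sketch of SuperGlue's pipeline: each analysis module takes a
# video segment and returns time-stamped annotations, which are merged into
# a single metadata track synchronized with the archived video.

def named_entities(segment):
    return [{"t": 3.0, "type": "entity", "value": "MIT"}]          # stub

def scene_cuts(segment):
    return [{"t": 0.0, "type": "cut"}, {"t": 7.5, "type": "cut"}]  # stub

MODULES = [named_entities, scene_cuts]    # real system: faces, audio, etc.

def annotate(segment):
    """Run every module and merge annotations in time order."""
    track = [a for module in MODULES for a in module(segment)]
    return sorted(track, key=lambda a: a["t"])

metadata = annotate("newscast-segment-001")
```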

  • The Third Hand

    Andrew Lippman and Brian Tice

    All too often there are tasks we have to perform that require a third hand: one to hold a part, one to hold a tool, and one to hold an auxiliary piece. In this project, we construct parallel data tracks, embedded in video, whose purpose is to cause active devices such as Raspberry Pis and Arduinos to take action in response to what is happening on the screen. The goal is to implement tools at the viewing site that assist the viewer in doing a task; for example, moving a part to the right position for assembly, holding it in place, or picking it up. This work is related to the IoT Recorder.
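
    The embedded data track amounts to timed cues that fire device commands as playback crosses them. A minimal sketch; the cue format and device actions are illustrative assumptions.

```python
# Hypothetical sketch of a Third Hand data track: cues embedded alongside
# the video fire device commands when playback reaches their timestamps.

cues = [                                  # (time_s, device, command)
    (5.0, "gripper", "hold_part"),
    (42.0, "gripper", "release"),
    (43.5, "arm", "present_screw"),
]

def due_commands(prev_t, now_t):
    """Commands whose cue time was crossed between two playback ticks."""
    return [(d, c) for t, d, c in cues if prev_t < t <= now_t]

fired = due_commands(4.0, 6.0)            # playback crossed the 5 s cue
```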

  • This Is How

    Andrew Lippman and Tomer Weller

    This Is How is a platform for connecting makers with small businesses through stories. Small businesses share their stories in the form of video bytes in which they explain what they do and why, what their requirements and constraints are, and what kinds of issues they have. Makers can then annotate the video, ask further questions, and propose solutions for issues. The video is passed through SuperGlue for annotation and to categorize and find commonalities among requests.

  • VR Codes

    Andy Lippman and Grace Woo

    VR Codes are dynamic data invisibly hidden in television and graphic displays. They allow a display to simultaneously present unimpeded visual information to viewers and real-time data to a camera. Our intention is to make social displays that many can use at once; using VR Codes, users can draw data from a display and control its use on a mobile device. We think of VR Codes as analogous to QR codes for video, and envision a future where every display in the environment contains latent information embedded in VR Codes.

  • Wall of Now

    Andrew Lippman and Tomer Weller

    Wall of Now is a multi-dimensional media browser of recent news items. It attempts to address our need to know everything by presenting a deliberately overwhelming amount of media, while simplifying the categorization of the content into single entities. Every column in the wall represents a different type of entity: people, countries, states, companies, and organizations. Each column contains the top-trending stories of that type in the last 24 hours. Pressing on an entity reveals a stream of video that relates to that specific entity. The Wall of Now is a single-view experience that challenges previous perceptions of screen-space utilization, looking toward a future of extremely large, high-resolution displays.

  • Watch People Watch

    Tomer Weller, Suzanne Wang and Andy Lippman

    Recording your reaction to a short video is becoming the new gossip; famous watchers get as many as 750,000 views. We attempt to transform this utterly useless and talentless genre into something socially constructive: simultaneous, synchronized group viewing. Any user can opt in to be recorded and added to the shared, collective viewing experience. No talent or skills required.