In Nature Communications, Dava Newman and co-authors present a framework for more robust, reliable, and responsible machine learning systems


Courtesy of the researchers

Media Lab Director Dava Newman has co-authored a paper published in Nature Communications on “Technology Readiness Levels for Machine Learning Systems,” which presents a general framework for developing robust, reliable, and responsible machine learning—from basic research through productization and deployment. 

The paper applies principles from established engineering domains, such as civil and aerospace engineering, to the machine learning algorithms that drive everything from social media platforms to loan approvals and medical care. These principles find their fullest expression in spacecraft systems, where every step of the development process accounts for mission-critical requirements and robustness.

However, while systems engineering approaches are standard at NASA and DARPA, they have too often been treated as an afterthought in machine learning, where models and algorithms are typically developed in isolation, without considering the contexts in which they will be used. Although machine learning systems do not need to be Mars-ready, they do need to be robust, reliable, and responsible. They also need to be better integrated into the software, hardware, data, and human systems in which they will be used. 

As a means of helping to accomplish this mission, “Technology Readiness Levels for Machine Learning Systems” introduces a Machine Learning Technology Readiness Levels (MLTRL) framework and demonstrates its utility in real use cases, taking projects through stages from research to prototyping, productization, and deployment. The MLTRL framework prioritizes the role of machine learning ethics and fairness, and can help curb issues such as the automation of systemic human bias and the generation of unjustifiable outcomes that can result from poorly deployed and maintained machine learning technologies. The framework defines technology readiness levels (TRLs) to guide and communicate machine learning and artificial intelligence development and deployment, with deliverables such as ethics checklists and TRL Cards that provide an at-a-glance overview of a given project. Templates and examples for MLTRL deliverables are open source and publicly available.
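To make the TRL Card idea concrete, one could picture it as a small structured record that a team updates at each gated review. The sketch below is purely illustrative — the field names, level range, and advancement rule are assumptions, not the official MLTRL template (the paper's open-source templates define the canonical format):

```python
from dataclasses import dataclass, field

@dataclass
class TRLCard:
    """Hypothetical at-a-glance summary of an ML project's readiness.

    Field names are illustrative guesses, not the official MLTRL template.
    """
    project: str
    level: int  # assumed readiness scale: 0 (basic research) .. 9 (deployed)
    owner: str
    ethics_checklist_complete: bool = False
    open_risks: list[str] = field(default_factory=list)

    def ready_to_advance(self) -> bool:
        # A gated review might require a completed ethics checklist
        # and no outstanding risks before raising the level.
        return self.ethics_checklist_complete and not self.open_risks

card = TRLCard(project="demand-forecaster", level=4, owner="ml-team")
card.open_risks.append("training data drifts seasonally")
print(card.ready_to_advance())  # False: checklist incomplete, risk still open
```

The point of such a record is that anyone — engineer, reviewer, or non-technical stakeholder — can see a project's maturity and blockers at a glance.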

MLTRL is not a cure-all for machine learning systems engineering, nor does it replace existing software development methodologies. Instead, it provides mechanisms to help practitioners, teams, and stakeholders minimize technical debt and risk by asking necessary questions throughout the development process. 

The MLTRL framework was developed by a diverse group of experts from both academia and industry: Alexander Lavin (Pasteur Labs & Institute for Simulation Intelligence (ISI); NASA Frontier Development Lab), Ciarán M. Gilligan-Lee (Spotify; University College London), Alessya Visnjic (WhyLabs), Siddha Ganju (NASA Frontier Development Lab; Nvidia), Dava Newman (Massachusetts Institute of Technology), Sujoy Ganguly (Unity AI), Danny Lange (Unity AI), Atılım Güneş Baydin (University of Oxford), Amit Sharma (Microsoft Research), Adam Gibson (Konduit), Stephan Zheng (Salesforce Research), Eric P. Xing (Petuum; Carnegie Mellon University), Chris Mattmann (NASA Jet Propulsion Lab), James Parr (NASA Frontier Development Lab), and Yarin Gal (Alan Turing Institute). 

Professor Newman, an expert in engineering systems and design, contributed significantly to the MLTRL framework, including its “switchback mechanisms,” cyclic paths that intentionally regress aspects of technological innovation. The authors define three types: discovery switchbacks, which occur naturally as the technology is integrated into larger systems, revealing technical gaps; review switchbacks, which emerge from the gated review process at the end of each stage of the MLTRL project lifecycle; and embedded switchbacks, built-in loops predefined in the MLTRL process that help mitigate technical debt and overcome other inefficiencies. This approach stands in contrast to the traditional view in software development that switchback events should be suppressed and minimized; in fact, they are a natural and necessary part of the development process, and efforts to eliminate them may stifle important innovations without improving efficiency. 
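As a rough illustration of the idea (not the authors' formalism), the three switchback types can be thought of as labeled reasons for deliberately stepping a project's readiness level backward rather than treating regression as failure. The numeric step-back below is invented purely for the sketch:

```python
from enum import Enum

class Switchback(Enum):
    # The three types described in the paper; descriptions paraphrased.
    DISCOVERY = "technical gap revealed during integration into a larger system"
    REVIEW = "triggered by the gated review at the end of a stage"
    EMBEDDED = "predefined loop that mitigates technical debt"

def apply_switchback(level: int, kind: Switchback, steps_back: int = 1) -> int:
    """Regress a TRL deliberately; `kind` records why it happened."""
    new_level = max(0, level - steps_back)
    print(f"switchback ({kind.name.lower()}): TRL {level} -> {new_level}")
    return new_level

level = apply_switchback(6, Switchback.REVIEW, steps_back=2)  # level is now 4
```

Recording the cause alongside the regression is what distinguishes a productive switchback from a silent setback: the loop becomes an auditable part of the project lifecycle.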
