Staining tissue sections with chemical and biological dyes has been used for over a century to visualize tissue types and the morphologic changes associated with cancer and other disorders, and it remains central to contemporary clinical diagnosis. The procedure, however, has key drawbacks. Because dye staining is irreversible, it consumes irreplaceable biopsy tissue that is then unavailable for biomarker testing, and it can delay diagnosis. Sampling is also limited by time and cost: often only three 4-μm sections of tissue are stained to represent a 1-mm-diameter core.
Dr. Shah's lab has previously described generative computational methods that use neural networks to rapidly stain photographs of non-stained tissues, providing physicians with timely information about the anatomy and structure of the tissue. The lab also reported a "computational destaining" method that removes dyes and stains from photographs of previously stained tissues, allowing reuse of patient samples. However, no controlled clinical trials or hospital studies had tested the operational feasibility of these generative neural network models and machine learning algorithms or validated their results for virtual staining of whole-slide pathology images, precluding clinical adoption and deployment of these systems.
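The core idea behind virtual staining is learning a mapping from unstained to stained pixels using paired images. As a drastically simplified, illustrative stand-in for the lab's deep generative models, the sketch below fits a linear per-pixel color transform by least squares; the function names and the linear-map formulation are assumptions for illustration only, not the study's method.

```python
import numpy as np

def fit_linear_stain_map(unstained, stained):
    """Fit a 3x3 color matrix A and offset b so that stained ~= unstained @ A + b.

    unstained, stained: float arrays of shape (H, W, 3) holding paired pixels.
    This linear model only illustrates paired pixel-to-pixel supervision;
    the actual study trains deep neural networks for this mapping.
    """
    X = unstained.reshape(-1, 3)
    Y = stained.reshape(-1, 3)
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
    W, *_ = np.linalg.lstsq(X1, Y, rcond=None)     # least-squares solve
    return W[:3], W[3]

def apply_stain_map(unstained, A, b):
    """Apply the learned color transform to an unstained image."""
    return unstained @ A + b

# Synthetic paired data: a known ground-truth color transform plus noise.
rng = np.random.default_rng(0)
A_true = np.array([[0.8, 0.1, 0.0],
                   [0.05, 0.7, 0.1],
                   [0.0, 0.2, 0.9]])
b_true = np.array([0.05, 0.02, 0.1])
unstained = rng.random((32, 32, 3))
stained = unstained @ A_true + b_true + 0.01 * rng.normal(size=(32, 32, 3))

A_hat, b_hat = fit_linear_stain_map(unstained, stained)
virtual = apply_stain_map(unstained, A_hat, b_hat)
print(np.abs(virtual - stained).mean())  # mean error near the noise floor
```

In the study itself, the mapping is learned by generative neural networks rather than a linear solve, which is what allows complex, spatially varying organelle-to-stain relationships to be captured.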
In this study, led by Dr. Shah in collaboration with Stanford University School of Medicine and Harvard Medical School, the team reported several novel mechanistic insights and methods to facilitate benchmarking and clinical and regulatory evaluation of generative neural networks and computationally H&E-stained images. Specifically, high-fidelity, explainable, and automated computational staining and destaining algorithms were trained to learn mappings between pixels of non-stained cellular organelles and their stained counterparts. A novel, robust loss function was devised for the deep learning algorithms to preserve tissue structure. The study showed that the virtual staining neural network models generalized, accurately staining previously unseen images acquired from patients and tumor grades not represented in the training data. Neural activation maps generated in response to various tumor and tissue types provided the first account of the mechanisms deep learning models use for virtual H&E staining and destaining. Image-processing analytics and statistical testing were used to benchmark the quality of the generated images. Finally, the computationally stained images were evaluated by multiple pathologists for prostate tumor diagnosis and clinical decision-making.
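The structure-preserving loss and the image-quality benchmarking described above can be sketched with a structural similarity (SSIM) term. The code below is an assumption-laden illustration: it uses a single-window simplification of SSIM (production implementations use local Gaussian windows) and an example blend of SSIM with pixelwise L1 error; the study's actual loss function and analytics are not reproduced here.

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Global structural similarity between two images.

    Single-window simplification of SSIM; returns 1.0 for identical images
    and lower values as structure diverges.
    """
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def structure_aware_loss(generated, target, alpha=0.85):
    """Illustrative structure-preserving loss: blend of structural
    dissimilarity (1 - SSIM) and pixelwise L1 error. The alpha weighting
    and the specific terms are assumptions for this sketch."""
    l1 = np.abs(generated - target).mean()
    return alpha * (1 - ssim_global(generated, target)) + (1 - alpha) * l1

img = np.random.default_rng(1).random((64, 64))
print(ssim_global(img, img))           # identical images give SSIM of 1
print(structure_aware_loss(img, img))  # a perfect match gives zero loss
```

A metric like SSIM serves double duty: inside the loss it penalizes structural distortion during training, and at evaluation time it supports the kind of statistical benchmarking of generated versus real stained images that the study performed.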