Project

Automated End-to-End Deep Learning Framework for Classification and Tumor Localization from Native Non-stained Pathology Images

Copyright

SPIE

Pratik Shah

This study reports an end-to-end deep learning framework for virtual H&E staining, automatic classification, and localization of prostate tumors from non-stained core biopsy images. Three deep learning methods, pix2pix, ResNet, and GBP, were trained and validated for high performance. The proposed end-to-end system consists of the GAN-CS model, the ResNet-18 classifier, and a deep weakly supervised learning model. A computationally H&E-stained patch was first generated from a non-stained input image by the GAN-CS model and then fed into the ResNet-18 classifier to be labeled as tumor or non-tumor. A deep weakly supervised learning GBP algorithm was then used to localize class-specific (tumor) regions on images output by the ResNet-18 classifier. If an input image patch was classified as tumor (panel b), the GBP localization module generated a saliency map (panel c, green highlights) locating the tumor regions on the computationally stained image, as shown in the figure above.
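The guided backpropagation (GBP) step can be illustrated with a minimal NumPy sketch. This is not the study's trained model — the two-layer network and its random weights are hypothetical stand-ins — but the core GBP rule is shown: at each ReLU, gradient flows backward only where the forward activation was positive and the incoming gradient is positive, yielding a non-negative saliency value per input pixel.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def guided_backprop(x, W1, W2, class_idx):
    """Saliency for one class via guided backpropagation.

    Forward: a tiny two-layer ReLU network (hypothetical stand-in for
    ResNet-18). Backward: at the ReLU we keep gradient only where the
    activation was positive AND the incoming gradient is positive.
    """
    # forward pass
    z1 = W1 @ x
    a1 = relu(z1)
    logits = W2 @ a1

    # backward pass, seeded from the chosen class score
    grad_logits = np.zeros_like(logits)
    grad_logits[class_idx] = 1.0
    grad_a1 = W2.T @ grad_logits

    # guided ReLU backward: mask by activation > 0 and gradient > 0
    grad_z1 = grad_a1 * (z1 > 0) * (grad_a1 > 0)
    grad_x = W1.T @ grad_z1

    # keep only positive input-space evidence for the class
    return np.maximum(grad_x, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=8)          # stand-in for a flattened image patch
W1 = rng.normal(size=(6, 8))    # hypothetical first-layer weights
W2 = rng.normal(size=(2, 6))    # two classes: non-tumor / tumor
saliency = guided_backprop(x, W1, W2, class_idx=1)
print(saliency.shape)           # one relevance value per input pixel
```

In the full system this per-pixel relevance, computed over every patch the classifier labels as tumor, is what produces the green-highlighted saliency map in panel c.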


  1. Why is this work important?

    From the clinical perspective, performing automated staining, classification, and segmentation in a single pipeline can support diagnosis and treatment of prostate cancer at early stages by improving the consistency and speed of digital pathology workflows.

    From the computational perspective, we communicate novel interpretable deep learning methods and models that extend the value of non-stained prostate core biopsy images for tumor classification and localization.

    Digital pathology software providers can augment their existing slide scanners with end-to-end staining capabilities.

  2. What are our key contributions?

    To the best of our knowledge, this is the first work to discriminate tumor regions as small as 1024×1024-pixel patches from non-cancerous tissue in the prostate using non-stained whole slide images. Our core contributions extend the utility and performance of generative virtual H&E staining deep learning methods and models. We also extend the utility of computationally H&E-stained images so the medical imaging community can use them for tumor localization and classification.

Current researchers working on this project: 

 MIT: Audrey Xie, Sam Ghosal, and Pratik Shah

 Stanford: Yili Zhu and Alarice Lowe