Deep learning-based quantitative assessment of digital pathology images, and understanding the reasons underlying a specific clinical decision, remain challenging; automatic histologic pattern classification and tumor localization in whole-slide pathology images are therefore critical for interpretable learning systems. In this study, we propose an end-to-end deep learning framework for automatic detection and localization of tumors directly from non-stained whole-slide images (WSIs) of prostate core biopsies. We use a previously described Generative Adversarial Network (GAN)-based model from our laboratory for computational Hematoxylin and Eosin (H&E) staining of native non-stained pathology images. We report, for the first time, a convolutional neural network that detects and classifies tumor regions in 1024×1024 virtually stained H&E patches, together with a concurrent deep weakly supervised learning (WSL) model that localizes the predominant histologic patterns used for tumor classification without requiring pixel-level annotations. The end-to-end system was evaluated on a hold-out set of 17,000 1024×1024 non-stained patches extracted from 13 whole-slide prostate biopsy images. Experimental results yielded 86.37% patch-level classification accuracy with 85.05% precision, and a Dice index of 65.07±1.99 (compared with 70.24±1.86 for the U-Net reference model for pixel-level segmentation). The end-to-end deep learning framework thus automates the digital pathology image workflow from tissue staining to interpretable prostate tumor classification; it can support accurate grading of prostate cancer and can be generalized to other whole-slide image classification tasks.
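The Dice index reported above measures pixel-level overlap between a predicted tumor mask and a reference annotation. A minimal sketch of the metric, assuming binary NumPy masks (an illustrative helper, not the authors' implementation):

```python
import numpy as np

def dice_index(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks, in percent.

    Hypothetical helper illustrating the overlap metric used to compare
    the WSL localization against the U-Net segmentation baseline.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty.
    return 100.0 * 2.0 * intersection / (pred.sum() + target.sum() + eps)

mask = np.array([[1, 1], [0, 0]])
print(round(dice_index(mask, mask)))  # identical masks → 100
print(round(dice_index(mask, 1 - mask)))  # disjoint masks → 0
```

A score of 100 indicates perfect overlap; slide-level scores such as 65.07±1.99 are typically averaged over patches or slides.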