Now showing 1 - 10 of 14
  • Publication
    Deep Learning-based Diagnosis of Thyroid Tumors using Histopathology Images from Thyroid Nodule Capsule
    (2024-01-01) Shah, Nitya A.; Suthar, Jinal; Tejaswee, A.; Enache, Adrian; Eftimie, Lucian G.; Hristu, Radu
    Histopathology analysis of the thyroid nodule is the current gold standard for the differential diagnosis of thyroid tumors. Deep learning methods have been extensively used for the diagnosis of histopathology images. We investigate the differential diagnosis of thyroid tumors by analysing histopathology images of thyroid nodule capsules using different deep learning methods, namely Residual Network (ResNet), Densely Connected Network (DenseNet), and Vision Transformer (ViT). To evaluate classification performance, we use several metrics, including precision, recall, F1-score, and AUROC. Our study shows that histopathology images of thyroid nodule capsules are superior to histopathology images of thyroid nodules for the differential diagnosis of thyroid tumors.
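The abstract above evaluates classifiers with precision, recall, F1-score, and AUROC. As a quick reference, a minimal pure-Python sketch of these metrics for the binary case (the function name and inputs are illustrative, not from the paper):

```python
def binary_metrics(y_true, y_pred, y_score):
    """Precision, recall, F1, and AUROC for binary labels.

    y_pred holds hard 0/1 predictions; y_score holds the classifier's
    continuous confidence for the positive class.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    # AUROC equals the probability that a random positive outscores
    # a random negative (ties count half).
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    auroc = wins / (len(pos) * len(neg)) if pos and neg else 0.0
    return precision, recall, f1, auroc
```

For multi-class settings such as differential diagnosis, these metrics are typically computed per class and then macro-averaged.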
  • Publication
    An extremely lightweight CNN model for the diagnosis of chest radiographs in resource-constrained environments
    (2023-12-01) Kumar, Gautam; Sharma, Nirbhay
    Background: In recent years, deep learning methods have been successfully used for chest x-ray diagnosis. However, such deep learning models often contain millions of trainable parameters and have high computation demands. As a result, providing the benefits of cutting-edge deep learning technology to areas with low computational resources would not be easy. Computationally lightweight deep learning models may potentially alleviate this problem. Purpose: We aim to create a computationally lightweight model for the diagnosis of chest radiographs. Our model has only 0.14M parameters and a 550 KB size. These make the proposed model potentially useful for deployment in resource-constrained environments. Methods: We fuse the concept of depthwise convolutions with squeeze and expand blocks to design the proposed architecture. The basic building block of our model is called the Depthwise Convolution In Squeeze and Expand (DCISE) block. Using these DCISE blocks, we design an extremely lightweight convolutional neural network model (ExLNet), a computationally lightweight convolutional neural network (CNN) model for chest x-ray diagnosis. Results: We perform rigorous experiments on three publicly available datasets, namely, National Institutes of Health (NIH), VinBig, and Chexpert for binary and multi-class classification tasks. We train the proposed architecture on the NIH dataset and evaluate the performance on the VinBig and Chexpert datasets. The proposed method outperforms several state-of-the-art approaches for both binary and multi-class classification tasks despite having significantly fewer parameters. Conclusions: We design a lightweight CNN architecture for the chest x-ray classification task by introducing ExLNet, which uses novel DCISE blocks to reduce the computational burden. We show the effectiveness of the proposed architecture through various experiments performed on publicly available datasets. The proposed architecture shows consistent performance in binary as well as multi-class classification tasks and outperforms other lightweight CNN architectures. Due to a significant reduction in computational requirements, our method can be useful for resource-constrained clinical environments as well.
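The DCISE block itself is not detailed in the abstract, but the parameter savings that motivate depthwise convolutions are easy to illustrate: a depthwise-separable layer replaces one k×k standard convolution with a per-channel k×k filter plus a 1×1 pointwise mix (a generic sketch of the counting, not the authors' architecture):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """One k x k filter per input channel, then a 1x1 pointwise mix."""
    return k * k * c_in + c_in * c_out

# A 3x3 layer mapping 64 -> 128 channels:
standard = conv_params(3, 64, 128)                  # 73,728 weights
separable = depthwise_separable_params(3, 64, 128)  # 8,768 weights
```

Stacking such layers is how sub-million-parameter CNNs of this kind stay small.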
  • Publication
    Anomaly Guided Generalizable Super-Resolution of Chest X-Ray Images Using Multi-level Information Rendering
    (2024-01-01) Yadagiri, Vamshi Vardhan; Reddy, Sekhar
    Single image super-resolution (SISR) methods aim to generate a high-resolution image from the corresponding low-resolution image. Such methods may be useful in improving the resolution of medical images, including chest x-rays. Medical images with superior resolution may subsequently lead to an improved diagnosis. However, SISR methods for medical images are relatively rare. We propose a SISR method for chest x-ray images. Our method uses multi-level information rendering by utilizing cues about the abnormality present in the images. Experiments on publicly available datasets show the superiority of the proposed method over several state-of-the-art approaches.
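Super-resolution quality is commonly reported with PSNR; a small sketch of that standard metric (illustrative only — the abstract does not state which metrics the paper uses):

```python
import math

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel lists.

    Higher is better; identical images give infinity.
    """
    mse = sum((r - x) ** 2
              for r, x in zip(reference, reconstructed)) / len(reference)
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)
```

In practice the pixel lists would be flattened image arrays, and SSIM is usually reported alongside PSNR.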
  • Publication
    Federated Learning Using Multi-institutional Data for Generalizable Chest X-ray Diagnosis
    (2023-01-01)
    Chowdari, Dabbara Keshava
    ;
    Radhasyam, Nunna
    ;
    Pal, Anabik
    ;
    Deep learning models have achieved great success for the automated analysis of chest x-rays [9]. However, many such models lack generalizability, i.e., a model trained on one dataset often performs poorly on a different dataset. One possible reason for such a performance drop is the difference in the distribution of data from different institutions. In this context, utilizing data from multiple institutions to train a deep learning model may help include a wider variety of data during training. This can improve the generalizability of the trained model. However, such an approach does not preserve data privacy. To deal with this limitation, federated learning may be useful. Federated learning allows multiple institutions to develop a machine learning model utilizing data from all institutions without sharing the data. Thus, federated learning approaches help preserve data privacy. Although there has been significant advancement in federated learning [8], such methods are rare in the context of chest x-ray diagnosis [7, 10]. Furthermore, most such models do not utilize chest x-ray datasets from multiple institutions. In this work, we design a federated learning framework for chest x-ray diagnosis using datasets from multiple institutions. Our model shows improved generalizability in chest x-ray diagnosis across several publicly available large-scale chest x-ray datasets.
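The federated setup described here can be sketched with FedAvg-style weighted parameter averaging, where each institution trains locally and only model weights are shared (a generic illustration with hypothetical names, not the authors' exact framework):

```python
def fedavg(client_params, client_sizes):
    """Size-weighted average of per-client parameters (FedAvg-style).

    client_params: one dict per institution mapping a layer name to a
    flat list of floats; client_sizes: per-institution sample counts.
    Only parameters cross institutional boundaries, never raw images.
    """
    total = sum(client_sizes)
    merged = {}
    for name in client_params[0]:
        length = len(client_params[0][name])
        merged[name] = [
            sum(n / total * params[name][i]
                for n, params in zip(client_sizes, client_params))
            for i in range(length)
        ]
    return merged
```

A training round would broadcast the merged weights back to every institution for the next pass of local training.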
  • Publication
    Detail preserving conditional random field as 2-D RNN for gland segmentation in histology images
    (2022-07-01) Chattopadhyay, Aratrik; Mukherjee, Dipti Prasad
    Grading of cancer offers crucial insights for treatment planning. Morphology of glands in histology images is of prime importance for grading several types of cancers. Therefore, accurate segmentation of glands plays a pivotal role in planning the treatment of such cancers. We introduce a first-of-its-kind detail-preserving conditional random field for gland segmentation from histology images. Our design involves a novel formulation of Gibbs energy that captures the spatial interaction between neighboring pixels through the hidden state of a 2-D recurrent neural network (2-D RNN). We show that the iterative training of the 2-D RNN results in the minimization of the Gibbs energy, leading to accurate gland segmentation. Experiments on publicly available histology image datasets show the efficacy of the proposed method in accurate gland segmentation. Our model achieves at least 7% improvement in terms of Hausdorff distance for gland segmentation compared to a number of state-of-the-art techniques.
    Scopus© Citations 5
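The 7% improvement above is measured in Hausdorff distance, a boundary-sensitive segmentation metric; a minimal sketch on 2-D point sets (illustrative; production code would typically use scipy.spatial.distance.directed_hausdorff on boundary pixels):

```python
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets,
    e.g. predicted vs. ground-truth gland boundary pixels.

    For each point in one set, find its nearest neighbor in the
    other; the metric is the worst such nearest-neighbor distance.
    """
    def directed(src, dst):
        return max(
            min(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
                for x2, y2 in dst)
            for x1, y1 in src
        )
    return max(directed(a, b), directed(b, a))
```

Because it is a worst-case distance, it penalizes boundary outliers that overlap metrics such as Dice can hide.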
  • Publication
    Few-shot chest x-ray diagnosis using discriminative ensemble learning
    (2022-01-01) Tang, Yu Xing; Shen, Thomas C.; Summers, Ronald M.
    Few-shot learning, in spite of its recent popularity, remains almost unexplored in medical image analysis. We design a few-shot classifier for the diagnosis of chest x-rays using discriminative ensemble learning. Our method consists of a CNN-based coarse-learner for feature extraction from chest x-rays, followed by a saliency-based classifier that classifies chest x-rays through the extraction and utilization of disease-specific salient features. We propose a novel discriminative autoencoder ensemble to design the saliency-based classifier. Our algorithm proceeds through metatraining and metatesting phases. During the training phase of metatraining, we train the coarse-learner. However, during the training phase of metatesting, we train only the saliency-based classifier. Thus, our method is the first of its kind in which the training phases of metatraining and metatesting are architecturally and temporally disjoint. Consequently, the proposed method is architecturally modular and may be easily adapted to new tasks by training only the saliency-based classifier. Experiments show improvements of up to ∼19% in F1 score over the baseline in the diagnosis of chest x-rays from publicly available datasets.
    Scopus© Citations 1
  • Publication
    Multi-task Learning for Few-Shot Differential Diagnosis of Breast Cancer Histopathology Images
    (2023-01-01) Thoriya, Krishna; Mutreja, Preeti
    Deep learning models may be useful for the differential diagnosis of breast cancer histopathology images. However, most modern deep learning methods are data-hungry, and large annotated datasets of breast cancer histopathology images are elusive. As a result, the application of such deep learning methods to the differential diagnosis of breast cancer is limited. To deal with this problem, we propose a few-shot learning approach for the differential diagnosis of histopathology images of breast tissue. Our model is trained in two stages. We initially train our model on a binary classification task of identifying benign and malignant tissues. Subsequently, we propose a multi-task learning strategy for the few-shot differential diagnosis of breast tissues. Experiments on publicly available breast cancer histopathology image datasets show the efficacy of the proposed method.
  • Publication
    Universal Lesion Detection and Classification Using Limited Data and Weakly-Supervised Self-training
    (2022-01-01) Naga, Varun; Mathai, Tejas Sudharshan; Summers, Ronald M.
    Radiologists identify, measure, and classify clinically significant lesions routinely for cancer staging and tumor burden assessment. As these tasks are repetitive and cumbersome, often only the largest lesion is identified, leaving others of potential importance unmentioned. Automated deep learning-based methods for lesion detection have been proposed in the literature to help relieve radiologists of these tasks, using the publicly available DeepLesion dataset (32,735 lesions, 32,120 CT slices, 10,594 studies, 4,427 patients, 8 body part labels). However, this dataset contains missing lesions and displays a severe class imbalance in the labels. In our work, we use a subset of the DeepLesion dataset (boxes + tags) to train a state-of-the-art VFNet model to detect and classify suspicious lesions in CT volumes. Next, we predict on a larger data subset (containing only bounding boxes) and identify new lesion candidates for a weakly-supervised self-training scheme. The self-training is done across multiple rounds to improve the model’s robustness against noise. Two experiments were conducted with static and variable thresholds during self-training, and we show that sensitivity improves from 72.5% without self-training to 76.4% with self-training. We also provide a structured reporting guideline through a “Lesions” sub-section for entry into the “Findings” section of a radiology report. To our knowledge, we are the first to propose a weakly-supervised self-training approach for joint lesion detection and tagging in order to mine for under-represented lesion classes in the DeepLesion dataset.
    Scopus© Citations 2
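The weakly-supervised self-training loop above hinges on selecting confident lesion candidates each round; a schematic sketch of the static vs. variable (decaying) threshold idea (names and the schedule are illustrative, not the paper's exact policy):

```python
def select_pseudo_labels(predictions, threshold):
    """Keep detections whose confidence clears the threshold.

    predictions: list of (lesion_class, confidence) pairs from the
    current model; the survivors become pseudo-labels for the next round.
    """
    return [(cls, conf) for cls, conf in predictions if conf >= threshold]

def variable_thresholds(start, step, rounds):
    """A simple decaying schedule: later rounds accept riskier candidates.

    Values are rounded to sidestep float noise (e.g. 0.9 - 2 * 0.1).
    """
    return [round(max(start - r * step, 0.0), 6) for r in range(rounds)]
```

A static policy would call select_pseudo_labels with the same threshold every round; the variable policy feeds it one entry of the schedule per round.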
  • Publication
    Correcting Class Imbalances with Self-Training for Improved Universal Lesion Detection and Tagging
    (2023-01-01) Shieh, Alexander; Mathai, Tejas Sudharshan; Liu, Jianfei; Summers, Ronald M.
    Universal lesion detection and tagging (ULDT) in CT studies is critical for tumor burden assessment and tracking the progression of lesion status (growth/shrinkage) over time. However, a lack of fully annotated data hinders the development of effective ULDT approaches. Prior work used the DeepLesion dataset (4,427 patients, 10,594 studies, 32,120 CT slices, 32,735 lesions, 8 body part labels) for algorithmic development, but this dataset is not completely annotated and contains class imbalances. To address these issues, in this work, we developed a self-training pipeline for ULDT. A VFNet model was trained on a limited 11.5% subset of DeepLesion (bounding boxes + tags) to detect and classify lesions in CT studies. Then, it identified and incorporated novel lesion candidates from a larger unseen data subset into its training set, and self-trained itself over multiple rounds. Multiple self-training experiments were conducted with different threshold policies to select higher-quality predicted lesions and counter the class imbalances. We discovered that direct self-training improved the sensitivities of over-represented lesion classes at the expense of under-represented classes. However, upsampling the lesions mined during self-training along with a variable threshold policy yielded a 6.5% increase in sensitivity at 4 FP in contrast to self-training without class balancing (72% vs 78.5%) and an 11.7% increase compared to the same self-training policy without upsampling (66.8% vs 78.5%). Furthermore, we show that our results either improved or maintained the sensitivity at 4 FP for all 8 lesion classes.
    Scopus© Citations 1
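The class balancing described above — upsampling mined lesions of rare classes before retraining — can be sketched as follows (a simplified illustration; the paper's actual policy combines this with variable confidence thresholds):

```python
from collections import defaultdict

def upsample_mined_lesions(mined, target_per_class):
    """Repeat mined samples of rare classes so each class reaches
    target_per_class entries (classes already at or above the target
    are capped at it in this simplified sketch).

    mined: list of (lesion_class, sample) pairs from self-training.
    """
    by_class = defaultdict(list)
    for cls, sample in mined:
        by_class[cls].append((cls, sample))
    balanced = []
    for cls, items in by_class.items():
        reps = -(-target_per_class // len(items))  # ceiling division
        balanced.extend((items * reps)[:target_per_class])
    return balanced
```

Feeding the balanced set back into the next self-training round is what keeps over-represented classes from crowding out the rare ones.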
  • Publication
    Differential diagnosis of thyroid nodule capsules using random forest guided selection of image features
    (2022-12-01)
    Eftimie, Lucian G.
    ;
    Glogojeanu, Remus R.
    ;
    Tejaswee, A.
    ;
    Gheorghita, Pavel
    ;
    Stanciu, Stefan G.
    ;
    Chirila, Augustin
    ;
    Stanciu, George A.
    ;
    ;
    Hristu, Radu
    Microscopic evaluation of tissue sections stained with hematoxylin and eosin is the current gold standard for diagnosing thyroid pathology. Digital pathology is gaining momentum, providing the pathologist with cues beyond traditional routes when placing a diagnosis; it is therefore extremely important to develop new image analysis methods that can extract image features with diagnostic potential. In this work, we use histogram and texture analysis to extract features from microscopic images acquired on thin sections of thyroid nodule capsules and demonstrate how they enable the differential diagnosis of thyroid nodules. Targeted thyroid nodules are benign (i.e., follicular adenoma) and malignant (i.e., papillary thyroid carcinoma and its sub-type arising within a follicular adenoma). Our results show that the considered image features enable the quantitative characterization of the collagen capsule surrounding thyroid nodules and provide an accurate classification of the latter’s type using a random forest.
    Scopus© Citations 4
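The histogram analysis mentioned above relies on first-order statistics of pixel intensities; a minimal sketch of such features (illustrative — the paper's full feature set also includes texture descriptors), which would then feed a random-forest classifier:

```python
def histogram_features(pixels):
    """Mean, variance, and skewness of a grayscale intensity patch.

    pixels: flat list of intensity values from an image region,
    e.g. a patch of the collagen capsule.
    """
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5
    skew = (sum((p - mean) ** 3 for p in pixels) / n / std ** 3
            if std else 0.0)
    return mean, var, skew
```

One such feature vector per capsule region, labeled benign or malignant, is the kind of tabular input a random forest handles well.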