Vatsa, Mayank
Preferred name: Vatsa, Mayank
Alternative names: Vatsa, M.; Vatsa M.
Now showing 1 - 10 of 33 results
- Publication: Multi-Surface Multi-Technique (MUST) Latent Fingerprint Database (2024-01-01)
Malhotra, Aakarsh; Morris, Keith B.; Noore, Afzel
Latent fingerprint recognition involves the acquisition and comparison of latent fingerprints against an exemplar gallery of fingerprints. The diversity in the type of surface leads to different procedures for recovering the latent fingerprint, and the appearance of latent fingerprints varies significantly with the development technique, leading to large intra-class variation. Due to the lack of large datasets acquired using multiple mechanisms and surfaces, existing algorithms for latent fingerprint enhancement and comparison may perform poorly. In this study, we propose the Multi-Surface Multi-Technique (MUST) Latent Fingerprint Database. The database consists of more than 16,000 latent fingerprint impressions from 120 unique classes (120 fingers from 12 participants). Including the corresponding exemplar fingerprints (livescan and rolled) and an extended gallery, the dataset has nearly 21,000 impressions. It contains latent fingerprints acquired under 35 different scenarios, plus four additional subsets of exemplar prints captured using a livescan sensor and inked-rolled prints. With 39 different subsets, the database illustrates intra-class variations in latent fingerprints and has potential uses in building robust algorithms for latent fingerprint enhancement, segmentation, comparison, and multi-task learning. We also provide annotations for manually marked minutiae, acquisition Pixels Per Inch (PPI), and semantic segmentation masks, and we present an experimental protocol and baseline results for the proposed dataset. The availability of the proposed database can encourage research on handling intra-class variation in latent fingerprint recognition.
- Publication: On AI Approaches for Promoting Maternal and Neonatal Health in Low Resource Settings: A Review (2022-09-30)
Khan, Misaal; Khurshid, Mahapara; Duggal, Mona; Singh, Kuldeep
A significant challenge for hospitals and medical practitioners in low- and middle-income nations is the lack of sufficient healthcare facilities for the timely diagnosis of chronic and deadly diseases. In particular, maternal and neonatal morbidity due to various non-communicable and nutrition-related diseases is a serious public health issue that leads to several deaths every year. These diseases, affecting either mother or child, can be hospital-acquired or contracted during pregnancy, delivery, the postpartum period, or even during child growth and development. Many of these conditions are challenging to detect in their early stages, which puts the patient at risk of developing severe conditions over time. Therefore, there is a need for early screening, detection, and diagnosis, which could reduce maternal and neonatal mortality. With the advent of Artificial Intelligence (AI), digital technologies have emerged as practical assistive tools in different healthcare sectors but are still in their nascent stages when applied to maternal and neonatal health. This review article presents an in-depth examination of digital solutions proposed for maternal and neonatal healthcare in low-resource settings and discusses open problems as well as future research directions.
Scopus citations: 4
- Publication: Seg-DGDNet: Segmentation Based Disguise Guided Dropout Network for Low Resolution Face Recognition (2023-11-01)
Dosi, Muskan; Chiranjeev, Chiranjeev; Agarwal, Shivang; Chaudhary, Jyoti; Manchanda, Sunny; Balutia, Kavita; Bhagwatkar, Kaushik
Face recognition models often encounter challenges when recognizing partially occluded faces. Disguise can be intentional, to impersonate someone, or unintentional, when the subject wears artifacts such as sunglasses, masks, hats, and caps. To identify a subject accurately, it is essential to discard the occluded regions of the subject's face and use the features extracted from the visible regions. The problem is further exacerbated when the input image is low resolution or captured at a distance. This article proposes a novel Segmentation based Disguise Guided Dropout Network (Seg-DGDNet) to identify occluded facial regions and recognize a person from the non-occluded biometric features. The proposed Seg-DGDNet has two primary tasks: 1) identifying the non-occluded pixels in the subject's face using segmentation models, and 2) guiding the recognition model to concentrate on visible facial features with the help of the proposed guided dropout. The performance of the proposed model is evaluated on three disguised face datasets with artifacts such as facial masks and sunglasses. The proposed model outperforms existing state-of-the-art face recognition models by a significant margin on different datasets with various levels of disguise and resolution.
Scopus citations: 6
- Publication: TBIOM Special Issue on 'Best Reviewed Papers from IJCB 2020 - Editorial' (2021-10-01)
Ratha, Nalini; Struc, Vitomir; Kakadiaris, Ioannis A.; Phillips, Jonathon P.
- Publication: IBAttack: Being Cautious about Data Labels (2023-12-01)
Agarwal, Akshay; Ratha, Nalini
Traditional backdoor attacks insert a trigger patch into the training images and associate the trigger with the targeted class label. Backdoor attacks are a rapidly evolving class of attacks that can have a significant impact. Adversarial perturbations, on the other hand, have a significantly different attack mechanism from traditional backdoor corruptions: an imperceptible noise is learned to fool deep learning models. In this research, we amalgamate these two concepts and propose a novel imperceptible backdoor attack, termed IBAttack, in which adversarial images are associated with the desired target classes. A significant advantage of the proposed adversarial backdoor attack is its imperceptibility compared to the traditional trigger-based mechanism. The proposed adversarial dynamic attack, in contrast to existing attacks, is agnostic to classifiers and trigger patterns. Extensive evaluation using multiple databases and networks illustrates the effectiveness of the proposed attack.
Scopus citations: 2
- Publication: Discriminative shared transform learning for sketch to image matching (2021-06-01)
Nagpal, Shruti; Singh, Maneet
Sketch to digital image matching refers to the problem of matching a sketch image (often drawn by hand or created with software) against a gallery of digital images (captured via an acquisition device such as a digital camera). Automated sketch to digital image matching has applicability in several day-to-day tasks, such as similar object image retrieval, forensic sketch matching in law enforcement scenarios, and profile linking using caricature face images on social media. As opposed to digital images, sketch images are generally edge drawings containing limited (or no) textural or colour-based information. Further, there is no single technique for sketch generation, which often results in varying artistic or software styles, along with the interpretation bias of the individual creating the sketch. Beyond the variations observed across the two domains (sketch and digital image), automated sketch to digital image matching is further marred by the challenges of limited training data and wide intra-class variability. To address these problems, this research proposes a novel Discriminative Shared Transform Learning (DSTL) algorithm for sketch to digital image matching. DSTL learns a shared transform for data belonging to the two domains while modeling the class variations, resulting in discriminative feature learning. Two models are presented under the proposed DSTL algorithm: (i) the Contractive Model (C-Model) and (ii) the Divergent Model (D-Model), formulated with different supervision constraints. Experimental analysis on seven datasets for three case studies of sketch to digital image matching demonstrates the efficacy of the proposed approach, highlighting the importance of each component, its input-agnostic behavior, and improved matching performance.
Scopus citations: 9
- Publication: Uniform misclassification loss for unbiased model prediction (2023-12-01)
Majumdar, Puspita
Deep learning algorithms have achieved tremendous success over the past few years. However, the biased behavior of deep models, where the models favor or disfavor certain demographic subgroups, is a major concern in the deep learning community, and several adverse consequences of biased predictions have been observed in the past. One solution to alleviate the problem is to train deep models for fair outcomes. Therefore, in this research, we propose a novel loss function, termed Uniform Misclassification Loss (UML), to train deep models for unbiased outcomes. The proposed UML function penalizes the model for the worst-performing subgroup, mitigating bias and enhancing overall model performance. The proposed loss function is also effective when training with imbalanced data. Further, a metric, the Joint Performance Disparity Measure (JPD), is introduced to jointly measure overall model performance and bias in model prediction. Multiple experiments have been performed on four publicly available datasets for facial attribute prediction, and comparisons are made with existing bias mitigation algorithms. Experimental results are reported using performance and bias evaluation metrics. The proposed loss function outperforms existing bias mitigation algorithms, showcasing its effectiveness in obtaining unbiased outcomes and improved performance.
- Publication: DeriveNet for (Very) Low Resolution Image Classification (2022-10-01)
Singh, Maneet; Nagpal, Shruti
Images captured from a distance often result in (very) low resolution (VLR/LR) regions of interest that require automated identification. VLR/LR images (or regions of interest) often contain little information content, rendering feature extraction and classification ineffective. To this effect, this research proposes a novel DeriveNet model for VLR/LR classification, which focuses on learning effective class boundaries by utilizing class-specific domain knowledge. The DeriveNet model is jointly trained via two losses: (i) the proposed Derived-Margin softmax loss and (ii) the proposed Reconstruction-Center (ReCent) loss. The Derived-Margin softmax loss focuses on learning an effective VLR classifier while explicitly modeling the inter-class variations. The ReCent loss incorporates domain information by learning an HR reconstruction space for approximating the class variations of the VLR/LR samples; it is used to derive inter-class margins for the Derived-Margin softmax loss. The DeriveNet model is trained with a novel multi-resolution pyramid based data augmentation, which enables the model to learn from varying resolutions during training. Experiments and analysis have been performed on multiple datasets for (i) VLR/LR face recognition, (ii) VLR digit classification, and (iii) VLR/LR face recognition from drone-shot videos. The DeriveNet model achieves state-of-the-art performance across different datasets, demonstrating its utility for several VLR/LR classification tasks.
Scopus citations: 9
- Publication: Subclass heterogeneity aware loss for cross-spectral cross-resolution face recognition (2020-07-01)
Ghosh, Soumyadeep
One of the most challenging scenarios in face recognition is matching images in the presence of multiple covariates such as cross-spectrum and cross-resolution. In this paper, we propose a Subclass Heterogeneity Aware Loss (SHEAL) to train a deep convolutional neural network so that it produces embeddings suitable for heterogeneous face recognition with both single and multiple heterogeneities. The performance of the proposed SHEAL function is evaluated on four databases in terms of recognition performance as well as convergence in time and epochs. We observe that SHEAL not only yields state-of-the-art results for the most challenging case of cross-spectral cross-resolution face recognition but also achieves excellent performance on homogeneous face recognition.
Scopus citations: 10
- Publication: On Matching Finger-Selfies Using Deep Scattering Networks (2020-10-01)
Malhotra, Aakarsh; Sankaran, Anush
With advancements in technology, smartphones' capabilities have increased immensely; for instance, smartphone cameras are being used for face and ocular biometric-based authentication. This research proposes a finger-selfie based authentication mechanism, which uses a smartphone camera to acquire a selfie of a finger. In addition to personal device-level authentication, finger-selfies may also be matched against livescan fingerprints in legacy/national ID databases for remote or touchless authentication. We propose an algorithm comprising segmentation, enhancement, Deep Scattering Network based feature extraction, and a Random Decision Forest to authenticate finger-selfies. This paper also presents one of the largest finger-selfie databases, with over 19,400 images. The images in the IIIT-D Smartphone Finger-selfie Database v2 are captured using multiple smartphones and include variations due to background, illumination, resolution, and sensors. Results and comparison with existing algorithms show the efficacy of the proposed algorithm, which yields equal error rates in the range of 2.1-5.2% for different experimental protocols.
Scopus citations: 25
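The guided-dropout idea described in the Seg-DGDNet entry above (suppressing features that correspond to occluded pixels so that recognition relies only on visible regions) can be illustrated with a toy sketch. The flat feature layout, binary mask convention, and function name are assumptions for illustration, not the paper's implementation.

```python
# Toy sketch of segmentation-guided dropout: features aligned with
# occluded positions (mask value 0) are zeroed, so downstream
# recognition uses only the visible regions.

def guided_dropout(features, visibility_mask):
    """Zero out features at positions marked occluded (0) in the mask."""
    if len(features) != len(visibility_mask):
        raise ValueError("features and visibility mask must align")
    return [f * m for f, m in zip(features, visibility_mask)]
```

In the actual model, the mask would come from a segmentation network and the dropout would act on convolutional feature maps rather than a flat list.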
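The IBAttack entry above describes associating imperceptibly perturbed images with attacker-chosen target labels. A minimal data-poisoning sketch of that idea follows; `perturb` stands in for whatever adversarial-noise generator is used, and everything here is illustrative rather than the authors' code.

```python
# Minimal poisoning sketch: selected training samples receive a small
# perturbation and are relabeled to the attacker's target class.

def poison(samples, labels, indices, target_class, perturb):
    """Return poisoned copies of (samples, labels); inputs are not mutated."""
    samples = [list(s) for s in samples]
    labels = list(labels)
    for i in indices:
        samples[i] = perturb(samples[i])   # imperceptible noise stand-in
        labels[i] = target_class           # associate with target class
    return samples, labels
```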
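The Uniform Misclassification Loss entry above describes penalizing the model for its worst-performing subgroup. A toy sketch of such an objective, assuming hard predictions and an illustrative weighting term `lambda_` (this is not the authors' formulation, which operates on differentiable training losses):

```python
# Sketch of a "uniform misclassification"-style objective: average error
# plus a penalty on the worst subgroup's error rate.

def subgroup_error_rates(preds, labels, groups):
    """Misclassification rate per demographic subgroup."""
    stats = {}
    for p, y, g in zip(preds, labels, groups):
        wrong, total = stats.get(g, (0, 0))
        stats[g] = (wrong + (p != y), total + 1)
    return {g: wrong / total for g, (wrong, total) in stats.items()}

def uml_style_loss(preds, labels, groups, lambda_=1.0):
    """Overall error plus a weighted penalty for the worst subgroup."""
    rates = subgroup_error_rates(preds, labels, groups)
    avg = sum(p != y for p, y in zip(preds, labels)) / len(labels)
    return avg + lambda_ * max(rates.values())
```

Because the penalty tracks the maximum subgroup error, reducing the loss pushes subgroup performance toward uniformity instead of letting one subgroup absorb all the mistakes.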
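The multi-resolution pyramid augmentation mentioned in the DeriveNet entry can be sketched as repeated 2x downsampling, so the model sees the same sample at several resolutions during training. The 2x2 box average below is a stand-in for whatever interpolation the paper actually uses.

```python
# Illustrative multi-resolution pyramid: each image yields a list of
# progressively downsampled versions for augmentation.

def downsample2x(img):
    """Average non-overlapping 2x2 blocks of a 2-D list-of-lists image."""
    h, w = len(img), len(img[0])
    return [[(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) / 4.0
             for c in range(0, w - 1, 2)]
            for r in range(0, h - 1, 2)]

def resolution_pyramid(img, levels=3):
    """Return [full-res, half-res, quarter-res, ...] views of the image."""
    pyramid = [img]
    for _ in range(levels - 1):
        img = downsample2x(img)
        pyramid.append(img)
    return pyramid
```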
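The finger-selfie entry above reports equal error rates (EER). As a reminder of how such a figure is obtained, here is a small sketch that sweeps a threshold over genuine and impostor match scores and returns the operating point where the false accept and false reject rates are closest; the scores and threshold scan are purely illustrative.

```python
# Approximate EER: scan candidate thresholds over all observed scores
# and report the midpoint of FAR and FRR where their gap is smallest.

def eer(genuine, impostor):
    """Approximate equal error rate from genuine/impostor score lists."""
    best_gap, best_eer = 2.0, None
    for t in sorted(set(genuine) | set(impostor)):
        frr = sum(g < t for g in genuine) / len(genuine)    # false rejects
        far = sum(i >= t for i in impostor) / len(impostor)  # false accepts
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2.0
    return best_eer
```

With real systems the scan would run over a fine threshold grid (or the ROC curve directly), but the crossing-point logic is the same.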