Vatsa, Mayank
Preferred name: Vatsa, Mayank
Alternative names: Vatsa, M.; Vatsa M.
5 results
- Publication: Discriminative shared transform learning for sketch to image matching (2021-06-01)
Nagpal, Shruti; Singh, Maneet

Sketch to digital image matching refers to the problem of matching a sketch image (often drawn by hand or created by software) against a gallery of digital images (captured via an acquisition device such as a digital camera). Automated sketch to digital image matching has applicability in several day-to-day tasks such as similar-object image retrieval, forensic sketch matching in law enforcement scenarios, and profile linking using caricature face images on social media. As opposed to digital images, sketch images are generally edge drawings containing limited (or no) textural or colour-based information. Further, there is no single technique for sketch generation, which often results in varying artistic or software styles, along with the interpretation bias of the individual creating the sketch. Beyond the variations observed across the two domains (sketch and digital image), automated sketch to digital image matching is further marred by the challenges of limited training data and wide intra-class variability. To address these problems, this research proposes a novel Discriminative Shared Transform Learning (DSTL) algorithm for sketch to digital image matching. DSTL learns a shared transform for data belonging to the two domains while modeling the class variations, resulting in discriminative feature learning. Two models are presented under the proposed DSTL algorithm: (i) the Contractive Model (C-Model) and (ii) the Divergent Model (D-Model), which are formulated with different supervision constraints. Experimental analysis on seven datasets for three case studies of sketch to digital image matching demonstrates the efficacy of the proposed approach, highlighting the importance of each component, its input-agnostic behavior, and improved matching performance.

Scopus© Citations: 9

- Publication: Subclass heterogeneity aware loss for cross-spectral cross-resolution face recognition (2020-07-01)
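The shared-transform idea in the DSTL abstract above can be illustrated with a toy NumPy sketch. The shapes, the plain least-squares objective, and the gradient-descent loop are illustrative assumptions, not the paper's C-Model or D-Model formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy features: 5 sketch/photo pairs, 8-dimensional raw features.
X_sketch = rng.normal(size=(8, 5))
X_photo = X_sketch + 0.1 * rng.normal(size=(8, 5))  # same identities, other domain

# One transform T applied to BOTH domains, mapping them onto shared codes Z.
k = 4  # dimension of the shared space
T = rng.normal(size=(k, 8))
Z = rng.normal(size=(k, 5))

lr = 0.01
for _ in range(500):
    # Gradient descent on 0.5*(||T Xs - Z||^2 + ||T Xp - Z||^2)
    Rs = T @ X_sketch - Z
    Rp = T @ X_photo - Z
    T -= lr * (Rs @ X_sketch.T + Rp @ X_photo.T)
    Z -= lr * (-(Rs + Rp))

# A sketch and its own photo should land closer in the shared space
# than a sketch and a different identity's photo.
d_same = np.linalg.norm(T @ X_sketch[:, :1] - T @ X_photo[:, :1])
d_diff = np.linalg.norm(T @ X_sketch[:, :1] - T @ X_photo[:, 1:2])
print(d_same < d_diff)
```

Because a single transform serves both domains, a sketch and its corresponding photo map to nearby shared codes; the final comparison checks exactly that property on the toy data.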
Ghosh, Soumyadeep

One of the most challenging scenarios of face recognition is matching images in the presence of multiple covariates such as cross-spectrum and cross-resolution. In this paper, we propose a Subclass Heterogeneity Aware Loss (SHEAL) to train a deep convolutional neural network so that it produces embeddings suitable for heterogeneous face recognition with both single and multiple heterogeneities. The performance of the proposed SHEAL function is evaluated on four databases in terms of recognition performance as well as convergence in time and epochs. We observe that SHEAL not only yields state-of-the-art results for the most challenging case of cross-spectral cross-resolution face recognition but also achieves excellent performance on homogeneous face recognition.

Scopus© Citations: 10

- Publication: On bias and fairness in deep learning-based facial analysis (2023-01-01)
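The SHEAL abstract above does not spell out the loss itself. As a rough, hypothetical stand-in for a loss that pulls same-identity embeddings together across heterogeneities (e.g. a VIS and an NIR image of one person) while pushing different identities apart, here is a contrastive-style toy in plain NumPy; the function name, margin, and data are invented:

```python
import numpy as np

def heterogeneity_aware_loss(emb, labels, margin=1.0):
    """Toy pairwise loss: same-identity pairs (possibly from different
    spectra/resolutions) are attracted; different identities are pushed
    beyond a margin. Illustrative only, not the SHEAL formulation."""
    loss, n = 0.0, 0
    for i in range(len(emb)):
        for j in range(i + 1, len(emb)):
            d = np.linalg.norm(emb[i] - emb[j])
            if labels[i] == labels[j]:
                loss += d ** 2                      # attract same identity
            else:
                loss += max(0.0, margin - d) ** 2   # repel up to the margin
            n += 1
    return loss / n

emb = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 2.0]])
labels = [0, 0, 1]  # first two: same person seen in two modalities
print(round(heterogeneity_aware_loss(emb, labels), 4))
```

A network trained against a loss of this general shape produces embeddings in which heterogeneous images of one identity cluster together, which is the prerequisite for the cross-spectral cross-resolution matching the paper targets.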
Mittal, Surbhi; Majumdar, Puspita

Facial analysis systems are used in a variety of scenarios, such as law enforcement, the military, and daily life, that impact important aspects of our lives. With the onset of the deep learning era, neural networks are widely used for the development of facial analysis systems. However, existing systems have been shown to yield disparate performance across demographic subgroups, leading to unfair outcomes for certain members of society. With the aim of providing fair treatment in the face of diversity, it has become imperative to study the biased behavior of these systems: it is crucial that they do not discriminate based on the gender, identity, skin tone, or ethnicity of individuals. In recent years, a section of the research community has started to focus on the fairness of such deep learning systems. In this work, we survey the research done on analyzing fairness and the techniques used to mitigate bias, and provide a taxonomy of bias mitigation techniques. We also discuss the databases proposed by the research community for studying bias and the relevant evaluation metrics. Lastly, we discuss the open challenges in the field of biased facial analysis.

Scopus© Citations: 1

- Publication: A2-LINK: Recognizing Disguised Faces via Active Learning and Adversarial Noise Based Inter-Domain Knowledge (2020-10-01)
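One common family of evaluation metrics in the fairness literature surveyed above compares error rates across demographic subgroups, for example the gap in true-positive rate (an equal-opportunity-style measure). A minimal sketch in plain Python; the function name, data, and group labels are invented for illustration:

```python
def tpr_gap(y_true, y_pred, groups):
    """Difference between the best and worst per-group true-positive rate.
    A gap of 0 would mean positives are recognized equally often in every
    subgroup (toy metric sketch, not a metric from the survey)."""
    tprs = []
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g and y_true[i] == 1]
        tprs.append(sum(y_pred[i] for i in idx) / len(idx))
    return max(tprs) - min(tprs)

# Toy verification outcomes for two demographic subgroups A and B:
# all six probes are genuine, but the matcher accepts them unevenly.
y_true = [1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B"]
print(tpr_gap(y_true, y_pred, groups))
```

Here group A's genuine probes are always accepted while group B's succeed only a third of the time, so the gap is large; bias mitigation techniques aim to drive such gaps toward zero without sacrificing overall accuracy.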
Suri, Anshuman

Face recognition in unconstrained environments is an ongoing research challenge. Although several covariates of face recognition, such as pose and low resolution, have received significant attention, 'disguise' is considered an onerous covariate of face recognition. One of the primary reasons for this is the scarcity of large, representative labeled databases, along with the lack of algorithms that work well for multiple covariates in such environments. To address the problem of face recognition in the presence of disguise, this paper proposes an active learning framework termed A2-LINK. Starting with a face recognition machine-learning model, A2-LINK intelligently selects training samples from the target domain to be labeled and, using hybrid noises such as adversarial noise, fine-tunes a model that works well both in the presence and absence of disguise. Experimental results demonstrate the effectiveness and generalization of the proposed framework on the DFW and DFW2019 datasets with state-of-the-art deep learning featurization models such as LCSSE, ArcFace, and DenseNet.

Scopus© Citations: 5

- Publication: SUPREAR-NET: Supervised Resolution Enhancement and Recognition Network (2022-04-01)
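The sample-selection step of an active-learning loop like the one in the A2-LINK abstract above can be caricatured as uncertainty sampling: spend the labeling budget on the probes the current matcher is least sure about. The scores, decision boundary, and function name below are invented; this is a generic sketch, not the paper's selection criterion:

```python
import numpy as np

def select_for_labeling(match_scores, budget=2):
    """Pick the unlabeled samples whose match scores sit closest to the
    decision boundary (0.5 here), i.e. where the model is least certain.
    Toy uncertainty sampling, not the A2-LINK criterion."""
    uncertainty = -np.abs(np.asarray(match_scores) - 0.5)
    return np.argsort(uncertainty)[::-1][:budget].tolist()

# Scores from a face matcher on unlabeled (possibly disguised) probes.
scores = [0.95, 0.52, 0.10, 0.48, 0.80]
print(select_for_labeling(scores))  # the two most ambiguous samples
```

Labeling only the ambiguous samples, then fine-tuning (in the paper's case, with adversarial-noise augmentation as well), stretches a small labeling budget much further than labeling probes the model already classifies confidently.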
Ghosh, Soumyadeep

Heterogeneous face recognition is a challenging problem in which the probe and gallery images belong to different modalities, such as low and high resolution, or the visible and near-infrared spectrum. A Generative Adversarial Network (GAN) enables us to learn an image-to-image transformation model for enhancing the resolution of a face image, and such a model is helpful in a heterogeneous face recognition scenario. However, unsupervised GAN-based transformation methods in their native formulation may alter useful discriminative information in the transformed face images, which degrades the performance of face recognition algorithms applied to them. We propose a Supervised Resolution Enhancement and Recognition Network (SUPREAR-NET), which does not corrupt the useful class-specific information of the face image: it transforms a low-resolution probe image into a high-resolution one, followed by effective matching with the gallery using a trained discriminative model. We show results for cross-resolution face recognition on three datasets, including the FaceSurv face dataset, which contains poor-quality low-resolution videos captured at a standoff distance of up to 10 meters from the camera. On the FaceSurv, NIST MEDS, and CMU Multi-PIE datasets, the proposed algorithm outperforms recent unsupervised and supervised GAN algorithms.

Scopus© Citations: 2
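At a very high level, the SUPREAR-NET pipeline described above is "enhance the low-resolution probe, then match it against the high-resolution gallery". In this toy sketch, nearest-neighbour upsampling stands in for the learned GAN generator and a cosine matcher stands in for the trained discriminative model; all data and function names are invented:

```python
import numpy as np

def upsample(img, factor=2):
    """Stand-in for the learned resolution-enhancement network:
    plain nearest-neighbour upsampling."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def cosine_match(probe, gallery):
    """Return the index of the gallery image most similar to the probe
    under cosine similarity of raw pixels (stand-in for a trained matcher)."""
    p = probe.ravel() / np.linalg.norm(probe)
    sims = [float(p @ (g.ravel() / np.linalg.norm(g))) for g in gallery]
    return int(np.argmax(sims))

# Toy high-resolution gallery of three 8x8 "faces".
gallery = [np.eye(8), np.ones((8, 8)), np.arange(64.0).reshape(8, 8) + 1]
low_res_probe = gallery[2][::2, ::2]   # 4x4 probe, as from a distant camera
enhanced = upsample(low_res_probe)     # restore gallery resolution
print(cosine_match(enhanced, gallery))
```

The paper's point is precisely that the enhancement step must be supervised so that, unlike a naive or purely unsupervised transform, it preserves the class-specific information the downstream matcher depends on.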