Now showing 1 - 8 of 8
  • Publication
    FamilyGAN: Generating Kin Face Images Using Generative Adversarial Networks
    (2020-01-01)
    Sinha, Raunak
    Automatic kinship verification using face images involves analyzing features and computing similarities between two input images to establish a kin relationship. It has gained significant interest from the research community, and several approaches, including deep learning architectures, have been proposed. One law enforcement application of kinship analysis involves predicting the kin image given an input image. In other words, the question posed here is: “given an input image, can we generate a kin-image?” This paper attempts to generate kin-images using Generative Adversarial Learning for multiple kin-relations. The proposed FamilyGAN model incorporates three components (kin-gender information, a kinship loss, and a reconstruction loss) into a GAN model to generate kin images. FamilyGAN is the first model capable of generating kin-images for multiple relations, such as parent-child and siblings, from a single model. On the WVU Kinship Video database, the proposed model shows very promising results for generating kin images. Experimental results show 71.34% kinship verification accuracy using the images generated via FamilyGAN.
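As a rough illustration of how a generator objective might combine the three terms the abstract names (adversarial, kinship, and reconstruction losses), here is a minimal numeric sketch. The weights, helper functions, and toy inputs are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of combining FamilyGAN-style loss terms.
# All weights and helpers below are illustrative, not the paper's.

def reconstruction_loss(generated, target):
    """Mean absolute pixel error between generated and target kin image."""
    return sum(abs(g - t) for g, t in zip(generated, target)) / len(target)

def kinship_loss(similarity):
    """Penalize low kin-similarity scores from a verification network."""
    return 1.0 - similarity

def generator_loss(adv_loss, similarity, generated, target,
                   w_kin=1.0, w_rec=10.0):
    """Weighted sum of adversarial, kinship, and reconstruction terms."""
    return (adv_loss
            + w_kin * kinship_loss(similarity)
            + w_rec * reconstruction_loss(generated, target))

# Toy example with flattened 4-pixel "images"
loss = generator_loss(adv_loss=0.3, similarity=0.8,
                      generated=[0.1, 0.2, 0.3, 0.4],
                      target=[0.1, 0.2, 0.3, 0.5])
```

In practice each term would come from a network (discriminator output, kin-verification embedding distance, and pixel reconstruction error, respectively), and the weights would be tuned per kin-relation.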
  • Publication
    WaveTransform: Crafting Adversarial Examples via Input Decomposition
    (2020-01-01)
    Anshumaan, Divyam; Agarwal, Akshay
    The frequency spectrum has played a significant role in learning unique and discriminating features for object recognition. Both low- and high-frequency information present in images have been extracted and learnt by a host of representation learning techniques, including deep learning. Inspired by this observation, we introduce a novel class of adversarial attacks, namely ‘WaveTransform’, that creates adversarial noise corresponding to low-frequency and high-frequency subbands, separately (or in combination). The frequency subbands are analyzed using wavelet decomposition; the subbands are corrupted and then used to construct an adversarial example. Experiments are performed using multiple databases and CNN models to establish the effectiveness of the proposed WaveTransform attack and analyze the importance of a particular frequency component. The robustness of the proposed attack is also evaluated through its transferability and resiliency against a recent adversarial defense algorithm. Experiments show that the proposed attack is effective against the defense algorithm and is also transferable across CNNs.
    Scopus© Citations 7
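The decompose-corrupt-reconstruct loop behind WaveTransform can be mirrored on a 1-D signal with a one-level Haar transform. This is only a sketch of the idea: the actual attack operates on 2-D image subbands with optimized (not constant) noise.

```python
# Illustrative decompose-corrupt-reconstruct loop using a 1-D Haar transform.
# The real WaveTransform attack uses 2-D wavelet subbands and learned noise.
import math

def haar_decompose(x):
    """Split a signal into low-frequency (average) and high-frequency (detail) subbands."""
    s = math.sqrt(2.0)
    low = [(a + b) / s for a, b in zip(x[::2], x[1::2])]
    high = [(a - b) / s for a, b in zip(x[::2], x[1::2])]
    return low, high

def haar_reconstruct(low, high):
    """Invert the one-level Haar transform."""
    s = math.sqrt(2.0)
    out = []
    for l, h in zip(low, high):
        out.extend([(l + h) / s, (l - h) / s])
    return out

def perturb_high_band(x, eps=0.1):
    """Corrupt only the high-frequency subband, then reconstruct the signal."""
    low, high = haar_decompose(x)
    high = [h + eps for h in high]  # constant noise as a stand-in for adversarial noise
    return haar_reconstruct(low, high)

adv = perturb_high_band([1.0, 2.0, 3.0, 4.0])
```

Perturbing only the low band, or both bands, is a matter of choosing which subband list to modify before reconstruction, which is how the abstract's "separately (or in combination)" variants arise.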
  • Publication
    Evolution of Newborn Face Recognition
    (2021-01-01)
    Tripathi, Pavani; Keshari, Rohit
    Accidental newborn swapping, health-care tracking, and child-abduction cases are some of the scenarios where newborn face recognition can prove extremely useful. With the right biometric system in place, cases of swapping, for instance, can be evaluated much faster. In this chapter, we first discuss the various biometric modalities along with their advantages and limitations. We next discuss face biometrics in detail and present the available datasets and the existing hand-crafted, learning-based, and deep-learning-based techniques that have been proposed for newborn face recognition. Finally, we evaluate and compare these techniques. Our comparative analysis shows that the state-of-the-art SSF-CNN technique achieves an average rank-1 newborn face recognition accuracy of 82.075%.
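The rank-1 accuracy quoted for SSF-CNN is the fraction of probe images whose top-scoring gallery match has the correct identity. A minimal sketch of that metric, with made-up labels and scores:

```python
# Minimal rank-1 identification accuracy; labels and scores are invented.

def rank1_accuracy(score_matrix, probe_labels, gallery_labels):
    """Fraction of probes whose best-matching gallery entry has the true identity."""
    correct = 0
    for probe_idx, scores in enumerate(score_matrix):
        best = max(range(len(scores)), key=lambda g: scores[g])
        if gallery_labels[best] == probe_labels[probe_idx]:
            correct += 1
    return correct / len(score_matrix)

# Two probes scored against a three-entry gallery
scores = [[0.9, 0.2, 0.1],   # probe 0: best match is gallery entry 0
          [0.3, 0.4, 0.8]]   # probe 1: best match is gallery entry 2
acc = rank1_accuracy(scores, probe_labels=["A", "B"],
                     gallery_labels=["A", "C", "B"])
# acc == 1.0: both probes rank their true identity first
```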
  • Publication
    Disguised Face Verification Using Inverse Disguise Quality
    (2020-01-01)
    Kar, Amlaan; Singh, Maneet
    Research in face recognition has evolved over the past few decades. While initial research focused heavily on constrained images, recent research has concentrated on unconstrained images captured in in-the-wild settings. Faces captured in unconstrained settings with disguise accessories remain a challenge for automated face verification. To this effect, this research proposes a novel deep learning framework for disguised face verification. A novel Inverse Disguise Quality metric is proposed for evaluating the amount of disguise in the input image, which is utilized as a quality score in a likelihood-ratio framework for enhanced verification performance. The proposed framework is model-agnostic and can be applied in conjunction with existing state-of-the-art face verification models to obtain improved performance. Experiments have been performed on the Disguised Faces in the Wild (DFW) 2018 and DFW 2019 datasets with three state-of-the-art deep learning models, where the framework demonstrates substantial improvement over the base models.
    Scopus© Citations 2
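One common way a quality score enters a likelihood-ratio verification scheme is as a weight on the log-likelihood ratio of the match score. The sketch below assumes Gaussian genuine/impostor score densities and treats the quality value as given; the densities, parameters, and the Inverse Disguise Quality computation itself are placeholders, not the paper's method.

```python
# Hedged sketch of quality-weighted likelihood-ratio verification.
# Gaussian densities and all parameters are illustrative assumptions.
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def quality_weighted_llr(match_score, quality,
                         genuine=(0.8, 0.1), impostor=(0.3, 0.1)):
    """Log-likelihood ratio of the match score, scaled by a quality value in [0, 1]."""
    llr = math.log(gaussian_pdf(match_score, *genuine) /
                   gaussian_pdf(match_score, *impostor))
    return quality * llr  # heavily disguised (low-quality) pairs contribute less

# Accept when the weighted log-likelihood ratio exceeds a threshold of 0
decision = quality_weighted_llr(0.75, quality=0.9) > 0.0
```

Because the weighting only rescales the score, the scheme stays model-agnostic: any base verifier's match scores can be plugged in unchanged, matching the framework's stated design goal.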
  • Publication
    On bias and fairness in deep learning-based facial analysis
    (2023-01-01)
    Mittal, Surbhi; Majumdar, Puspita
    Facial analysis systems are used in a variety of scenarios, such as law enforcement, the military, and daily life, and they impact important aspects of our lives. With the onset of the deep learning era, neural networks are widely used for the development of facial analysis systems. However, existing systems have been shown to yield disparate performance across different demographic subgroups, leading to unfair outcomes for certain members of society. With the aim of providing fair treatment in the face of diversity, it has become imperative to study the biased behavior of these systems: it is crucial that they do not discriminate based on the gender, identity, skin tone, or ethnicity of individuals. In recent years, a section of the research community has started to focus on the fairness of such deep learning systems. In this work, we survey the research on analyzing fairness and the techniques used to mitigate bias, and we provide a taxonomy of bias mitigation techniques. We also discuss the databases proposed in the research community for studying bias and the relevant evaluation metrics. Lastly, we discuss the open challenges in the field of bias in facial analysis.
    Scopus© Citations 1
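One of the simplest evaluation metrics the survey's topic implies is the gap in a system's accuracy across demographic subgroups (zero means balanced performance). A toy sketch with fabricated predictions and group labels:

```python
# Toy subgroup-accuracy gap; all data below is fabricated for illustration.
from collections import defaultdict

def subgroup_accuracy(predictions, labels, groups):
    """Per-subgroup accuracy of a classifier."""
    hits, totals = defaultdict(int), defaultdict(int)
    for pred, label, grp in zip(predictions, labels, groups):
        totals[grp] += 1
        hits[grp] += int(pred == label)
    return {g: hits[g] / totals[g] for g in totals}

def accuracy_gap(predictions, labels, groups):
    """Max difference in accuracy between subgroups (0 = perfectly balanced)."""
    acc = subgroup_accuracy(predictions, labels, groups)
    return max(acc.values()) - min(acc.values())

preds = [1, 1, 0, 0, 1, 0]
labels = [1, 0, 0, 1, 1, 0]
groups = ["g1", "g1", "g1", "g2", "g2", "g2"]
gap = accuracy_gap(preds, labels, groups)  # both subgroups score 2/3, so gap is 0.0
```

The same pattern generalizes to other metrics the literature compares across subgroups (false match rate, false non-match rate, and so on) by swapping the per-group statistic.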
  • Publication
    Review of Iris Presentation Attack Detection Competitions
    (2023-01-01)
    Yambay, David; Das, Priyanka; Boyd, Aidan; McGrath, Joseph; Fang, Zhaoyuan (Andy); Czajka, Adam; Schuckers, Stephanie; Bowyer, Kevin; Noore, Afzel; Kohli, Naman; Yadav, Daksha; Trokielewicz, Mateusz; Maciejewicz, Piotr; Mohammadi, Amir; Marcel, Sébastien
    Biometric recognition systems have been shown to be susceptible to presentation attacks: the use of an artificial biometric in place of a live biometric sample from a genuine user. Presentation Attack Detection (PAD) is suggested as a solution to this vulnerability. The LivDet-Iris Liveness Detection Competition strives to showcase the state of the art in presentation attack detection by assessing software-based as well as hardware-based iris PAD methods against multiple datasets of spoof and live iris images. These competitions have been open to all institutions, industrial and academic, and competitors can enter either anonymously or under the name of their institution. Four LivDet-Iris competitions have been organized to date: the series was launched in 2013, and the most recent competition was organized in 2020, with the other two taking place in 2015 and 2017. This chapter briefly characterizes all four competitions, discusses the state of the art in iris PAD (from the independent-evaluations point of view), and outlines the current needs for pushing iris PAD reliability forward.
    Scopus© Citations 2
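PAD competitions of this kind are typically scored with the ISO/IEC 30107-3 error rates: APCER (attack presentations misclassified as bona fide) and BPCER (bona fide presentations misclassified as attacks). A minimal sketch, with invented scores and the assumption that higher scores mean "more bona fide":

```python
# Sketch of ISO/IEC 30107-3 style PAD error rates; sample scores are invented.

def pad_error_rates(scores, is_attack, threshold):
    """APCER: attacks classified as bona fide; BPCER: bona fides classified as attacks.
    Convention here: score >= threshold means 'classified as bona fide'."""
    attacks = [s for s, a in zip(scores, is_attack) if a]
    bonafide = [s for s, a in zip(scores, is_attack) if not a]
    apcer = sum(s >= threshold for s in attacks) / len(attacks)
    bpcer = sum(s < threshold for s in bonafide) / len(bonafide)
    return apcer, bpcer

scores = [0.9, 0.8, 0.2, 0.6, 0.1]
is_attack = [False, False, True, True, True]
apcer, bpcer = pad_error_rates(scores, is_attack, threshold=0.5)
# one of three attacks passes (APCER = 1/3); both bona fides pass (BPCER = 0.0)
```

Sweeping the threshold trades APCER against BPCER, which is how competition entries are usually compared at a fixed operating point.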
  • Publication
    Benchmarking Robustness Beyond lp Norm Adversaries
    (2023-01-01)
    Agarwal, Akshay; Ratha, Nalini
    Recently, a significant boom has been noticed in the generation of a variety of malicious examples, ranging from adversarial perturbations to common noises to natural adversaries. These malicious examples are highly effective in fooling almost ‘any’ deep neural network. Therefore, to protect the integrity of deep networks, research efforts have begun on building defenses against each individual category of anomaly. The prime reason for such individual handling of noises is the lack of a single dataset that can be used to benchmark against multiple malicious examples and hence, in turn, help in building a true ‘universal’ defense algorithm. This research work is a step towards that goal: it creates a dataset termed “wide angle anomalies” containing 19 different malicious categories. On top of that, an extensive experimental evaluation has been performed on the proposed dataset using popular deep neural networks to detect these wide-angle anomalies. The experiments help in identifying possible relationships between different anomalies and how easy or difficult an anomaly is to detect depending on whether it was seen or unseen during training-testing. We assert that the experiments with seen- and unseen-category attacks in training-testing reveal several surprising and interesting outcomes, including possible connections among adversaries. We believe this can help in building a universal defense algorithm.
    Scopus© Citations 1
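The seen/unseen protocol described above amounts to scoring a detector separately on anomaly categories it saw during training and on held-out categories. A small sketch of that bookkeeping; the category names and results are made up:

```python
# Illustrative seen-vs-unseen category evaluation; all data below is invented.

def split_accuracy(results, seen_categories):
    """results: list of (category, detected_correctly) pairs from a detector's test run.
    Returns (accuracy on seen categories, accuracy on unseen categories)."""
    def acc(rows):
        return sum(c for _, c in rows) / len(rows) if rows else None
    seen = [r for r in results if r[0] in seen_categories]
    unseen = [r for r in results if r[0] not in seen_categories]
    return acc(seen), acc(unseen)

results = [("gaussian_noise", True), ("gaussian_noise", True),
           ("patch_attack", True), ("patch_attack", False)]
seen_acc, unseen_acc = split_accuracy(results, seen_categories={"gaussian_noise"})
# seen_acc = 1.0, unseen_acc = 0.5: novel categories are typically harder to detect
```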
  • Publication
    Facial Retouching and Alteration Detection
    (2022-01-01)
    Majumdar, Puspita; Agarwal, Akshay
    On social media platforms, filters for digital retouching and face beautification have become a common trend. With the availability of easy-to-use image editing tools, the generation of altered images has become an effortless task. Apart from this, advancements in Generative Adversarial Networks (GANs) have led to the creation of realistic facial images and the alteration of facial images based on attributes. While the majority of these images are created for fun and beautification purposes, they may be used with malicious intent in negative applications such as deepnude imagery or spreading visual fake news. Therefore, it is important to detect digital alterations in images and videos. This chapter presents a comprehensive survey of existing algorithms for retouched and altered image detection. Further, multiple experiments are performed to highlight the open challenges of alteration detection.
    Scopus© Citations 5