Robust IRIS Presentation Attack Detection Through Stochastic Filter Noise
Journal
Proceedings - International Conference on Pattern Recognition
ISSN
1051-4651
Date Issued
2022-01-01
Abstract
The vulnerability of iris recognition algorithms to presentation attacks demands a robust defense mechanism. Much research has been devoted to building robust attack detection algorithms; however, most suffer from poor generalizability, failing under inter-database testing or on unseen attack types. Attack detection becomes even harder when the images are corrupted by noise such as Gaussian or salt-and-pepper noise. In this research, we propose a multi-task deep learning model that combines a denoising convolutional skip autoencoder with a classifier to build in robustness against noisy images. A Gaussian noise layer, introduced as a dropout-like mechanism between the encoder network's hidden layers, helps the model learn generalized features that are robust to data noise. The proposed algorithm is evaluated on multiple presentation attack databases; extensive experiments across different noise types and comparisons with other deep learning models show the generalizability and efficacy of the proposed model.
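The architecture outlined in the abstract can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the layer sizes, noise standard deviation, and class/module names are all assumptions, and only the overall structure (shared encoder, Gaussian noise injected between encoder layers during training, skip-connected decoder for denoising, and a classification head for attack detection) follows the abstract.

```python
import torch
import torch.nn as nn


class DenoisingSkipAutoencoderPAD(nn.Module):
    """Hypothetical sketch of the multi-task model described in the abstract:
    a denoising convolutional autoencoder with a skip connection, Gaussian
    noise injected between encoder hidden layers (acting like dropout), and
    a bonafide-vs-attack classifier head sharing the encoder features.
    All hyperparameters here are illustrative assumptions."""

    def __init__(self, noise_std: float = 0.1):
        super().__init__()
        self.noise_std = noise_std
        # Encoder: two downsampling conv blocks (channels are assumed).
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        # Decoder: mirrors the encoder, upsampling back to input size.
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1)
        # Classifier head on the bottleneck: bonafide vs. presentation attack.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2)
        )

    def forward(self, x):
        h1 = self.enc1(x)
        if self.training:
            # Gaussian noise layer between encoder hidden layers,
            # used like dropout to encourage noise-robust features.
            h1 = h1 + torch.randn_like(h1) * self.noise_std
        h2 = self.enc2(h1)
        # Skip connection from enc1 into the decoder (denoising branch).
        recon = self.dec1(self.dec2(h2) + h1)
        logits = self.classifier(h2)  # attack-detection branch
        return recon, logits
```

In a multi-task setup like this, training would typically combine a reconstruction loss on the denoising output with a cross-entropy loss on the classifier logits, e.g. `loss = mse(recon, clean) + ce(logits, labels)`; the weighting between the two terms is another design choice the abstract does not specify.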
Volume
2022-August