Improved Synthetic Iris Generation and Recognition Techniques

V. Anjani Kranthi, Pathapati Saroja, Cheepurupalli Raghuram

Assistant Professors, SRKR Engineering College (A).

ABSTRACT: Synthetic iris generation is widely used in many applications. The iris is commonly used to identify humans based on the features of their eyes. In recent years, large-scale data is being generated in many ways, and many management systems use the iris for enrollment purposes. Many previously developed research methods try to synthesize iris images and verify the whole iris image or its texture. Various issues have been identified with the existing approaches, including the cost of, and errors in, generating realistic data. Attacks can be mounted against many such management systems, and many existing methods aim to prevent them. Synthetic iris images can be created with artificial identities and used for multiple purposes. Deep learning algorithms perform well compared with other algorithms and yield better iris recognition systems. In this paper, a rapid CNN is developed to overcome the various issues in synthetic iris recognition systems.

Comparative results are presented for the traditional algorithms and the proposed system.

Keywords: iris, attacks, recognition, generation.

1. INTRODUCTION

In recent times, the use of iris images to store or retrieve people's details has become important all over the world. In many countries, the iris is used for biometric purposes. Biometrics is the measurement of biological data of humans and belongs to science and technology. Iris recognition and iris generation are biometric methods used to verify actual humans. The iris does not change over time because of its stability: once a person's iris image is captured, it remains essentially the same for their whole life. Iris recognition uses various automated techniques, such as pattern recognition, capturing images of every person's eyes and recognizing the original ones. The iris is a physiological trait and a very distinctive feature of every person.

Nowadays, various datasets are used to analyze the results on different types of iris samples.

A deep learning based iris recognition system is developed to overcome the various issues in existing image processing techniques. Deep learning is the most widely used technique for obtaining better and more accurate results with very little processing time. Here, an efficient model is developed to retrieve iris features. The structure of the proposed methodology consists of 5 convolution layers, 8 ReLU activation layers, 2 pooling layers, and 4 fully connected layers, which classify images and extract features automatically without any domain knowledge. This paper focuses on the training and testing of samples. The dataset consists of 120 high-resolution JPEG images of size 4x4 collected from various sources. The proposed system proceeds through segmentation, analysis, and identification. The system is universal because it can be integrated into any other system. Figure 1 shows the processing steps followed by the proposed methodology.
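As a rough sketch only, the described layer stack could be assembled in CNTK (the framework used in Section 4) as follows. The paper does not specify the ordering of the layers, the filter shapes and counts, the input resolution, or the number of output classes, so all of those are assumptions here; only the layer counts (5 convolution, 8 ReLU, 2 pooling, 4 fully connected) follow the text above.

# Assumed CNTK sketch of the stated layer stack: 5 convolution layers,
# 8 ReLU activations (5 on the convolutions, 3 on the dense layers),
# 2 pooling layers, and 4 fully connected layers.
from cntk.layers import Convolution2D, MaxPooling, Dense, Sequential
from cntk.ops import relu, log_softmax
from cntk.initializer import glorot_uniform
from cntk import input_variable, default_options

features = input_variable((3, 128, 128))  # assumed RGB input resolution

with default_options(init=glorot_uniform()):
    model = Sequential([
        Convolution2D((3, 3), 32, activation=relu, pad=True),   # conv 1
        Convolution2D((3, 3), 32, activation=relu, pad=True),   # conv 2
        MaxPooling((2, 2), strides=(2, 2)),                     # pool 1
        Convolution2D((3, 3), 64, activation=relu, pad=True),   # conv 3
        Convolution2D((3, 3), 64, activation=relu, pad=True),   # conv 4
        MaxPooling((2, 2), strides=(2, 2)),                     # pool 2
        Convolution2D((3, 3), 128, activation=relu, pad=True),  # conv 5
        Dense(256, activation=relu),                            # fc 1
        Dense(128, activation=relu),                            # fc 2
        Dense(64, activation=relu),                             # fc 3
        Dense(2, activation=log_softmax),                       # fc 4: normal vs. abnormal (assumed)
    ])(features)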

The paper is organized as follows: Section 2 discusses the literature survey, Section 3 explains the existing methodologies, Section 4 introduces the proposed methodology, Section 5 presents the experimental results, and the conclusion and references are given in Sections 6 and 7.

Figure 1: The flow pattern for feature encoding

2. LITERATURE SURVEY

Various algorithms have been proposed for iris segmentation, notably by Daugman and Wildes, who introduced the integro-differential operator [1] and the Hough transform [2], respectively. The main aim of these methods is to detect the edges of the iris and fit them to models such as circles or ellipses. Tan et al. [3] developed a hybrid method that combines clustering, semantic refinement, and well-designed integro-differential operators.

Betancourt and Silvente [4] obtained circular boundaries using QMA-OWA operators [5].

Ghodrati et al. [6] used a set of morphological operators, the Canny edge detector [7], and Hough transforms. Wang and Xiao [8] constructed a difference operator over radial directions. Some other groups used algorithms that rely on region growing instead of edge detection; they gradually merge blocks with high correlation in an image to obtain the iris region. Liu et al. [9] used K-means clustering for pupillary detection. Yan et al. [13] applied the watershed transform [14] and region merging to structured eye images. Abate et al. [15] combined the watershed transform, region merging, and color quantization. The edge-based and region-growing algorithms estimate the iris region well, but they are not suitable for images captured under varied lighting conditions.

3. METHODOLOGIES

Here we discuss various methodologies that have been used to generate synthetic iris images and to detect them.

3.1 BSIF (Binarized Statistical Image Features)

A fingerprint verification system can repeatedly be fooled by fake fingerprints. In fingerprint liveness detection, additional information is used to verify whether a fingertip image is authentic. To overcome this problem, BSIF was introduced into earlier detection methods. BSIF is a local image descriptor constructed by binarizing the responses to linear filters; however, in contrast to earlier binary descriptors, the filters are learned from natural images using independent component analysis (ICA).

Figure 2: Some fingerprint images and their corresponding BSIF codes


BSIF has two parameters: the size of the filter and the number of features extracted. The method extracts a fixed number of features from a fingertip image, which are then used to classify the fingerprint as either live or fake. This method clearly outperformed the LBP and LPQ descriptors.
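As a minimal sketch of the BSIF idea: each filter response is binarized at zero, the resulting bits are packed into an integer code per pixel, and the histogram of codes forms the descriptor. Real BSIF filters are learned from natural images with ICA; since those filter banks are not reproduced here, random filters stand in for them, and the 8-filter, 7x7 configuration is an assumption.

# BSIF-style descriptor sketch. Random filters stand in for the
# ICA-learned BSIF filter bank, which is not reproduced in this paper.
import numpy as np
from scipy.signal import convolve2d

def bsif_descriptor(image, filters):
    """Binarize each filter response, pack bits into codes, histogram them."""
    n_bits = len(filters)
    codes = np.zeros(image.shape, dtype=np.int64)
    for i, f in enumerate(filters):
        response = convolve2d(image, f, mode='same', boundary='symm')
        codes += (response > 0).astype(np.int64) << i  # one bit per filter
    # The normalized histogram of per-pixel codes is the descriptor.
    hist, _ = np.histogram(codes, bins=2 ** n_bits, range=(0, 2 ** n_bits))
    return hist / hist.sum()

# Usage with stand-in random filters (assumed: 8 filters of size 7x7).
rng = np.random.default_rng(0)
filters = [rng.standard_normal((7, 7)) for _ in range(8)]
descriptor = bsif_descriptor(rng.random((64, 64)), filters)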

3.2 iDCGAN for Iris Image Synthesis

Deep convolutional generative adversarial networks (DCGANs) were proposed for unsupervised feature learning, using convolutional neural networks as both the generator and the discriminator. Constraints were also applied to the architectural topology of the convolutional networks in the generator and discriminator to make training stable.

Specifically, pooling functions were replaced with strided convolutions, which allowed the resulting network to learn its own spatial upsampling. Additionally, the fully connected layers at the top of the convolutional networks were removed, and batch normalization was used to improve model stability by normalizing each unit to have zero mean and unit variance.
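A hedged sketch of such a DCGAN-style generator in CNTK is shown below. It follows the constraints just described (strided transposed convolutions for upsampling, batch normalization, no fully connected layers at the top apart from the initial latent projection), but the latent size, filter counts, and 64x64 output resolution are assumptions; this is not the exact architecture of the cited work.

# Assumed DCGAN-style generator sketch in CNTK: latent vector projected to
# a small feature map, then upsampled by strided transposed convolutions
# with batch normalization between layers.
from cntk.layers import ConvolutionTranspose2D, BatchNormalization, Dense, Sequential
from cntk.ops import relu, tanh
from cntk import input_variable

noise = input_variable(100)  # latent vector (assumed size)

generator = Sequential([
    Dense((256, 4, 4), activation=relu),  # project and reshape the latent code
    ConvolutionTranspose2D((5, 5), 128, strides=2, pad=True, output_shape=(8, 8)),
    BatchNormalization(map_rank=1),
    relu,
    ConvolutionTranspose2D((5, 5), 64, strides=2, pad=True, output_shape=(16, 16)),
    BatchNormalization(map_rank=1),
    relu,
    ConvolutionTranspose2D((5, 5), 32, strides=2, pad=True, output_shape=(32, 32)),
    BatchNormalization(map_rank=1),
    relu,
    # Final layer maps to a single-channel 64x64 image in [-1, 1].
    ConvolutionTranspose2D((5, 5), 1, strides=2, pad=True, output_shape=(64, 64),
                           activation=tanh),
])(noise)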

iDCGAN (iris Deep Convolutional Generative Adversarial Network) extends DCGAN with domain-specific (iris) knowledge. Similar in spirit to conditional GANs, it uses auxiliary information about iris quality to improve the performance of both the discriminator and the generator networks.

Figure 3: A mixture of real and synthetic iris images generated from the iDCGAN framework


In any iris recognition system, iris image quality assessment is an integral step, as the quality of iris images can greatly impact recognition performance. It has been established that artifacts such as occlusion, off-gaze direction, motion blur, and specular reflection can degrade iris recognition performance. Thus, incorporating quality metrics into the generative adversarial network can improve the synthesis process.

3.3 Detection of Iris Spoofing using Structural and Textural Feature Framework

The DEtection of iriS spoofIng using Structural and Textural features (DESIST) framework detects spoofed iris images. The framework involves two components: a structural decomposition of the image to analyze its local regions, and a textural analysis to observe changes in the contrast of the input iris image.

Figure 4: Structure of DESIST

Zernike moments (ZMs) are known for their invariance to scale, rotation, and translation, and have been successfully applied to iris segmentation and iris recognition at a distance. The motivation for extracting Zernike moments is to capture the changes in shape between a spoofed and a normal iris image. ZMs of an image are defined over an orthogonal set of polynomials and involve computation of the radial polynomial R_{n,m}.

Zernike basis functions can be calculated once the polynomial is computed, and the projection of the input image onto these basis functions is determined. The radial polynomial is defined as:

$$R_{n,m}(\rho) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^s\,(n-s)!}{s!\left(\frac{n+|m|}{2}-s\right)!\left(\frac{n-|m|}{2}-s\right)!}\;\rho^{\,n-2s}$$

where ρ is the distance between the center of the image and a corresponding point (x, y) on the image, n is the order of the polynomial, and m is the repetition, such that |m| ≤ n and n − |m| is even. The Zernike basis function can be computed directly in Cartesian coordinate space as:

$$V_{n,m}(x, y) = R_{n,m}(\rho)\, e^{jm\theta}, \qquad \rho = \sqrt{x^2 + y^2}, \quad \theta = \tan^{-1}(y/x)$$

where N × N is the size of the image and pixel coordinates are normalized so that the image lies within the unit disk.

Given an iris image I, dense Zernike moments are calculated for a given pair (n, m) across non-overlapping windows of size P × P. Multiple pairs (n, m) are selected to compute the amplitudes of multi-order Zernike moments, which enhances the representation of the input iris image.
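To make the windowed computation concrete, here is a small NumPy sketch that evaluates the Zernike moment amplitude |Z_{n,m}| over each non-overlapping P × P window. The window size, the (n, m) pairs, and the normalization constant are illustrative assumptions, not the exact DESIST settings.

# Dense Zernike moment amplitudes over non-overlapping P x P windows.
# The (n, m) orders and window size are illustrative assumptions.
import numpy as np
from math import factorial

def radial_poly(rho, n, m):
    """Zernike radial polynomial R_{n,m}(rho), evaluated elementwise."""
    m = abs(m)
    return sum(
        (-1) ** s * factorial(n - s)
        / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s))
        * rho ** (n - 2 * s)
        for s in range((n - m) // 2 + 1)
    )

def zernike_amplitude(window, n, m):
    """|Z_{n,m}| of one square window mapped onto the unit disk."""
    P = window.shape[0]
    ys, xs = np.mgrid[0:P, 0:P]
    # Normalize pixel coordinates into [-1, 1] so the window fits the unit disk.
    x = (2 * xs - P + 1) / (P - 1)
    y = (2 * ys - P + 1) / (P - 1)
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    # Project the window onto the conjugate basis (up to a constant factor).
    basis = radial_poly(rho, n, m) * np.exp(-1j * m * theta)
    z = (n + 1) / np.pi * np.sum(window[mask] * basis[mask])
    return np.abs(z)

def dense_zernike(image, P=16, orders=((2, 0), (4, 2))):
    """One row of multi-order amplitudes per non-overlapping P x P window."""
    h, w = image.shape
    return np.array([
        [zernike_amplitude(image[i:i + P, j:j + P], n, m) for (n, m) in orders]
        for i in range(0, h - P + 1, P)
        for j in range(0, w - P + 1, P)
    ])

amps = dense_zernike(np.random.default_rng(0).random((64, 64)), P=16)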

4. Enhanced Convolutional Neural Network (CNN)

We have seen the architecture and the basics of CNNs; now we are going to build a convolutional network using CNTK. We will first see how to put together the structure of the CNN, and then we will look at how to train its parameters.

At last we'll see how we can improve the neural network by changing its structure with various layer setups. We are going to use the MNIST image dataset.

So, first let's create the CNN structure. Generally, when we build a CNN for recognizing patterns in images, we do the following:

• We use a combination of convolution and pooling layers.
• We add one or more hidden (dense) layers at the end of the network.
• Finally, we finish the network with a softmax layer for classification.

With the help of the following steps, we can build the network structure:


Step 1: First, we need to import the required layers for the CNN.

from cntk.layers import Convolution2D, Sequential, Dense, MaxPooling

Step 2: Next, we need to import the activation functions for the CNN.

from cntk.ops import log_softmax, relu

Step 3: After that, in order to initialize the convolutional layers later, we need to import glorot_uniform as follows:

from cntk.initializer import glorot_uniform

Step 4: Next, import the input_variable function to create input variables, and the default_options function to make configuring the network a bit easier.

from cntk import input_variable, default_options

Step 5: Now, to store the input images, create a new input_variable. It will contain three channels (red, green, and blue) and have a size of 28 by 28 pixels.

features = input_variable((3,28,28))

Step 6: Next, we need to create another input_variable to store the labels to predict.

labels = input_variable(10)

Step 7: Now, we need to set the default options for the network, using glorot_uniform as the initialization function.

with default_options(init=glorot_uniform(), activation=relu):

Step 8: Next, in order to set the structure of the network, we need to create a new Sequential layer set.

Step 9: Now, within the Sequential layer set, add a Convolution2D layer with a filter_shape of 5 and a strides setting of 1. Also, enable padding so that the image is padded to retain its original dimensions.


model = Sequential([

Convolution2D(filter_shape=(5,5), strides=(1,1), num_filters=8, pad=True),

Step 10: Now it's time to add a MaxPooling layer with a filter_shape of 2 and a strides setting of 2 to compress the image by half.

MaxPooling(filter_shape=(2,2), strides=(2,2)),

Step 11: Now, as in step 9, add another Convolution2D layer with a filter_shape of 5 and a strides setting of 1, this time with 16 filters. Also, enable padding so that the size of the image produced by the previous pooling layer is retained.

Convolution2D(filter_shape=(5,5), strides=(1,1), num_filters=16, pad=True),

Step 12: Now, as in step 10, add another MaxPooling layer with a filter_shape of 3 and a strides setting of 3 to reduce the image to a third.

MaxPooling(filter_shape=(3,3), strides=(3,3)),

Step 13: At last, add a Dense layer with ten neurons for the 10 possible classes the network can predict. To turn the network into a classification model, use a log_softmax activation function.

Dense(10, activation=log_softmax)
])
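For reference, the complete definition assembled from steps 1 to 13 reads as follows; this simply restates the snippets above in one self-contained block.

from cntk.layers import Convolution2D, Sequential, Dense, MaxPooling
from cntk.ops import log_softmax, relu
from cntk.initializer import glorot_uniform
from cntk import input_variable, default_options

features = input_variable((3, 28, 28))  # input images (three channels, 28x28)
labels = input_variable(10)             # one-hot labels for the 10 classes

with default_options(init=glorot_uniform(), activation=relu):
    model = Sequential([
        Convolution2D(filter_shape=(5, 5), strides=(1, 1), num_filters=8, pad=True),
        MaxPooling(filter_shape=(2, 2), strides=(2, 2)),
        Convolution2D(filter_shape=(5, 5), strides=(1, 1), num_filters=16, pad=True),
        MaxPooling(filter_shape=(3, 3), strides=(3, 3)),
        Dense(10, activation=log_softmax),
    ])

z = model(features)  # network output, to be trained against `labels`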

4.1 Performance Evaluation

Various performance measures, such as the False Positive Rate (FPR), the False Negative Rate (FNR), and Accuracy, are used to estimate the performance of the system on the iris datasets. These measures are computed from the counts of True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN).

False Positive Rate (FPR)

The percentage of cases where a normal image was incorrectly classified as abnormal:

$$\mathrm{FPR} = \frac{FP}{FP + TN}$$

False Negative Rate (FNR)

The percentage of cases where an abnormal image was incorrectly classified as normal:

$$\mathrm{FNR} = \frac{FN}{FN + TP}$$

Accuracy

The accuracy is computed from the same counts that underlie the FPR and FNR measures:

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
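These three measures follow directly from the four confusion-matrix counts; a minimal sketch, with made-up example counts, looks like this:

# Metric helpers computed from confusion-matrix counts (TP, TN, FP, FN).
def fpr(fp, tn):
    return fp / (fp + tn)   # normal images wrongly flagged as abnormal

def fnr(fn, tp):
    return fn / (fn + tp)   # abnormal images wrongly passed as normal

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

# Example with made-up counts: 45 TP, 40 TN, 5 FP, 10 FN -> accuracy 0.85.
print(accuracy(tp=45, tn=40, fp=5, fn=10))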

5. EXPERIMENTAL RESULTS

In this section, various experiments are discussed. The sample dataset consists of 50 iris images, on which the integrated system is run. Various sample images are shown below along with the results.

Figure 5: CASIA.v4 Thousand        Figure 6: ND-IRIS-0405

Figure 7: CASIA.v4 Distance        Figure 8: UBIRIS.v2


The integrated system performs the following steps (a driver sketch is given after the list):

1.) Initialize the iris images from the datasets.
2.) Pre-process the samples.
3.) Train on the samples.
4.) Match the iris images.
5.) Classify the iris images as normal or abnormal.
6.) Calculate the accuracy.
7.) Show the results.
8.) Stop.
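Purely to illustrate the control flow, the steps above could be arranged as the driver skeleton below. Every helper named in it (load_dataset, preprocess, train, match, classify, compute_accuracy, show_results) is a hypothetical placeholder, not an API defined in this paper.

# Hypothetical driver skeleton for the integrated system's eight steps.
# All helper functions are placeholders for illustration only.
def run_integrated_system(dataset_paths):
    images = load_dataset(dataset_paths)            # 1.) initialize iris images
    samples = [preprocess(img) for img in images]   # 2.) pre-processing
    model = train(samples)                          # 3.) training
    matches = [match(model, s) for s in samples]    # 4.) iris matching
    labels = [classify(m) for m in matches]         # 5.) normal vs. abnormal
    acc = compute_accuracy(labels)                  # 6.) accuracy
    show_results(labels, acc)                       # 7.) show results
    return acc                                      # 8.) stop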

Method                    Accuracy    Processing Time (ms)
BSIF                      78%         23.23
iDCGAN                    82%         21.34
DESIST                    87%         18.98
ACNN Integrated System    95%         12.32

Table 1: Performance of the integrated system in terms of accuracy and processing time

Figure 9: Performance of the integrated system


6. CONCLUSION

In this paper, various traditional models as well as deep learning models are discussed for iris generation, segmentation, iris feature extraction, and iris-code matching, which are important steps in an iris recognition system. The aim of this research is to show that, while most of the algorithms and techniques in use provide good results, there is still scope to improve them. The proposed methodology is useful for research scholars who want a broader picture of the present state of iris recognition systems, as this paper covers everything from the types of images (2D and 3D) used for image acquisition to the public iris databases available for research.

7. REFERENCES

1. J. G. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148–1161, 1993.

2. R. Wildes, "Iris recognition: an emerging biometric technology," Proceedings of the IEEE, vol. 85, no. 9, pp. 1348–1363, 1997.

3. T. Tan, Z. He, and Z. Sun, "Efficient and robust segmentation of noisy iris images for non-cooperative iris recognition," Image and Vision Computing, vol. 28, no. 2, pp. 223–230, 2010.

4. Y. A. Betancourt and M. G. Silvente, "A fast iris location based on aggregating gradient approximation using QMA-OWA operator," International Conference on Fuzzy Systems, pp. 1–8, 2010.

5. J. I. Pelaez and J. M. Dona, "A majority model in group decision making using QMA-OWA operators," International Journal of Intelligent Systems, vol. 21, no. 2, pp. 193–208, 2006.

6. H. Ghodrati, M. J. Dehghani, M. S. Helfroush, and K. Kazemi, "Localization of noncircular iris boundaries using morphology and arched Hough transform," in Proceedings of the 2nd International Conference on Image Processing Theory, Tools and Applications, pp. 458–463, Paris, France, 2010.

7. J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, pp. 679–698, 1986.

8. X.-C. Wang and X.-M. Xiao, "An iris segmentation method based on difference operator of radial directions," in Proceedings of the 6th International Conference on Natural Computation, pp. 135–138, Yantai, China, August 2010.

9. J. Liu, X. Fu, and H. Wang, "Iris image segmentation based on K-means cluster," in Proceedings of the IEEE International Conference on Intelligent Computing and Intelligent Systems, pp. 194–198, Xiamen, China, October 2010.

10. D. Veeraiah and J. N. Rao, "An efficient data duplication system based on Hadoop Distributed File System," in 2020 International Conference on Inventive Computation Technologies (ICICT), pp. 197–200, 2020, doi: 10.1109/ICICT48043.2020.9112567.

11. J. N. Rao and M. Ramesh, "A review on data mining & big data: machine learning techniques," International Journal of Recent Technology and Engineering, vol. 7, pp. 914–916, 2019.

12. A. Karthik and J. L. MazherIqbal, "Efficient speech enhancement using recurrent convolution encoder and decoder," Wireless Personal Communications, 2021, https://doi.org/10.1007/s11277-021-08313-6.

13. F. Yan, Y. Tian, H. Wu, Y. Zhou, L. Cao, and C. Zhou, "Iris segmentation using watershed and region merging," in Proceedings of the 9th IEEE Conference on Industrial Electronics and Applications, pp. 835–840, Hangzhou, China, June 2014.

14. J. B. T. M. Roerdink and A. Meijster, "The watershed transform: definitions, algorithms and parallelization strategies," Fundamenta Informaticae, vol. 41, no. 1-2, pp. 187–228, 2000.

15. A. F. Abate, M. Frucci, C. Galdi, and D. Riccio, "BIRD: watershed based iris detection for mobile devices," Pattern Recognition Letters, vol. 57, pp. 41–49, 2015.
