Iris Recognition System Using Integro Differential Model & Convolution Neural Networks

Rachana Sree P1, Suma Geethika P2, Ujvalnath Raghavendra P3, Raji C4

1,2,3,4 School of Electronics & Communication Engineering, Reva University, Bangalore, India

ABSTRACT

Authorizing a person has become a substantial need in today's world. In such circumstances, the integration of artificial intelligence into biometric systems has changed the way people live and operate at various stages of their lives. Identifying a person basically relies on two kinds of traits, physiological or behavioural, such as the palmprint, earprint, iris, face and fingerprint. In this paper, we have worked on an iris-based recognition system, as the iris is one of the most reliable and secure biometrics. In our work, we propose a modified segmentation technique (Daugman integro-differential operator) and an efficient Convolutional Neural Network (CNN) structure. We performed our experiments on the CASIA iris database Version 1 (v-1), consisting of 756 images in 108 folders with 7 images each, and obtained an accuracy of 98%.

Keywords

CNN, Daugman integro-differential model, iris, iris recognition, Median Blur, Neural Network

INTRODUCTION

As digital technology advances, people are managing massive amounts of information, both public and private, using digital systems such as computers, mobile phones, banking and government management systems, and the internet. While public information can be made available to others, private information needs to be secured.

Hence, protection has become an important concern in digital systems. People have traditionally used two methods for this task: knowledge-based and token-based methods. To access a specific information resource using the knowledge-based method, each user must create and remember a password. The token-based method, as the second option, provides a key in which a user's identification information is stored for accessing information resources. However, the key can be misused or stolen, and hence security remains a major concern. To address such issues, biometric systems were developed. Biometric systems are those that use physical or behavioural characteristics, such as the iris, face, and fingerprints, to identify people [1].

Iris recognition systems are widely used in security applications because the iris has a diverse set of features and does not change significantly over time [2]. The abundance of texture in the iris is what an iris identification system uses to identify individuals. The iris is made up of a group of dilator and sphincter muscles that enable the pupil to dilate and contract. The collarette is the section of the iris that separates the ciliary and pupillary zones. A series of radial streaks, caused by connective tissue bands that enclose the crypts, straighten when the pupil contracts and become wavy when the pupil dilates.

The pupillary and limbus borders in 2D photographs of the eye determine the iris spatial spectrum and aid in distinguishing it from other ocular structures such as the eyelids, eyelashes, pupil, and sclera. The enriched textural details on the anterior surface of the iris provide a simple biometric signal for human recognition [3].

Iris recognition has five stages: preprocessing, localization, normalization, feature extraction, and classification. Each stage employs a distinct set of techniques.

Localization is the step that isolates the exact structure of the iris from an image of the eye. This procedure serves as the starting point for an iris recognition model. Gradient-based algorithms are the most commonly used localization algorithms for locating the edges between the pupil and iris and between the iris and sclera [4].

Normalization is the process of preparing a segmented iris image for encoding. This process is used to achieve invariance to iris size, position, and degree of dilation so that different iris patterns can be matched at later stages of image processing [4].

Feature extraction is the process of extracting features from an image that are used to distinguish it from others. Classification is then the process of identifying and distinguishing objects and assigning them to known classes.

The remainder of this paper is organized as follows. Section 2 reviews existing work. Section 3 presents the preprocessing technique used. Sections 4 and 5 elaborate on the normalization and classification techniques. Section 6 describes segmentation and feature extraction. Sections 7 and 8 conclude the paper with results and discussion.

RELATED WORK

Al-Waisy et al. [5] proposed an effective, real-time multimodal biometric system based on deep learning representations of a person's right and left irises, fusing the results with a ranking-level fusion method. The trained deep learning system, IrisConvNet, is composed mainly of a Convolutional Neural Network (CNN) and a Softmax classifier. The proposed system was tested on three datasets: SDUMLA-HMT, CASIA-IrisV3 Interval, and the IITD iris database.

Shervin Minaee et al. [6] suggested an end-to-end deep learning framework for iris recognition based on a residual convolutional neural network. The proposed work was applied to the well-known IIT Delhi dataset, achieving an accuracy of 95.5%. However, the approach was evaluated on only one database.

Muhammad Arsalan et al. [7] worked on iris recognition in a visible-light environment to avoid the additional near-infrared (NIR) camera and NIR illuminator, which increase the difficulty of iris segmentation. To overcome this problem, they proposed two iris segmentation schemes based on convolutional neural networks (CNNs), capable of accurate iris segmentation in the severely noisy environments of iris recognition. The work was implemented on three datasets, namely NICE-II, UBIRIS-V2, and MICHE. Although their method achieved higher iris segmentation accuracies, it relied on a traditional image processing algorithm in the first stage.

Maryim Omran et al. [8] presented a robust and effective iris recognition architecture named IRISNet. The architecture of IRISNet consists of CNN layers for feature extraction and a SoftMax layer to classify the features into N classes. Using the IITD dataset, they obtained 97.32% and 96.43% accuracy for original and normalized images, respectively. However, the presented system was evaluated on only one dataset.

Naglaa et al. [9] proposed a coarse-to-fine algorithm to address computational cost problems: morphological processing extracts an initial centre point, and a refinement step then applies the integro-differential operator, achieving 87% efficiency. Testing was performed on the CASIA v3 database, which contains 249 subjects with up to 5 images of the same eye, but they could test on only 100 images from 16 subjects, taking at most 5 images per eye.

Prateek Verma et al. [10] used Daugman's segmentation method for iris segmentation. Iris images were selected from the CASIA database. However, they could segment only 3 out of 4 eye images, corresponding to a success rate of 83%, using the integro-differential equation and the Hough transform method.

Prajoy Podder et al. [11] proposed an improved noise reduction scheme based on radial suppression to reduce localized high-frequency information from segmented iris regions for personal authentication. CASIA-v1 and CASIA-v3 were the datasets used for testing, resulting in an FRR and FAR of 37.88% and 0.0001%, respectively.

Kuo Wang and Ajay Kumar [12] investigated a new deep learning-based approach for iris recognition, attempting to improve accuracy with a more simplified framework that more accurately recovers representative features. Error rates of 7.14%, 10.7%, and 27.4% were obtained on three datasets: the CASIA iris database, MMU, and IITD.

From the above literature survey, we conclude that some authors worked with conventional segmentation approaches without any notable modifications, which led to poor accuracies; moreover, preprocessing is one of the major steps in the system and has a considerable impact on its performance.

Additionally, it is clear that many authors have worked on a single database rather than multiple databases, restricting their methods to certain datasets and leading to failure on others.

The few authors who did work on multiple datasets used them mainly for comparison rather than achieving high-quality accuracies on all of them.

Fig.1 Iris recognition system implementation

3. PRE-PROCESSING

The database used in our project is CASIA v-1. The dataset contains 756 iris images in 108 folders, with 7 images per folder, each of dimensions 280×320.

In preprocessing, the RGB input image is converted to grayscale and morphological operations are performed using median blur filtering. The preprocessing stage basically consists of two simple steps: first, the conversion of the original image to grayscale, and second, the filtering of the image with a median blur filter. Two other common filters are the bilateral and Gaussian filters. The bilateral filter is a non-linear filter that smoothens a signal while preserving its sharp edges; it has proven extremely effective for a wide range of problems in computer vision and computer graphics. Blurring an image with a Gaussian function produces a Gaussian blur, a common effect in graphics software typically used to reduce image noise and detail. Gaussian smoothing is also used in computer vision algorithms as a preprocessing stage to enhance image structures at different scales.
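As an illustration of this stage, the sketch below applies the grey-scale conversion and median blur with OpenCV. The kernel sizes, file path, and function name are our own illustrative assumptions rather than values reported in the paper; the bilateral and Gaussian filters mentioned above appear only as commented-out alternatives.

```python
# Minimal preprocessing sketch (assumed file layout), using OpenCV.
# The filter sizes below are illustrative choices, not values from the paper.
import cv2

def preprocess_iris(path):
    # Load the eye image and convert it to a single-channel grey-scale image.
    bgr = cv2.imread(path)
    grey = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

    # Median blur removes salt-and-pepper noise while keeping edges usable
    # for the later integro-differential boundary search.
    smoothed = cv2.medianBlur(grey, 5)
    return smoothed

# Alternatives mentioned in the text (not used in the proposed pipeline):
#   cv2.bilateralFilter(grey, 9, 75, 75)   # edge-preserving smoothing
#   cv2.GaussianBlur(grey, (5, 5), 0)      # Gaussian smoothing
```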


Fig.2 Steps involved in iris preprocessing

Fig.3 Preprocessed image: (a) grey-scale image; (b) after median blur filter

4. NORMALIZATION

After the pupil and iris circles are detected by the localization process, the ROI, which is the annular region between the pupil and iris circles, is extracted. During the normalization process, this region is converted to a rectangular image of size 64×512. The key benefit of iris normalization is that it removes dimensional inconsistencies that can occur as a result of stretching of the iris region caused by pupil dilation under varying levels of illumination.

Daugman's rubber sheet model is one of the most commonly used normalization techniques. Using this technique, we transform iris images from Cartesian coordinates to polar coordinates. Different people's irises can be captured at different scales, and irises of the same eye can also vary in scale due to lighting and other factors; such elastic deformation of the iris texture would otherwise influence the results of iris matching. Normalization allows us to overcome this effect. The iris ring is unwrapped counterclockwise to form a rectangular block of a predetermined size.
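A minimal sketch of this rubber-sheet unwrapping is given below, assuming the pupil and iris circle parameters come from the localization step. The 64×512 output size follows the text, but the function name, broadcasting layout, and the use of bilinear sampling via OpenCV are our assumptions rather than the authors' implementation.

```python
# Sketch of Daugman's rubber-sheet normalization: the annular iris region is
# resampled into a fixed 64x512 polar rectangle. Pupil/iris circle parameters
# are assumed to come from the localization step; interpolation choices are ours.
import numpy as np
import cv2

def rubber_sheet(image, pupil, iris, radial_res=64, angular_res=512):
    (xp, yp, rp), (xi, yi, ri) = pupil, iris
    theta = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    r = np.linspace(0, 1, radial_res)[:, None]

    # For each (r, theta), interpolate linearly between a point on the pupil
    # boundary and the corresponding point on the iris boundary.
    x = (1 - r) * (xp + rp * np.cos(theta)) + r * (xi + ri * np.cos(theta))
    y = (1 - r) * (yp + rp * np.sin(theta)) + r * (yi + ri * np.sin(theta))

    # Sample the original image at the computed (x, y) positions.
    return cv2.remap(image, x.astype(np.float32), y.astype(np.float32),
                     interpolation=cv2.INTER_LINEAR)

# Example (hypothetical circle parameters):
# normalized = rubber_sheet(grey, pupil=(cx, cy, r_pupil), iris=(cx, cy, r_iris))
# normalized.shape == (64, 512)
```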

Fig.4 Normalized images: (a), (b)

5. CLASSIFICATION

Classification is the process of identifying and distinguishing objects and assigning them to classes. In our work, we used a Neural Network (NN) classifier.

The term "network" in Neural Network refers to the interconnection of neurons across the different layers of a system. The basic idea of a neural network is inspired by the human brain, which contains neurons; a collection of such neurons together is called a network. A typical network is composed of three kinds of layers: the input layer, the hidden layers, and the output layer. The input layer contains input neurons that send data to the hidden layer through synapses, and the hidden layer sends data to the output layer through further synapses. The synapses store values known as weights, which scale the data passed between layers. In neural networks, we typically use the SoftMax or sigmoid function for classification. When there are multiple classes, we use the SoftMax function at the end of the output layer; SoftMax returns a probability between zero and one for each class.

Every neural network is trained with a loss function, which measures how far the network's output is from the desired output and guides the weights toward it.
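To make the SoftMax output and the role of the loss function concrete, the small sketch below converts made-up raw scores into class probabilities in [0, 1] and evaluates a cross-entropy loss. The paper does not name its loss function, so cross-entropy is used here only as a common, assumed choice.

```python
# Numerical illustration of softmax probabilities and a cross-entropy loss;
# the scores and class count are made-up example values.
import numpy as np

def softmax(scores):
    # Subtracting the maximum keeps the exponentials numerically stable.
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def cross_entropy(probs, true_class):
    # Loss is the negative log-probability assigned to the correct class.
    return -np.log(probs[true_class])

scores = np.array([2.0, 0.5, -1.0])      # raw outputs of the final layer
probs = softmax(scores)                   # roughly [0.79, 0.18, 0.04], sums to 1
loss = cross_entropy(probs, true_class=0)
print(probs, loss)
```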

6. PROPOSED WORK

6.1 Iris Localization

Once the image has been converted into grayscale, the next step is iris localization, also known as segmentation. The iris is first identified by fitting two circular outlines that correspond to the pupil and iris boundaries. The main reason for doing this is to reduce the computation region rather than feeding the complete eye image to the CNN. This method keeps only the area of interest and discards the rest. Hence, iris segmentation is always considered an important preprocessing step and ensures greater efficiency. In our work, iris localization is done using the Daugman integro-differential operator.

Daugman's algorithm applies an integro-differential operator to detect the iris and pupil contours.

The Daugman integro-differential operator is a circular edge detector that detects boundaries based on illumination differences. As the name indicates, it first differentiates at the points of maximum intensity difference in the region of interest, obtaining a few points with their respective slopes, and finally integrates these points to form a circle. In our work, we have tuned the thresholds for the illumination difference and the radius parameters to obtain an effective modified operator that detects the boundaries precisely.

The Daugman integro-differential equation is

\max_{(r, x_0, y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r, x_0, y_0} \frac{I(x, y)}{2\pi r}\, ds \right|

In this equation, (x_0, y_0) and r_0 are the centre and radius of the coarse circle, and G_\sigma(r) is the smoothing function. The smoothed image is scanned for the circle with the greatest gradient change, which indicates an edge. I(x, y) is the pixel intensity at coordinates (x, y) in the iris image, and the symbol (*) denotes convolution.

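A possible implementation of this circular edge search is sketched below: for candidate centres, the mean intensity on circles of increasing radius is computed, differentiated with respect to the radius, smoothed with G_σ(r), and the circle with the largest absolute gradient is kept. The radius range, smoothing sigma, sampling density, and candidate-centre strategy are assumed values, not the thresholds tuned in the paper.

```python
# Illustrative sketch of the integro-differential boundary search.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def circle_mean(img, x0, y0, r, samples=360):
    theta = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    xs = np.clip((x0 + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((y0 + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()          # approximates the contour integral / (2*pi*r)

def integro_differential(img, centres, r_min=20, r_max=120, sigma=2.0):
    best = (0.0, None)
    for (x0, y0) in centres:           # candidate centres, e.g. a coarse grid
        radii = np.arange(r_min, r_max)
        means = np.array([circle_mean(img, x0, y0, r) for r in radii])
        # d/dr of the circular mean, then smoothing with G_sigma(r)
        grad = gaussian_filter1d(np.diff(means), sigma)
        k = int(np.argmax(np.abs(grad)))
        if abs(grad[k]) > best[0]:
            best = (abs(grad[k]), (x0, y0, int(radii[k + 1])))
    return best[1]                     # (x0, y0, r) of the strongest circular edge
```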

Fig.5 Segmented images

6.2 Feature Extraction

In the proposed work, we have used a Convolutional Neural Network (CNN) for feature extraction.

Once the image has been normalized, it is ready for feature extraction. Feature extraction is the process of extracting features from an image that distinguish it from others. A distinct property of a CNN is that it can be used for both feature extraction and classification. It is made up of layers such as convolutional layers, pooling layers, activation layers, dense layers, and so on.

The primary reason for selecting a CNN is that it has a wide range of applications and can perform accurately with a greater number of layers in the field of computer vision. Convolutional layers, pooling layers, dense layers, and dropout are all included in the proposed CNN structure.

We chose the following hyperparameters to improve the performance of the CNN.

The normalized iris images are fed to the CNN. During the training phase, 80 percent of the data is used for training, 20 percent of the training set for validation, and 20 percent for testing. The model is trained for up to 100 epochs. After the first fully connected layer, dropout with a probability of 0.3 is applied; it helps prevent overfitting by randomly deactivating connections between layers of neurons.
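The sketch below shows a CNN of the kind described above, written with tf.keras. Since the exact layer sizes of Table 1 and Fig. 6 are not reproduced here, the filter counts and dense-layer width are assumptions; only the 64×512 input size, the dropout probability of 0.3 after the first fully connected layer, the 100 training epochs, and the 108 classes follow the text.

```python
# Sketch of a CNN of the kind described in the text; layer sizes are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(num_classes=108):                  # 108 subjects in CASIA v-1
    model = models.Sequential([
        layers.Input(shape=(64, 512, 1)),        # normalized iris image
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),    # first fully connected layer
        layers.Dropout(0.3),                     # dropout probability from the paper
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_cnn()
# model.fit(x_train, y_train, validation_split=0.2, epochs=100)
```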


Table 1. Hyperparameters in the proposed CNN

Fig.6 The structure of the proposed CNN model

7. RESULTS & DISCUSSION

In this section, we discuss the experimental results of the proposed algorithms and compare them with existing work.

The model used in our experiment is a CNN with appropriate hyperparameter tuning, which attains high accuracy within relatively few epochs and reaches its best accuracy at 100 epochs. Fig. 8 reports the training and testing accuracies of the built model.

Fig. 9 shows the final structure of the CNN, which best suits the dataset used.

Fig. 7 details the accuracy and loss values during the training and testing phases for each epoch. In Table 2, we compare the result achieved by our model with the existing one [10] and conclude that our model attains better accuracy than the existing approach.

Finally, the combination of a CNN for feature extraction and an NN for classification proves effective because of the complementary strengths each contributes, and this design keeps the system in step with current deep learning practice.

Fig.7 Accuracy and loss values per epoch during training and testing

Fig.8 Training and testing accuracies of the built model

Fig.9 Final structure of the built CNN

TABLE (2) - Comparison of the accuracy of the built model with the existing one

Model                                        Accuracy
Prateek Verma et al. [10] (existing model)   83%
Proposed model                               98%

CONCLUSION

In our work, we proposed an iris-based biometric identification algorithm that identifies an individual precisely and accurately. Initially, we preprocessed the input image and then used a modified segmentation technique to detect the inner and outer boundaries effectively. Next, we normalized the segmented images using Daugman's rubber sheet model, fed the output into the newly built CNN (Convolutional Neural Network) architecture to obtain efficient features, and finally passed the extracted features to a Neural Network (NN) classifier, obtaining an accuracy of 98%. The CASIA v-1 dataset, consisting of 756 images in 108 folders with 7 images each, was used for experimentation. In future work, we will work on multiple databases and aim to achieve high accuracies on all of them with the existing model.

REFERENCES

[1] Nguyen, Dat Tien, et al. "Deep learning-based enhanced presentation attack detection for iris recognition by combining features from local and global regions based on NIR camera sensor." Sensors 18.8 (2018): 2601.

[2] Minaee, Shervin, Amirali Abdolrashidi, and Yao Wang. "An experimental study of deep convolutional features for iris recognition." 2016 IEEE Signal Processing in Medicine and Biology Symposium (SPMB). IEEE, 2016.

[3] Ross, Arun. "Iris recognition: The path forward." Computer 43.2 (2010): 30-35.

[4] Akinfende, Akinola Samuel, Agbotiname Lucky Imoize, and O. S. Ajose. "Investigation of iris segmentation techniques using active contours for non-cooperative iris recognition." Indonesian Journal of Electrical Engineering and Computer Science 19.3 (2020): 1275-1286.

[5] Al-Waisy, Alaa S., et al. "A multi-biometric iris recognition system based on a deep learning approach." Pattern Analysis and Applications 21.3 (2018): 783-802.

[6] Minaee, Shervin, and Amirali Abdolrashidi. "DeepIris: Iris recognition using a deep learning approach." arXiv preprint arXiv:1907.09380 (2019).

[7] Arsalan, Muhammad, et al. "Deep learning-based iris segmentation for iris recognition in visible light environment." Symmetry 9.11 (2017): 263.

[8] Omran, Maryim, and Ebtesam N. AlShemmary. "An iris recognition system using deep convolutional neural network." Journal of Physics: Conference Series. Vol. 1530. No. 1. IOP Publishing, 2020.

[9] Soliman, Naglaa F., et al. "Efficient iris localization and recognition." Optik 140 (2017): 469-475.

[10] Verma, Prateek, et al. "Daughman's algorithm method for iris recognition—a biometric approach." International Journal of Emerging Technology and Advanced Engineering 2.6 (2012): 177-185.

[11] Podder, Prajoy, et al. "An efficient iris segmentation model based on eyelids and eyelashes detection in iris recognition system." 2015 International Conference on Computer Communication and Informatics (ICCCI). IEEE, 2015.

[12] Wang, Kuo, and Ajay Kumar. "Toward more accurate iris recognition using dilated residual features." IEEE Transactions on Information Forensics and Security 14.12 (2019): 3233-3245.

[13] Okokpujie, Kennedy, et al. "An improved iris segmentation technique using circular Hough transform." IT Convergence and Security 2017. Springer, Singapore, 2018. 203-211.

[14] Adamović, Saša, et al. "An efficient novel approach for iris recognition based on stylometric features and machine learning techniques." Future Generation Computer Systems 107 (2020): 144-157.

[15] James, Sr. Sahaya Mary. "A Review of Daugman's Algorithm in Iris Segmentation." IJISET - International Journal of Innovative Science, Engineering & Technology 2.8 (2015).

[16] Manchanda, Nidhi, et al. "A survey: various segmentation approaches to iris recognition." International Journal of Information and Computation Technology 3.5 (2013): 419-424.

[17] Petrov, I., and N. Minakova. "Optimization method for non-cooperative iris recognition task using Daugman integro-differential operator." Journal of Physics: Conference Series. Vol. 1615. No. 1. IOP Publishing, 2020.

[18] Harakannanavar, Sunil Swamilingappa, et al. "An extensive study of issues, challenges and achievements in iris recognition." Asian Journal of Electrical Sciences 8.1 (2019): 25-35.

[19] Huang, Ya-Ping, Si-Wei Luo, and En-Yi Chen. "An efficient iris recognition system." Proceedings. International Conference on Machine Learning and Cybernetics. Vol. 1. IEEE, 2002.

[20] Abidin, Z. Zainal, et al. "Iris segmentation analysis using integro-differential and Hough transform in biometric system." Journal of Telecommunication, Electronic and Computer Engineering (JTEC) 4.2 (2012): 41-48.

[21] Kaur, Navjot, and Mamta Juneja. "A review on iris recognition." 2014 Recent Advances in Engineering and Computational Sciences (RAECS). IEEE, 2014.
