Academic year: 2022
Computer-Assisted Diagnosis of Diabetic Retinopathy and its Classification into Different Stages

Abhishek Singh¹, S. Poornima², M. Pushpalatha³, Dhruv Kukar⁴

¹Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai, India.

E-mail: [email protected]

²Assistant Professor, Department of Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai, India.

³Professor, Department of Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai, India.

⁴Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai, India.

E-mail: [email protected]

ABSTRACT

Diabetic retinopathy is a complication of diabetes that harms the eyes. It originates in the blood vessels of the light-sensitive tissue at the back of the eye. DR detection from retinal images is an important task for early observation and treatment, and can potentially reduce the risk of blindness. Retinal photographs play a notable part in diabetic retinopathy (DR) screening for disease identification, grading, and treatment. Existing methodologies do not achieve satisfactory sensitivity and specificity; there are still other issues to be addressed in current procedures, such as effective performance, correctness, and easy identification of the DR disease.

The aim of this project is to build an identification system for the recognition of Diabetic Retinopathy (DR) and its stages using appropriate image-processing and deep-learning techniques. Texture features are extracted from segmented fundus images of the retina. The input photographs are collected from Kaggle datasets. Different features are extracted, and the classifier is trained with images from all the datasets. The classifier identifies the presence of DR and also its stage: Normal eyes, Mild DR, Moderate DR, Severe DR, and Proliferative DR.

KEYWORDS

Diabetic Retinopathy, Diabetes.

Introduction

Diabetes is a chronic organ illness that occurs when the pancreas does not secrete enough insulin or the body cannot use it properly. Diabetes affects the circulatory system, including that of the retina. Diabetic retinopathy is a medical condition in which the retina is damaged because fluid leaks from blood vessels into the retina. It is one of the most common diabetic retinal diseases and a major cause of blindness. Approximately 415 million diabetic patients are at risk of blindness because of diabetes. It takes place when diabetes damages the tiny blood vessels inside the retina, the light-sensitive tissue at the back of the eye. These tiny blood vessels leak blood and fluid onto the retina, forming features such as micro-aneurysms, hard exudates, hemorrhages, cotton wool spots or venous loops[1].

Diabetic retinopathy can be categorized as non-proliferative diabetic retinopathy (NPDR) and proliferative diabetic retinopathy (PDR). Depending on which features are present on the retina, the stage of DR can be recognized. In the NPDR stage, the illness can go from mild to moderate and then to a severe stage, with various levels of features but without the growth of new blood vessels. PDR is the advanced stage, in which signals sent by the retina for nourishment trigger the growth of new blood vessels. These grow along the retina and over the surface of the clear vitreous fluid that fills the inside of the eye. If they leak blood, severe vision loss and even blindness can follow. To date, recognizing DR has been a difficult, manual task that requires a trained clinician to examine and assess digital colour fundus photographs of the retina.

By the time human readers present their analysis, usually one or two days later, the delayed results lead to lost follow-up, miscommunication, and delayed treatment. According to WHO estimates, 347 million of the world's inhabitants have diabetes, and about half of them have some stage of the illness. By examining fundus images one can see the difference between images produced by a normal eye and a DR-affected eye. Micro-aneurysms are the first clinically perceptible signs of DR. They show up as little red dots of 10 to 100 microns in diameter, typically lying temporal to the macula (Fig 1.1(a)). Micro-aneurysms arise because excess sugar content in the blood causes the walls of blood vessels to weaken. As the ailment advances, micro-aneurysms rupture. This results in retinal hemorrhages, either superficially or in deeper layers (Fig 1.1(a)).

As the retinal vessels become more damaged and permeable, their number increases. Retinal hemorrhages appear either as little red dots or blots indistinguishable from micro-aneurysms, or as larger flame-shaped hemorrhages. The vessels not only leak blood but also leak lipids and proteins, which leads to the appearance of little bright spots called exudates (Fig 1.1(b)). Ischemic regions are visible on the retina as fluffy whitish blobs called cotton wool spots (Fig 1.1(c)).

Fig. 1.1. Abnormal DR Images – (a) Picture showing microaneurysms and hemorrhages (b) DR with exudates (c) DR with cotton wool spot

Literature Survey

At present, there is growing interest in building automatic frameworks that screen a huge number of individuals for vision-threatening maladies like DR and provide an automated recognition of the illness.

Image processing is now becoming a very practical and helpful instrument for DR screening. Digital photography offers a permanent record of the retina, which can be used by eye specialists to test for progression or response to treatment. Digital images can be processed by automated analysis systems. Fundus photograph analysis is a complicated undertaking because of the variability of the fundus pictures in terms of gray levels, the internal structures of the eye, and the presence of particular features in different diseases that might prompt a wrong interpretation. The literature shows various ways of using computerized image systems for the automated identification of DR. There have been several research studies on distinguishing retinal components, for example, vessels, optic disc, fovea and retinal lesions including hemorrhages, microaneurysms and exudates.

Automatic diabetic retinopathy classification by María A. Bravo and Pablo A. Arbeláez, published in 2017[2], is a project which exercises a CNN machine-learning approach by dividing the pre-processed images into different clusters, namely original, circle, square, colour-centred and grey-scale. The scores of the best trained neural networks were then combined, and images were categorized into 5 classes (classes 0-4) with an accuracy of 50.5%. Fig 2.1 gives the flow of the model through different convolutional layers.

Fig. 2.1. Flow of the model through different convolutional layers

Classification of diabetic retinopathy types based on convolutional neural network by Hager Khalil and Walid El-Shafie, published in 2019[3], is a paper which uses high-resolution images of the ARIA (Automated Retinal Image Analysis) dataset and a convolutional neural network. The proposed system contains two kinds of layers, convolutional layers and max-pooling layers (which reduce the size of the image), stacked alternately with differing filter values; the dataset is passed through them, and the result was finally predicted with the help of an accuracy-versus-epoch graph.

In the end, DR was divided into 2 classes with 95% accuracy; the system is able to distinguish between NPDR (non-proliferative DR) and PDR (proliferative DR). Fig 2.2 shows how the CNN extracts its own features and passes them to a classifier with the help of alternating max-pooling and convolutional layers.

Fig. 2.2. Feature extraction by CNN

Diabetic retinopathy detection and grading using machine learning by Dr D.K. Kirange and Dr J.P. Chaudhari, 2019[4], uses various ML techniques (support vector machine, KNN, Naïve Bayes, neural network and decision trees) and compares every algorithm to find the most accurate one. It uses pre-processed images, then removes the optic disc and segments the blood vessels for further clean-up. For computation it makes use of microaneurysms and hemorrhages, and then applies the ML techniques. The best accuracy was given by Naïve Bayes, 77.8%, distinguishing the outcome into 5 categories (class 0, 1, 2, 3, 4 DR). Table 2.1 compares the accuracy of all the models; in this table LBP stands for local binary patterns, and the columns are different methods such as classification tree (CT), neural network (NN), Naïve Bayes (NB), etc.

Table 2.1. Accuracy (%) of machine-learning techniques

Feature extraction   CT     NN     SVM    KNN    NB
LBP                  60.99  51.77  50.35  71.63  41.84
Gabor                65.95  65.95  74.46  73.04  77.85

Automatic identification and classification of microaneurysms for detection of diabetic retinopathy by Gowthaman R, issued in 2014[5], proposes a system to recognize diabetic retinopathy automatically at an early stage, which helps to cure it completely and to slow its development. Patients' retinal pictures are first captured and accumulated in a database. Subsequently, they are pre-processed to decrease noise and enhance the image. Candidate regions are then separated from the picture, and blood vessels are removed to effectively extract the candidate areas. Micro-aneurysms (MAs) are enhanced by a Gabor filter, distinctive features are extracted from them, and these features are given to a multi-class classifier for training and testing. The performance of this work is assessed with metrics such as accuracy, sensitivity, specificity and execution time, and it is presented as a productive method for the early detection of DR.

Automated identification of diabetic retinopathy stages using support vector machine by Enrique V. Carrera and Andres Gonzalez, published in 2018[6], utilizes a Kaggle dataset (approx. 100 images) and the SVM technique to classify DR into 5 subparts. Image pre-processing, morphological processing and texture-inspection methods are applied to the fundus photographs to extract properties including the area of blood vessels, hard exudates and the contrast. The attributes are given to a support vector machine (SVM). The project reports a classification accuracy of 93 percent, a sensitivity of 90 percent and a specificity of 100 percent; overall, it gives an accuracy of 83%.

Diabetic retinopathy using eye images, issued by Mohit Singh Solanki in 2015[7], is a paper based on various techniques, i.e. supervised learning and neural networks. In this paper three layers are used for the neural network, i.e. the red, green and blue layers, and two steps of feature extraction, eye detection and thresholding. In eye detection the perimeter of the 3 layers is found, and thresholding gives the area of the three layers, which finally helps to determine the category of the diabetic retinopathy. This method gave a result in 6-7 hours with an accuracy of 55%, and was able to classify DR into its broad categories. Table 2.2 shows a confusion matrix in which predicted and actual classes are labeled in the rows and columns, and the diagonal elements present the accuracy of the model.

Table 2.2. Confusion matrix for DR classification

         Class 0  Class 1  Class 2  Class 3  Class 4
Class 0    170      53       20       14       0
Class 1     41      69       14        6       1
Class 2     18      26       25        4       0
Class 3      3       8        5        8       1
Class 4      0       2        6        2       4
Total      232     158       70       34       6

Detecting clinical features of diabetic retinopathy using image processing by Nimmy Thomas, issued in 2014[8], developed an automated framework to analyse retinal pictures and extract exudates, which are essential indications of DR. The procedure primarily comprises two principal stages. In the first stage, exudates are recognized using morphological image-processing techniques, which include elimination of the optic disc; the detected exudates are then characterized using a fuzzy-logic calculation. The fuzzy-logic concept uses the values of the retinal pictures in RGB colour space as fuzzy sets. The detected exudates are labeled normal, weak or hard exudates. For testing the proposed framework, pictures were chosen from the freely accessible diabetic retinopathy datasets DIARETDB0 and DIARETDB1, and also taken from Dr. Tony Fernandez super-specialty eye hospital, Aluva. This model detects exudates using morphological routines and classifies them into hard and non-hard exudates using the principles of fuzzy logic. The advantage of this technique is the capacity to decide whether each exudate is a hard exudate or not, on an individual basis. As the fundus picture generally contains a high amount of noise, diverse pre-processing strategies are used for noise suppression and for enhancing components to balance regions showing uneven contrast.

SVM based detection of diabetic retinopathy by V. Ramya, published in 2018[9], gives a model that uses SVM as its main machine-learning technique, with high-resolution images of the eye fed in first. The SVM classifier is prepared with initial known photographs of eyes, i.e., eyes whose DR level is known in advance; this process is the training of the SVM classifier. With the help of this SVM classifier the paper predicts the outcome and distinguishes whether an eye has PDR or NPDR, with an accuracy of 82%. Table 2.3 shows the recognition rate of normal and diabetic-affected eyes through training and testing sets of data.

Table 2.3. Recognition rate of normal and diabetic-affected eyes

                        Training set  Testing set  Recognition rate (%)
Normal eye                   18           12              86
Diabetic-affected eye        10            5              82

Diabetic retinopathy detection through integration of deep learning by Alexander Rakhlin (Neuromation OU), published in 2017[10], is a paper that uses a convolutional neural network as its machine-learning approach and works with a large number of high-resolution photos (around 99,000) from the Kaggle and Messidor-2 datasets.

The CNN model is applied to the two datasets and the results are calculated separately; the specificity and sensitivity are then estimated for each. The model achieves 96 percent accuracy on Messidor-2 and 92 percent accuracy on the Kaggle dataset (suggesting that the Messidor-2 images were clearer than Kaggle's). Fig 2.3 pictures the architecture used in the process, which was developed from the VGGNet family.


Fig. 2.3. Architecture which originates from VGGNet family.

Diabetic retinopathy detection using deep learning by Quang H. Nguyen and Ramasamy Muthu Raman, issued in 2020[11], utilizes a CNN as its primary approach and uses photos from the Kaggle dataset. This method provides an automated classification structure that analyses pictures with varying lighting and fields of view and derives a severity category for diabetic retinopathy using ML. This system achieves 80% sensitivity, 82% accuracy and 0.904 AUC for categorizing images into 5 groups (Normal eye, Mild DR, Moderate DR, Severe DR, PDR).

A. Survey Summary

Roughly 420 million people around the world have been diagnosed with diabetes mellitus. The prevalence of DR has multiplied in the last decade and is only anticipated to grow further, mostly in Asia. Roughly one third of people with diabetes are anticipated to be diagnosed with diabetic retinopathy, a long-standing eye illness that may advance to irreparable vision loss. Timely observation, which is critical for a healthy prognosis, depends on skilled readers and is both labour- and time-intensive. This constitutes a challenge in regions that conventionally lack access to skilled medical staff and equipment. Besides, the manual nature of DR screening practices produces widespread variability among readers. Finally, given the growing prevalence of both diabetes and related retinal issues all over the world, manual practices of identification will struggle to keep pace with the demand for screening services.

Automated procedures for diabetic retinopathy (DR) detection are crucial to addressing these troubles. Deep learning for binary categorization in general has attained high accuracy, along with promising multi-stage categorization outcomes, especially for early-stage illness. Automated observation and screening offer a distinctive chance to fend off a notable amount of sight loss in the world's population.

In recent times, scientists have added CNNs to the pack of methods that can screen for diabetic illness. CNNs promise to handle the great numbers of photographs that have been gathered for doctor-interpreted screening and to learn from diseased pixels. The high variance and low bias of these representations could permit CNNs to identify a vast span of non-diabetic illnesses as well.


Implementation

A. Dataset

The native Kaggle dataset used in this procedure was procured from a challenge in which the California Healthcare Foundation called for an automatic program to identify diabetic retinopathy (DR)[12]. The challenge provides high-resolution retina photos taken under a diversity of imaging conditions, with around 3,700 training images. For annotations, the data has the photos categorized into 5 groups according to the disease's severity. Level zero holds photos with none of the marks of retinopathy, whereas stage four images show advanced symptoms. The five degrees of DR are denoted by classes zero, one, two, three and four, from none to proliferative respectively. In order to permit training as well as evaluation of high-capacity models, the dataset is divided into training and testing sets.
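The training/testing division can be sketched as a stratified split, so every severity level keeps its proportion in both subsets. The class counts below come from Table 2.4; the 80/20 ratio is an assumption, since the paper does not state its exact split fraction:

```python
import random

# Class counts from Table 2.4 (No DR .. Proliferative)
counts = {0: 1752, 1: 260, 2: 900, 3: 250, 4: 500}
labels = [c for c, n in counts.items() for _ in range(n)]

def stratified_split(labels, test_frac=0.2, seed=42):
    """Split sample indices class by class so each DR stage keeps its
    proportion in both the training and the testing subset."""
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)                     # randomize within each class
        k = round(len(idxs) * test_frac)      # per-class test quota
        test.extend(idxs[:k])
        train.extend(idxs[k:])
    return train, test

train_idx, test_idx = stratified_split(labels)
```

Stratification matters here because the classes are heavily imbalanced (1752 "No DR" images versus 250 "Severe"); a plain random split could leave a rare stage almost absent from the test set.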

Fig. 3.1. Distribution of training dataset

Table 2.4. Dataset samples

Class  Name           Number of images
0      No DR          1752
1      Mild           260
2      Moderate       900
3      Severe         250
4      Proliferative  500

B. Data Pre-processing

Pre-processing operates on photographs at the lowest level of abstraction; its goal is to improve the image data that is useful and relevant for further processing[13]. The pre-processing steps used in this project are:

 Utilizing RGB images for green-channel separation

 Ben Graham's pre-processing method


1) Utilizing RGB images for Green Channel Separation

The true photo is converted into a proper colour space for further processing. The colour fundus picture is first converted into a green-channel picture in order to facilitate vessel segmentation, as shown in figure 3.3.

From visual perception, vessels generally show the best contrast in the green colour band of the image. A grayscale picture retains just the required intensity information from the colour picture after discarding hue and saturation. A large portion of the information required for determination of DR is present in the intensity.
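As a minimal sketch (using NumPy; the paper does not name its own tooling), green-channel separation is just slicing the third axis of the RGB array:

```python
import numpy as np

def green_channel(rgb):
    """Return the green plane of an H x W x 3 RGB fundus image as a
    single-band array; vessels show their strongest contrast here."""
    return rgb[..., 1]

# Toy 2x2 "fundus" image: only the green values should survive
img = np.array([[[10, 200, 30], [0, 50, 0]],
                [[5, 120, 7], [9, 90, 1]]], dtype=np.uint8)
g = green_channel(img)
```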

Fig. 3.2. Retinal Fundus Images of the dataset

Fig. 3.3. Green Plane of the RGB Image

Fig 3.4 shows a zoomed and more focused image of the green plane of one of the photographs from the Kaggle dataset after green-channel separation.


Fig. 3.4. Zoomed image of the green plane

2) Ben Graham's pre-processing method

Ben Graham (the "Min-Pooling" entry in the Kaggle competition) used SparseConvNet, a convolutional neural network (CNN), with the full image as input to the pre-processor. To train the system it was necessary to perform data augmentation: scaling, skewing and rotating the photos. The pre-processing rescales photographs so that all eyes have the same radius, subtracts the local mean colour, and clips the photo to 90% of its size to eradicate boundary effects. The final competing submission blended 3 networks: 2 convolutional networks using fractional max-pooling (layers of spatial pooling with fractional strides), and the rest using deep convolutional neural networks.

To produce the final class, a random forest was trained using the predicted scores as well as other correlated data such as the score of the patient's other eye, the variance of the real photo and the variance of the pre-processed photo.
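A rough sketch of the subtract-the-local-mean idea follows. The blur function and the gain of 4 are assumptions for illustration; the original method uses a Gaussian blur as the background estimate and also rescales and crops the eye, which is omitted here:

```python
import numpy as np

def subtract_local_mean(img, blur=None, alpha=4.0):
    """Emphasize vessels/lesions by removing a smooth background
    estimate, then recentre around mid-gray (128). `blur` may be any
    smoothing function; a crude global mean stands in by default."""
    img = img.astype(np.float32)
    if blur is None:
        blur = lambda x: np.full_like(x, x.mean())  # stand-in background
    background = blur(img)
    out = alpha * (img - background) + 128.0        # amplify local deviations
    return np.clip(out, 0, 255).astype(np.uint8)
```

Plugging in a real Gaussian blur for `blur` should approximate the "Ben's pre-processing" appearance shown in Fig 3.5, where uniform background illumination is flattened to gray.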

Fig 3.5 shows the application of Ben Graham's method on the green plane of the RGB image.

Fig. 3.5. Application of Ben Graham’s method on the green plane of the RGB image

This is further improved by auto-cropping, an important update over the colour version of cropping and Ben's pre-processing:


Fig. 3.6. Auto-cropping on Colour Version of cropping &Ben Graham’s method

Fig. 3.7. Comparing the transformed and original image

C. Augmentation and Data-Visualization

Grad-CAM is a method that may be used to visualize the class activation maps of a convolutional neural network (CNN), thereby permitting verification that the network is "looking" and "activating" at the right areas.

Grad-CAM uses the gradients of any target concept flowing into the last convolutional layer to produce a coarse localization map that highlights the important regions in the image for predicting the concept. Using Grad-CAM, we can visually verify where our network is looking, confirming that it is indeed looking at the correct patterns in the image and activating around those patterns. The result of Grad-CAM is a heat-map visualization for a given class label.


Fig. 3.8. Architecture of Grad-CAM

Grad-CAM visualization is summarized as follows. Objective: emphasize the pixel regions (spatial information) which make the model take a decision on the final predicted class (here, diabetic retinopathy severity level)[14]. We visualize these regions using a heat-map (as shown in the above figure).

Method Intuition

1. We believe that the most important spatial information comes from the 3D tensor of the last convolutional layer (just before the global pooling layer), which is the nearest spatial information flowing into the last FC layer.

2. For each channel of this 3D tensor, each activated pixel region represents important features (e.g., blood vessel / scab / cotton wool) of the input image. Note that some features are necessary to determine class 0 (perfectly fine blood vessels), while other features are important to decide class 4 (big cotton wools). Normally, we anticipate each channel to capture a different set of characteristics.

3. To point out the characteristics which ultimately affected the final prediction, we compute the gradient of the final predicted class with respect to each characteristic. If that characteristic is important to this class, it should have a large gradient (i.e., raising the value of this characteristic raises the prediction confidence).

4. Therefore, we multiply the activation values of this 3D tensor and the gradients together, to obtain the visualization heat-map for each channel. Note that we have multiple channels, and each channel usually has multiple characteristics.

5. Lastly, we merge the heat-maps of all channels using a simple mean, and eliminate negative values (the ReLU step in the picture) to get the final heat-map.
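The channel-weighting, mean-merge and ReLU of the steps above can be sketched numerically. Here `acts` and `grads` stand in for the activation and gradient tensors that a deep-learning framework would supply (toy values, not real model outputs):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heat-map from the last conv layer's H x W x C
    activations and the gradients of the predicted class w.r.t. them."""
    weights = gradients.mean(axis=(0, 1))              # one weight per channel
    cam = np.tensordot(activations, weights, axes=([2], [0]))
    cam = np.maximum(cam, 0)                           # ReLU: drop negative evidence
    if cam.max() > 0:
        cam = cam / cam.max()                          # scale heat-map to [0, 1]
    return cam

# Toy example: channel 0 fires everywhere, channel 1 only top-right;
# the gradient says only channel 1 mattered for the predicted class.
acts = np.zeros((2, 2, 2))
acts[..., 0] = 1.0
acts[0, 1, 1] = 3.0
grads = np.zeros((2, 2, 2))
grads[..., 1] = 1.0
heat = grad_cam(acts, grads)
```

The resulting heat-map lights up only where the gradient-weighted channel was active, which is exactly the localization behaviour described in steps 3-5.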

Fig. 3.9. Augmented retinal images


This function receives 4 arguments as inputs:

 The image on which to make a prediction; remember to insert the correctly pre-processed version here.

 The model.

 A layer from which to get gradients, and

 An auxiliary image to combine with the heat-map and visualize the final result; we have used Ben's pre-processed image here, since it eliminates lighting conditions in the pictures and hence makes the final result easier to visualize.

The following output images show the original input, Ben's pre-processed input, the heat-map, and the combined heat-map respectively.

Fig. 3.10. The original input, Ben’s pre-processed input, heat-map, combined heat-map image respectively

D. Robustness Test with Albumentation

This section shows how to apply five transforms from the albumentations library and test them with the model, in order to see whether it still gives the same prediction as on the non-augmented image. The sixth and final augmentation combines all five transformations together. In the case of overfitting on training data, visualize the training data to spot spurious features, then design effective augmentations to eliminate those spurious features. The following output images show the original image along with the resulting transformations.
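The invariance test itself can be sketched as below. The geometric transforms are written directly in NumPy rather than with albumentations (which offers the same operations, e.g. HorizontalFlip and RandomRotate90, with many more options), and the brightness-threshold "model" is a toy stand-in for the trained CNN:

```python
import numpy as np

def robustness_check(predict, image, transforms):
    """Apply each named transform and report whether the predicted
    class matches the prediction on the untouched image."""
    base = predict(image)
    return {name: predict(t(image)) == base for name, t in transforms.items()}

# Flip/rotate sketched with numpy slicing; albumentations provides
# equivalent (and richer) augmentations in its transform API.
transforms = {
    "hflip": lambda im: im[:, ::-1],
    "vflip": lambda im: im[::-1, :],
    "rot90": lambda im: np.rot90(im),
}

# Toy "model": classify by overall brightness, which is flip-invariant
predict = lambda im: int(im.mean() > 127)
img = np.arange(16, dtype=np.uint8).reshape(4, 4) * 17
result = robustness_check(predict, img, transforms)
```

A transform whose entry comes back `False` is one the real model is not robust to, which is the signal to include it in the training-time augmentation pipeline.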

Fig. 3.11. Original retinal images before and after augmentation transformations

E. Feature Extraction


Features are used to determine the class of the details required to answer a computational task connected to an application. The purpose of feature extraction is to reduce the information by calculating properties of interest which recognize data patterns. An object is described by measurements whose values are essentially similar for objects in the same category and different for objects of distinct categories.

These characteristics are extracted from the colour fundus photos for the categorization task. The Gray Level Co-occurrence Matrix (GLCM) is a texture-measuring technique used for feature calculation in CAD systems. GLCMs are generated and their parameters, known as characteristics, are calculated for categorization. The parameters computed from the GLCM include:

Contrast: a measure of the intensity contrast between a pixel and its neighbouring pixels over the whole photo. Its range is 0 to (size(GLCM, 1) − 1)². The contrast of a photo is given by

Contrast = Σ_{i,j} |i − j|² · p(i, j)

Correlation: a measure of how correlated a pixel is to its neighbouring pixels over the whole photo. Its range is from −1 to +1:

Correlation = Σ_{i,j} (i − μ_i)(j − μ_j) · p(i, j) / (σ_i · σ_j)

Energy: the sum of squared elements in the GLCM. It ranges from 0 to 1; for a constant photo the energy value is 1:

Energy = Σ_{i,j} p(i, j)²

Homogeneity: a measure of the closeness of the distribution of GLCM elements to the GLCM diagonal. It ranges from 0 to 1; for a purely diagonal GLCM the homogeneity value is 1:

Homogeneity = Σ_{i,j} p(i, j) / (1 + |i − j|)

Entropy: a measure of the randomness of the intensity photo[15]:

Entropy = −Σ_i Σ_j p(i, j) · log(p(i, j))

Sum Entropy: a measure of the unpredictability of the sum distribution p_{x+y}:

Sum Entropy = −Σ_i p_{x+y}(i) · log(p_{x+y}(i))

Difference Entropy: a measure of the unpredictability of the difference distribution p_{x−y}:

Difference Entropy = −Σ_i p_{x−y}(i) · log(p_{x−y}(i))

Sum of Squares Variance: tells us by how much the gray levels differ from the mean; it is the sum of the squared deviations from the mean, weighted by p(i, j)[16]:

Sum of Squares Variance = Σ_{i,j} (i − μ)² · p(i, j)

Sum Average: the mean of the sum distribution over the whole P matrix[17]:

Sum Average = Σ_i i · p_{x+y}(i)

Inverse Difference Moment: a measure of local homogeneity:

Inverse Difference Moment = Σ_{i,j} p(i, j) / (1 + (i − j)²)

where
p(i, j) – the normalized GLCM entry corresponding to the i-th row and j-th column,
μ – mean of the photo,
σ – standard deviation of the photo,
N_G – the number of gray levels in the grayscale matrix.
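The GLCM construction and a few of the features above can be sketched in NumPy. This is an illustrative re-implementation for a single pixel offset; scikit-image's `graycomatrix`/`graycoprops` provide a production version:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one offset (dx, dy),
    normalized so that p(i, j) sums to 1, as the formulas assume."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1  # count co-occurring pair
    return m / m.sum()

def haralick_features(p):
    """A subset of the GLCM features defined above."""
    i, j = np.indices(p.shape)
    eps = 1e-12                                      # avoid log(0)
    return {
        "contrast":    np.sum((i - j) ** 2 * p),
        "energy":      np.sum(p ** 2),
        "homogeneity": np.sum(p / (1 + np.abs(i - j))),
        "entropy":     -np.sum(p * np.log(p + eps)),
    }

p = glcm(np.zeros((3, 3), dtype=int), levels=2)
feats = haralick_features(p)
```

For a constant image the GLCM collapses to a single entry, so contrast is 0 and energy and homogeneity are both 1, matching the ranges stated above.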

F. Classification

A CNN classifier was trained on the characteristics extracted from the segmented exudate regions of colour fundus photos to label each photo as class 0, 1, 2, 3 or 4 of diabetic retinopathy.

1) Convolutional Neural Networks (CNN)

Only picture information is used to train the CNN model. Processed single-band photos are passed as the input of the network through the given stages[18]. The CNN model follows the VGGNet design. The VGGNet representation is organized as CONV layers which perform 3x3 convolutions with stride 1 and pad 1, and POOL layers which carry out 2x2 max-pooling with stride 2. There is no other padding in the network. The network was trained with CPU support. ReLU is used as the activation function. All convolutional layers are followed by max-pooling in the pooling layer, which keeps the most notable characteristics connecting the photo pixels.

VGGNet works nicely with densely featured pictures. In this model no normalization layers are used, the reason being that they did not improve the correctness of the model.
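The layer arithmetic just described (a 3x3 convolution with stride 1 and pad 1 preserves the spatial size; a 2x2 max-pool with stride 2 halves it) can be checked with a small trace. The 224-pixel input and the five blocks below are illustrative assumptions, not the paper's exact configuration:

```python
def vgg_block_shapes(size, blocks):
    """Trace the spatial size through VGG-style blocks of two 3x3
    convs (stride 1, pad 1) followed by one 2x2 max-pool (stride 2)."""
    conv = lambda s: (s + 2 * 1 - 3) // 1 + 1   # = s: padding keeps the size
    pool = lambda s: (s - 2) // 2 + 1           # = s // 2: pooling halves it
    trace = [size]
    for _ in range(blocks):
        size = pool(conv(conv(size)))
        trace.append(size)
    return trace

print(vgg_block_shapes(224, 5))  # [224, 112, 56, 28, 14, 7]
```

This is why VGG-style networks need no padding tricks elsewhere: every reduction in resolution is concentrated in the pooling layers.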

Fig. 3.12. Model View of the CNN

Results and Conclusion

A diagnosis system is designed and implemented with the components discussed in the previous sections. In the initial phase the system automatically detects the basic anatomical features of the retina, such as the optic disc. Then it recognizes the exudates and microaneurysms present in the retina. The system classifies DR based on the spotting of the exudates.

A confusion matrix is a table used to interpret the performance of a classification model on a set of test data for which the real values are known[20]. Fig. 4.1 gives the confusion matrix of our model, where 0, 1, 2, 3 and 4 stand for No DR, Mild DR, Moderate DR, Severe DR and Proliferative DR respectively.

Fig 4.2 and Fig 4.3 are the graphs which show that there is a decline in the training and validation loss as the iterations increase, i.e., an improvement in our model. The X-axis of each graph denotes the number of iterations in the model, while the Y-axis denotes the loss[21]. One epoch means that the entire dataset is passed forward and backward through the neural network once; iterations are the number of batches required to complete a single epoch.
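The bookkeeping described here, the confusion matrix, the accuracy read off its diagonal, and the iterations-per-epoch relation, can be sketched as follows (the batch size of 32 in the usage note is an assumption):

```python
import math

def confusion_matrix(y_true, y_pred, n_classes=5):
    """Rows = actual DR grade, columns = predicted grade; the diagonal
    holds the correctly classified images."""
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

def accuracy(m):
    """Fraction of samples on the diagonal of the confusion matrix."""
    total = sum(sum(row) for row in m)
    return sum(m[i][i] for i in range(len(m))) / total

def iterations_per_epoch(n_samples, batch_size):
    """Number of batches needed so the whole dataset passes once."""
    return math.ceil(n_samples / batch_size)
```

For instance, with the 3,662 images of Table 2.4 and an assumed batch size of 32, one epoch corresponds to 115 iterations on the loss curves of Fig 4.2/4.3.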

Fig. 4.1. Confusion Matrix


Fig. 4.2. Loss – Iterations

Fig. 4.3. Loss - Iterations

This system presents promising results in distinguishing and evaluating images with DR. Different deep-learning techniques were explored. It was observed that our model, designed using a CNN classifier, achieved an accuracy of 81.35% on the Kaggle dataset.

A user interface is developed in which the user gives a fundus retinal input image, and the system predicts whether the user is suffering from DR or not; if yes, it identifies the stage of DR (Fig 4.4).


Fig. 4.4. Screenshot of the user interface

Future Scope

DR can be further categorized into NPDR and PDR (diabetic maculopathy) based on identification of micro-aneurysms and the position of exudates in the fundus image, respectively.

Moreover, advances in electronic media transmission expand the relevance of using image processing as a part of "teleophthalmology" as an aid in clinical decision-making, with particular importance for large rural communities.

The method of DR identification can be further enhanced by extracting features using other wavelet techniques such as the Gabor wavelet, Mexican-hat wavelet, Daubechies wavelet and Shannon wavelet, and other classifiers such as the multiclass SVM technique can be used.

References

[1] Jagadish Nayak, "Automated Identification of Diabetic Retinopathy Stages Using Digital Fundus Images", Journal of Medical Systems, 04/2008.

[2] Maria A. Bravo, Pablo A. Arbeláez, "Automatic Diabetic Retinopathy Classification", 2017. https://www.semanticscholar.org/paper/automatic-diabetic-retinopathy-classification-bravo-arbel%c3%a1ez/297fc9980f10ec2d3dc5d81bffbbbf659876d0c

[3] Hager Khalil, Noha El-hag, Ahmed Sedik, Walid El-shafie, Abd El-naser Mohammed, Adel S. El-fishawy, "Classification of Diabetic Retinopathy Types Based on Convolution Neural Network", 2019. https://www.researchgate.net/publication/339928755_classification_of_diabetic_retinopathy_types_based_on_convolution_neural_network_cnn

[4] K. Kirange, J. P. Chaudhari, K. P. Rane, K. Bhagat, Nandini Chaudhari, "Diabetic Retinopathy Detection and Grading Using Machine Learning", 2019. https://www.researchgate.net/publication/338441580_diabetic_retinopathy_detection_and_grading_using_machine_learning

[5] Gowthaman R., "Automatic Identification and Classification of Microaneurysms for Detection of Diabetic Retinopathy", 2014. https://www.semanticscholar.org/paper/automatic-identification-and-classification-of-for-gowthaman/6575fadd12f1d8ed50eed1399687c229adc18a6f

[6] Enrique V. Carrera, Andres Gonzalez, "Automated Identification of Diabetic Retinopathy Stages Using Support Vector Machine", 2018. https://www.researchgate.net/publication/317951200_automated_detection_of_diabetic_retinopathy_using_svm

[7] Mohit Singh Solanki, "Diabetic Retinopathy Detection Using Eye Images", 2015. https://cse.iitk.ac.in/users/cs365/2015/_submissions/mohitss/report.pdf

[8] Nimmy Thomas, "Detecting Clinical Features of Diabetic Retinopathy Using Image Processing", 2014. https://www.researchgate.net/publication/287196415_detecting_clinical_features_of_diabetic_retinopathy_using_image_processing

[9] V. Ramya, "SVM Based Detection for Diabetic Retinopathy", 2018. https://www.rsisinternational.org/journals/ijrsi/digital-library/volume-5-issue-1/11-13.pdf

[10] Alexander Rakhlin, "Diabetic Retinopathy Detection Through Integration of Deep Learning Classification Framework", 2017. https://www.biorxiv.org/content/10.1101/225508v2.full.pdf

[11] Quang Hong Nguyen, Ramasamy Muthuraman, Laxman Singh, Gopa Sen, Anh Cuong Tran, Binh P. Nguyen, Mathew Cha, "Diabetic Retinopathy Detection Using Deep Learning", 2020. https://dl.acm.org/doi/10.1145/3380688.3380709

[12] https://savvash.blogspot.com/2015/10/

[13] https://www.tutorialspoint.com/computer_graphics/viewing_and_clipping.htm

[14] https://icml.cc/conferences/2019/videos?source=post_page---

[15] https://www.sciencedirect.com/topics/computer-science/quantized-image (Diff Entropy)

[16] https://www.sciencedirect.com/topics/engineering/squared-deviation

[17] http://www.ijsrp.org/research-paper-0513/ijsrp-p1750.pdf

[18] https://persagen.com/files/ml.html

[19] Pradeep Nijalingappa, B. Sandeep, "Machine Learning Approach for the Identification of Diabetes Retinopathy and Its Stages", 2015 International Conference on Applied and Theoretical Computing and Communication Technology (iCATccT), 2015.

[20] https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/

[21] https://towardsdatascience.com/lagged-mlp-for-predictive-maintenance-of-turbofan-engines-c79f02a15329
