A Contemporary Strategy for the Recognition of Glaucoma with Tripartite Tier Convolutional Neural Network

A.Padma1, Dr.M.Sivajothi2, Dr.M.Mohamed Sathik3

1Research Scholar, Research Centre for Computer Science, Sadakathullah Appa College, Tirunelveli, Tamilnadu, India.

2 Associate Professor, Dept. of Computer Science, Sri Parasakthi College for Women,

Manonmaniam Sundaranar University, Tirunelveli, Tamil Nadu, India.

3 Principal, Sadakathullah Appa College, Manonmaniam Sundaranar University, Tirunelveli, Tamil Nadu, India

ABSTRACT

Glaucoma is a complex disease caused by damage to the optic nerve. It is the second leading cause of blindness. To diagnose glaucoma, a Tripartite Tier Convolutional Neural Network Scheme (TT_CNN Scheme) is proposed. The TT_CNN Scheme comprises three tiers, namely a top tier, a middle tier and a bottom tier. Each tier contains hidden layers such as convolution, relu, max pooling, dropout and fully connected layers. The input retinal images are processed through the hidden layers, and the resulting outputs are concatenated and classified as normal or glaucomatous. Support Vector Machine, K-Nearest Neighbour, Random Forest and Decision Tree classifiers are used to classify the processed retinal images as healthy or glaucomatous. Among these classifiers, the Random Forest classifier shows better performance than the others.

The TT_CNN Scheme has been evaluated on the MIAG RIMONE (Release 2) and MIAG RIMONE (Release 3) databases. The performance metrics show improved results for the TT_CNN Scheme compared with the single tier CNN method. The TT_CNN Scheme achieves a sensitivity of 99.26% and 98.6% in classifying glaucoma images for the MIAG RIMONE (Release 2) and MIAG RIMONE (Release 3) databases respectively. It also produces an overall accuracy of 99.26% and 99.1% using the Random Forest classifier for the MIAG RIMONE (Release 2) and MIAG RIMONE (Release 3) databases respectively.

Keywords: Glaucoma, convolutional neural network, Tripartite Tier.

1 INTRODUCTION

Glaucoma is a group of related eye conditions that damage the optic nerve, which transmits information from the eye to the brain. At later stages it shows symptoms such as blurred vision, severe eye and head pain, vomiting and sudden sight loss [1]. Glaucoma is often called the "silent thief of sight", since it causes no pain and shows no symptoms until perceptible vision loss occurs. Glaucoma can be managed effectively with timely diagnosis.

A Convolutional Neural Network (CNN) is a kind of deep neural network [2] that extracts features directly from images, provides higher accuracy and avoids manual feature extraction. It is used for many tasks such as object classification, speech recognition and so on. It takes an image as input and discerns the various features of the image using filters. Its purpose is to reduce the image without losing the information needed for accurate prediction.

The output obtained from every convolved image serves as the input to the next level. CNNs are suited to huge datasets, so the network can be trained to make many-layered decisions.

2 RELATED WORK

Raghavendra U [3] presented a CAD system to recognize glaucoma using a machine learning approach. An artificial neural network (autoencoder) was designed and trained to suppress noise and to learn efficient and significant features from retinal fundus images. Wei Zhou [4] presented a method for automatic detection of the OD using a low-rank representation based semi-supervised extreme learning machine (LRR-SSELM). Shuang Yu [5] presented a methodology for neovascularisation detection using machine learning; the vessels are segmented using a Gabor filter and the vessel features are extracted and classified using SVM. Akara Sopharak [6] proposed a machine learning technique for the detection of exudates, which can lead to loss of sight; the features are selected and classified using Naive Bayes and SVM classifiers. Cheng Wan [7] proposed a technique in which the features are extracted using a convolutional neural network. Several layers of the network are trained for localization of the optic disc region. The candidate pixels in the OD region are selected based on a threshold value, the centre of gravity of these pixels is computed and the OD region is identified for the detection of glaucoma. Qaisar Abbas [8] presented an approach for the diagnosis of glaucoma in which a multilayer CNN was used to extract features, a deep belief network was implemented to obtain the most distinctive deep features, and finally a softmax linear classifier was applied to classify the images as glaucoma or non-glaucoma. Sevastopolsky [9] proposed a method to segment the OD and OC using a U-Net convolutional neural network. Ruchir Srivastava [10] presented an approach using a deep neural network which learns the features of both PPA and OD, so that the OD can be differentiated from PPA.

Seema Tukaram Kamble [11] proposed a method to identify glaucoma. A classifier model was used to find the edges of the OD region. The edges found in the colour or grayscale image were converted to a binary image for analysis, and a feature extraction method (circle Hough transform) was used to locate the OD region accurately. Mamta Juneja [12] stated a methodology in which two neural networks work concurrently to segment the OD and OC in order to identify the disease.

Xiangyu Chen [13] presented a deep learning architecture with multiple layers to differentiate between glaucoma and non-glaucoma. Marcos Vinícius Ferreira [14] proposed a technique for automated identification of glaucoma using deep learning and texture attributes via phylogenetic diversity indexes. Andres Diaz-Pinto [15] presented a new method in which retinal images are synthesized, together with an approach to diagnose glaucoma based on Deep Convolutional Generative Adversarial Networks. Yunshu Qin [16] stated a disc-aware ensemble network which combines the deep hierarchical context of the retinal fundus image with the optic disc region for the automatic detection of glaucoma. Han Liu [17] presented a classical CNN technique in which different combinations of hyperparameters are considered and their selection is optimized in a coarse-to-fine manner. The model was applied separately to RGB fundus images to obtain two different segmentation outcomes, and the merged result was given as input to a U-Net model for further refinement of the optic disc. Baidaa Al-Bander [18] suggested a technique to detect the locations of the fovea and optic disc using a deep multiscale sequential CNN. Z. Wang [19] proposed a framework to identify the OD and OC regions; the morphology of each OD and OC region is acquired and the parameters of an ellipse are computed. Jongwoo Kim [20] presented a method for ROI detection using deep learning. The method uses CNNs to identify ROI and non-ROI images; multiple CNNs are trained on the dataset with various kernel sizes and strides in the first convolution layer, a window is moved over the image both horizontally and vertically, and the effective ROI is found by applying the CNN to each pane.

In [21], the green channel of the input retinal image is morphologically reconstructed using a round structuring element. The brighter areas are extracted and categorized as plausible OD and non-OD regions using six region-oriented features and a Gaussian Mixture Model classifier. In [22], the OD detection algorithm contains four steps. Based on the unique quality of each pixel, the subimage containing the OD is obtained from the input retinal image. Superpixels are generated using the Simple Linear Iterative Clustering (SLIC) algorithm and classified into disc or non-disc regions using the Adaboost algorithm, and the resultant OD area is fitted with a circular shape based on an Active Geometric Shape Model. Shijian Lu proposed a method [23] in which the circular brightness structure associated with the OD is captured with a line operator; the orientation of multiple line segments with minimum or maximum variation is used to find the OD. In [24], OD segmentation integrates local image information around each point of interest in a multidimensional feature space to improve accuracy, and the optic cup is segmented using vessel bends at the cup boundary.

Sandra Morales [25] proposed a method to recognize the optic disc. A grayscale image is obtained by applying Principal Component Analysis (PCA), the different regions in the grayscale image are clearly discriminated, and the vessels are removed using morphological operations. A Stochastic Watershed Transformation is then applied to find the location of the OD automatically and accurately. Darvish [26] presented a multistage algorithm for exudate detection.

3 METHODOLOGY

CNN OVERVIEW

The CNN contains many layers such as

 Input layer

 Convolution layer

 ReLU layer

 Max Pooling layer

 Fully Connected layer

 Softmax layer

 Classification layer

Input Layer

The input layer captures the raw input image with its width, height and depth. The depth of an RGB image is 3 and that of a grayscale image is 1. The input can be represented as

Input image = h x w x d (1)

where h, w and d denote the height, width and depth of the image. The output of the first layer serves as the input to the succeeding layers.

Convolution Layer

This layer is used to extract features from an input image. Convolution is a mathematical operation that takes an image matrix and a filter as input. Consider an image matrix of height h, width w and depth d, with dimension denoted (h x w x d), and a filter of height hf, width wf and depth d, denoted (hf x wf x d). The convolved image, or feature map, is obtained with size

feature map size = (h - hf + 1) x (w - wf + 1) x 1 (2)

A 2D convolutional layer applies sliding convolutional filters to the input image: it shifts the filter along the input both vertically and horizontally, computes the dot product of the weights and the input at each position, and adds the bias term.

Figure 1 Convolved Image

Convolving the image with different filters can be used for applications such as edge detection, blurring and sharpening of images. The number of pixels by which the filter slides along the input matrix is the stride. When the stride equals one, the filter is moved one pixel at a time; in general, the filter is moved according to the stride value. Parts of the image where the filter does not fit can be discarded, retaining only the portion of the image that the filter covers.

Adding zeroes around the border of the input matrix (zero padding) makes it possible to apply the filter to the edge elements of the input image and allows the size of the feature maps to be controlled.
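As a concrete illustration of how the filter size, stride and zero padding determine the feature-map size, the short MATLAB sketch below evaluates the standard formula floor((h - hf + 2p)/s) + 1; the specific input and filter sizes are assumptions chosen for illustration only.

% Illustrative only: feature-map size for one convolution layer.
h  = 224;  w  = 224;   % input height and width (assumed)
hf = 4;    wf = 4;     % filter height and width (assumed)
p  = 1;                % zero padding added on each border
s  = 1;                % stride
outH = floor((h - hf + 2*p)/s) + 1;   % = 223
outW = floor((w - wf + 2*p)/s) + 1;   % = 223
fprintf('Feature map size: %d x %d\n', outH, outW);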

ReLU layer (Rectified Linear Unit)

This unit performs a non-linear operation and removes negative values. The output of the relu layer is

f(x) = max(0, x) (3)

where x is the input value. A threshold operation is applied to every element of the input: if the value of an element is less than 0, it is set to 0.
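A minimal element-wise sketch of this thresholding, using illustrative values:

% ReLU: negative entries of the feature map are set to zero (illustrative values).
x = [-2 0.5; 3 -1];
y = max(0, x);       % y = [0 0.5; 3 0]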

Figure 2 ReLU Layer

Pooling Layer

The spatial size of the convolved feature is reduced by this layer. It speeds up computation and controls overfitting. The categories of pooling are

 Max pooling

 Average pooling

 Sum pooling

Pooling downsamples the input by dividing it into sub-regions and computing a single value for each region, retaining the most important information.

Max pooling, average pooling and sum pooling return the maximum value, the mean value and the sum of each region of the feature map respectively.
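For example, applying non-overlapping 2 x 2 pooling regions with stride 2 to a small feature map gives the following max, average and sum outputs; the matrix values are assumptions used only for illustration.

% Pooling over non-overlapping 2x2 regions of a 4x4 feature map (illustrative values).
A = [1 3 2 4;
     5 6 1 0;
     7 2 9 8;
     3 4 6 5];
maxPool = [max(A(1:2,1:2),[],'all') max(A(1:2,3:4),[],'all');
           max(A(3:4,1:2),[],'all') max(A(3:4,3:4),[],'all')];   % [6 4; 7 9]
avgPool = [mean(A(1:2,1:2),'all') mean(A(1:2,3:4),'all');
           mean(A(3:4,1:2),'all') mean(A(3:4,3:4),'all')];       % [3.75 1.75; 4 7]
sumPool = 4*avgPool;                                             % sum of each 2x2 region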

Figure 3 Pooling Layer

Fully connected layer

The matrix values are flattened into a column vector, which is fed into a fully connected layer.

Softmax Layer

The softmax function is used to classify the outputs according to the number of classes. It is used for multi-class classification: it gives the probabilities of all classes, and the final target class is the one with the highest probability.

f(z_i) = e^(z_i) / Σ_{j=0}^{k} e^(z_j) (4)

where i = 0, 1, 2, ..., k, z is the input vector and j = 0, 1, 2, ..., k.
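A small numerical sketch of equation (4) for two classes, using illustrative scores:

% Softmax for a two-class output (illustrative scores z).
z = [2.0 0.5];
p = exp(z) ./ sum(exp(z));   % p ≈ [0.8176 0.1824], sums to 1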

Figure 4 Output Layer

Classification Layer

The classification layer infers the number of classes from the output size of the preceding layer and classifies the images based on the class probabilities.

SINGLE TIER CNN ARCHITECTURE

Figure 5 Single Tier CNN Architecture

Steps involved

1. Dataset acquisition
2. Feature extraction using hidden layers such as the convolution and relu layers
3. Classification of the class output using the softmax function

The image input layer captures the raw input image with dimension 224 x 224 x 3.

The convolution2dLayer( ) is applied to create a 2D convolved image, and its negative values are set to zero by applying reluLayer( ). The convolution2dLayer( ) is applied a second time to produce a convolved image with zero padding of size one along all edges of the layer input, and again the matrix values less than zero are set to zero using reluLayer( ).

Table 1 Layer Structure for Single Tier CNN

FUNCTION USED | LAYER STRUCTURE
imageInputLayer( ) | Input image with dimension 224x224x3
convolution2dLayer( ) + reluLayer( ) | ConvolutionLayer_1 (64 filters of size 4x4) and ReluLayer_1
convolution2dLayer( ) + reluLayer( ) | ConvolutionLayer_2 (64 filters of size 4x4) and ReluLayer_2
maxpoolingLayer( ) | Max Pooling Layer_1 with pool size 2 and stride 2
convolution2dLayer( ) + reluLayer( ) | ConvolutionLayer_3 (64 filters of size 4x4) and ReluLayer_3
maxpoolingLayer( ) + reluLayer( ) | Max Pooling Layer_2 with pool size 2 and stride 2, ReluLayer_4
dropoutLayer( ) + fullyConnectedLayer( ) | fc_1 (output size 100)
fullyConnectedLayer( ) | fc_2 (output size 2)
softmaxLayer( ) | Softmax Layer
classificationLayer( ) | Classification Layer

The maxpoolingLayer( ) is then applied and computes the maximum value for each region. The convolution2dLayer( ), reluLayer( ), maxpoolingLayer( ) and reluLayer( ) are applied once more, and the resulting output is flattened into a column vector and fed as input to the fully connected layer.

The fullyConnectedLayer( ) produces a layer with output size 100. The fullyConnectedLayer( ) is used a second time to create a layer with output size 2, which is passed to the next step. At this stage the softmaxLayer( ) is used to produce a layer that supports classification.

Finally, classificationLayer( ) is applied to create a layer that computes the cross entropy loss for the multiclass classification problem. It also infers the number of classes from the output size of the preceding layer.
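Following Table 1, the single tier network can be sketched with MATLAB Deep Learning Toolbox layer functions as below. The filter counts and sizes follow the table; the padding value and the dropout probability are assumptions made for illustration.

% Sketch of the single tier CNN in Table 1 (padding and dropout rate assumed).
layers = [
    imageInputLayer([224 224 3])
    convolution2dLayer(4, 64, 'Padding', 1)   % ConvolutionLayer_1
    reluLayer                                  % ReluLayer_1
    convolution2dLayer(4, 64, 'Padding', 1)   % ConvolutionLayer_2
    reluLayer                                  % ReluLayer_2
    maxPooling2dLayer(2, 'Stride', 2)          % Max Pooling Layer_1
    convolution2dLayer(4, 64, 'Padding', 1)   % ConvolutionLayer_3
    reluLayer                                  % ReluLayer_3
    maxPooling2dLayer(2, 'Stride', 2)          % Max Pooling Layer_2
    reluLayer                                  % ReluLayer_4
    dropoutLayer(0.5)                          % dropout probability assumed
    fullyConnectedLayer(100)                   % fc_1
    fullyConnectedLayer(2)                     % fc_2: normal vs glaucomatous
    softmaxLayer
    classificationLayer];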

TRIPARTITE TIER CNN ARCHITECTURE

Figure 6 TT_ CNN Architecture

The proposed Tripartite Tier CNN comprises three tiers such as top tier, middle tier and bottom tier.

 The input layer acquires the input retinal fundus color image having dimension as 224*224*3 and normalizes the data. The output of input layer is fed to the top, middle and bottom tiers simultaneously.

 The input retinal image is passed to the convolutional layer to obtain a feature map. The filters used in the convolutional layer extract features from the input image. Filters of varying sizes are applied to extract different features for edge detection, blurring, colour recognition and sharpening of images. The convolved output image is then passed to the relu layer, an activation function that discards negative values.

 Again the activated output is pushed to convolution layer to achieve deep feature extraction. The convolved output is activated using relu function.

 The resulting output is then fed to a pooling layer that reduces the number of parameter values.

 The pooled output is processed using convolution and relu layer.

 The resulting output is passed to another pooling layer, which divides the output into multiple regions, returns the maximum value of each region, and is activated once more.

 The output matrix values are flattened into a column vector using the dropout and fully connected layers.

 The column vector values obtained from the top, middle and bottom tiers are concatenated, and the merged output is fed to a fully connected layer which generates the output in the form of classes.

 The softmax function then computes the ratio of the exponential of each input value to the sum of the exponentials, which ranges between 0 and 1.

 Finally, the classification layer classifies the images based on the class probabilities, as sketched below.
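The tier structure described above can be sketched as a MATLAB layer graph in which the three tiers branch from a shared input and are merged before the final fully connected layer. The filter sizes (3x3, 5x5 and 7x7, 64 filters each) follow Table 2 below; the layer names, padding, dropout probability and the use of depthConcatenationLayer to realize the concatenation step are assumptions for illustration.

% Sketch of the Tripartite Tier CNN: three parallel tiers merged by concatenation.
tier = @(fs, tag) [
    convolution2dLayer(fs, 64, 'Padding','same', 'Name',['conv1_' tag])
    reluLayer('Name',['relu1_' tag])
    convolution2dLayer(fs, 64, 'Padding','same', 'Name',['conv2_' tag])
    reluLayer('Name',['relu2_' tag])
    maxPooling2dLayer(2, 'Stride',2, 'Name',['pool1_' tag])
    convolution2dLayer(fs, 64, 'Padding','same', 'Name',['conv3_' tag])
    reluLayer('Name',['relu3_' tag])
    maxPooling2dLayer(2, 'Stride',2, 'Name',['pool2_' tag])
    reluLayer('Name',['relu4_' tag])
    dropoutLayer(0.5, 'Name',['drop_' tag])
    fullyConnectedLayer(100, 'Name',['fc_' tag])];

lgraph = layerGraph(imageInputLayer([224 224 3], 'Name','input'));
lgraph = addLayers(lgraph, tier(3, 'top'));   % top tier: 3x3 filters
lgraph = addLayers(lgraph, tier(5, 'mid'));   % middle tier: 5x5 filters
lgraph = addLayers(lgraph, tier(7, 'bot'));   % bottom tier: 7x7 filters
lgraph = addLayers(lgraph, [
    depthConcatenationLayer(3, 'Name','concat')   % merge the three 100-dim outputs
    fullyConnectedLayer(2, 'Name','fc_out')        % normal vs glaucomatous
    softmaxLayer('Name','softmax')
    classificationLayer('Name','output')]);
lgraph = connectLayers(lgraph, 'input', 'conv1_top');
lgraph = connectLayers(lgraph, 'input', 'conv1_mid');
lgraph = connectLayers(lgraph, 'input', 'conv1_bot');
lgraph = connectLayers(lgraph, 'fc_top', 'concat/in1');
lgraph = connectLayers(lgraph, 'fc_mid', 'concat/in2');
lgraph = connectLayers(lgraph, 'fc_bot', 'concat/in3');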

Table 2 Layer Structure for TT_CNN

FUNCTION USED | TOP TIER | MIDDLE TIER | BOTTOM TIER
imageInputLayer( ) | Input image with dimension 224x224x3 (shared by all tiers)
convolution2dLayer( ) + reluLayer( ) | conv_1 (64 filters, 3x3) and Relu_1 | conv_5 (64 filters, 5x5) and Relu_5 | conv_3 (64 filters, 7x7) and Relu_3
convolution2dLayer( ) + reluLayer( ) | conv_2 (64 filters, 3x3) and Relu_2 | conv_6 (64 filters, 5x5) and Relu_6 | conv_4 (64 filters, 7x7) and Relu_4
maxpoolingLayer( ) | MaxPool_2 (pool size 2, stride 2) | MaxPool_6 (pool size 2, stride 2) | MaxPool_4 (pool size 2, stride 2)
convolution2dLayer( ) + reluLayer( ) | conv_8 (64 filters, 3x3) and Relu_8 | conv_7 (64 filters, 5x5) and Relu_7 | conv_9 (64 filters, 7x7) and Relu_9
maxpoolingLayer( ) + reluLayer( ) | MaxPool_3 (pool size 2, stride 2) and Relu_11 | MaxPool_1 (pool size 2, stride 2) and Relu_10 | MaxPool_5 (pool size 2, stride 2) and Relu_12
dropoutLayer( ) | Dropout_2 | Dropout_1 | Dropout_3
fullyConnectedLayer( ) | fc_2 (output size 100) | fc_1 (output size 100) | fc_3 (output size 100)
concatenationLayer( ) | Concatenation Layer (shared)
fullyConnectedLayer( ) | fc_4 (output size 2) (shared)
softmaxLayer( ) | Softmax Layer (shared)
classificationLayer( ) | Classification Layer (shared)

4 EXPERIMENTAL RESULTS AND ANALYSIS

4.1 DATA SET

The single tier CNN methodology and the TT_CNN methodology were evaluated on the MIAG RIMONE (Release 2) database, which contains 255 normal images and 200 glaucomatous images, and on the MIAG RIMONE (Release 3) database, which contains 85 normal images and 39 glaucomatous images.
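The paper does not list the exact training pipeline, so the sketch below only illustrates one plausible MATLAB workflow under assumptions: the RIM-ONE images are stored in class-named folders ('normal' and 'glaucoma'), a 70/30 split is used, the network is the layer graph sketched earlier, and a Random-Forest-style classifier (bagged trees) is fitted on the concatenated fully connected activations. The folder names, split ratio and training options are all assumptions.

% Illustrative workflow only; folder layout, split ratio and options are assumed.
imds = imageDatastore('RIMONE_release2', 'IncludeSubfolders', true, ...
                      'LabelSource', 'foldernames');          % 'normal' / 'glaucoma' folders
[imdsTrain, imdsTest] = splitEachLabel(imds, 0.7, 'randomized');
augTrain = augmentedImageDatastore([224 224 3], imdsTrain);    % resize to the input size
augTest  = augmentedImageDatastore([224 224 3], imdsTest);

opts = trainingOptions('sgdm', 'MaxEpochs', 20, 'MiniBatchSize', 32, 'Verbose', false);
net  = trainNetwork(augTrain, lgraph, opts);                   % lgraph from the sketch above

% Use the merged fully connected activations as features for the classical classifiers.
XTrain = activations(net, augTrain, 'concat', 'OutputAs', 'rows');
XTest  = activations(net, augTest,  'concat', 'OutputAs', 'rows');
rfModel    = fitcensemble(XTrain, imdsTrain.Labels, 'Method', 'Bag');   % bagged trees
predLabels = predict(rfModel, XTest);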

4.2 PERFORMANCE METRICS

The performance was examined using metrics such as sensitivity, specificity, precision, accuracy, F1 score and recall for the two datasets.

Table 3 Metrics

Metric | Formula | Definition of terms
TPR, Sensitivity or Recall | TPR = TP / (TP + FN) | True positive (TP): number of glaucomatous images classified as glaucomatous
True Negative Rate or Specificity | TNR = TN / (TN + FP) | True negative (TN): number of healthy images classified as healthy
Precision | P = TP / (TP + FP) | False positive (FP): number of healthy images classified as glaucomatous
Accuracy | ACC = (TP + TN) / (TP + FP + TN + FN) | False negative (FN): number of glaucomatous images classified as normal
F1 Score (F1) | F1 = 2TP / (2TP + FP + FN) |
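A brief sketch of how the metrics in Table 3 can be computed from predicted and true labels; the variable names follow the illustrative workflow above, and treating 'glaucoma' as the positive class is an assumption.

% Confusion counts with 'glaucoma' as the positive class (illustrative).
isPos   = (imdsTest.Labels == 'glaucoma');
predPos = (predLabels == 'glaucoma');
TP = sum( predPos &  isPos);   FN = sum(~predPos &  isPos);
TN = sum(~predPos & ~isPos);   FP = sum( predPos & ~isPos);

sensitivity = TP / (TP + FN);                 % recall / TPR
specificity = TN / (TN + FP);                 % TNR
precision   = TP / (TP + FP);
accuracy    = (TP + TN) / (TP + TN + FP + FN);
f1          = 2*TP / (2*TP + FP + FN);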

Sensitivity measures the proportion of glaucomatous images that are correctly recognized out of the total number of glaucomatous images. Specificity measures the proportion of correctly identified normal images out of all normal images. Accuracy is the proportion of images correctly detected as normal or glaucomatous out of the whole pool of images. Precision measures the proportion of images labelled as glaucomatous that are actually glaucomatous. The F1 score is the harmonic mean of precision and recall. The performance of the classifiers using the single tier CNN and the TT_CNN Scheme for the two datasets is shown in the table below.

Table 4 Performance Metrics for MIAG RIMONE Release 2 and Release 3 using Single Tier CNN and TT_CNN methodology

Method | Dataset | Classifier | Accuracy | Sensitivity | Specificity | Precision | Recall | Fscore
Single Tier CNN | MIAG RIMONE Release 2 | SVM | 0.8897 | 0.896 | 0.896 | 0.8782 | 0.896 | 0.8845
Single Tier CNN | MIAG RIMONE Release 2 | RF | 0.9265 | 0.9293 | 0.9293 | 0.9169 | 0.9293 | 0.9222
Single Tier CNN | MIAG RIMONE Release 2 | KNN | 0.8676 | 0.866 | 0.866 | 0.8555 | 0.866 | 0.8599
Single Tier CNN | MIAG RIMONE Release 2 | DT | 0.8382 | 0.8344 | 0.8344 | 0.8248 | 0.8344 | 0.8288
Single Tier CNN | MIAG RIMONE Release 3 | SVM | 0.7568 | 0.7067 | 0.7067 | 0.7389 | 0.7067 | 0.7161
Single Tier CNN | MIAG RIMONE Release 3 | RF | 0.8378 | 0.8045 | 0.8045 | 0.8322 | 0.8045 | 0.815
Single Tier CNN | MIAG RIMONE Release 3 | KNN | 0.7297 | 0.6859 | 0.6859 | 0.7028 | 0.6859 | 0.6917
Single Tier CNN | MIAG RIMONE Release 3 | DT | 0.7838 | 0.7276 | 0.7276 | 0.7817 | 0.7276 | 0.7413
TT_CNN | MIAG RIMONE Release 2 | SVM | 0.9926 | 0.9919 | 0.9919 | 0.9933 | 0.9919 | 0.9926
TT_CNN | MIAG RIMONE Release 2 | RF | 0.9926 | 0.9926 | 0.9926 | 0.9928 | 0.9926 | 0.9926
TT_CNN | MIAG RIMONE Release 2 | KNN | 0.9779 | 0.9779 | 0.9779 | 0.978 | 0.9779 | 0.9779
TT_CNN | MIAG RIMONE Release 2 | DT | 0.9706 | 0.9706 | 0.9706 | 0.971 | 0.9706 | 0.9706
TT_CNN | MIAG RIMONE Release 3 | SVM | 0.982 | 0.979 | 0.979 | 0.982 | 0.979 | 0.98
TT_CNN | MIAG RIMONE Release 3 | RF | 0.991 | 0.986 | 0.986 | 0.994 | 0.986 | 0.99
TT_CNN | MIAG RIMONE Release 3 | KNN | 0.964 | 0.96 | 0.96 | 0.961 | 0.96 | 0.96
TT_CNN | MIAG RIMONE Release 3 | DT | 0.955 | 0.94 | 0.94 | 0.959 | 0.94 | 0.948

From this table, it can be seen that the RF classifier gives better results than the other classifiers using the single tier CNN methodology. With the TT_CNN methodology, all the classifiers show improved results compared with the single tier CNN methodology. Among the four classifiers, the RF classifier exhibits the best results, with SVM showing results close to those of RF. The KNN and Decision Tree classifiers show the lowest performance of the four.

The graphs below show the performance of the classifiers for the two datasets.

Figure 7 Performance Metrics for MIAG RIMONE Release 2 using Single Tier CNN and TT_ CNN methodology

The Support Vector Machine classifier shows results close to the Random Forest classifier, while the K-Nearest Neighbour and Decision Tree classifiers show the lowest performance for both datasets.

Figure 8 Performance Metrics for MIAG RIMONE Release 3 using Single Tier CNN and TT_ CNN methodology

From Figure 7, it can be seen that all the classifiers show improved results using the Tripartite Tier CNN compared with the Single Tier CNN. For the MIAG RIMONE Release 2 database, the accuracy is higher for TT_CNN by 10.29%, 6.55%, 11.03% and 13.24% for SVM, RF, KNN and DT respectively. The F-score is 10.81%, 7.04%, 11.8% and 14.8% greater with the TT_CNN technique for SVM, RF, KNN and DT.

From Figure 8, it can be seen that all the classifiers show improved results using the Tripartite Tier CNN compared with the Single Tier CNN. For the MIAG RIMONE Release 3 database, the accuracy is higher for TT_CNN by 22.52%, 15.32%, 23.43% and 17.12% for SVM, RF, KNN and DT respectively. The F-score is 26.39%, 17.5%, 26.83% and 20.67% greater with the TT_CNN technique for SVM, RF, KNN and DT.

The performance of the classifiers depends mainly on the dataset and the data. In the proposed method, RF performs better than the other three classifiers because it has lower variance and works well on large datasets. Moreover, it merges the outcomes of individual decision trees and thereby reduces overfitting.

SVM produces results close to those of RF because it is robust. It works particularly well on datasets with a large number of features and provides high accuracy, especially for two-class problems. It is efficient in high-dimensional spaces and is best suited to linearly separable data. Since the choice of kernel is a critical factor in the classification phase, SVM shows good results.

KNN shows moderate performance because it does not work effectively with high-dimensional data and large datasets. For every test sample, the distance to each training sample has to be computed, which makes testing time-consuming.

DT shows moderate performance because it splits the data on a single feature at each node, which can lead to overfitting. Decision trees are also unstable under small variations in the data, which leads to imprecise results.

5 CONCLUSION

The single tier CNN method was implemented for the diagnosis of glaucoma, and a novel approach, the TT_CNN Scheme, was proposed to obtain improved results. This scheme has three tiers, each comprising the hidden layers. The input retinal fundus images are processed through the hidden layers, and the resulting outputs are combined and fed to the classification stage. Classifiers such as the SVM, KNN, RF and Decision Tree classifiers are used for classification. The performance of the TT_CNN is found to be excellent when compared with the results of the single tier CNN method, and both the SVM and RF classifiers show excellent results with the TT_CNN Scheme.

REFERENCES

1. Glaucoma Research Foundation, https://www.glaucoma.org/.

2. https://towardsdatascience.com/a-comprehensive-guide-to-convolutional-neural- networks-the-eli5-way

3. Raghavendra U, Anjan Gudigar, Sulatha V. Bhandary, Tejaswi N Rao, “A Two Layer Sparse Autoencoder for Glaucoma Identification with Fundus Images”, Journal of Medical Systems, DOI: 10.1007/s10916-019-1427-x, July 2019.

4. Wei Zhou, Shaojie Qiao, Yugen Yi, Nan Han, Yuqi Chen & Gang Lei, “Automatic optic disc detection using low-rank representation based semi-supervised extreme learning machine”, International Journal of Machine Learning and Cybernetics, 12 February 2019.

5. Shuang Yu , Di Xiao , Yogesan Kanagasingam, “Machine Learning Based Automatic Neovascularization Detection on Optic Disc Region”, IEEE Journal of Biomedical and Health Informatics , Volume: 22 , Issue: 3 , May 2018.

6. Akara Sopharak, Matthew N. Dailey , Bunyarit Uyyanonvara , Sarah Barman , Tom Williamson , Khine Thet Nwe and Yin Aye Moe, “Machine learning approach to automatic exudate detection in retinal images from diabetic patients “,Journal of Modern Optics Vol. 57, No. 2, 20 January 2010.

7. Xu P., Wan C., Cheng J., Niu D., Liu J. (2017) Optic Disc Detection via Deep Learning in Fundus Images. In: Cardoso M. et al. (eds) Fetal, Infant and Ophthalmic Medical Image Analysis. OMIA 2017, FIFI 2017. Lecture Notes in Computer Science, vol 10554. Springer, Cham.

8. Qaisar Abbas, “ Glaucoma-Deep: Detection of Glaucoma Eye Disease on Retinal Fundus Images using Deep Learning”, International Journal of Advanced Computer Science and Applications, Vol. 8, No. 6, 2017.

9. Sevastopolsky. “A Optic disc and cup segmentation methods for glaucoma detection with modification of U-Net convolutional neural network”, Pattern Recognition and Image Analysis, volume 27, pages 618–624, 2017.

10. Ruchir Srivastava, Jun Cheng, Damon W. K. Wong, Jiang Liu, “Using deep learning for robustness to parapapillary atrophy in optic disc segmentation”, 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), DOI: 10.1109/ISBI.2015.7163985, 23 July 2015.

11. Seema Tukaram Kamble, S. A. Patil, “Automatic Detection of Optic Disc using Structural Learning”, International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, IJERTV7IS050091, Vol. 7, Issue 05, May 2018.

12. Juneja, M., Singh, S., Agarwal, N. et al. Automated detection of Glaucoma using deep learning convolution network (G-net). Multimed Tools Appl , https://doi.org/10.1007/s11042-019-7460-4 , 2019.

13. Xiangyu Chen, Yanwu Xu, Damon Wing Kee Wong, Tien Yin Wong, Jiang Liu, “Glaucoma detection based on deep convolutional neural network”, 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2015.

14. Marcos Vinícius dos Santos Ferreira, Antonio Oseas de Carvalho Filho, Alcilene Dalília de Sousa, Aristófanes Corrêa Silva, Marcelo Gattass, “Convolutional neural network and texture descriptor-based automatic detection and diagnosis of glaucoma”, Expert Systems with Applications, Volume 110, Pages 250-263, 15 November 2018.

15. A. Diaz-Pinto, A. Colomer, V. Naranjo, S. Morales, Y. Xu and A. F. Frangi, "Retinal Image Synthesis and Semi-Supervised Learning for Glaucoma Assessment," in IEEE Transactions on Medical Imaging, vol. 38, no. 9, pp. 2211-2218, Sept. 2019.

16. Yunshu Qin, Ammar Hawbani, “A Novel Segmentation Method for Optic Disc and Optic Cup Based on Deformable U-net”, Conference: IEEE-2018 International Conference on Artificial Intelligence and Big Data, February 2019.

17. Lei Wang, Han Liu, Yaling Lu, Hang Chen, Jian Zhang, Jiantao Pu, “A coarse-to-fine deep learning framework for optic disc segmentation in fundus images”, Biomedical Signal Processing and Control, Volume 51, Pages 82-89, May 2019.

18. Baidaa Al-Bander, Waleed Al-Nuaimy, Bryan M. Williams, Yalin Zheng, “Multiscale sequential convolutional neural networks for simultaneous detection of fovea and optic disc”, Biomedical Signal Processing and Control, Volume 40, Pages 91-101, February 2018.

19. Z. Wang, N. Dong, S. D. Rosario, M. Xu, P. Xie and E. P. Xing, "Ellipse Detection of Optic Disc-and-Cup Boundary in Fundus Images," 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, pp. 601-604, 2019.

20. Jongwoo Kim, Sema Candemir, and George R.Thoma, Emily Y. Chew, “Region of Interest Detection in Fundus Images Using Deep Learning and Blood Vessel Information”, 2018 IEEE 31st International Symposium on Computer-Based Medical Systems, DOI 10.1109/CBMS.2018.00069, 2018.

21. Sohini Roychowdhury, “Optic disc boundary and vessel origin segmentation of fundus images”, IEEE Journal of Biomedical and Health Informatics,2015.

22. Zhun Fan, Yibiao Rong, Xinye Cai, Fang Li, Wenji Li, Huibiao Lin, “Detecting Optic disk based on Adaboost and Active geometric shape model”, IEEE Conference on Cyber Technology in Automation, Control and Intelligent Systems,June 2015

23. Shijian Lu, Joo Hwee Lim,’Automatic Optic Disc Detection from Retinal Images by a Line Operator’, IEEE Transactions on Biomedical Engineering, Vol.58, Issue 1, pp. 88- 94, 2011.

24. G. D. Joshi, J. Sivaswamy, and S. R. Krishnadas, “Optic disk and cup segmentation from monocular color retinal images for glaucoma assessment,” IEEE Trans. Med. Imag., vol. 30, no. 6, pp. 1192–1205, Jun. 2011.

25. Sandra Morales, Valery Naranjo, David Perez, Amparo Navea, Mariano Alcaniz, “Automatic Detection of Optic Disc Based on PCA and Stochastic Watershed”, EUSIPCO, 2012.

26. J. Darvish and M. Ezoji, “Morphological Exudate Detection in Retinal Images using PCA-based Optic Disc Removal”, Journal of AI and Data Mining, Vol 7, No 4, DOI: 10.22044/JADM.2019.1488, 2019.
