An Extensive Analysis on Diabetic Retinopathy Prediction Using Deep Learning Approaches

Ramitha M A
Research Scholar, Dept. of CSE, Karpagam Academy of Higher Education, Coimbatore
[email protected]

Dr. N. Mohanasundaram
Professor, Dept. of CSE, Karpagam Academy of Higher Education, Coimbatore
[email protected]

Abstract- Diabetic Retinopathy (DR), a complication of diabetes mellitus, leads to loss of vision. Computer-aided diagnosis over retinal fundus images is an effective way to predict the disease at an earlier stage and assist physicians. Diverse machine learning approaches have been adopted for predicting these complications; however, they incur computational complexity because feature extraction is performed separately. This drawback can be overcome with deep learning (DL) approaches, which perform feature extraction and classification concurrently and thereby enhance prediction accuracy. This work provides an extensive analysis and review of approaches used for DR prediction. Various challenges are identified that need to be addressed using emerging DL approaches. These approaches are extremely robust and efficient in predicting DR, handle the associated learning challenges, and provide direction for further analysis.

Keywords- Diabetic Retinopathy, deep learning, computational complexity, feature extraction, and classification

1. Introduction

Diabetic Retinopathy (DR) is a leading cause of blindness due to diabetes mellitus. It is a major complication of diabetes, and its advanced stages are extremely complex to treat; therefore, earlier prediction of the disease is essential. Moreover, DR places a significant burden on the healthcare system owing to the huge number of patients relative to the number of technical experts. This motivates modelling an automated diagnostic system to assist in early DR diagnosis. Various attempts have been made in this direction, and several feature-extraction-based techniques provide promising solutions for predicting DR regions from retinal fundus images.

Conventional machine-learning (ML) methods using hand-engineered features are generally utilized for DR prediction, and several extensive analyses have reviewed these conventional approaches. For instance, the authors of [1] classify DR diagnosis approaches into clustering-based models, mathematical morphology, deformable and thresholding models, retinal lesion tracking, hybrid models, and matched filtering approaches.

The authors of [2] analyse approaches that extract lesion features from the provided images, such as texture, blood vessel area, micro-aneurysms, exudates, and haemorrhages. The authors of [3] perform extensive research on the prediction of exudates, and some algorithms have been constructed for segmenting retinal vessels. The authors of [4] model various approaches for glaucoma prediction and optic disc segmentation. However, expert knowledge is needed to handle these hand-engineered features, and selecting appropriate features requires careful examination of diverse methods and tiresome parameter evaluations; approaches that rely on hand-crafted features do not simplify this process.

Recently, enormous datasets and the remarkable computing power provided by graphical processing units have motivated research into deep learning approaches. These approaches show outstanding performance in diverse computer vision tasks and attain better decision-making than conventional hand-crafted feature selection approaches [5]. Various learning approaches are designed for diverse tasks to examine retinal images and to build automatic computer-aided diagnosis models. This work examines present DL approaches utilized for DR prediction and highlights their importance and the challenges that need to be addressed by recent research. Initially, an extensive analysis is carried out on diverse DL models and DL-based approaches for DR prediction. Finally, this work summarizes the research gaps, the challenges that need to be addressed, and future research directions for modelling and training learning approaches for DR prediction. Fig 1a to Fig 1f depict retinal images for eye disease detection.

Fig 1a. Normal retina; Fig 1b. Mild DR; Fig 1c. Moderate DR
Fig 1d. Severe DR; Fig 1e. Proliferative DR; Fig 1f. Macular edema

2. Reviews on DR datasets

Joshi et al. [6] discuss a dataset known as the Singapore Epidemiology of Eye Diseases (SEED), which is composed of 236 images collected to examine major eye diseases including cataract, glaucoma, refractive errors, AMD, and DR. The images contain OC and OD regions with trained grading functions as ground truth for segmentation purposes.


Reference [7] describes a digital image dataset of the optic nerve, used for segmenting the optic nerve and its corresponding pathologies. It is annotated by independent experts, the images are centred on the optic nerve, and they are preserved in slide format. Work [8] concentrates on an automated retinal image analyser (ARIA) built from tracings of the blood vessels, fovea, and OD locations; it is used for diagnosing DR and AMD and was captured with a Zeiss FF450 fundus camera. Sivaswamy et al. [9] discuss a retinal image database for optic nerve evaluation (Drishti-GS), modelled for predicting glaucoma and composed of 169 optic nerve regions cropped manually from the provided images. The Kaggle dataset [10] is composed of high-resolution retinal images acquired under various circumstances and graded by expert ophthalmologists; images are assigned a grading scale of 0-4, where 0 → no risk, 1 → mild, 2 → moderate, 3 → severe, and 4 → PDR. The E-ophtha dataset [11] provides digital retinal images for vessel extraction collected from DR screening. It is composed of 40 randomly chosen fundus images, of which 33 show no DR sign and 7 show mild DR; it is partitioned into training and testing sets of 20 images each and provides pixel-level annotation. Hoover et al. [12] discuss the U.S. health institute funded programme known as the Structured Analysis of the Retina (STARE), composed of 13 images of diseased human eyes. It offers a disease code list and image names. The optic nerve and blood vessels have pixel-level annotation without any grading, and manual segmentation is performed by labelling the vessel pixels of each image. It deals with the challenging OD detection problem owing to the appearance of retinal disease. Kälviäinen et al. [13] discuss a dataset composed of 90 colour fundus images annotated by independent experts. It is also known as the 'calibration level 1' fundus image database (DIARETDB1) and is partitioned into 28 training and 61 testing images. The CHASE dataset [14] constructed a diabetic retinopathy image database to overcome various shortcomings in grading and the constrained number of observers. Images with visible fragile vessels are chosen by the experts, and each image is graded by five different experts. The annotations include soft EX, HMs, MAs, blood vessels, the macula, and ODs. Zhang et al. [15] model an online fundus image database for analysing glaucoma (ORIGA-light). It is an online repository with ground truth values for sharing retinal image analysis and supporting diagnosis. The images were acquired over 3 years and specifically concentrate on optic cup and OD segmentation for predicting glaucoma. The NIH-funded Age-Related Eye Disease Study (AREDS) dataset [16] is a long-term, multi-centre study with 595 participants, modelled to evaluate the clinical course of age-related eye disease. The participants' illness was graded long-term through a reading centre, ophthalmologic evaluation, and visual acuity testing.

The computing and informatics department of Lincoln University designed a retinal vessel image dataset [17] for width estimation. It is composed of 193 annotated segments and 16 mydriatic images and includes 5066 profiles marked by three independent experts. It evaluates the precision and accuracy of vessel width measurement, and the 16 images are partitioned into four sections: kick-point images (2 images), central light reflex images (2 images), vascular disease images (4 images), and high-resolution images (8 images). The EyePACS dataset [18] describes the Eye Picture Archive and Communication System, a telemedicine system with a flexible protocol for screening DR in collaboration with physicians. Fundus images are uploaded easily to the EyePACS web platform, which computes the severity and presence of discrete retinal lesions related to DR. It uses Canon CR-1 non-mydriatic cameras and is accessed over the EyePACS website. Images are graded for HMs, MAs, cotton wool spots, intra-retinal micro-vascular abnormalities, venous beading, with/without MA, pre-retinal HE and HM, vitreous HM, and fibrous proliferation; the occurrence of laser scars is also recorded. Images are presented on an online grading template that records each lesion type as yes (present), no (absent), or cannot grade.
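As an illustration of how such graded screening data are typically consumed, the minimal sketch below reads a Kaggle-style label file and inspects the class distribution across the 0-4 grades described above. The file name and column names follow the public Kaggle release but are assumptions here and should be checked against the actual download.

```python
# Minimal sketch: reading Kaggle-style DR grade labels (0-4) and inspecting
# the class distribution. File name and column names ("image", "level") are
# assumptions based on the public release, not part of the reviewed papers.
import pandas as pd

GRADE_NAMES = {0: "no DR", 1: "mild", 2: "moderate", 3: "severe", 4: "PDR"}

labels = pd.read_csv("trainLabels.csv")            # assumed columns: image, level
labels["grade_name"] = labels["level"].map(GRADE_NAMES)

# Class counts reveal the strong imbalance typical of DR screening data.
print(labels["grade_name"].value_counts())
```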


3. Reviews on automatic retinopathy detection

This section discusses the DR lesion types, DR stages, DR grading, and the detection framework.

Some research directions in this area still need to be examined and addressed. In [19], the authors discuss the earlier stage of DR and the retinal damage caused by disruption of the internal elastic lamina. Vision diminishes owing to loss of the endothelial barrier function, which causes retinal edema and leakage. Micro-aneurysms (MAs) are extremely small and appear as red spots with sharp margins. Blot and dot haemorrhages occur within the retinal layers. Leakage from damaged capillaries leads to exudates, which appear irregular in shape and yellow in colour. There are two types of exudates (EX): hard and soft. Soft exudates appear as grey clouds occurring near arterioles, whereas hard exudates possess sharp margins and form circular rings and blocks. EX differ between bright and dark lesions. Variations in vein diameter are termed venous beading and occur in the advanced non-proliferative diabetic retinopathy stages.

Patz et al. [20] discuss intra-retinal micro-vascular abnormalities, which indicate either new blood vessel growth or pre-existing capillaries. Retinal vessels growing towards the vitreous are known as neo-vascularization, which leads to hard exudates and retinal thickening within one disc diameter of the fovea, the region responsible for central vision. The optic disc (OD) plays a primary role in DR prediction, as it shows the highest contrast among circular-shaped regions. It is utilized as a reference frame for predicting severe eye pathologies such as disc drusen, glaucoma, and optic disc pit, and for verifying disc neo-vascularization. The OD is also utilized to pinpoint other structures such as the fovea. In a normal retina, the OD edges are well-defined and clear. Fig 2 depicts the flow diagram for retinal disease prediction.

Fig 2 Flow diagram for retinal disease prediction: retinal image dataset → pre-processing, ROI detection, and cropping → image segmentation → feature extraction and learning → classification into normal or disease-affected retinal image

Fleming et al. [21] discuss the stages and severity classes, namely proliferative and non-proliferative DR. The latter is considered the earlier stage, arising when diabetes begins to damage the retinal blood vessels.


This condition is frequent among people with diabetes. The vessels begin to leak blood or fluid, which causes the retina to swell. Over time, the edema thickens the retina and blurs vision. The features include hard EX with or without haemorrhage. Proliferative DR is the advanced stage, characterized by the growth of new blood vessels; it is a proliferation of abnormal vasculature into the vitreous cavity. These vessels bleed into the vitreous cavity and cause crucial visual loss owing to haemorrhage. In [22], screening and examination of the retina are performed using ophthalmoscopy, which requires dilated pupils to classify and grade the pathology. Grading is a crucial activity of a DR screening programme for predicting retinal disease. It is carried out by well-trained technicians and is necessary for recovering potentially blinding eye conditions such as diabetic eye disease and age-related macular degeneration. High-level DR prediction is divided into two tasks: image-based and lesion-based detection. The former concentrates on evaluation at the image level during the screening process, as it computes DR signs. The latter includes two phases: lesion classification and lesion segmentation or detection. The detection process proposes potential ROIs; however, it includes false positives, and the classification phase is utilized to eliminate them. Overall, it is a screening task that categorizes an image as normal or as showing DR signs.

The DR detection framework includes pre-processing, segmentation, feature extraction, and an appropriate classification process. It is divided into two learning paradigms: supervised and unsupervised. The former trains a system on labelled data to infer a functional mapping, while the latter identifies hidden patterns from the intrinsic properties and similarity of unlabelled samples. Unlike hand-crafted feature extraction, DL approaches integrate automatic feature learning and classification into a unified framework, and training is done in an end-to-end (E2E) manner; a minimal sketch of such a pipeline is given below. Table 1 compares the grading process.
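The following sketch illustrates, under stated assumptions, what an end-to-end framework looks like: convolutional feature extraction and classification are trained jointly from labelled fundus images. The image size, channel counts, and 5-grade output are illustrative choices, not the configuration of any specific paper reviewed here.

```python
# Minimal sketch of an end-to-end DR grading pipeline (feature learning and
# classification trained jointly). Sizes and the 5-grade head are assumptions.
import torch
import torch.nn as nn

class SmallDRNet(nn.Module):
    def __init__(self, num_grades: int = 5):
        super().__init__()
        self.features = nn.Sequential(            # feature extraction layers
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(           # classification layers
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, num_grades),   # assumes 224x224 inputs
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallDRNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One end-to-end update on a dummy batch (real data would come from a loader).
images = torch.randn(4, 3, 224, 224)               # pre-processed fundus crops
grades = torch.randint(0, 5, (4,))                 # R0-R4 labels
loss = criterion(model(images), grades)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```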

Table 1 Comparison of grading process

Grade | Features | Description
R0 → no DR | No abnormalities | 12-month re-screening
R1 → mild DR | Only MA | 12-month re-screening
R2 → moderate DR | Venous beading in two quadrants | 6-month re-screening
R3 → severe DR | Intra-retinal micro-vascular abnormalities | Re-screening
R4 → PDR | New vessels at OD | Re-screening
M0 → no ME | No retinal thickening | 12-month re-screening
M1 → mild ME | Retinal thickening at posterior pole | 6-month re-screening
M2 → moderate ME | Similar to mild ME signs | Laser treatment
M3 → severe ME | Retinal thickening that affects fovea centre | Laser treatment

4. Reviews on performance evaluation

Several metrics are used for evaluating DR prediction algorithms: precision, sensitivity, specificity, accuracy, ROC curve, F-score, Dice similarity coefficient (DSC), overlapping error, log loss, IoU, and boundary-based computation.

1) Accuracy: the proportion of appropriately classified samples over the total number of instances, as given in Eq. (1):


$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$ (1)

Here, TP (true positive) is the number of positive samples with DR that are properly categorized; TN (true negative) is the number of negative samples that are properly categorized; FP (false positive) is the number of negative samples that are incorrectly categorized as positive; and FN (false negative) is the number of positive samples that are incorrectly categorized as negative.

2) Sensitivity: also known as the true positive rate (TPR) or recall; the fraction of positive samples that are properly classified.

3) Specificity: also known as the true negative rate (TNR); the fraction of negative samples that are properly classified.

4) Precision: also known as the positive predictive value; the fraction of samples predicted as positive that are truly positive. These metrics are expressed in Eq. (2), Eq. (3), Eq. (4), and Eq. (5):

$\text{Sensitivity (Recall)} = \frac{TP}{TP + FN}$ (2)

$\text{Specificity} = \frac{TN}{TN + FP}$ (3)

$\text{Precision} = \frac{TP}{TP + FP}$ (4)

$F = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$ (5)
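A worked sketch of Eqs. (1)-(5) from confusion-matrix counts is given below; the counts used are made-up numbers purely for illustration.

```python
# Worked sketch of Eqs. (1)-(5) from confusion-matrix counts (toy values).
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    accuracy = (tp + tn) / (tp + tn + fp + fn)        # Eq. (1)
    sensitivity = tp / (tp + fn)                      # Eq. (2), recall / TPR
    specificity = tn / (tn + fp)                      # Eq. (3), TNR
    precision = tp / (tp + fp)                        # Eq. (4)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)  # Eq. (5)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision,
            "f_score": f_score}

print(classification_metrics(tp=80, tn=90, fp=10, fn=20))
```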

5) Logarithmic loss: it evaluates accuracy by penalizing false classifications. To compute the loss, the classifier must assign a probability to every class. It is expressed as in Eq. (6):

$\text{Log loss} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M} y_{ij}\log(p_{ij})$ (6)

Here, N is the number of samples; M is the number of labels; y_ij is a binary indicator of whether label j is the correct classification for instance i; and p_ij is the probability assigned to label j for that instance.
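A minimal sketch of Eq. (6) follows; the one-hot indicator matrix and predicted probabilities are toy values chosen only for illustration.

```python
# Minimal sketch of Eq. (6): multi-class log loss over N samples and M grades.
import numpy as np

def log_loss(y: np.ndarray, p: np.ndarray, eps: float = 1e-15) -> float:
    p = np.clip(p, eps, 1.0)                 # avoid log(0)
    n = y.shape[0]
    return -np.sum(y * np.log(p)) / n        # -(1/N) sum_i sum_j y_ij log p_ij

y = np.array([[1, 0, 0], [0, 1, 0]])                  # true grades (one-hot)
p = np.array([[0.7, 0.2, 0.1], [0.3, 0.6, 0.1]])      # predicted probabilities
print(log_loss(y, p))
```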

Additional metrics are utilized for evaluating segmentation performance, including intersection over union (IoU), overlapping error, DSC, and boundary-based evaluation. IoU and the overlapping error are expressed as in Eq. (7) and Eq. (8):

$IOU = \frac{\text{Area}(A \cap G)}{\text{Area}(A \cup G)}$ (7)

$E = 1 - IOU$ (8)

Here, A is the segmentation output and G is the manually segmented ground truth. Boundary-based evaluation is defined as the absolute point-wise localization error, obtained by evaluating the distance between the closed boundary curves. The distance between the curves is expressed as in Eq. (9):


$B = \frac{1}{n}\sum_{\theta=1}^{n}\sqrt{d_{g\theta}^{2} - d_{a\theta}^{2}}$ (9)

Here, d_gθ and d_aθ are the distances from the curve centroid to points on the ground-truth and algorithm boundaries, respectively, and n is the total number of angular samples. The distance between the ground-truth and predicted boundaries is ideally close to zero. DSC is mathematically expressed as in Eq. (10):

$DSC = \frac{2TP}{2TP + FP + FN}$ (10)

DSC values range from 0 to 1; a value closer to 1 indicates a superior segmentation outcome. Region-based precision-recall is utilized to evaluate boundary or edge detection on the overlapped region and projects segmentation quality into the precision-recall space.
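The sketch below computes Eqs. (7), (8), and (10) directly on binary segmentation masks (1 = lesion/vessel pixel, 0 = background); the masks are toy arrays used only to show the mechanics.

```python
# Minimal sketch of Eqs. (7), (8), and (10) on binary segmentation masks.
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    iou = tp / float(np.logical_or(pred, gt).sum())      # Eq. (7)
    overlap_error = 1.0 - iou                            # Eq. (8)
    dsc = 2 * tp / float(2 * tp + fp + fn)               # Eq. (10)
    return {"iou": iou, "overlap_error": overlap_error, "dsc": dsc}

pred = np.array([[0, 1, 1], [0, 1, 0]])                  # toy predicted mask
gt = np.array([[0, 1, 0], [0, 1, 1]])                    # toy ground truth
print(segmentation_metrics(pred, gt))
```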

Fig 3 ANN architecture

5. Reviews on deep learning

Various DL approaches are in use, including convolutional neural networks (CNNs), deep belief networks (DBNs), auto-encoders (AEs), and recurrent neural networks (RNNs). These learning architectures are explained below, and graphical representations are shown in Fig 4 to Fig 8.

a. Convolutional Neural Networks (CNNs)

The CNN model replicates the human visual system and is extensively utilized for diverse computer vision tasks. It is composed of three types of layers: convolutional, pooling, and fully connected (FC) layers. The initial layers use convolution to encode spatial information, while the FC layers encode global information. Well-known CNN models include ResNet, VGGNet, AlexNet, and GoogLeNet. Features are learned automatically, which results in superior performance. Early CNN models such as LeNet and AlexNet are composed of only a few layers. Lim et al. [23] elaborate on a deep CNN model, VGGNet, with 19 layers, which is deeper and yields superior performance. Deeper models such as ResNet, Inception, and GoogLeNet are applied to various computer vision tasks. Training a CNN requires a huge amount of data to avoid over-fitting and to achieve faster convergence; however, such large datasets are often not available in the medical domain. Transfer learning is therefore utilized: a pre-trained CNN is used as a feature extractor and then fine-tuned with data from the target domain. The fully convolutional network (FCN) is an extended CNN version in which FC layers are transformed into convolutional layers, and deconvolution layers produce an output map the same size as the input image; it is generally utilized for segmentation.
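As a hedged illustration of the transfer-learning strategy just described, the sketch below reuses an ImageNet-pretrained CNN as a frozen feature extractor and attaches a new 5-grade classification head. The backbone choice (ResNet-18) and the torchvision ≥ 0.13 weights API are assumptions, not the setup of the cited works.

```python
# Sketch of transfer learning: frozen pre-trained backbone + new DR-grade head.
# ResNet-18 and the weights string are illustrative assumptions.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # pre-trained feature extractor

for param in model.parameters():                   # freeze learned features
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 5)      # new head: grades R0-R4
trainable = [p for p in model.parameters() if p.requires_grad]
# Only `trainable` (the new head) is passed to the optimiser; unfreezing a few
# top convolutional blocks later is the usual fine-tuning step.
```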

Fig 4 Deep Neural Networks

Guo et al. [24] adopt a CNN model for classifying vessel and non-vessel pixels; it is composed of two FC and three convolutional layers. Sevastopolsky [25] proposes a pixel-wise supervised segmentation approach trained on the provided images; pre-processing is done with global contrast normalization, zero-phase whitening, and gamma correction, and the data are augmented with geometric transformations. This model is robust against vessel reflex and sensitive to vessels. Zilly et al. [26] perform retinal blood vessel segmentation as a regression task, applying a pre-trained VGG with modified FC layers integrated with convolutional layers before pooling; the convolutional feature maps are upsampled to the image size, concatenated, and trained over the resulting volume. Zhang et al. [27] extract discriminative features with a CNN and use k-NN with principal component analysis to estimate the local structure distribution, which is employed in a generalized probabilistic tracking model for segmenting the blood vessels. Fu et al. [28] use an FCN merged with structured prediction to segment blood vessels via multi-label inference. A layered CNN model is utilized for segmenting the blood vessels and fovea: after colour image normalization, segmentation is formulated as a classification problem over the blood vessel classes. This is extremely time consuming, as pixels are classified independently and the number of pixels is large.


Fig 5 Convolutional Neural Networks

b. Auto-encoder

Maji et al. [29] discuss the auto-encoder, a neural network with a hidden layer between its input and output that is utilized for constructing stacked auto-encoders (SAEs). Training this model consists of two phases: pre-training and fine-tuning. During pre-training, the SAE is trained in an unsupervised manner; it is then fine-tuned with back-propagation and gradient descent. There are two further auto-encoder types: sparse and de-noising. The sparse auto-encoder intends to extract sparse features from the raw data; the sparse representation is attained by directly penalizing the hidden unit activations or by penalizing the hidden unit biases. Roy et al. [30] discuss de-noising auto-encoders for DR prediction, which work robustly by recovering a corrupted input and thereby induce the model to capture the appropriate representation.

Fig 6 Auto-encoder representation

c. Recurrent neural networks

Mikolov et al. [31] discuss a neural network type that learns context from the provided input patterns. The outputs learned from prior iterations are merged with the given input to yield the current output. It includes separate parameter sets for the input weights, hidden weights, and output weights.
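The sketch below shows this recurrent update in code: the hidden state carried from previous steps is combined with the current input via the three weight sets, and a readout layer produces the output. The sizes (and the 5-class readout) are arbitrary illustrative choices.

```python
# Minimal sketch of a recurrent network: hidden state carried across steps,
# with input, hidden, and output weight sets. Sizes are illustrative.
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
readout = nn.Linear(16, 5)                  # output weights (e.g. 5 DR grades)

x = torch.randn(4, 10, 8)                   # batch of 10-step input sequences
outputs, h_last = rnn(x)                    # hidden state merged with each input
prediction = readout(h_last.squeeze(0))     # classify from the final hidden state
```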


Fig 7 Recurrent Neural Network representation

d. Deep belief networks

Vinyals et al. [32] elaborate on this network model, which is designed by cascading restricted Boltzmann machines (RBMs). It uses the contrastive divergence algorithm to maximize the similarity between the input and its projections; the probability measure reflects the similarity of the generated solutions and offers a probabilistic model. Initially, the network is pre-trained in an unsupervised manner with a greedy layer-wise learning approach, and it is then fine-tuned with back-propagation and gradient descent algorithms.
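To make the greedy pre-training step concrete, the sketch below performs one contrastive-divergence (CD-1) update for a single RBM, the building block that is stacked to form a DBN. Layer sizes, learning rate, and binary units are illustrative assumptions.

```python
# Minimal sketch of one CD-1 update for a single restricted Boltzmann machine.
import torch

n_visible, n_hidden, lr = 784, 128, 0.01
W = torch.randn(n_visible, n_hidden) * 0.01
b_v = torch.zeros(n_visible)                 # visible bias
b_h = torch.zeros(n_hidden)                  # hidden bias

v0 = torch.rand(32, n_visible).bernoulli()   # toy batch of binary inputs

# Positive phase: sample hidden units from the data.
p_h0 = torch.sigmoid(v0 @ W + b_h)
h0 = p_h0.bernoulli()

# Negative phase: reconstruct visibles, then recompute hiddens (one Gibbs step).
p_v1 = torch.sigmoid(h0 @ W.t() + b_v)
p_h1 = torch.sigmoid(p_v1 @ W + b_h)

# CD-1 gradient approximation: data correlations minus model correlations.
W += lr * (v0.t() @ p_h0 - p_v1.t() @ p_h1) / v0.shape[0]
b_v += lr * (v0 - p_v1).mean(dim=0)
b_h += lr * (p_h0 - p_h1).mean(dim=0)
```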

Fig 8 Deep belief network


Table 2 Comparison of various deep learning approaches

Learning approach | Description | Benefits | Disadvantages
Deep Neural Networks | A comparatively simple learning approach with many hidden layers; useful for various regression and classification applications | Extensively utilized, with superior performance and fine accuracy | A huge amount of time is needed for the training process
Convolutional Neural Networks | Extremely well suited to image-based applications | Fast, with better overall performance | Training labels are required for classification-related applications
Recurrent Neural Networks | Useful for handling sequential data; the network weights are shared across network nodes | Operates in a sequential manner and provides better accuracy | Needs a huge dataset for superior performance
Deep Belief Network | Utilized for supervised learning; the hidden layer of every sub-network is accessible to the successive sub-network | Greedy layer-wise learning is utilized in every layer for superior prediction | High computational complexity during the training process
Deep auto-encoder | Utilized for dimensionality reduction of image features; the input and output sizes are the same | Does not need labelled input data; diverse variants such as sparse and de-noising auto-encoders; provides robustness to the input data | Needs a pre-training process before use
Deep Boltzmann machine | Functions in a uni-directional manner based on Boltzmann machines | Offers appropriate inference and handles discrete predicted values | Needs a huge dataset for analysis and for parameter utilization and optimization

6. Limitations and future research directions

Various learning approaches have been adopted for predicting the eye disease caused by diabetes mellitus. Some of their limitations are listed below:


1) Some methods are analysed on real-time datasets; however, this leads to weaker standardization compared with the traditional benchmark models.

2) Several retinal image databases are extensively utilized by various approaches, but the performance of the prevailing approaches is not satisfactory when they are applied to large-sized databases.

3) There is no proper standardization for computing the disc ratio in the various prevailing approaches.

4) Very few techniques based on learning approaches are accessible for eye disease prediction.

In future, the following research directions need attention:

1) Modelling various deep learning based approaches for eye disease prediction.

2) Modelling an efficient method that can handle retinal image datasets of various sizes.

3) Improving the standardization of the approach, with execution steps that ensure accuracy for eye disease detection.

7. Conclusion

This work provides an extensive review of various deep learning approaches for the prediction of diabetic retinopathy, together with reviews of DR datasets, performance metrics, and related topics. It describes the learning approaches utilized for predicting eye disease from available retinal images. Recently, an enormous amount of research has aimed to resolve the remaining research challenges, which still need to be addressed within the learning process. Investigators intend to enhance the performance of disease detection through extensive analysis. However, the performance of these models does not yet meet the required standard, which remains a challenging task for various research works. It is also evident that only limited work has been carried out on eye disease with deep learning approaches. Therefore, predicting eye disease with deep learning frameworks is considered an open research direction.

REFERENCES

[1] Borra S, Thanki R, Dey N (2019) Satellite image analysis: clustering and classification. Springer International Publishing, Germany.

[2] Manju K, Sabeenian RS (2018) Robust CDR calculation for glaucoma identification. Spec Issue Biomed Res 2018:S137–S144.

[3] Kandi H, Mishra D, Gorthi SRS (2017) Exploring the learning capabilities of convolutional neural networks for robust image watermarking. Comput Secur 65:247–268.

[4] Kanse SS, Yadav DM (2019) Retinal fundus image for glaucoma detection: a review and study. J Intell Syst 28(1):43–56.

[5] Issac A, Sarathi MP, Dutta MK (2015) An adaptive threshold-based image processing technique for improved glaucoma detection and classification. Comput Methods Programs Biomed 122(2):229–244.


[6] Joshi GD, Sivaswamy J, Krishnadas SR. Optic disk and cup segmentation from monocular color retinal images for glaucoma assessment. IEEE Trans. Med. Imaging 2011;30(6):1192–205

[7] DRION Dataset. http://www.ia.uned.es/ejcarmona/DRIONS-DB.html [accessed 30.04.18].

[8] ARIA Dataset. http://www.eyecharity.com/aria_online.html [accessed 28.02.18].

[9] Sivaswamy J, Krishnadas SR, Joshi GD, Jain M, Tabish AUS. Drishti-gs: retinal image dataset for optic nerve head (onh) segmentation. 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI). 2014. p. 53–6.

[10] Kaggle Dataset. https://www.kaggle.com/c/diabetic-retinopathy-detection/data [accessed 08.01.18].

[11] E-ophtha. http://www.adcis.net/en/Download-Third-Party/E-Ophtha.html [accessed 08.01.18].

[12] Hoover AD, Kouznetsova V, Goldbaum M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging 2000;19(3):203–10.

[13] Kälviäinen RVJPH, Uusitalo H. Diaretdb1 diabetic retinopathy database and evaluation protocol. Medical Image Understanding and Analysis, vol. 2007. 2007. p. 61.

[14] CHASE Dataset. http://www.chasestudy.ac.uk/ [accessed 01.02.18].

[15] Zhang Z, Yin FS, Liu J, Wong WK, Tan NM, Lee BH, et al. Origa-light: an online retinal fundus image database for glaucoma analysis and research. 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). 2010. p. 3065–8.

[16] NIH AREDS Dataset. https://www.nih.gov/news-events/news-releases/nih-addsfirst-images-major-research-database [accessed 01.02.18].

[17] Al-Diri B, Hunter A, Steel D, Habib M, Hudaib T, Berry S. A reference data set for retinal vessel profiles. 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2008, EMBS 2008. 2008. p. 2262–5.

[18] EyePACS Dataset. http://www.eyepacs.com/eyepacssystem/ [accessed 01.03.18].

[19] Early Treatment Diabetic Retinopathy Study Research Group, et al. Grading diabetic retinopathy from stereoscopic color fundus photographs – an extension of the modified Airlie House classification: ETDRS report number 10. Ophthalmology 1991;98(5):786–806.

[20] Patz A. Studies on retinal neovascularization. Friedenwald lecture. Investig. Ophthalmol. Visual Sci. 1980;19(10):1133–8.

[21] Fleming AD, Goatman KA, Philip S, Williams GJ, Prescott GJ, Scotland GS, et al. The role of haemorrhage and exudate detection in automated grading of diabetic retinopathy. Br. J. Ophthalmol. 2010;94(6):706–11.

[22] Diabetic Retinal Screening, Grading, Monitoring and Referral Guidance. https://www.health.govt.nz/publication/diabetic-retinal-screening-grading-monitoringand-referral-guidance [accessed 01.05.19].

[23] Lim G, Cheng Y, Hsu W, Lee ML. Integrated Optic Disc and Cup Segmentation with Deep Learning. 2015. p. 162–9.

[24] Guo Y, Zou B, Chen Z, He Q, Liu Q, Zhao R. Optic Cup Segmentation Using Large Pixel Patch Based CNNS. 2016.

[25] Sevastopolsky A. Optic disc and cup segmentation methods for glaucoma detection with modification of u-net convolutional neural network. Pattern Recogn. Image Anal. 2017;27(3):618–24.

[26] Zilly JG, Buhmann JM, Mahapatra D. Boosting Convolutional Filters with Entropy Sampling for Optic Cup and Disc Image Segmentation from Fundus Images. 2015. p. 136–43.

[27] Zhang D, Zhu W, Zhao H, Shi F, Chen X. Automatic localization and segmentation of optical disk based on faster r-cnn and level set in fundus image. Medical Imaging 2018: Image Processing, vol. 10574. 2018. p. 105741U.

[28] Fu H, Cheng J, Xu Y, Wong DWK, Liu J, Cao X. Joint Optic Disc and Cup Segmentation Based on Multi-Label Deep Network and Polar Transformation. 2018. arXiv:1801.00926.

[29] Maji D, Santara A, Ghosh S, Sheet D, Mitra P. Deep neural network and random forest hybrid architecture for learning to detect retinal vessels in fundus images. 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). 2015. p. 3029–32.

[30] Roy AG, Sheet D. Dasa: domain adaptation in stacked autoencoders using systematic dropout. 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR). 2015. p. 735–9.

[31] Mikolov T, Karafiát M, Burget L, Černocký J, Khudanpur S. Recurrent neural network based language model. Eleventh Annual Conference of the International Speech Communication Association 2010.

[32] Vinyals O, Toshev A, Bengio S, Erhan D. Show and tell: lessons learned from the 2015 mscoco image captioning challenge. IEEE Trans. Pattern Anal. Mach. Intell. 2017;39(4):652–63.
