Academic year: 2022

Feature and Segmentation Based Eye Disease Classification Using GLCM and Tree Technique

Jayasheela M 1*, Gomathi E 2, Ramasamy K 3

1,2,3 KIT - Kalaignarkarunanidhi Institute of Technology, Coimbatore, India

*[email protected]

ABSTRACT

Nowadays, medical imaging and its analysis form a notable area of research and development in which digital images are processed to aid diagnosis and to identify various medical conditions. Diabetic retinopathy (DR) is a serious eye disease that may lead to blindness; it is a complication of diabetes caused by elevated blood sugar. Early identification and prediction of DR help to save a patient's vision and to find abnormalities such as hemorrhages, microaneurysms, and soft and hard exudates. DR, a sight-threatening complication of diabetes mellitus, is an important cause of vision loss, and in many cases the patient is unaware of the disease until it is too late for effective treatment. The prevalence of retinopathy varies with the patient's age and the duration of the disease. Early diagnosis through regular screening and treatment helps to prevent visual impairment and loss of vision. This paper presents a method for detecting and classifying the types of abnormality present in retinal images and for evaluating classification accuracy. Various image processing stages, including image acquisition, filtering, enhancement, segmentation, and classification, are combined for early prediction of DR based on abnormalities in the human eye, and a review of work on detecting DR using the Grey Level Co-occurrence Matrix (GLCM) is provided.

Keywords

Diabetic Retinopathy, Enhancement, Filtering, Adaptive Histogram Equalization, GLCM

Introduction

Diabetic retinopathy is a severe eye disease caused by diabetes mellitus and a major cause of blindness worldwide, particularly in developed countries [4][14][15].

Early detection and treatment are important in order to prevent patients from going blind, or at the very least to delay the progression of diabetic retinopathy toward blindness. As a result, widespread screening of diabetic patients is highly desirable. Manual grading, on the other hand, does not always produce accurate results, because it requires a high level of experience and knowledge. Much work has therefore gone into developing accurate computerised screening systems based on colour fundus images [8][13].

Diabetic eye disease occurs when high blood sugar damages the retina, which can lead to blindness. About 80% of patients who have had diabetes for 30 years or more are affected, and roughly 90% of cases could be managed if the condition of the eye were properly treated and monitored. The use of intelligent diagnostic systems is the next step after imaging and computer vision-based systems. DR is diagnosed by machine learning-based implementations that take hemorrhages, hard and soft exudates, and blood vessels as inputs. Hard exudates are detected using a top-down image-based segmentation method and local thresholding, a dynamic thresholding technique is applied for analyzing retinal images, and morphological operations further support the diagnosis of DR.

Literature Review

Aqib Ali et al. (2020) proposed machine learning-based automatic segmentation and hybrid feature analysis for diabetic retinopathy classification using fundus images. The study used fundus photos and fused hybrid-feature analysis to identify four DR phases (mild, moderate, non-proliferative, and proliferative) as well as the normal human retina. The various modalities of texture analysis allowed the results to vary, and four feature sets were extracted. Finally, a data fusion technique was used to create a merged hybrid-feature dataset combining the extracted features.


Nataraj Vijapur and R. Srinivasa Rao Kunte (2020) proposed efficient machine learning techniques to detect glaucoma using structure- and texture-dependent features. The aim of their study is to use image processing and machine learning-based classification techniques to accurately predict and diagnose glaucoma. Their segmentation techniques build on a special prototype approach and a Gray Level Co-occurrence Matrix.

Revathi Priya Muthusamy et al. (2019) proposed using DTCWT, GLCM Feature Extractor, and CNN-RNN Classifier to detect abnormalities in retinal blood vessels automatically. The authors used pre-processing to isolate the image's Green plane, then de-noised and fed the signals to the DTCWT and GLCM feature extraction processes, followed by the CNN-RNN neural network for classification.

P Hosanna Princye et al. (2018) proposed a technique for extracting blood vessel features based on their geometrical properties, carried out by making use of GLCM.

K. Karthikeyan et al. (2017) proposed an approach in which the blood vessels of the retina are segmented using clustering, the process of grouping similar pixels. A widely used clustering method is k-means, which partitions the data into k clusters.

Sharath Kumar P N et al. (2016) presented a method for automatic analysis and classification of the retina using two-field mydriatic fundus photography, including histogram analysis for blood vessel extraction.

Sandra Morales et al. (2015) proposed a technique that discriminates fundus texture in order to differentiate between healthy and pathological images. The aim of the paper is to evaluate the performance of Local Binary Patterns (LBP) as a texture descriptor for retinal images; LBP examines the local variations around each pixel and assigns a label to each local pattern.

Proposed Work

In the proposed system, diabetic retinopathy affects the blood vessels in the retina. The image processing pipeline comprises image acquisition; preprocessing such as filtering and contrast enhancement; feature extraction such as GLCM; and accurate disease detection. The skin locus method and the histogram method are used to identify and diagnose diabetic retinopathy in this study, and the performance of Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and tree classifier techniques is compared. In contrast to existing systems, the proposed system's overall classification rate provides greater efficiency and accuracy in recognizing diseases. Patients receive their report by email after the results are obtained, and records are also sent via SMS through a GSM module.

Input Image

Image processing techniques such as image acquisition, pre-processing, feature extraction, and precise disease identification are used to process the input image. The proposed work uses a skin locus model and histogram to classify retinal images into normal and abnormal categories.

Filtering

Non-linear digital filtering methods are often used to eliminate unwanted components that distort an image or signal. Noise reduction is an effective processing step that improves the results of edge detectors such as Sobel and Canny. In this study, median filtering is used on the colour image to eliminate noise in the original 2D image, removing noise while preserving edges.

The Wiener filter aims to minimize the mean square error between the estimated signal and the desired signal. Gaussian filtering, whose impulse response is a Gaussian function, is also applied; since the picture is processed as a set of discrete pixels, a discrete approximation of the Gaussian function is used.
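As a minimal illustration of the median filtering step (pure Python, on a hypothetical 4×4 grayscale patch; a real pipeline would typically use a library routine such as SciPy's `median_filter`), a 3×3 median filter replaces each interior pixel with the median of its neighbourhood:

```python
from statistics import median

def median_filter_3x3(img):
    """Apply a 3x3 median filter; border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)
    return out

# A flat patch with one salt-noise pixel: the median removes the spike
# without blurring the surrounding values.
patch = [[10, 10, 10, 10],
         [10, 255, 10, 10],
         [10, 10, 10, 10],
         [10, 10, 10, 10]]
filtered = median_filter_3x3(patch)
print(filtered[1][1])  # the 255 spike is replaced by 10
```

This edge-preserving behaviour is why the median filter suits salt-and-pepper noise better than a mean filter, which would smear the spike into its neighbours.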


Adaptive Histogram Equalization

Contrast enhancement improves an image's contrast using its histogram. Histogram equalization achieves this by spreading the most frequent intensity values over the full intensity range. Adaptive histogram equalization (AHE) is used here to improve local image contrast; it differs from ordinary histogram equalization in that it computes several histograms, one for each region of the image.
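A minimal sketch of the underlying mapping (plain global histogram equalization on 8-bit values, pure Python; AHE applies the same idea per image tile and then interpolates between tiles). The pixel list below is a made-up low-contrast example:

```python
def equalize(pixels, levels=256):
    """Histogram-equalize a flat list of 8-bit pixel values."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of the intensities.
    cdf = [0] * levels
    running = 0
    for v in range(levels):
        running += hist[v]
        cdf[v] = running
    cdf_min = next(c for c in cdf if c > 0)
    # Standard equalization map: stretch the CDF onto [0, levels-1].
    scale = (levels - 1) / max(n - cdf_min, 1)
    return [round((cdf[p] - cdf_min) * scale) for p in pixels]

# Pixels clustered in [100, 103] get spread across the full [0, 255] range.
flat = [100, 100, 101, 101, 102, 102, 103, 103]
print(equalize(flat))  # [0, 0, 85, 85, 170, 170, 255, 255]
```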

Skin Locus segmentation

The technique of dividing a digital image into multiple segments, i.e. sets of pixels (sometimes called super-pixels), is known as image segmentation. By simplifying or changing an image's representation, segmentation aims to make it more meaningful and easier to analyze.

Feature extraction

Feature extraction is a dimensionality reduction step that divides and reduces a large set of raw data into smaller groups, so that subsequent processing is faster. Large data sets typically contain a large number of variables, and processing them demands substantial computing resources. Feature extraction therefore helps to derive the best features from large data sets by selecting and combining variables into features, effectively reducing the amount of data.

When the input data to an algorithm are too large to process, and are suspected to be redundant, they are transformed into a reduced set of features. Feature extraction also supports both quantitative and qualitative analysis of how far the human eye has been degraded by various conditions.
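A minimal pure-Python sketch of a GLCM for a horizontal offset of one pixel, with two common texture descriptors (contrast and homogeneity) computed from it. A real pipeline might use `skimage.feature.graycomatrix`; the 4-level image below is a hypothetical example:

```python
def glcm(img, levels):
    """Count co-occurrences of gray levels for the offset (dy, dx) = (0, 1)."""
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    total = sum(sum(r) for r in m)
    return [[c / total for c in r] for r in m]  # normalized to probabilities

def contrast(p):
    """Weights each pair by the squared gray-level difference."""
    n = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

def homogeneity(p):
    """Weights each pair inversely to the gray-level difference."""
    n = len(p)
    return sum(p[i][j] / (1 + abs(i - j)) for i in range(n) for j in range(n))

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
p = glcm(img, levels=4)
print(round(contrast(p), 3), round(homogeneity(p), 3))
```

Each GLCM entry p[i][j] is the probability that a pixel of level i has a right-hand neighbour of level j; smooth regions concentrate mass on the diagonal (high homogeneity), while noisy or textured regions spread it off-diagonal (high contrast).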

Classifiers

Classifiers use the features extracted from the data rather than the input image directly. They are used to identify and differentiate the abnormalities present in the human eye, of which there are various types: hard exudates, soft exudates, age-related vision loss, and so on.

a. SVM

Support vector machines (SVMs) are used in regression and classification analysis. Given a set of training examples, the algorithm constructs a model that assigns new examples to one category or the other. The SVM technique represents examples as points in space, mapped so that the separate categories are divided by as wide a gap as possible. Linear SVM, an important special case, is closely related to logistic regression and can be trained with high efficiency.
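As an illustrative sketch (scikit-learn, with randomly generated stand-in feature vectors rather than the paper's GLCM features), a linear SVM separating two classes might look like:

```python
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Stand-in for GLCM feature vectors: 200 samples, 4 texture-like features.
X, y = make_classification(n_samples=200, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="linear")      # maximum-margin linear decision boundary
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)     # fraction of correct test predictions
print(f"test accuracy: {acc:.2f}")
```

In the paper's setting, `X` would hold the GLCM descriptors per retinal image and `y` the normal/abnormal labels.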

b. KNN

KNN is known as a lazy learning algorithm because it does not learn from the training set immediately; instead it stores the data and processes the dataset only when it is time to classify. During training the KNN algorithm merely stores the dataset, and when new data arrive it assigns them to the category most similar to the existing data.

Predictive power, short calculation time, and easily interpreted output are all advantages of KNN. It can also be used to estimate continuous variables: using Euclidean distance, it computes an inverse-distance-weighted average over the k nearest multivariate neighbours. The method's accuracy, however, is severely degraded by noisy features.
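A minimal pure-Python sketch of the classification step (Euclidean distance plus majority vote; the 2-D points and labels below are hypothetical stand-ins for extracted feature vectors):

```python
from collections import Counter
from math import dist  # Euclidean distance (Python 3.8+)

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours.

    `train` is a list of (feature_vector, label) pairs.
    """
    neighbours = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

train = [((1.0, 1.0), "normal"), ((1.2, 0.9), "normal"),
         ((0.9, 1.1), "normal"), ((5.0, 5.0), "abnormal"),
         ((5.2, 4.8), "abnormal")]
print(knn_predict(train, (1.1, 1.0)))  # nearest three are all "normal"
```

Note how nothing happens at "training" time: all work, including the distance computations, is deferred to prediction, which is exactly the lazy-learning behaviour described above.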

c. Boosted Tree

A tree is a hierarchical data structure in which each node has one parent node and zero or more child nodes, and each node can store any type of data. The tree algorithm is used to improve the accuracy of the system. Boosting is a technique for increasing accuracy by applying weak learners in series and combining their weighted outputs, so that the overall error is kept under control.

A boosted decision tree is an ensemble learning system in which the errors of the first tree are corrected by the second tree, the errors of the first and second trees are corrected by the third, and so on. The prediction is produced by the entire ensemble of trees.
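A sketch of this sequential error correction using scikit-learn's gradient-boosted trees (again on randomly generated stand-in features, not the paper's retinal data):

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=6, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Each of the 100 shallow trees fits the residual errors of those before it;
# learning_rate scales each tree's contribution to the weighted sum.
boosted = GradientBoostingClassifier(n_estimators=100, max_depth=2,
                                     learning_rate=0.1, random_state=1)
boosted.fit(X_tr, y_tr)
acc = boosted.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```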

d. Decision Tree

To predict the class of a given sample, a decision tree starts at the root node of the tree. The value of the root attribute is compared with the corresponding attribute of the sample, and the algorithm then follows the matching branch and jumps to the next node. The comparison is repeated at each sub-node until the algorithm reaches a leaf node, whose label is the prediction.
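The traversal described above can be sketched in a few lines of pure Python, using a hypothetical hand-built tree over two made-up features (`exudate_area`, `vessel_density`):

```python
# Internal nodes hold (feature, threshold, left, right); leaves hold a label.
tree = ("exudate_area", 0.3,
        ("vessel_density", 0.5, "normal", "mild DR"),   # exudate_area <= 0.3
        "severe DR")                                     # exudate_area > 0.3

def predict(node, sample):
    """Walk from the root to a leaf, branching on one feature per node."""
    while isinstance(node, tuple):
        feature, threshold, left, right = node
        node = left if sample[feature] <= threshold else right
    return node

print(predict(tree, {"exudate_area": 0.1, "vessel_density": 0.4}))  # normal
print(predict(tree, {"exudate_area": 0.7, "vessel_density": 0.4}))  # severe DR
```

In practice the tree structure and thresholds are learned from the training data (e.g. by a library such as scikit-learn) rather than written by hand.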

Fig 1: Block diagram of proposed system

Figure 1 shows the block diagram of the proposed system. The technique diagnoses the eye problems that cause a diabetic patient's vision to deteriorate. The proposed system provides simple, early identification of the problem, which reduces time complexity and paves the way for improved performance.

The input image is converted into a gray-scale image for easy computation and to avoid loss of information. Median and Wiener filters improve the results by supporting edge detection and reducing additive noise in the input image. Histogram equalization is used to gain high contrast and intensity, and AHE improves local contrast and enhances edge definition in different sections of the image. Although various types of abnormality can exist in the human eye, the results here are observed for the presence of soft exudates.

Results and Discussions

Monte Carlo simulations are used to run the input image through the various processing methods. Two parameters, Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE), are measured to analyze the quality of the image.
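MSE and PSNR for 8-bit images follow the standard definitions: MSE is the mean squared pixel difference, and PSNR = 10·log10(MAX²/MSE) with MAX = 255. A minimal pure-Python sketch on a hypothetical pair of flattened image patches:

```python
from math import log10

def mse(a, b):
    """Mean squared error between two equally sized pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255):
    """Peak signal-to-noise ratio in dB (higher means closer to the original)."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10 * log10(peak ** 2 / e)

original = [52, 55, 61, 59, 79, 61, 76, 61]
denoised = [52, 54, 61, 60, 79, 62, 76, 61]
print(f"MSE  = {mse(original, denoised):.3f}")   # 0.375
print(f"PSNR = {psnr(original, denoised):.2f} dB")
```

A filter that leaves the image closer to the clean reference yields a lower MSE and hence a higher PSNR, which is how the filtering stages above can be compared quantitatively.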

Fig 2: Output results of various filters. 2a. Input image; 2b. Median filter; 2c. Wiener filter; 2d. Gaussian filter

Figure 2a shows the input image loaded at run time; the colour image is then converted into a gray-scale image in the pre-processing step. Figure 2b shows the image obtained after applying the median filter, and Figures 2c and 2d show the images obtained after the Wiener and Gaussian filters respectively.

Figures 3a and 3b show the images obtained after applying histogram equalization and adaptive histogram equalization, used to improve contrast and to produce a graphical representation of the image. Figure 3c shows the image adjusted to the required pixel level.

3a. Histogram equalization; 3b. Adaptive histogram equalization; 3c. Image adjustment equalization
Fig 3: Output results of various equalization techniques

Figure 4 shows the segmented image, in which the affected area is separated out. Figure 5 shows the output along with the name of the disease affecting the diabetic eye.


Fig 4: Segmented image Fig 5: Output

Figure 6 shows the values obtained for that particular image using GLCM technique.

Fig 6: Values from GLCM

7a. Boosted tree; 7b. Decision tree (complex); 7c. Linear SVM; 7d. KNN

Fig 7: Matrix representation to identify the abnormalities found in the eye

Figure 7 shows the matrix representation of the boosted tree, decision tree, linear SVM, and KNN classifiers used to identify abnormalities in the eye. In each matrix, green marks the correctly predicted disease locations and pink the falsely predicted ones.

Fig 8: Accuracy estimation of decision tree, SVM, KNN and ensemble boosted tree

Figure 8 shows the plot of the accuracy estimates of the tree, SVM, KNN, and ensemble boosted tree classifiers. The complex tree structure gives around 95% accuracy, compared with linear SVM (61%), KNN (49%), and the ensemble boosted tree (81%).


Conclusion

The optic disc is located using skin locus methods or segmentation with feature extraction, and blood vessels and exudates are segmented and identified using intensity computation, enhancement, and GLCM feature extraction.

Exudates are categorized as true or false exudates with the aid of tree, SVM, and KNN classifiers, which are able to differentiate between three grade-level forms with an overall accuracy of 95 per cent.

References

[1] Aqib Ali, Salman Qadri, Wali Khan Mashwani, Wiyada Kumam, Poom Kumam, Samreen Naeem, Atila Goktas, Farrukh Jamal, Christophe Chesneau, Sania Anam & Muhammad Sulaiman (2020). Machine Learning Based Automated Segmentation and Hybrid Feature Analysis for Diabetic Retinopathy Classification Using Fundus Image. DOI: 10.3390/e22050567.

[2] Nataraj Vijapur & R. Srinivasa Rao Kunte (2020). Efficient Machine Learning Techniques to Detect Glaucoma using Structure and Texture based Features. International Journal of Recent Technology and Engineering, 9(2), 193-201.

[3] Revathi Priya Muthusamy, S. Vinod & M. Tholkapiyan (2019). Automatic Detection of Abnormalities in Retinal Blood Vessels using DTCWT, GLCM Feature Extractor and CNN-RNN Classifier. International Journal of Recent Technology and Engineering, 8(4), 329-331.

[4] Kaji Y (2018). Diabetic eye disease. In: Diabetes and Aging-related Complications. Springer, Singapore, pp 19-29.

[5] P Hosanna Princye & V Vijayakumari (2018). Retinal disease diagnosis by morphological feature extraction and SVM classification of retinal blood vessels. Biomedical Research, 22-30.

[6] Jasem Almotiri, Khaled Elleithy & Abdelrahman Elleithy (2018). Retinal Vessels Segmentation Techniques and Algorithms: A Survey. Applied Sciences, 2-31.

[7] Wiharto Wiharto & Esti Suryani (2020). The Comparison of Clustering Algorithms K-Means and Fuzzy C-Means for Segmentation Retinal Blood Vessels. Acta Inform Med, 28(1), 42-47.

[8] Islam M, Dinh AV & Wahid KA (2017). Automated diabetic retinopathy detection using bag of words approach. J Biomed Sci Eng, 10, 86-96.

[9] Ishmeet Kaur & Lalit Mann Singh (2016). A Method of Disease Detection and Segmentation of Retinal Blood Vessels using Fuzzy C-Means and Neutrosophic Approach. Imperial Journal of Interdisciplinary Research, 2(6), 551-557.

[10] Sandra Morales, Kjersti Engan, Valery Naranjo & Adrian Colomer (2016). Retinal Disease Screening through Local Binary Patterns. IEEE Journal of Biomedical and Health Informatics, 21(99).

[11] Sharath Kumar P N, Rajesh Kumar R, Anuja Sathar & Sahasranamam V (2013). Automatic Detection of Exudates in Retinal Images Using Histogram Analysis. Proceedings of 2013 IEEE Recent Advances in Intelligent Computational Systems (RAICS).

[12] Antal B & Hajdu A (2012). An ensemble-based system for microaneurysm detection and diabetic retinopathy grading. IEEE Trans Biomed Eng, 59(6), 1-7.

[13] M. Rema & R. Pradeepa (2007). Diabetic retinopathy: An Indian perspective. Madras Diabetes Research Foundation & Dr Mohan's Diabetes Specialities Centre. Indian J Med Res, pp 297-310.

[14] Congdon NG, Friedman DS & Lietman T (2003). Important causes of visual impairment in the world today. Journal of the American Medical Association, 290(15), 2057-2060.

[15] Taylor HR & Keeffe JE (2001). World blindness: a 21st century perspective. Br J Ophthalmol, 85(3), 261-266.

[16] Adam Hoover, Valentina Kouznetsova & Michael Goldbaum (2000). Locating Blood Vessels in Retinal Images by Piecewise Threshold Probing of a Matched Filter Response. IEEE Transactions on Medical Imaging, 19(3), 931-935.
