Sequential Learning Neural Network Based Diabetic Detection Using Tongue Images
E. Srividhya, Research Scholar,
Bharath Institute of Higher Education and Research.
E-Mail: [email protected]
Dr.A. Muthukumaravel, Dean, Faculty of Arts & Science, Professor Department of MCA, Bharath Institute of Higher Education and Research.
E-Mail: [email protected]

ABSTRACT
The tongue is an organ that reflects a person's physiology and pathology. Every part of the tongue is related to the viscera, and visual information plays a central role in tongue diagnosis: the color, shape and movement of the tongue, the tongue body and the tongue coating are all important diagnostic factors. In this work, a Sequential Learning Neural Network (SLNN) method is therefore used to detect diabetes from tongue images. Deciding the various parameters of a neural network is not straightforward, and finding the ideal architecture is a time- and memory-consuming process. To reduce this cost, the SLNN algorithm is trained with sequential learning. Since the SLNN has a single hidden layer, its memory utilization is low, and sequential learning further reduces both the memory footprint and the computational complexity: a new hidden neuron is added only if it has a significant impact on the output, and neurons that contribute little are removed. The sensitivity, specificity and accuracy of the proposed SLNN are 93.10%, 91.33% and 91.67% respectively.
Key words: Tongue, diabetic, Sequential Learning, Neural Network

1. INTRODUCTION
In tongue diagnosis, both the meridians and the inner organs of the human body are believed to have numerous connections with the tongue. Tongue examination therefore plays a significant role in visually indicating the overall physical and mental harmony or disharmony of an individual. Examination begins by dividing the tongue into the tongue tip, the tongue edges, the tongue center and the tongue root.
Disorders of the heart and lungs are reflected in the tip of the tongue, while the bilateral sides of the tongue mirror the liver and gallbladder, so any problem in these organs appears on the sides of the tongue [1-4]. Pathological changes in the spleen and stomach are reflected at the center of the tongue, and changes in the kidneys, intestines, heart and bladder are reflected at the tongue root. Typical signs of imbalance or pathology include a red tongue body, a yellow coating, or a thick coating resembling mozzarella cheese.
This section discusses the issues faced in detecting medical problems from tongue images and the steps taken to overcome them, and reviews several strategies for feature selection and classification in thyroid disease diagnosis. Two common diseases of the thyroid gland, which secretes the hormones regulating the rate of the body's metabolism, are hypothyroidism and hyperthyroidism. Classifying thyroid diseases is an important task, and extracting or selecting the feature set is a key pattern-recognition problem handled in the preprocessing stage. As a case study, two well-known heuristic schemes are used for feature selection: sequential forward selection and sequential backward selection. The genetic algorithm, a popular method for nonlinear optimization problems, is another feature-selection technique considered. A support vector machine is used as the classifier to separate the thyroid diseases.
In [7-8], an overview of the medical image processing literature on thyroid diagnosis is presented. Because of the risk of malignancy and hyperfunction, thyroid disease is extremely common and of concern; if not diagnosed in time, thyroid nodules may become malignant.
In the last few years, various image processing algorithms have been proposed for efficient and effective computer-aided detection of thyroid nodules. These algorithms operate on USG and SPECT images and planar scintigraphy [10-12]. An overview of the algorithms used at each step (preprocessing, segmentation, feature extraction, feature selection and classification) of thyroid disease diagnosis is given. A fuzzy cognitive map based decision support system and other recently proposed methods are also presented [13,14], together with an overview of texture characterization through noise-resistant image features.
2. SEQUENTIAL LEARNING NEURAL NETWORK BASED DIABETIC DETECTION
The proposed tongue image processing framework is designed to provide high computation speed, increase precision and reduce computational dimensionality. The framework extracts the features of the tongue image and classifies them. The functional block diagram in Figure 1 shows how the tongue image is analyzed; the methodology comprises four phases: preprocessing, segmentation, feature extraction and classification.
Figure 1 Block diagram of the proposed SLNN framework (Input Image → Preprocessing → Segmentation → Feature Extraction → Classification)
2.1 Preprocessing - Resourceful Ethical Filter (REF)
Preprocessing of tongue images usually involves a series of successive operations, for example geometric correction, radiometric correction, image registration and removal of distortions from the raw image. Applying these corrections should make the distortion-free images easier to process in later stages. Image registration is the spatial alignment of two or more images of a scene acquired at different times or by different sensors. Figure 2 illustrates the preprocessing flow.
Figure 2 Flow chart of preprocessing

2.1.1 Resourceful Ethical Filter Algorithm
Step 1: In each image, identify the points that fall on likely reflection regions.
Step 2: Calculate the separation between the image source and each detected reflection point.
Step 3: Compute the contrast along the direction of the previous separator in the reflection system, using an α-line fusion between the image interface and the line image.
Step 4: Obtain the preprocessed information by applying the α value and the reflection coefficient at the detected margins for all separators.
Step 5: Estimate the image region quality from the diffusion and reflection coefficients of the system. The preprocessing result is shown in Figure 3.
Figure 3 Result of preprocessing: (a) input image, (b) preprocessed image
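The filter steps above are described only at a high level. As an illustrative sketch (not the authors' exact implementation), the reflection handling of Steps 1–4 can be approximated by detecting near-saturated pixels and α-blending them toward a local mean; the threshold and α values below are assumptions for illustration only.

```python
import numpy as np

def suppress_reflections(img, thresh=230, alpha=0.7):
    """Illustrative preprocessing sketch: detect likely specular
    reflections (near-saturated pixels) and alpha-blend them with a
    local mean so highlights do not dominate the tongue surface.
    `thresh` and `alpha` are assumed values, not from the paper."""
    img = img.astype(float)
    # 3x3 box blur as a cheap local-mean estimate
    pad = np.pad(img, 1, mode="edge")
    local_mean = sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0
    mask = img > thresh                      # candidate reflection pixels
    out = img.copy()
    # alpha-blend reflections toward the local mean (Step 3-4 analogue)
    out[mask] = alpha * local_mean[mask] + (1 - alpha) * img[mask]
    return out.astype(np.uint8)

gray = np.full((8, 8), 120, dtype=np.uint8)
gray[4, 4] = 255                             # synthetic highlight
clean = suppress_reflections(gray)
```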
2.2 MULTI-SCALE INVARIANT CLUSTERING SEGMENTATION
Multi-Scale Invariant Clustering (MSIC) segmentation is used to partition the preprocessed image into discrete objects with similar properties. In this strategy, the data are examined with a density-based clustering approach applied to the feature vectors. The method computes the Euclidean distance between all data points, estimates a local density ρi for each point using a Gaussian kernel, and then computes for each point the distance δi to its nearest neighbor of higher density:

δi = min{dij : ρj > ρi}    (1)
Together with the feature representation of the input image in color space, the new segmentation strategy can be described by the following MSIC algorithm:
MSIC - Algorithm
The first step is to transform the input image into its feature representation.
1. Read the input image details and extract the 6 color channels.
2. Identify the candidate cluster centers and the number of clusters concentrated in the data.
3. Calculate the density ρ and the distance δ for each point using the condition above, then draw the density and distance decision graphs.
4. Following the decision graphs, select as cluster centers the data points with both high density (ρ) and large distance (δ); this fixes the number of clusters.
5. Assign the remaining points to the clusters: point xi receives the same label as the point xj if the following two conditions are met:
ρj > ρi    (2)
dij = min{dik : ρk > ρi}    (3)
6. Compute the Euclidean distance from each data point to the center of every cluster.
7. If a data point is closer to another cluster's center than to its own, move it to that nearest cluster.
8. Repeat steps 6-7 for all data points until no point moves from one cluster to another. The clusters are then stable and the clustering process terminates. The result of segmentation is shown in Figure 4.
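The MSIC procedure above follows the density-peak idea: compute a local density ρ for every feature vector, compute δ as the distance to the nearest point of higher density, pick points with large ρ and large δ as centers, and assign the rest to the cluster of their nearest higher-density neighbor. A minimal sketch on generic feature vectors (not the full 6-channel color pipeline; the kernel width `dc` is an assumed parameter) might look like:

```python
import numpy as np

def density_peak_cluster(X, dc=1.0, n_clusters=2):
    """Sketch of the density-peak clustering step used by MSIC.
    X: (n, d) feature vectors; dc: Gaussian kernel width (assumed)."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    rho = np.exp(-(d / dc) ** 2).sum(axis=1) - 1               # Gaussian local density
    delta = np.zeros(n)
    nearest_higher = np.full(n, -1)
    order = np.argsort(-rho)                                   # descending density
    delta[order[0]] = d[order[0]].max()                        # densest point: max distance
    for rank, i in enumerate(order[1:], start=1):
        higher = order[:rank]                                  # points denser than i
        j = higher[np.argmin(d[i, higher])]
        delta[i], nearest_higher[i] = d[i, j], j               # eqs. (2)-(3)
    # centers: points with large rho * delta (high density AND large distance)
    centers = np.argsort(-(rho * delta))[:n_clusters]
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_clusters)
    for i in order:                                            # assign in descending rho
        if labels[i] == -1:
            labels[i] = labels[nearest_higher[i]]
    return labels

X = np.array([[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]])
labels = density_peak_cluster(X)
```

Assigning points in order of decreasing density guarantees that each point's higher-density neighbor is already labeled when it is reached.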
Figure 4 Clustering segmentation results: (a) cluster 1, (b) cluster 2, (c) cluster 3, (d) cluster 4, (e) cluster 5
2.3 Feature Extraction
Feature extraction acts as a dimensionality-reduction step in any image classification pipeline, and it has a large impact on the classification results. During feature extraction, more than ten categories of elements are extracted from the segmented image using texture, color and shape features.
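The paper lists the color, texture and shape feature families without giving formulas. As a minimal sketch, per-channel color statistics plus simple gradient-based texture measures could form such a feature vector; the specific features below are assumptions for illustration, not the authors' exact feature set.

```python
import numpy as np

def extract_features(img):
    """Illustrative feature vector for a segmented tongue region.
    img: (h, w, 3) RGB array. Color features: per-channel mean/std;
    texture features: mean absolute horizontal/vertical gradient."""
    img = img.astype(float)
    feats = []
    for c in range(3):                       # color features
        ch = img[:, :, c]
        feats += [ch.mean(), ch.std()]
    gray = img.mean(axis=2)                  # simple texture features
    feats.append(np.abs(np.diff(gray, axis=1)).mean())  # horizontal roughness
    feats.append(np.abs(np.diff(gray, axis=0)).mean())  # vertical roughness
    return np.array(feats)

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, :, 0] = 200                           # reddish synthetic patch
v = extract_features(img)
```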
2.4 SEQUENTIAL LEARNING NEURAL NETWORK CLASSIFIER
The architecture of Sequential Learning Neural Network is shown in Figure 5.
Figure 5 Architecture of SLNN
The design of the SLNN is equivalent to that of a Radial Basis Function (RBF) network. Each hidden unit in the network has two parameters associated with it, a center (Cj) and a width (σj). The activation function of the hidden units is a Gaussian function that is radially symmetric in the input space, so the output of each hidden unit depends only on the radial distance between the input vector Xi and the center Cj of that hidden unit. The response of each hidden unit is scaled by its connecting weight Wj to the output unit and then summed to produce the overall network output, which is determined by the following equations.
FRBF = Σj Wj φj,  j = 1 to K (number of hidden units)    (4)

φj = exp(−||Xi − Cj||² / 2σj²)    (5)

where
φj = response of the jth hidden unit
Wj = weight connecting hidden unit j to the output unit
Cj = center of the jth hidden unit
σj = width of the jth hidden unit

Algorithm of SLNN
Step 1: Center Value is calculated using K-Means Clustering
Step 2: The width value is calculated using P-Nearest neighbor method
Step 3: The RBF activation function φj is calculated for the training inputs using equation (5)
Step 4: Sequential learning is applied as follows
4.1 Initially, no hidden neuron exists
4.2 Initialization is done with the following values: n = 0, K = 0 and h = 1, where
n = number of input patterns (500)
K = number of hidden neurons (maximum of 10)
h = learning cycle
4.3 For each observation (Xn, yn), the overall network output is calculated using equation (4)
4.4 The novelty of the data is verified using the variables en and βmax, which are calculated as follows
𝑒𝑛 = 𝑦𝑛 − 𝐹𝑅𝐵𝐹 (6)
βmax = maxj φj    (7)
If en > 0.1 and βmax < 0.6 and K ≤ 10, a new hidden unit is added (K = K + 1). Else, the weights of all hidden units are updated as follows:
Wj(new) = Wj(old) + α · φj    (8)
where
α = learning rate constant (0.1)
4.5 After all the training patterns have been presented, the learning cycle counter is incremented (h = h + 1) and the criterion for removing hidden units is checked:
θj = Σn=1..N φj(xn) < 0.1    (9)
If the above condition is satisfied, the hidden unit corresponding to this activation function has less contribution to the output. So it will be removed.
Step 5: If the root mean squared error of the network is close to zero, the network has converged; otherwise, the process is repeated from step 4.3.
The width of each hidden unit is calculated from its P nearest neighbors:

σj = sqrt((1/P) Σp=1..P ||Cj − Xjp||²)    (10)

where Xjp, p = 1..P, are the P nearest neighbors of the centroid Cj. This ensures that the basis functions overlap to some degree, so a relatively smooth representation of the distribution is obtained.
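Equations (4)-(10) can be combined into a compact sketch of the sequential learning loop: start with no hidden units, add a Gaussian unit when a sample is novel (large error en and small maximum activation βmax), otherwise update the weights, and prune units whose summed activation falls below the threshold of equation (9). The thresholds (0.1, 0.6), the learning rate 0.1 and the cap of 10 units follow the text; the center/width initialization of a new unit and the inclusion of the error term in the weight update are assumptions.

```python
import numpy as np

class SLNN:
    """Sketch of the sequential-learning RBF network described above.
    Single output; new units are centered on the triggering sample
    (an assumed heuristic, not stated in the paper)."""
    def __init__(self, max_units=10, lr=0.1, e_th=0.1, beta_th=0.6):
        self.C, self.sigma, self.W = [], [], []
        self.max_units, self.lr = max_units, lr
        self.e_th, self.beta_th = e_th, beta_th

    def _phi(self, x):                       # eq. (5): Gaussian responses
        return np.array([np.exp(-np.sum((x - c) ** 2) / (2 * s ** 2))
                         for c, s in zip(self.C, self.sigma)])

    def predict(self, x):                    # eq. (4): weighted sum
        return float(np.dot(self.W, self._phi(x))) if self.C else 0.0

    def observe(self, x, y):
        phi = self._phi(x)
        e = y - self.predict(x)              # eq. (6)
        beta = phi.max() if len(phi) else 0.0  # eq. (7)
        if abs(e) > self.e_th and beta < self.beta_th and len(self.C) < self.max_units:
            # novel sample: grow a hidden unit centered on it
            self.C.append(np.asarray(x, float))
            self.sigma.append(1.0)           # assumed initial width
            self.W.append(e)                 # assumed initial weight
        else:
            for j in range(len(self.W)):
                # eq. (8), with the error term included as in a
                # standard LMS update (an assumption; the paper
                # writes only alpha * phi_j)
                self.W[j] += self.lr * e * phi[j]

    def prune(self, X):                      # eq. (9): remove weak units
        keep = [j for j in range(len(self.C))
                if sum(self._phi(x)[j] for x in X) >= 0.1]
        self.C = [self.C[j] for j in keep]
        self.sigma = [self.sigma[j] for j in keep]
        self.W = [self.W[j] for j in keep]

net = SLNN()
X = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
y = [1.0, -1.0]
for xi, yi in zip(X, y):
    net.observe(xi, yi)
net.prune(X)
```

On this toy stream each sample is novel (no existing unit responds strongly), so the network grows one unit per sample and then reproduces both targets closely.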
The performance of the classification is assessed from the confusion matrix. The parameters used to compute the performance measures are as follows: True Positives (TP) are positive samples classified as positive; True Negatives (TN) are negative samples classified as negative; False Positives (FP) are negative samples classified as positive; False Negatives (FN) are positive samples classified as negative.
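From these four counts the reported measures follow as sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), accuracy = (TP+TN)/(TP+TN+FP+FN) and F1 = 2TP/(2TP+FP+FN). A small helper (the counts below are illustrative, not the paper's confusion matrix):

```python
def classification_metrics(tp, tn, fp, fn):
    """Standard measures computed from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)             # true positive rate (recall)
    specificity = tn / (tn + fp)             # true negative rate
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)         # harmonic mean of precision and recall
    return sensitivity, specificity, accuracy, f1

# illustrative counts only (not the paper's confusion matrix)
sens, spec, acc, f1 = classification_metrics(tp=90, tn=85, fp=15, fn=10)
```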
3. SIMULATION RESULTS AND DISCUSSIONS OF SLNN
In this section, the simulation results and performance analysis of the proposed sequential learning neural network based diabetic detection system are discussed. The simulation is developed in MATLAB, and the simulation parameters are listed in Table 1.
Table 1. Simulation parameters

Parameter                  Value
Number of training images  500
Number of testing images   300
Dataset                    BioHit
Tool                       MATLAB 2019a
Figure 6. Simulation Result of Diabetic detection
The simulation result of diabetic detection is shown in Figure 6. In the final part of the work, the tongue image is analyzed by splitting it around the computed centroids, and the data sets for the specific parts are catalogued by the classification method. The feature sets of the query image are obtained during processing, compared against the catalogued sets, and the patient is classified as diabetic or non-diabetic.
Figure 7. Training and Validation Loss of SLNN
Figure 7 shows the training and validation loss of the proposed SLNN method. Using SLNN, the MSE value is 0.025.
Table 2 Confusion matrix of SLNN
The confusion matrix of the SLNN based diabetic detection is shown in Table 2. From these values, the sensitivity, specificity, accuracy and F1-score are evaluated.
Table 3. Overall performance evaluation

Method                              Sensitivity (%)  Specificity (%)  Accuracy (%)  F1-score
Particle Swarm Optimization (PSO)   85.59            88.32            88.45         0.74
Genetic Algorithm (GA)              88.52            80.62            84.60         0.72
Proposed MSVM                       92.00            89.33            90.67         0.89
Proposed SLNN                       93.10            91.33            91.67         0.91

Figure 8. Overall performance analysis (classification ratio, %)

Table 3 and Figure 8 present the overall performance analysis of diabetic detection from tongue images with different classification methods. The comparison clearly shows that the proposed MSVM and SLNN methods give better results than the conventional Particle Swarm Optimization (PSO) and Genetic Algorithm (GA) methods. The sensitivity, specificity and accuracy of MSVM are 92.0%, 89.33% and 90.67% respectively, while those of the proposed SLNN method are 93.10%, 91.33% and 91.67%.
4. CONCLUSION

In this research work, a new framework for tongue image classification for diabetic detection using a sequential learning neural network was proposed. The sensitivity, specificity and accuracy of the proposed SLNN are 93.10%, 91.33% and 91.67% respectively. The proposed SLNN shows better performance with fewer hidden units. The training of the SLNN is much faster, but its classification time is high, so there is a need to reduce the classification time; this motivates the design of another network in future work.
REFERENCES

1. Festin, P., Cortez, R., & Villaverde, J. (2020). Non-Invasive Detection of Diabetes Mellitus by Tongue Diagnosis Using Convolutional Neural Network. pp. 135-139.
2. Zhang, J., Xu, J., Hu, X., Chen, Q., Tu, L., Huang, J., & Cui, J. (2017). Diagnostic Method of Diabetes Based on Support Vector Machine and Tongue Images. BioMed Research International, 2017:7961494. doi:10.1155/2017/7961494. Epub 2017 Jan 4.
3. Pang, B., Zhang, D., Li, N., & Wang, K. (2004). Computerized Tongue Diagnosis Based on Bayesian Networks. IEEE Transactions on Biomedical Engineering, 51(10), October 2004.
4. Zhang, J., & Hu, X. (2015). Diagnostic Method of Diabetes Based on Support Vector Machines and Tongue Images. IEEE Transactions on Biomedical Engineering, vol. 22, no. 34, January 2015.
5. Dhanalakshmi, P., Premchand, P., & Govardhan, A. (2012). An Approach for Tongue Diagnosing with Sequential Image Processing Method. International Journal of Computer Theory and Engineering, 4(3), June 2012.
6. Hu, M.-C., et al. (2016). Correction Parameter Estimation on the Smartphone and Its Application to Automatic Tongue Diagnosis. Journal of Medical Systems, 40:18. Springer Science & Business Media, New York.
7. Subash Kumar, T., & Nagarajan, V. (2017). Local Contourlet Tetra Pattern for Image Retrieval. Springer Journal of Signal, Image and Video Processing.
8. Cibin, N. V., et al. (2015). Diagnosis of Diabetes Mellitus and NPDR in Diabetic Patients from Tongue Images Using LCA Classifier. IJARTET, vol. 2.
9. Wang, X., Zhang, B., Yang, Z., Wang, H., & Zhang, D. (2013). Statistical Analysis of Tongue Images for Feature Extraction and Diagnostics. IEEE Transactions on Image Processing, 22(12), 5336-5347.
10. Zhang, B., Kumar, B. V., & Zhang, D. (2013). Detecting Diabetes Mellitus and Nonproliferative Diabetic Retinopathy Using Tongue Color, Texture, and Geometry Features. IEEE Transactions on Biomedical Engineering, 61(2), 491-501.
11. Kim, K. H., Do, J. H., Ryu, H., & Kim, J. Y. (2008, November). Tongue Diagnosis Method for Extraction of Effective Region and Classification of Tongue Coating. In 2008 First Workshops on Image Processing Theory, Tools and Applications (pp. 1-7). IEEE.
12. Zhao, Q., Zhang, D., & Zhang, B. (2016, October). Digital Tongue Image Analysis in Medical Applications Using a New Tongue ColorChecker. In 2016 2nd IEEE International Conference on Computer and Communications (ICCC) (pp. 803-807). IEEE.
13. Liang, R., Wang, Z. P., Yang, X. Y., Ren, Y. J., Zhang, Y., & Yao, X. Y. (2012, August). Applied Research of Colorimetric in the Teaching of Tongue Diagnosis. In 2012 International Symposium on Information Technologies in Medicine and Education (Vol. 1, pp. 426-429). IEEE.
14. Ranganathan, S. (2019). Rain Removal in the Images Using Bilateral Filter. International Journal of MC Square Scientific Research, 11(1), 9-14.
15. Manahoran, N., & Srinath, M. V. (2017). K-Means Clustering Based Marine Image Segmentation. International Journal of MC Square Scientific Research, 9(3), 26-29.