
Deep Learning based Approach for Efficient Segmentation and Classification using VGGNet 16 for Tissue Analysis to Predict Colorectal Cancer

Vidhya. S¹, PG Student, Mrs. R. Shijitha², Assistant Professor

¹ Department of Biomedical Instrumentation Engineering, School of Engineering, Avinashilingam Institute for Home Science and Higher Education for Women, Coimbatore-641043.

² Department of Biomedical Instrumentation Engineering, School of Engineering, Avinashilingam Institute for Home Science and Higher Education for Women, Coimbatore-641043.

¹ Email: [email protected], ² Email: [email protected]

Abstract— A pivotal strategy in a large number of image processing applications is to extract the significant features from the image data, through which the machine offers an interpretation, understanding, and description of the scene. Image-based machine learning and deep learning mechanisms have recently shown expert-level accuracy in medical image classification. Here, tissue images are segmented and classified to detect tumors. In this paper, an effective technique is presented for the segmentation and classification of tissue analysis images to detect colorectal tumor types, using the microsatellite instability mutation status (MSImut) and microsatellite stable (MSS) data types. In this study, K-Means based morphological segmentation and deep learning based VGGNet 16 classification are carried out to recognize colorectal cancer outcomes from tissue analysis samples.

Initially, the input image is pre-processed to remove excess noise. The K-Means with morphological segmentation technique is then applied to execute the segmentation process effectively. A deep learning based VGGNet CNN, a well-established algorithm, is employed for classification. A performance analysis is carried out and the outcomes are compared with existing techniques to prove the effectiveness of the proposed method.

Index Terms—Medical image, Tissue analysis, tumor detection, deep learning mechanism, VGGNet 16, K-Means based Morphological segmentation, Microsatellite instability mutation status (MSImut) and microsatellite stable (MSS).

I. INTRODUCTION

In recent years, the growth of computer-aided diagnosis (CAD) systems has helped reduce workload [1, 2]. Digital pathology continues to gain momentum worldwide for diagnostic purposes [5]. Deep learning techniques have recently emerged to solve several problems in the medical image processing area [3]. In recent years, the number of cancer cases has increased compared to previous years. A tumor is difficult to recognize in its primary stage. Once it is diagnosed, a course of treatment such as radiation or chemotherapy can be planned, but late diagnosis of a tumor is fatal for the patient.

Pathologists routinely examine tissue slides under a microscope and reach prognostic and diagnostic conclusions based on their interpretation. The growing number of tissue slides, and the significance of this kind of examination in both biological research and clinical medicine, make this visual approach tedious and inefficient. Digitized pathology slides, however, may help doctors produce faster and more precise diagnoses, and have the potential to transform the performance of current pathology prognosis and diagnosis. Immunohistochemistry (IHC) is the process of detecting targeted antigens (proteins) in tissue sections using labeled antibodies that exploit antibody-antigen interactions. A biopsy is usually performed once an irregularity is detected with ultrasound or mammography [4].

In a biopsy, a tissue sample is removed surgically to be examined. This can indicate which type of cells is cancerous, and also the cancer type to which they are related. Microscopy imaging data from biopsy samples are complex in nature and large in size. Consequently, pathologists face a substantial increase in workload for the diagnosis of histopathological cancer. A classification technique for the image classification of breast cancer tissue has been presented based on deep convolutional neural networks (CNN) [6].

CNNs are regarded as a better solution for classification problems [7] when the input is high-dimensional data such as imagery [8]. The network "learns" to extract localized features from the images and to classify the input based on the extracted features. Diagnosing disease in IHC images requires identifying the nuclei that contain positively stained biomarkers.

In H-DAB-stained images, the biomarker P53 reacts with the DAB stain and appears brown. Such systems have been developed recently, and considerable research effort is being spent on further enhancement.

In this paper, an approach for histology microscopy image analysis is presented for classifying the colorectal cancer type. The approach comprises image preprocessing, segmentation, and image classification for tumor detection with a histology image classifier. The objectives of this work are as follows:

 To propose a preprocessing technique to enhance the input image and to remove the noise present in the dataset images.

 To segment the preprocessed image using K-Means with morphological segmentation.

 To present a classifier approach that separates the abnormal and normal stages based on the deep learning technique VGGNet for tissue analysis, in order to detect microsatellite instability mutation status (MSI-mut) in colorectal tumors.

 To estimate the performance of the proposed system and prove the effectiveness of the proposed scheme.

Section II comprises information regarding tissue analysis for tumor segmentation and detection methods in different schemes. In Section III, the proposed methodology for tissue recognition is described stage by stage. Section IV provides information regarding the outcomes of the presented scheme in comparison with several techniques. In Section V, a general overview of the work and directions for further expansion are presented.

II. RELATED WORKS

[9] developed image processing algorithms, namely Gaussian and median filtering and gradient filtering implemented in MATLAB 2016b, for segmenting surface characteristics (pits and ridges) in oral tissue SEM images of normal (13 samples) and Oral Submucous Fibrosis (OSF) (36 samples) tissue. Following segmentation, quantitative parameters such as region, textural, and thickness features (range-filtered ridges, entropy, and contrast), in addition to the pit area and the ratio of ridge area to pit area, were measured.

[10] proposed a two-stage approach for the computation of oral histological images, in which a twelve-layer deep convolutional neural network (CNN) is employed for constituent-layer segmentation in the first stage, and in the second stage keratin pixels are recognized from the segmented keratin regions using texture-dependent (Gabor filter) features with trained random forests (RF).

[11] presented a novel CNN-based structured regression model that is shown to be capable of handling touching cells, inhomogeneous background noise, and large variations in shape and size. The proposed technique requires only a few training images with weak annotations (just one click near the center of the object).

[12] provided an overview of the occurrence of oral cancer, its different types, and different diagnostic techniques. A short introduction was also provided to the several stages of immunoanalysis, which comprise tissue image preparation, microscopic imaging, and whole-slide imaging analysis. The response to cancer therapy should be monitored continuously to ensure the effectiveness of the treatment process, which requires analytical outcomes as rapidly as possible to enhance patient care and quality.

[13] aimed to extract the paraffin components from the paraffin-embedded tissue of the oral cancer spectrum by means of three multivariate analysis (MVA) techniques: Partial Least Squares (PLS), Independent Component - Partial Least Squares (IC-PLS), and Independent Component Analysis (ICA). The estimated paraffin components were employed to eliminate the paraffin contribution from the tissue spectra. The three techniques were compared in terms of the efficacy of paraffin removal and the capability of retaining tissue information.


[14] investigated image processing analysis of oral cancer, oral potentially malignant disorders, and other oral diseases using optical instruments. The intention of this approach was to verify the usefulness of optical instruments in oral screening. About 314 patients screened at Tokyo Dental College with optical instruments between 2014 and 2018 were included in this examination. The visualization of fluorescence images was verified using objective and subjective estimations. The subjective evaluation for identifying oral cancer provided a sensitivity of about 98.0% and a specificity of about 43.2%. For the objective estimation of oral cancer identification, the specificity and sensitivity were about 61.9% and 62.7% for the mean luminance, 90.3% and 55.7% for the luminance ratio, 56.5% and 67.7% for the luminance standard deviation, and around 72.5% and 85.4% for the luminance coefficient of variation.

[15] presented a novel segmentation technique based on the Gabor filter. The input image is filtered through a bank of Gabor filters. The number of scales employed in constructing the filter bank is computed automatically and adaptively depending on the image size. The filtered outputs are treated as 2-D feature vectors, and principal component analysis (PCA) is carried out to reduce the dimensionality. The first principal component is used as the feature image for further processing toward segmentation. This feature image is given as input to both thresholding and K-means clustering for the final segmentation. The outputs of the different approaches are compared and the outcomes evaluated.

[16] considered the utilization of HSI as an imaging tool for the detection and analysis of cancer. The fundamental concepts connected to this technology are covered comprehensively. The most relevant, state-of-the-art studies that can be found in the literature on HSI for the analysis of cancer, both ex-vivo and in-vivo, were summarized and presented. Finally, the current limitations of this technology in the field of cancer detection were discussed, together with a number of insights into probable future steps in the development of the technology.

[17] evaluated and presented a novel automatic technique for the diagnosis of OSCC using deep learning technology on CLE images. The technique is compared against the textural feature-dependent machine learning approaches which represent the present state of the art. For this approach, CLE image sequences (7894 images) from patients diagnosed with OSCC were obtained from four desired locations in the oral cavity comprising OSCC lesions. The presented technique outperforms the traditional recognition of CLE images, with an area under the curve of about 0.96 and a mean accuracy of 88.3% (86.6% sensitivity, 90% specificity).

[18] employed Meta-Learning (ML) techniques such as Boosting and Bagging on Raman spectroscopy (RS) data. Furthermore, two-class classification of tumor tissue and normal tissue was carried out with LDA (Linear Discriminant Analysis), QDA (Quadratic Discriminant Analysis), and AdaBoost (Adaptive Boosting) classifiers. The study examined RS data from a total of 110 samples, comprising 53 normal samples and 57 tumors.

[19] presented a deep learning approach for the automatic detection and visual analysis of invasive ductal carcinoma (IDC) tissue regions in whole slide images (WSI) of breast cancer (BCa). Deep learning approaches are learn-from-data methods that involve computational modeling of the learning process. The technique is related to the way the human brain works, using diverse layers or levels of understanding to learn the most useful and representative features through a learned hierarchical representation.

[20] proposed a fused SVM and Deep Boltzmann Machine (DBM) classification for learning and classifying normal tissue and pre- and post-cancerous tissue from hyperspectral imaging. Mixed pixels from the background are estimated for the recognition of cancerous areas. The result for one patient hypercube was presented to validate the deep learning technique, as a pixel-wise probability map of cancerous and typical healthy tissue on the hyperspectral image.


III. PROPOSED WORK

This section offers a detailed explanation of the proposed mechanism. The flow diagram of the proposed mechanism is shown in Figure 1.

Figure 1 Flow of the proposed mechanism

A. Preprocessing

Initially, the input image is pre-processed in order to remove excess noise present in the image. The presence of noise affects the overall quality of the image; thus, pre-processing is performed to enhance the image quality and to obtain better results in further processing. The image is pre-processed immediately after acquisition.

Generally, preprocessing of the image is done to remove the excess noise present in the image and to correct blurred portions of the image obtained from the input dataset. Grayscale conversion and filtering-based enhancement are performed in this step to obtain the enhanced portion of the image; a sharpening effect is also applied to acquire an enhanced and clearer image. The filter employed in this stage is a high-pass filter, since this approach works with image samples required for medical research. The high-pass filter has to be applied with a mask to obtain a better image; to accomplish this, the Sobel operator is employed.
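Purely as an illustration, the following is a minimal Python sketch of such a preprocessing stage, assuming OpenCV and NumPy and a hypothetical file name tissue.png; the filter sizes and blending weight are placeholder assumptions, not values reported in the paper.

```python
# Illustrative preprocessing sketch (assumed parameters, not the paper's exact settings).
import cv2

# Load the tissue image and convert it to grayscale.
img = cv2.imread("tissue.png")                        # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Median filtering suppresses impulse noise while preserving edges.
denoised = cv2.medianBlur(gray, 3)

# High-pass enhancement with a Sobel mask: the gradient magnitude
# emphasizes edges and is added back to sharpen the image.
gx = cv2.Sobel(denoised, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(denoised, cv2.CV_64F, 0, 1, ksize=3)
grad = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
enhanced = cv2.addWeighted(denoised, 1.0, grad, 0.5, 0)

cv2.imwrite("enhanced.png", enhanced)
```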

B. K-Means with Morphological segmentation

Segmentation is the process of partitioning a digital image into multiple segments (sets of pixels, also known as image objects). Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.). This subsection provides a brief overview of the K-Means with morphological segmentation technique. In this stage, the k-means algorithm is applied to execute the segmentation process effectively; its clustering-based nature can provide better performance than other techniques. The K-means algorithm is used to handle the vagueness of the information, and the notions of lower and upper approximation from rough sets are vital for k-means clustering algorithms. The calculation of the cluster centroids is expected to be improved to increase the effectiveness of the lower as well as upper bounds.

The morphological operation depends on the morphological features of the image. Dilation and erosion are the two fundamental morphological operations employed most commonly: dilation is employed to expand regions of the image, whereas erosion is employed to shrink them. Edge detection depends on the intensity level; during the morphological operation, edge detection is simple and easy and yields a better segmentation outcome. The changing level of image intensity is employed for edge detection. In morphological processing, erosion and dilation are commonly employed for the reconstruction and enhancement of the image.

Steps followed for the morphological segmentation are as follows:

(i) Read the input image from the database.

(ii) A binary image is then obtained by applying a threshold T, so that

B(c, d) = 1, if g(c, d) ≥ T; 0, otherwise (1)

where g(c, d) signifies the grey-scale intensity of the image at pixel (c, d).

(iii) Apply erosion to the binary image with an appropriate structuring element; the resultant image is termed the eroded image,

B ● C = (B ⊕ C) ⊖ C (2)

where B denotes the binary image and C the structuring element.

(iv) Apply dilation,

B ○ C = (B ⊖ C) ⊕ C (3)

where B denotes the binary image and C signifies the structuring element.

(v) Generate the ROI of the colorectal tumor tissue area.

The process is repeated until all tissue areas are properly segmented.
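A minimal sketch of the segmentation steps above, assuming OpenCV in Python; the number of clusters, the foreground-selection rule, and the structuring-element size are illustrative assumptions rather than values reported in the paper.

```python
# Illustrative K-Means + morphological segmentation sketch (assumed parameters).
import cv2
import numpy as np

gray = cv2.imread("enhanced.png", cv2.IMREAD_GRAYSCALE)   # preprocessed image

# Steps (i)-(ii): cluster pixel intensities with k-means (k = 3 assumed) and
# keep the brightest cluster as the foreground, which plays the role of threshold T.
pixels = gray.reshape(-1, 1).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, 3, None, criteria, 5,
                                cv2.KMEANS_RANDOM_CENTERS)
fg_cluster = int(np.argmax(centers))
binary = (labels.reshape(gray.shape) == fg_cluster).astype(np.uint8) * 255

# Steps (iii)-(iv): erosion followed by dilation with a structuring element C.
C = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
eroded = cv2.erode(binary, C, iterations=1)
dilated = cv2.dilate(eroded, C, iterations=1)

# Step (v): the cleaned mask defines the ROI of the tumor tissue area.
roi = cv2.bitwise_and(gray, gray, mask=dilated)
cv2.imwrite("segmented_roi.png", roi)
```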

C. Deep Learning based VGGNet for classification:

After feature extraction, the level of the tissue for colorectal tumor analysis has to be identified, i.e., whether the stage is mild, moderate, or severe. A deep learning based VGGNet CNN, a well-established algorithm, is used here for classification; it is a pre-trained convolutional method. The CNN estimates the discrepancy between the predicted and the target variables and outputs class probabilities as statistical distributions. The CNN first reads and resizes the picture and then measures the class likelihood in the scoring phase. CNNs mark a significant advance in the identification and analysis of images.

The Improved Residual VGGNet CNN technique is arranged in the following layers:

 ReLU layers

 Convolutional layers

 Pooling layers

 a Fully connected layer

CNNs require comparatively little pre-processing compared with other image classification algorithms, and can be used for various purposes in various fields.

a) Convolution layer

The main purpose of this layer is to extract features from the image data. The convolution layer is consistently the main stage of a CNN: it detects features in the input image and generates feature maps.

b) ReLU layer

A simple layer of rectifying units follows the convolution layer. To introduce nonlinearity into the network, this activation is applied to the feature maps; negative values are set to zero here.

c) Pooling layer:

The pooling stage gradually reduces the spatial size of the input and thereby reduces overfitting, since fewer parameters are required while the relevant parameters are retained.

d) Flattening layer

This is a simple step that flattens the pooled feature maps into a single sequential column vector.

e) Fully connected layer


In this layer the extracted features are combined and associated with the output classes. The classification process is finalized by minimizing the classification error percentage; the error is monitored and recorded throughout.

f) Softmax

Softmax is used in neural networks to map the unnormalized network outputs to a probability distribution over the predicted output classes. Softmax has been used for several problems in diverse fields of analysis. The decimal probabilities sum to 1.0.

Two variants of Softmax can be distinguished:

 Full Softmax, which estimates a probability for every possible class.

 Candidate sampling, in which Softmax estimates a probability for all the positive labels but only for a random sample of the negative labels.
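To make the layer types a)-f) concrete, the following is a minimal, generic Keras sketch of a small convolutional classifier ending in a softmax output; it is not the VGGNet 16 architecture itself, and all layer sizes are illustrative assumptions.

```python
# Illustrative CNN stack showing the layer types described above (assumed sizes).
from tensorflow.keras import layers, models

model = models.Sequential([
    # a) Convolution layer: extracts local features and produces feature maps.
    layers.Conv2D(32, (3, 3), input_shape=(224, 224, 3)),
    # b) ReLU layer: sets negative activations to zero to add nonlinearity.
    layers.ReLU(),
    # c) Pooling layer: gradually reduces the spatial size of the feature maps.
    layers.MaxPooling2D((2, 2)),
    # d) Flattening layer: turns the pooled maps into one feature vector.
    layers.Flatten(),
    # e) Fully connected layer: combines the features for classification.
    layers.Dense(64, activation="relu"),
    # f) Softmax: maps the outputs to class probabilities that sum to 1.0
    #    (two classes here, e.g. MSImut vs. MSS).
    layers.Dense(2, activation="softmax"),
])
model.summary()
```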

This CNN makes it possible to measure the discrepancy between the predicted and the target variables. The CNN estimates the class probabilities from what it has learned. In this method, the CNN first reads and resizes the image and then calculates its class likelihood.

The classification response is computed as

F = det(q) − k (trace(q))² (4)

where F is the feature response, q is the local feature matrix, and β1, β2 are the classified feature values (the eigenvalues of q), so that

det(q) = β1 β2 (5)

trace(q) = β1 + β2 (6)

The CNN classification is then concluded as

F = β1 β2 − V (β1 + β2)² (7)

where V is the empirical constant (corresponding to k in Eq. (4)).
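As a small numerical illustration of the response in Eqs. (5)-(7), using hypothetical values β1 = 4, β2 = 2, and V = 0.05:

```python
# Hypothetical values illustrating F = beta1*beta2 - V*(beta1 + beta2)**2.
beta1, beta2, V = 4.0, 2.0, 0.05          # assumed values, not from the paper
det_q = beta1 * beta2                     # Eq. (5): 8.0
trace_q = beta1 + beta2                   # Eq. (6): 6.0
F = det_q - V * trace_q ** 2              # Eq. (7): 8.0 - 0.05 * 36 = 6.2
print(F)                                  # 6.2
```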

Pseudo code (VGGNet CNN classification)

Input: enhanced image features F_im

Output: predicted class labels F_c

Initialize the network layers
Initialize the trained features and the labels
Train split = 70%
Test split = 30%
Lab = unique(label)
for ii = 1 : length(Lab)
    class = find(label == Lab(ii))                     // samples belonging to class ii
    traincut = round(0.7 * length(class))              // 70% of this class for training
    traindata = [traindata; trainfeatures(class(1:traincut), :)]
    trainlabel = [trainlabel; label(class(1:traincut))]
end
net = train the VGGNet 16 layers on (traindata, trainlabel)
predictedlabels = classify(net, traindata)             // train-set predictions (confusion matrix)
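One way to realize the pseudocode above is transfer learning on a pretrained VGG16. The sketch below is an assumption-laden Keras illustration (the paper reports a MATLAB implementation): X is a hypothetical array of 224x224 RGB tissue patches, y holds binary labels (MSS vs. MSImut), and a 70/30 train/test split is used; all hyperparameters are placeholders.

```python
# Illustrative VGG16 transfer-learning sketch (assumed data and hyperparameters).
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.utils import to_categorical

# X: (N, 224, 224, 3) tissue patches, y: 0 = MSS, 1 = MSImut (hypothetical data).
X = np.random.rand(40, 224, 224, 3).astype("float32") * 255.0
y = np.random.randint(0, 2, size=40)

X = preprocess_input(X)
x_train, x_test, y_train, y_test = train_test_split(
    X, to_categorical(y, 2), test_size=0.3, random_state=0)   # 70% / 30% split

# Pretrained VGG16 convolutional base with a new two-class softmax head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # keep the pretrained filters fixed

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3, batch_size=8,
          validation_data=(x_test, y_test))
pred = model.predict(x_test).argmax(axis=1)  # predicted class labels
```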

IV. PERFORMANCE ANALYSIS

The performance analysis of the proposed system is discussed in this section. Figure 2 shows the input colorectal tissue image and Figure 3 depicts the filtered image.

A. Performance outcomes for data type MSImut

The performance analysis is performed with the MSImut dataset and the attained outcomes are illustrated in this section.


Figure 2 Input image

Figure 3 Filtered image

Figure 4 Masked image


Figure 5 Segmented image

Figure 6 Output of disease

Figure 7 Train set confusion matrix

Figure 2 is the input image, Figure 3 shows the filtered image, the grey-scale converted (masked) image is in Figure 4, and the segmented image in Figure 5. The output of the disease detection is shown in Figure 6. The training-set confusion matrix is estimated and shown in Figure 7.

Figure 8 comparative analysis of proposed and existing system


Figure 8 presents the comparative analysis of the proposed and existing techniques. The analysis shows that the proposed system performs better than the existing one.

B. Performance outcomes for data type MSS

The performance analysis is performed with the MSS dataset and the attained outcomes are illustrated in this section. Figure 9 shows the input image, Figure 10 the filtered image, the grey-scale converted (masked) image is in Figure 11, and the segmented image in Figure 12. The output of the disease detection is shown in Figure 13. The training-set confusion matrix is estimated and shown in Figure 14.

Figure 9 Input image

Figure 10 Filtered image


Figure 11 Masked image

Figure 12 Segmented image

Figure 13 Output of disease

Figure 14 comparative analysis of proposed and existing system


Figure 14 presents the comparative analysis of the proposed and existing techniques. The analysis shows that the proposed system performs better than the existing one.

V. CONCLUSION

In this approach, tissue images were analysed to predict colorectal tumors by means of effective segmentation and classification of the input images. The input images from the two data types, MSImut and MSS, were first preprocessed and then segmented using the K-means based morphological segmentation technique. The segmented outcome was then classified with the VGGNet 16 classifier, an effective deep learning technique, to predict the tumor region more accurately. The performance of the presented technique was evaluated in the MATLAB environment. The attained outcomes are illustrated and compared with existing techniques to prove the effectiveness of the proposed strategy. The performance is estimated in terms of accuracy, sensitivity, specificity, precision, recall, and F-measure. From the analysis, it is evident that the presented technique performs better than the traditional methodologies.
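For reference, the reported metrics can be computed from a binary confusion matrix as sketched below; the counts TP, TN, FP, and FN are hypothetical placeholders, not the paper's results.

```python
# Metric definitions used in the evaluation, on a hypothetical confusion matrix.
TP, TN, FP, FN = 90, 85, 10, 15            # placeholder counts, not real results

accuracy    = (TP + TN) / (TP + TN + FP + FN)
sensitivity = TP / (TP + FN)               # recall
specificity = TN / (TN + FP)
precision   = TP / (TP + FP)
recall      = sensitivity
f_measure   = 2 * precision * recall / (precision + recall)

print(accuracy, sensitivity, specificity, precision, f_measure)
```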

REFERENCES

[1] A. Golatkar, D. Anand, and A. Sethi, "Classification of breast cancer histology using deep learning," in International Conference Image Analysis and Recognition, 2018, pp. 837-844.

[2] S. Vesal, N. Ravikumar, A. Davari, S. Ellmann, and A. Maier, "Classification of breast cancer histology images using transfer learning," in International Conference Image Analysis and Recognition, 2018, pp. 812-819.

[3] J. Amin, M. Sharif, N. Gul, M. Yasmin, and S. A. Shad, "Brain tumor classification based on DWT fusion of MRI sequences using convolutional neural network," Pattern Recognition Letters, vol. 129, pp. 115-122, 2020.

[4] N. Bayramoglu and J. Heikkilä, "Transfer learning for cell nuclei classification in histopathology images," in European Conference on Computer Vision, 2016, pp. 532-539.

[5] G. Chundayil Madathil, S. Iyer, K. Thankappan, G. S. Gowd, S. Nair, and M. Koyakutty, "A novel surface enhanced Raman Catheter for rapid detection, classification, and grading of oral cancer," Advanced healthcare materials, vol. 8, p. 1801557, 2019.

[6] G. Carneiro, Y. Zheng, F. Xing, and L. Yang, "Review of deep learning methods in mammography, cardiovascular, and microscopy image analysis," in Deep Learning and Convolutional Neural Networks for Medical Image Computing, ed: Springer, 2017, pp. 11-32.

[7] H. Chen, Q. Dou, X. Wang, J. Qin, and P. A. Heng, "Mitosis detection in breast cancer histology images via deep cascaded networks," in Thirtieth AAAI Conference on Artificial Intelligence, 2016.

[8] B. Wei, Z. Han, X. He, and Y. Yin, "Deep learning model based breast cancer histopathological image classification," in 2017 IEEE 2nd international conference on cloud computing and big data analysis (ICCCBDA), 2017, pp. 348-353.

[9] R. Nag, M. Pal, R. R. Paul, J. Chatterjee, and R. K. Das, "Segmentation and analysis of surface characteristics of oral tissues obtained by scanning electron microscopy to differentiate normal and oral precancerous condition," Tissue and Cell, vol. 59, pp. 82-87, 2019.

[10] D. K. Das, S. Bose, A. K. Maiti, B. Mitra, G. Mukherjee, and P. K. Dutta, "Automatic identification of clinically relevant regions from oral tissue histological images for oral squamous cell carcinoma diagnosis," Tissue and Cell, vol. 53, pp. 111-119, 2018.

[11] Y. Xie, F. Xing, X. Kong, H. Su, and L. Yang, "Beyond classification: structured regression for robust cell detection using convolutional neural network," in International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015, pp. 358-365.

[12] G. Ulaganathan, K. T. M. Niazi, S. Srinivasan, V. Balaji, D. Manikandan, K. S. Hameed, et al., "A clinicopathological study of various oral cancer diagnostic techniques," Journal of pharmacy & bioallied sciences, vol. 9, p. S4, 2017.

[13] P. Meksiarun, M. Ishigaki, V. A. Huck-Pezzei, C. W. Huck, K. Wongravee, H. Sato, et al., "Comparison of multivariate analysis methods for extracting the paraffin component from the paraffin-embedded cancer tissue spectra for Raman imaging," Scientific reports, vol. 7, p. 44890, 2017.


[14] T. Morikawa, A. Kozakai, A. Kosugi, H. Bessho, and T. Shibahara, "Image processing analysis of oral cancer, oral potentially malignant disorders, and other oral diseases using optical instruments," International journal of oral and maxillofacial surgery, vol. 49, pp. 515-521, 2020.

[15] A. A. Nawandhar, L. Yamujala, and N. Kumar, "Performance Analysis of Image Segmentation for Oral Tissue," in 2017 Ninth International Conference on Advances in Pattern Recognition (ICAPR), 2017, pp. 1-6.

[16] M. Halicek, H. Fabelo, S. Ortega, G. M. Callico, and B. Fei, "In-vivo and ex-vivo tissue analysis through hyperspectral imaging techniques: Revealing the invisible features of cancer," Cancers, vol. 11, p. 756, 2019.

[17] P. Pande, S. Shrestha, J. Park, I. Gimenez-Conti, J. Brandon, B. E. Applegate, et al., "Automated analysis of multimodal fluorescence lifetime imaging and optical coherence tomography data for the diagnosis of oral cancer in the hamster cheek pouch model," Biomedical optics express, vol. 7, pp. 2000-2015, 2016.

[18] M. Sharma, L. Sharma, M.-J. Jeng, L.-B. Chang, S.-F. Huang, and S.-L. Wu, "Meta-Learning Techniques to Analyze the Raman Data for Optical Diagnosis of Oral Cancer Detection," in 2019 IEEE International Conferences on Ubiquitous Computing & Communications (IUCC) and Data Science and Computational Intelligence (DSCI) and Smart Computing, Networking and Services (SmartCNS), 2019, pp. 644-647.

[19] A. Cruz-Roa, A. Basavanhally, F. González, H. Gilmore, M. Feldman, S. Ganesan, et al., "Automatic detection of invasive ductal carcinoma in whole slide images with convolutional neural networks," in Medical Imaging 2014: Digital Pathology, 2014, p. 904103.

[20] P. R. Jeyaraj, B. K. Panigrahi, and E. R. Samuel Nadar, "Classifier Feature Fusion Using Deep Learning Model for Non-Invasive Detection of Oral Cancer from Hyperspectral Image," IETE Journal of Research, pp. 1-12, 2020.
