


Academic year: 2022





Deep Learning Model for Image-Based Diagnosis of COVID-19 Classification Using Convolution Neural Network

Mohamed Yacin Sikkandar,

Department of Medical Equipment Technology, College of Applied Medical Sciences,

Majmaah University, Al Majmaah, 11952, Saudi Arabia. Email: m.sikkandar@mu.edu.sa


COVID-19 is a pandemic disease and is considered life-threatening. In the IT community, machine learning (ML) and deep learning (DL) approaches can play a vital role in identifying COVID-19 patients through visual analysis of their chest X-ray (CXR) images. The aim of this study was to evaluate the effect of layer depth and degree of fine-tuning in CXR-based COVID-19 transfer learning with a deep convolutional neural network (CNN), in order to identify effective transfer-learning strategies and to classify CXR images into two classes: COVID-19 present or absent. Features are extracted from the CXR images using the grey-level difference method, and a genetic algorithm is used to select features from the extracted set. A fine-tuned deep CNN classifier then categorises CXR images as either positive or negative. Classification performance is improved by the use of 5-fold cross-validation. To avoid over-fitting, each fold was separated into independent training and validation sets using an 80/20 split. The proposed method was evaluated on two COVID-19 X-ray datasets, attaining accuracy rates of 96.09 percent and 98.09 percent on the first and second datasets, respectively.

Keywords: COVID-19, convolutional neural network, chest X-ray images, grey-level difference method, genetic algorithm, VGG-16 model.

1. Introduction

The spread of COVID-19 is a worldwide problem, and screening and diagnosis of the disease are difficult. Many researchers across the globe are proactively involved, yet few have solved the problem efficiently. The analysis of medical images is one of the well-known approaches that can help in the accurate diagnosis of COVID-19 complications. COVID-19 is caused by a member of the coronavirus family. Deep learning demonstrates high efficiency in many image-processing applications, such as image analysis [1] and image classification and segmentation, including the use of chest imaging [2, 3] and pathological identification in general. A descriptor based on image moments [4] classifies an image by extracting its important features, which can then be used for the classification task with classifiers such as SVM [5]. Deep neural network approaches [6], unlike hand-crafted features, offer high performance in classifying images based on learned characteristics. Various efforts have applied ML methods to classify CXR images into a COVID-19 patient class or a normal class; both kinds of initiative have required a comprehensive path of research. For example, to diagnose COVID-19 automatically from chest radiographic images, the authors of [7] proposed a CNN model; the classification accuracy reported with the MobileNet architecture is 96.78 percent [8].

In the same context, the study in [9] used the transfer-learning method; the accuracies of InceptionV3 and Inception-ResNetV2 are 97% and 87%, respectively.


Figure 1. Chest X-ray images

This ground-glass pattern can occur along the edges of the lung vessels at the early stages of COVID-19, but on X-rays it may be hard to identify visually, appearing as an asymmetric diffuse air-space opacity. Given that the number of suspected patients is rising rapidly compared with the small number of highly qualified radiologists, diagnosis can be accelerated by supporting radiologists with an automated screening procedure that offers objective and scalable performance. DL, a particular domain of AI technology, has made considerable strides in recent years in processing and diagnosing medical images and is a potentially important platform for solving these challenges. Given the shortage of evidence reported to date, an analysis was carried out on DL approaches to COVID-19 diagnosis from CXR. As only minimal data are accessible, previous studies have concentrated on developing new DL frameworks for efficient diagnostic algorithms centred on deep CNNs.

Previous experiments have therefore focussed only on the effectiveness of a newly developed network through comparison between various CNNs, so there has been no comparative study of the effect of layer depth, defined as scalability, and the degree of fine-tuning in CNN transfer learning [11].

In this paper, a COVID-19 system for classifying CXR images is proposed. The motivation for this study is to suggest an accurate methodology for classifying CXR images by COVID-19 disease severity.

The pipeline comprises pre-processing, feature extraction, feature selection and classification; these approaches are explained in the proposed-method section. A variety of publications have recently appeared that apply pre-trained CNNs to X-ray and CT photos for computer-aided identification of COVID-19. The new CAD methods for identifying COVID-19 from X-rays and CT scans have been outlined by Shi et al. [12]. On a controlled set of 50 COVID-19 and 50 non-COVID instances, Narin et al. [13] used ResNet-50 as a COVID-19 predictor. For the classification of COVID-19, Castiglioni et al. [14] used ResNet-50 with a balanced set of 250 COVID-19 cases and 250 non-COVID cases.

Hemdan et al. [15] used DenseNet on a composed dataset of 25 COVID-19 and 25 non-COVID pictures. Panwar et al. [16] proposed a transfer-learning model, nCOVnet, that adds 5 custom layers to the VGG-16 network; 142 COVID-19 and 142 regular photos were employed in the process.


Pereira et al. [17] used features extracted with the InceptionV3 method in conjunction with texture features from local binary patterns (LBP), elongated quinary patterns (EQP), local directional number (LDN), locally encoded transform feature histogram (LETRIST) and oriented basic image features (OBIFs). To address a class-imbalance issue, the training data were resampled. The system was used to classify photos into COVID-19, regular, MERS, SARS, varicella and pneumocystis classes. Of the 1144 samples, 90 images were in the COVID-19 group.

Toraman et al. [18] used a 4-layer capsule network with a primary capsule layer. The tool used 231 COVID-19 photos, 1050 photographs of pneumonia and 1050 no-finding pictures. Zhang et al. [19] provide a DL patient-screening model for coronavirus utilising chest X-ray pictures; 100 chest X-ray images from 70 COVID-19 patients and 1431 X-ray pictures from other patients were used, listed as COVID-19 and non-COVID-19, respectively. This model is made up of a backbone network, a classification head and an anomaly-detection head. The backbone network is an 18-layer CNN pre-trained on the ImageNet dataset, which offers a broad, general image-classification data collection. The model diagnosed 96 percent of COVID-19 and 70.65 percent of non-COVID-19 patients correctly.

To detect COVID-19 with a purpose-built CNN architecture known as COVNet, Li et al. [20] used patients' chest CT pictures. This research group obtained sensitivity, specificity and area under the receiver operating characteristic (ROC) curve of 90 percent, 96 percent and 0.96, respectively. However, ensemble models still have two weaknesses. First, they are prone to over-fitting in most cases because of the limited number of CXR images in the medical domain. Second, an ensemble model is computationally expensive, as it has to extract patterns using millions of parameters during the training step.

This also leads to the need to tune the hyper-parameters carefully, which is a challenging task in itself.

Existing CXR-based methods for COVID-19 diagnosis have three major limitations. First, some do not perform well because they require a separate classifier after the feature-extraction step, which is a demanding task. Second, the spatial relationship between the regions of interest (ROIs) in images has been ignored in the literature, although it helps improve performance on CXR images. Finally, existing deep learning-based systems need a large number of training parameters, which not only creates a computational burden during classification but also leads to over-fitting because of the limited availability of COVID-19 data. In this study, a COVID-19 classification system is proposed that extracts characteristics from the COVID-19 pictures with a descriptor based on the grey-level difference method (GLDM); a genetic algorithm is then used to select the important features. The model's output is assessed with two COVID-19 X-ray datasets. The proposed method is shown to be a novel DL model through the combination of VGG-16 and cross-fold validation, which improves the models for CXR image classification. This manuscript is structured as follows: the proposed model is presented in Section 2; Section 3 describes the research results of the proposed model; and the paper is concluded in Section 4.

2. Materials and Methods

2.1 Image Pre-processing

Since picture details were obtained from several centres in this experiment, most photographs have varying contrasts and dimensions. The photos used in this analysis therefore needed contrast correction with the histogram-equalisation method and a single scale before evaluation. In this research, pre-processing was conducted using the CLAHE method, which has been applied in previous research in connection with lung segmentation and classification. Figure 2 displays CXR pictures corrected with the CLAHE methodology. Each picture was resized to a uniform size of 800 × 800 for accuracy of image analysis [21].
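The contrast correction above can be sketched as follows. This is a minimal stdlib version of plain global histogram equalisation, the core remapping step; the CLAHE method used in the paper additionally tiles the image and clips the histogram (a library implementation such as OpenCV's would normally be used), so this is an illustrative sketch rather than the authors' exact pre-processing code.

```python
def equalize(image, levels=256):
    """Globally equalise a grayscale image given as a list of rows."""
    flat = [p for row in image for p in row]
    n = len(flat)
    # Histogram of grey levels.
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Remap each pixel so that the CDF of the output is roughly linear,
    # spreading the grey levels over the full dynamic range.
    def remap(p):
        return round((cdf[p] - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
    return [[remap(p) for p in row] for row in image]

# Toy 3x3 image with low contrast (most values clustered near 50-90).
img = [[50, 50, 60], [60, 70, 70], [80, 90, 200]]
out = equalize(img)  # values now span the full 0-255 range
```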

2.2. Gray-level difference method (GLDM)

This method [22] is based on pairs of pixels separated by a particular displacement δ and their absolute grey-level difference. Eq. (1) defines the displacement parameter, Eq. (2) the difference image, and Eq. (3) the probability density function:

δ = (Δx, Δy) (1)

S_δ(x, y) = |S(x, y) − S(x + Δx, y + Δy)| (2)

D(i | δ) = Prob[S_δ(x, y) = i] (3)

where Δx and Δy are integer parameters of the process, S(x, y) is the input image, and x and y are image locations with 1 ≤ x ≤ M and 1 ≤ y ≤ N (M and N being the image dimensions). A feature vector is then calculated by concatenating the contrast, angular second moment, entropy and mean computed from the PDF.

2.3. GLDM feature extraction

To extract texture features from the CXR images, an effective feature-extraction process is needed, so the GLDM feature extractor was used for this task. In this analysis, four forms of displacement vector are considered: (0, d), (−d, d), (d, 0) and (−d, −d), where d is the inter-pixel distance; these correspond to the four principal directions 0°, 45°, 90° and 135°. Five texture characteristics were calculated from D(i | δ): contrast, angular second moment, entropy, mean and inverse difference moment. A probability density function (PDF) was obtained for each of the four displacement vectors, and the texture characteristics were determined for each PDF. For the experimental findings, the displacement parameters were set to (1, 1), (2, 2) and (5, 5).
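The GLDM computation for one displacement vector can be sketched as below: build the histogram of absolute grey-level differences (Eqs. (1)-(3)), normalise it to a PDF, and derive the five texture features. This is a from-scratch stdlib sketch of the standard GLDM definitions, not the authors' code.

```python
import math

def gldm_features(image, dx, dy, levels=256):
    """Grey-level difference features for one displacement (dx, dy).

    Returns (contrast, angular second moment, entropy, mean,
    inverse difference moment) computed from the PDF of absolute
    grey-level differences.
    """
    rows, cols = len(image), len(image[0])
    hist = [0] * levels
    count = 0
    for y in range(rows):
        for x in range(cols):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < cols and 0 <= y2 < rows:
                # Absolute grey-level difference S_delta(x, y), Eq. (2).
                hist[abs(image[y][x] - image[y2][x2])] += 1
                count += 1
    pdf = [h / count for h in hist]          # D(i | delta), Eq. (3)
    contrast = sum(i * i * p for i, p in enumerate(pdf))
    asm = sum(p * p for p in pdf)            # angular second moment
    entropy = -sum(p * math.log2(p) for p in pdf if p > 0)
    mean = sum(i * p for i, p in enumerate(pdf))
    idm = sum(p / (1 + i * i) for i, p in enumerate(pdf))
    return contrast, asm, entropy, mean, idm

# Four principal directions for distance d, as described above.
d = 1
displacements = [(0, d), (-d, d), (d, 0), (-d, -d)]
features = [gldm_features([[0, 1], [2, 3]], dx, dy) for dx, dy in displacements]
```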

2.4. Feature selection

The first aim of the proposed feature-selection scheme is to obtain at least the same accuracy rate with fewer features; the second aim is to increase the precision rate. Gathering information on extra features costs time and money, and redundant information also wastes time during categorisation and diagnosis. To achieve a better response and a better correlation between the features and the result, it is preferable to reduce the dimensionality with respect to the number of features. The genetic algorithm (GA) is a heuristic search method. It can be used to look for an optimal solution in spaces too expansive to be examined exhaustively. The algorithm solves problems by mimicking natural selection, the process that drives biological evolution, in both constrained and unconstrained settings. It has many applications, including the natural sciences, IT, finance and economics, industry, administration and engineering.

The GA technique is an iterative approach in which a population searches for answers to a problem through a limited number of candidate solutions, the chromosomes. The simple GA proceeds as follows: an initial chromosome population is formed randomly or heuristically. In every evolutionary step (generation), the population's chromosomes are decoded and evaluated by a fitness function that reflects the optimisation problem over the search space. Chromosomes are selected according to their fitness to form the next population (the next generation). There are many selection schemes, among which fitness-proportionate selection is one of the simplest: chromosomes are chosen with a probability matching their relative fitness, which ensures that the expected number of times an individual is chosen corresponds to its relative performance in the population. High-fitness chromosomes therefore have a good chance of reproducing and transmitting their genes to new individuals, while low-fitness chromosomes do not. New genes are introduced into the population by the genetic operations called crossover and mutation. Crossover exchanges, with a given probability, parts of the genomes of two selected individuals (parents) to produce two new chromosomes (offspring). Mutation, performed randomly with a certain low probability, avoids premature convergence to local optima by randomly exploring new points in the search space.

GA is a stochastic iterative method and cannot guarantee that the optimum is reached. A maximum number of generations or an optimal fitness value may therefore be specified as a halting condition.

The GA is used here as a tool for selecting the right features. In this methodology, the first step is to create a random binary vector S composed of the features, as in Eq. (4).

s_j(i) = 1 if vector s_j contains feature i; 0 otherwise (4)

An objective function is then specified for each chosen combination of features based on the misclassification criterion. This objective function is minimised as a penalty function in order to find the optimal blend of features. The misclassification rate (mcr) here is mcr = 1 − accuracy rate and is obtained with Eq. (5), where m is the number of classification targets and a_ij is the number of cases in which target i is identified by the classification system as target j. The elements a_ij form the confusion matrix, which depends on the problem and the dataset:

mcr = (Σ_ij a_ij − Σ_{i=j} a_ij) / Σ_ij a_ij ;  i, j = 1, 2, …, m (5)

Now a weighted sum of mcr and nf (the number of selected features) is the target to be minimised:

Min Z = w_1 · mcr + w_2 · nf (6)

Dividing the right-hand side of Eq. (6) by w_1, we have:

Min Z = mcr + (w_2 / w_1) · nf (7)

Assuming w_2 / w_1 = W, the objective function becomes:

Min Z = mcr + W · nf (8)

Now, W can be defined as W ∝ mcr, which leads to:

Min Z = mcr · (1 + β · nf) (9)

where β is an additional feature penalty with 0 ≤ β ≤ 1. Using this objective function, the GA attempts to find the combination with the fewest features that reduces both cost and the misclassification rate. A predefined number of iterations is used as the criterion for terminating the GA.
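The GA loop and the objective of Eq. (9) can be sketched as below. The selection, crossover and mutation details (truncation selection, one-point crossover, 5% mutation rate) and the `toy_mcr` objective are illustrative assumptions, not the paper's exact configuration; in the real system `mcr_of` would train and evaluate the classifier on the masked feature set.

```python
import random

def ga_select(n_features, mcr_of, beta=0.01, pop=20, gens=30, seed=0):
    """Toy GA minimising Z = mcr * (1 + beta * nf), as in Eq. (9).

    mcr_of(mask) must return the misclassification rate obtained with
    the feature subset encoded by the binary mask.
    """
    rng = random.Random(seed)

    def fitness(mask):
        nf = sum(mask)
        if nf == 0:
            return float("inf")        # an empty feature set is invalid
        return mcr_of(mask) * (1 + beta * nf)

    # Random initial population of binary chromosomes, Eq. (4).
    population = [[rng.randint(0, 1) for _ in range(n_features)]
                  for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population, key=fitness)
        next_gen = ranked[:2]                          # elitism
        while len(next_gen) < pop:
            p1, p2 = rng.sample(ranked[:pop // 2], 2)  # truncation selection
            cut = rng.randrange(1, n_features)         # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_features):                # low-rate mutation
                if rng.random() < 0.05:
                    child[i] ^= 1
            next_gen.append(child)
        population = next_gen
    return min(population, key=fitness)

# Hypothetical objective: features 0 and 2 are informative, odd ones are noise.
def toy_mcr(mask):
    useful = mask[0] + mask[2]
    return 0.5 - 0.2 * useful + 0.01 * sum(mask[1::2])

best = ga_select(n_features=6, mcr_of=toy_mcr)
```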

2.5. Classification process using the CNN scheme

This research used the deep CNN VGG-16 as the backbone network. VGG is a CNN developed and trained by the Visual Geometry Group at the University of Oxford. The numbers 16 and 19 denote the number of layers with trainable weights in the VGG networks. The VGG architecture has been widely embraced, both in general and in medical-imaging classification, and accepted as a state of the art. Since VGG-16 and VGG-19 are similar networks that differ only in layer profile, a comparative layer-depth assessment can be done under the same architectural conditions.

Figure 2. Architectural representation of CNN model

When the training dataset is limited, transferring a network pre-trained on a big annotated dataset and fine-tuning it for a defined task can be an effective way to obtain reasonable precision with less training time. While the classification of CXR photos by disease differs from the classification of objects and natural images, they share learned low-level characteristics. The model weights were initialised from pre-training on a general image dataset during the fine-tuning stage of transfer learning with deep CNNs.


The 1458 pictures of the training dataset were uniformly split into five folds. This was done in order to perform five-fold cross-validation for model-training assessment, thus eliminating unnecessary bias [33-35]. The dataset in each fold was allocated into separate training and validation sets with an 80/20 split. The chosen validation set was an entirely different fold from the training folds, and the training state was assessed during the training course. After completing one model-training phase, another independent fold was used as the validation set and the earlier validation set was returned to the training set for the next assessment of model training. Figure 3 provides a summary of the 5-fold cross-validation conducted in this analysis. As an additional way to avoid over-fitting of the last fully connected layers, the validation loss was monitored at each step, and early stopping was also applied.

Figure 3. The 5-fold cross validation training model
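The fold rotation described above can be sketched with plain index bookkeeping: shuffle once, partition into five equal folds, and let each fold serve as the validation set exactly once (a 20% validation share per round). This stdlib sketch drops the remainder images (3 of the 1458 here) to keep the folds equal; a library helper such as scikit-learn's `KFold` would normally handle this.

```python
import random

def five_fold_splits(n_images, k=5, seed=42):
    """Yield (train, validation) index lists for k-fold cross-validation.

    Each fold is the validation set exactly once; with k = 5 this gives
    the 80/20 train/validation split used per round.
    """
    idx = list(range(n_images))
    random.Random(seed).shuffle(idx)      # shuffle once, up front
    fold_size = n_images // k             # remainder images are dropped
    folds = [idx[i * fold_size:(i + 1) * fold_size] for i in range(k)]
    for v in range(k):
        val = folds[v]
        train = [i for f in range(k) if f != v for i in folds[f]]
        yield train, val

splits = list(five_fold_splits(1458))     # 5 rounds of 1164 train / 291 val
```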

In this analysis, the VGG-16 used as the backbone network is made of 5 convolutional blocks, regardless of the depth of the individual layers. Fine-tuning was therefore performed in a total of 6 stages, with between 0 and 5 blocks sequentially unfrozen, counting from the last block. Consequently, VGG-16 was used as the backbone network and the deep CNN was split into six configurations according to the stage of modification.
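The six-stage freezing scheme can be made concrete as follows: at stage s, the last s of the 5 blocks are trainable and the rest keep their pre-trained weights. This is a sketch of the schedule only (the block numbering is an assumption); in a real framework each entry would drive the trainable flag of the corresponding layer group.

```python
def fine_tune_stages(n_blocks=5):
    """Enumerate the six fine-tuning stages for a 5-block backbone.

    At stage s, the last s blocks are unfrozen (trainable) and the
    first n_blocks - s blocks remain frozen at pre-trained weights.
    """
    stages = []
    for s in range(n_blocks + 1):
        frozen = list(range(1, n_blocks - s + 1))
        trainable = list(range(n_blocks - s + 1, n_blocks + 1))
        stages.append({"stage": s, "frozen": frozen, "trainable": trainable})
    return stages

stages = fine_tune_stages()
# stage 0: all blocks frozen (feature extraction only)
# stage 5: all blocks unfrozen (full fine-tuning)
```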

Figure 4. The VGG-16 CNN fine tuning training model


Figure 5. The VGG-16 CNN architectural model


Figure 6 displays the flow diagram of the chest X-ray image classification system, which sums up all components of the model. A collection of photos in two groups, COVID-19 and regular cases, is used by the classifier. The proposed procedure first eliminates irrelevant elements from the test collection and assigns the labels of the COVID-19 dataset. Chest X-ray characteristics are then extracted via GLDM, and the derived characteristics are separated into evaluation and training sets. The genetic algorithm is used to reduce these characteristics and delete the redundant and unnecessary ones, and cross-validation is used to reliably assess COVID patients. The method is completed by a CNN classifier, trained on a training sample, to select the best solution.

The process of updating solutions stops once the termination requirements are met.


Figure 6. Flowchart of the proposed method.


For this analysis, we used two different datasets. Joseph Paul Cohen et al. gathered the first dataset on GitHub [21], along with photographs from 43 reported publications; each image is referenced in the metadata. It also includes images of regular and bacterial pneumonia from the chest X-ray images (pneumonia) database [23]. There are 216 positive photographs (some of them collected from an Italian cardiothoracic radiologist's Twitter account) and 1675 COVID-19-negative images. The negative data were derived primarily from Guangzhou Women's and Children's Medical Centre's retrospective cohorts of paediatric patients aged one to five. This dataset is referred to as dataset-1. Data availability: all the image files are included in the GitHub repository (https://github.com/ieee8023/covid-chestxray-dataset).

The other data collection was gathered by researchers from Qatar University, Doha, Qatar, and the University of Dhaka, Bangladesh, in conjunction with medical physicians and partners from Pakistan and Malaysia [23]. They also used photographs from the COVID-19 database [25] of the Italian Society of Medical and Interventional Radiology. This dataset includes 219 COVID-19-positive images and 1,341 COVID-19-negative images and is referred to as dataset-2.


Both datasets provide a variety of attributes about the origins of the images. In both datasets, the COVID-19 photographs were obtained from patients of both sexes aged between 40 and 84. There are 216 COVID-19-positive pictures and 1,675 COVID-19-negative pictures. Figure 7 displays sample pictures of the two datasets.

Figure 7. (A) Sample images of dataset-1 (B) Sample images of dataset-2.

3. Results and discussion

Since our method uses a fine-tuning approach, we compare it with fine-tuned models based on several pre-trained deep learning models. To implement fine-tuning on top of other pre-trained models, we use settings similar to those used in our method. Moreover, to achieve the optimal accuracy from the existing methods, we performed additional hyper-parameter tuning during training. The experiments were conducted using the MATLAB 2018b platform.

This section describes the experiments conducted to obtain our model results. The accuracy, sensitivity, specificity and F-score were determined for a thorough evaluation of the screening results on the test dataset.

The accuracy, sensitivity and specificity can be determined as follows:

Accuracy = (TP + TN) / (TP + TN + FN + FP) (10)

Sensitivity = TP / (TP + FN) (11)

Specificity = TN / (TN + FP) (12)

F-score = 2 × (Precision × Recall) / (Precision + Recall) (13)


TP and FP are the numbers of correctly and wrongly predicted positive images, respectively. Likewise, TN and FN represent the numbers of correctly and wrongly predicted negative images, respectively.
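Eqs. (10)-(13) translate directly into code; the sketch below computes all four metrics from the confusion-matrix counts. The example counts are hypothetical, chosen only to exercise the formulas.

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity and F-score from the
    confusion-matrix counts, following Eqs. (10)-(13)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)              # recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f_score

# Hypothetical counts for a single binary screening run.
acc, sens, spec, f1 = metrics(tp=90, tn=80, fp=10, fn=20)
```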

Table 1. Classification performance analysis by using dataset-1 CNN

Fine-tuning block   Condition       Accuracy   Sensitivity   Specificity   F-score
1                   Positive case   95.12      85.47         90.00         92.23
                    Negative case   92.32      89.35         92.11         89.45
2                   Positive case   96.45      91.54         93.85         90.00
                    Negative case   91.32      90.69         90.99         91.02
3                   Positive case   95.98      91.89         90.65         89.63
                    Negative case   96.64      92.69         92.65         85.02
4                   Positive case   91.63      92.31         93.47         89.69
                    Negative case   96.81      93.67         92.85         92.15
5                   Positive case   95.45      94.56         94.00         90.52
                    Negative case   94.96      91.00         94.52         91.80

Table 1 shows the performance measures of the proposed model on the first dataset, reported for VGG-16 by fine-tuning block.

Table 2. Classification performance analysis by using dataset-2 CNN

Fine-tuning block   Condition       Accuracy   Sensitivity   Specificity   F-score
1                   Positive case   95.61      85.47         91.60         92.63
                    Negative case   92.49      89.35         92.31         89.45
2                   Positive case   94.68      91.54         92.65         90.50
                    Negative case   91.42      90.59         91.69         91.92
3                   Positive case   96.62      91.89         90.90         89.63
                    Negative case   91.92      92.69         92.65         85.31
4                   Positive case   96.21      92.32         93.47         88.69
                    Negative case   92.42      91.67         92.85         92.15
5                   Positive case   96.62      94.56         93.61         89.52
                    Negative case   92.46      89.36         94.52         91.68

While a CNN generates the best outcomes on massive datasets, much data and computing resource is needed for training, and sometimes the data collection is not enough to train a CNN from scratch. Table 2 shows the test results for the second dataset.

Table 3. Comparison of overall performance of the proposed method with existing techniques

S.No   Reference                   Model                          Accuracy   F-score
1      Rahimzadeh and Attar [26]   Modified Deep CNN              91.4       90.43
2      Sethy and Behera [27]       Deep Learning Model            95.38      95.52
3      Hemdan et al. [28]          COVIDX-Net                     90         90.04
4      Haque et al. [29]           A CNN Model                    97.56      97.61
5      Minaee et al. [30]          Deep Transfer Learning Model   92.29      92.03
6      Our model                                                  97.92      95.15

Table 3 and Figure 8 compare our model with state-of-the-art models that have used COVID-19 CXR images for classification tasks. The performance measures are presented in the table and as a graphical representation. The comparison shows that our proposed model achieved better accuracy on the two datasets than the other existing models.


Figure 8. Graphical representation of performance metrics

4. Conclusion

In this paper, we proposed a deep learning model based on a fine-tuned VGG-16 to classify COVID-19 CXR images. We evaluated our method on two COVID-19 CXR datasets. The evaluation results indicate that our technique is efficient not only in classification accuracy but also in the number of training parameters; we can therefore conclude that the proposed method is well suited to COVID-19 CXR image classification. In this research, we suggested and studied several strategies for creating a CNN model for the identification and evaluation of COVID-19 cases using chest X-ray pictures. The findings suggest that image pre-processing provides an added advantage by producing better image data for deep learning models; the procedure involves elimination of unnecessary areas and standardisation of the image contrast-to-noise ratio. The suggested GLDM was used to derive COVID-19 X-ray characteristics, and an updated genetic algorithm variant was then used for feature selection. The fine-tuned VGG-16 CNN classifier was used to decide whether a given chest X-ray image shows COVID-19 or is a standard image. The proposed approach was tested on two separate databases and produces strong results on the accuracy, sensitivity, specificity and F-score assessment parameters with the least number of features relative to a successful CNN architecture. Both high efficiency and good resource utilisation were accomplished by choosing the most appropriate features. The proposed work may extend to other medical and related applications.


1. Hosseini M. S. and Zekri M., "Review of medical image classification using the adaptive neuro-fuzzy inference system," Journal of medical signals and sensors, vol. 2, p. 49, 2012. pmid:23493054

2. M. Kavitha, T. Jayasankar, P. Maheswaravenkatesh, G. Mani, C. Bharatiraja, and Bhekisipho Twala, "COVID-19 Disease Diagnosis using Smart Deep Learning Techniques," Journal of Applied Science and Engineering, 2021.

3. Quek C., Irawan W., and Ng E., "A novel brain-inspired neural cognitive approach to SARS thermal image analysis," Expert Systems with Applications, vol. 37, pp. 3040–3054, 2010.


4. A. Sheryl Oliver, M. Anuratha, M. Jean Justus, Kiranmai Bellam, and T. Jayasankar, "An Efficient Coding Network Based Feature Extraction with Support Vector Machine Based Classification Model for CT Lung Images," J. Med. Imaging Health Inf., vol. 10, no. 11, pp. 2628–2633, 2020, ISSN: 2156-7018.

5. A. Sheryl Oliver, Kavithaa Ganesan, S. A. Yuvaraj, T. Jayasankar, Mohamed Yacin Sikkandar, and N. B. Prakash, "Accurate prediction of heart disease based on bio system using regressive learning based neural network classifier," Journal of Ambient Intelligence and Humanized Computing, 2020, https://doi.org/10.1007/s12652-020-02786-2.

6. Suykens J. A. and Vandewalle J., "Least squares support vector machine classifiers," Neural processing letters, vol. 9, pp. 293–300, 1999.

7. M. Anuradha, T. Jayasankar, N. B. Prakash, Mohamed Yacin Sikkandar, G. R. Hemalakshmi, C. Bharatiraja, and A. Sagai Francis Britto, "IoT enabled Cancer Prediction System to Enhance the Authentication and Security using Cloud Computing," Microprocessors and Microsystems (Elsevier), vol. 80, February 2021, https://doi.org/10.1016/j.micpro.2020.103301.

8. Apostolopoulos I. D. and Mpesiana T. A., "Covid-19: automatic detection from x-ray images utilizing transfer learning with convolutional neural networks," Physical and Engineering Sciences in Medicine, p. 1, 2020.

9. Narin A., Kaya C., and Pamuk Z., "Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks," arXiv preprint arXiv:2003.10849, 2020.

10. Hosny K. M., Hamza H. M., and Lashin N. A., "Copy-for-duplication forgery detection in colour images using QPCETMs and sub-image approach," IET Image Processing, vol. 13, pp. 1437–1446, 2019.

11. Hosny K., Elaziz M., Selim I., and Darwish M., "Classification of galaxy color images using quaternion polar complex exponential transform and binary Stochastic Fractal Search," Astronomy and Computing, p. 100383, 2020.

12. Shi F., Wang J., Shi J., Wu Z., Wang Q., Tang Z. Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for covid-19. IEEE Rev Biomed Eng. 2020.


13. Narin A., Kaya C., Pamuk Z. 2020. Automatic detection of coronavirus disease (covid-19) using X-ray images and deep convolutional neural networks. arXiv preprint arXiv:2003.10849.

14. Castiglioni I., Ippolito D., Interlenghi M., Monti C.B., Salvatore C., Schiaffino S. Artificial intelligence applied on chest X-ray can aid in the diagnosis of covid-19 infection: a first experience from Lombardy, Italy. medRxiv. 2020.

15. Hemdan E.E.-D., Shouman M.A., Karar M.E. Covidx-net: a framework of deep learning classifiers to diagnose covid-19 in X-ray images. arXiv preprint arXiv:2003.11055. 2020.

16. Panwar H., Gupta P., Siddiqui M.K., Morales-Menendez R., Singh V. Application of deep learning for fast detection of covid-19 in X-rays using ncovnet. Chaos Solitons Fractals. 2020:109944.

17. Pereira R.M., Bertolini D., Teixeira L.O., Silla C.N., Jr, Costa Y.M. Covid-19 identification in chest X-ray images on flat and hierarchical classification scenarios. arXiv preprint arXiv:2004.05835. 2020.

18. Toraman S., Alakuş T.B., Türkoğlu İ. Convolutional capsnet: A novel artificial neural network approach to detect covid-19 disease from X-ray images using capsule networks. Chaos Solitons Fractals. 2020:110122.

19. Zhang, J.; Xie, Y.; Li, Y.; Shen, C.; Xia, Y. Covid-19 screening on chest X-ray images using deep learning based anomaly detection. arXiv 2020, arXiv:2003.12338.

20. Li, L.; Qin, L.; Xu, Z.; Yin, Y.; Wang, X.; Kong, B.; Bai, J.; Lu, Y.; Fang, Z.; Song, Q.; et al. Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT. Radiology 2020, 200905.

21. Singh, R.K.; Pandey, R.; Babu, R.N. COVIDScreen: Explainable deep learning framework for differential diagnosis of COVID-19 using chest X-Rays. Res. Sq. 2020.

22. Cohen J. P., Morrison P., and Dao L., "COVID-19 image data collection," arXiv preprint arXiv:2003.11597, 2020.


23. P. Mooney. (2020, April 11). Chest X-Ray Images (Pneumonia). Available:


24. D. A. L. Izzo Andrea. (2020, April 11). COVID-19 Database. Radiology. Available:


25. Chowdhury M. E., Rahman T., Khandakar A., Mazhar R., Kadir M. A., Mahbub Z. B. et al., "Can AI help in screening Viral and COVID-19 pneumonia?" arXiv preprint arXiv:2003.13145, 2020.

26. Rahimzadeh, M.; Attar, A. A New Modified Deep Convolutional Neural Network for Detecting COVID-19 from X-ray Images. arXiv 2020, arXiv:2004.08052.

27. Sethy, P.K.; Behera, S.K. Detection of coronavirus disease (covid-19) based on deep features. Preprints, 2020.

28. Hemdan, E.E.D.; Shouman, M.A.; Karar, M.E. Covidx-net: A framework of deep learning classifiers to diagnose covid-19 in X-ray images. arXiv 2020, arXiv:2003.11055.

29. Haque, K.F.; Haque, F.F.; Gandy, L.; Abdelgawad, A. Automatic detection of COVID-19 from chest X-ray images with convolutional neural networks. In Proceedings of the 2020 3rd IEEE International Conference on Computing, Electronics & Communications Engineering (IEEE iCCECE ’20), Essex, UK, 17–18 August 2020.

30. Minaee, S.; Kafieh, R.; Sonka, M.; Yazdani, S.; Soufi, G.J. Deep-covid: Predicting covid-19 from chest X-ray images using deep transfer learning. arXiv 2020, arXiv:2004.09363.


