
Melanoma Classification: A Survey

Harshkumar Modi, Bhavya Chhabra and P. Mahalakshmi

Department of Computer Science and Engineering, SRM University, Kattankulathur.

ABSTRACT: Melanoma is a major type of skin cancer with a very high death rate. The strong resemblance between different sorts of skin lesions often leads to inaccurate diagnosis. Precise categorization of skin lesions at a premature phase allows dermatologists to treat patients in good time and save their lives. This is backed by research showing that 90% of cases are curable if identified in the initial phase. With advancements in computing power and image classification, automatic detection of melanoma using computer algorithms has become far more reliable. Among the many methods used, neural networks have proved to be the best solution for attaining the highest accuracy in classifying melanoma from early symptoms. We carried out this survey to identify the drawbacks of recent models that serve this purpose, with the goal of overcoming them and providing a better solution.

Keywords: Melanoma, Neural Networks, Lesions, automatic detection

Introduction: In 2018, the World Health Organization reported approximately 14 million new cancer patients and nearly 9.6 million cancer deaths around the world. These figures show that cancer is one of the leading causes of death. Skin cancer develops on the epidermis, the top layer of the skin visible to the unaided eye.

Skin cancer is one of the foremost contributors to global deaths. Various kinds of skin cancer have been identified. Melanoma is a well-known form of skin cancer and is typically the most malignant when compared with other types of skin lesions. It is also one of the most rapidly spreading skin cancers, and recent research observations show that the number of patients with skin cancer is increasing every year. To save human lives, automated computer-aided systems are used for quick and accurate diagnosis of these cancer-causing skin lesions. Computer-Aided Diagnosis (CAD) systems are one such method that has been in use for identifying multiple diseases at early stages. To provide dermatologists with an accurate diagnosis, image-based CAD systems use photographs of skin lesions without any other medical details. In addition, various skin lesions may be identified by an image-based CAD method based on features derived from the colours in dermal photographs. Depending on its precision, a CAD system can help in the early diagnosis of skin cancer and thereby create opportunities to save human lives beforehand.

Methods: Dermoscopy is the most well-known skin imaging procedure and has been shown to increase melanoma detection relative to examination with the naked eye. As researchers looked for ways to classify these images, they began by using colour texture features and wavelet network features to classify lesions based on their colour.

The researchers (Amir Reza Sadri and team) used a 3D multidirectional colour texture feature (CTF) matrix to evaluate the images and then scaled it using a grey-scale spatial dependence matrix to find the intensity of the melanoma cells. After grading the images on this scale, they classified them into different classes using a multilayer back-propagation neural network.
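The texture pipeline can be pictured in a few lines. The sketch below is a simplified, grey-level analogue of that idea, assuming scikit-image and scikit-learn are available: co-occurrence (spatial dependence) statistics are extracted from each lesion image and fed to a multilayer perceptron; the distances, angles, property list, and network size are illustrative choices, not taken from the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def texture_features(gray_u8):
    """Grey-level co-occurrence (spatial dependence) statistics for one
    8-bit lesion image, pooled over four directions and two distances."""
    glcm = graycomatrix(gray_u8, distances=[1, 3],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# images: list of 2-D uint8 arrays, labels: 0 = benign, 1 = melanoma
# X = np.stack([texture_features(im) for im in images])
# clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, labels)
```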

The researcher M. Aldeen and his team, in the same year, focused on colour and texture enhancement using a colour palette and QuadTree colour clustering. The methodology is a five-step process: they pre-process the dataset for contrast boosting, then use a hybrid threshold method to identify the lesions' borders. Next they construct radial quantile overlays to study the lesion better. The colour palette is created manually. Finally, QuadTree clustering is applied with reference to the colour palette to estimate the type and number of colours in each quantile. The approach is robust and is known to improve the ROC curve.
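As a rough illustration of the QuadTree idea (not the authors' exact algorithm), the sketch below recursively splits an RGB lesion patch into quadrants until each block is colour-homogeneous and returns the mean colour of every leaf, which could then be matched against a reference colour palette; the variance threshold and minimum block size are assumptions.

```python
import numpy as np

def quadtree_colours(block, var_thresh=120.0, min_size=8):
    """Recursively split an RGB block into quadrants until the colour
    variance of each block falls below var_thresh; return the mean
    colour of every homogeneous leaf (a crude lesion colour palette)."""
    h, w, _ = block.shape
    flat = block.reshape(-1, 3).astype(float)
    if h <= min_size or w <= min_size or flat.var(axis=0).mean() < var_thresh:
        return [flat.mean(axis=0)]          # homogeneous enough: one leaf colour
    hh, hw = h // 2, w // 2
    leaves = []
    for quad in (block[:hh, :hw], block[:hh, hw:],
                 block[hh:, :hw], block[hh:, hw:]):
        leaves.extend(quadtree_colours(quad, var_thresh, min_size))
    return leaves
```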

M. Q. Khan and his team of researchers, in their paper, use geometrical, colour, and textural features together with image segmentation to create a model. Image segmentation is widely known to be efficient in solving medical problem statements; K-means clustering is applied to the images, a versatile technique that also works well with large datasets.
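A minimal K-means segmentation sketch along those lines, assuming scikit-learn, clusters pixel colours so that lesion and surrounding skin fall into separate clusters; using k = 2 and treating the darkest cluster as the lesion are illustrative simplifications, not the paper's exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_lesion_mask(rgb, k=2):
    """Cluster pixel colours into k groups and return a binary mask of
    the darkest cluster, which usually corresponds to the lesion."""
    h, w, _ = rgb.shape
    pixels = rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    darkest = km.cluster_centers_.sum(axis=1).argmin()   # lowest total RGB
    return (km.labels_ == darkest).reshape(h, w)
```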

A. Sáez and team's research of 2016 is based on the thickness of the melanoma lesions. It is reported that including thickness as a factor can increase model accuracy by up to thirty-five percent. The thickness is divided into three stages, with boundaries at 0.76 mm and 1.5 mm. These parameters alone, however, cannot achieve high accuracy; merging techniques and methodologies can help with better classification of the lesions.

These approaches succeeded in achieving good accuracy on their training and testing databases. However, as the datasets were small and consisted of clean data, the models were unable to provide highly accurate results on images that contained hair or veins, or images without proper lighting. This pointed out problems in the methods and their effectiveness.

Later, scientists began using machine learning and neural networks to classify these lesions through multiple parameters. Their approaches used different neural networks, such as CNNs and DNNs, together with architectures and classifiers like GoogLeNet, ResNet, and SVM. As more extensive datasets were developed over the years, these approaches produced more accurate models when applied to raw data.

Here, the researchers L. Ichim and D. Popescu used five features of the images: texture, shape, colour, size, and convolutional pixel connections to train the first layer of the neural network. The second layer, using back-propagation, outputs whether the cell is malignant or not. They used ResNet and AlexNet for their classification. It is a fairly standard approach that gives accurate results on certain databases.
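To make the ResNet/AlexNet idea concrete, the sketch below (a loose interpretation, not the authors' exact architecture, assuming PyTorch/torchvision and ResNet-18 as a stand-in) uses the two pretrained backbones as frozen feature extractors and trains only a small head on the concatenated features.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbones used only as frozen feature extractors.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = nn.Identity()                                 # 512-d features
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
alexnet.classifier = nn.Sequential(
    *list(alexnet.classifier.children())[:-1])            # 4096-d features
resnet.eval()
alexnet.eval()

head = nn.Sequential(                                     # trainable classifier head
    nn.Linear(512 + 4096, 256), nn.ReLU(),
    nn.Linear(256, 2),                                    # malignant vs. benign
)

def fused_logits(x):
    """x: (N, 3, 224, 224) batch normalised with ImageNet statistics."""
    with torch.no_grad():
        feats = torch.cat([resnet(x), alexnet(x)], dim=1)
    return head(feats)
```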

Serestina Viriri and his co-author likewise used a deep learning-based neural network with a pixel classifier that rates each pixel into a different category. Once this is done, the data is fed into a lesion classifier that determines whether the cell is melanoma or not. In this way they pre-process the data to remove physical factors such as light spots, hair, and other artefacts.

Features play an essential role in the accurate classification of any type of image. Image smoothing is done during pre-processing, which allows better feature extraction and image segmentation.
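A minimal pre-processing sketch in that spirit, assuming OpenCV, smooths the image and suppresses hair artefacts with a black-hat transform followed by inpainting; this DullRazor-style step and its kernel sizes are assumptions for illustration, not a procedure shared by all the surveyed papers.

```python
import cv2

def preprocess(rgb_u8):
    """Suppress hair-like artefacts and smooth an 8-bit RGB image before
    segmentation and feature extraction."""
    gray = cv2.cvtColor(rgb_u8, cv2.COLOR_RGB2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
    hair = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)   # thin dark hairs
    _, mask = cv2.threshold(hair, 10, 255, cv2.THRESH_BINARY)
    clean = cv2.inpaint(rgb_u8, mask, 3, cv2.INPAINT_TELEA)     # fill hair pixels
    return cv2.GaussianBlur(clean, (5, 5), 0)                   # final smoothing
```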


Zhen Yu, along with Xudong Jiang and others, used Convolutional Neural Networks with the state-of-the-art Fisher vector (FV) encoding method to encode the images, which are then fed into a Support Vector Machine. In this way the machine is supplied with more descriptive factors because of the FV encoding. Through this method they overcame the need for a large dataset to train the CNN and produced higher accuracy with a smaller dataset, since they extracted more features.
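A simplified Fisher vector encoder in that spirit is sketched below, assuming scikit-learn; it keeps only the gradient with respect to the GMM means (the full encoding also uses weight and covariance gradients), and the descriptor source, component count, and SVM kernel are illustrative choices rather than the paper's settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def fisher_vector(descriptors, gmm):
    """Encode a set of local descriptors (N, D) as the gradient of the
    GMM log-likelihood w.r.t. the component means (simplified FV)."""
    q = gmm.predict_proba(descriptors)                     # (N, K) soft assignments
    diff = descriptors[:, None, :] - gmm.means_[None]      # (N, K, D)
    grad = (q[..., None] * diff / np.sqrt(gmm.covariances_)[None]).sum(axis=0)
    grad /= descriptors.shape[0] * np.sqrt(gmm.weights_)[:, None]
    return grad.ravel()                                    # length K * D

# descs: list of (N_i, D) arrays of local CNN activations per image
# gmm = GaussianMixture(n_components=16, covariance_type="diag").fit(np.vstack(descs))
# X = np.stack([fisher_vector(d, gmm) for d in descs])
# clf = SVC(kernel="linear").fit(X, labels)
```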

Developing state-of-the-art algorithms and combining pre-developed algorithms helped improve the models. Obstructions in the images, such as hair, veins, and poor lighting, were efficiently removed, giving better outputs. However, Convolutional Neural Networks require a lot of images to be trained, and the problem with using image data alone is that the number of images available for these classes is small. In addition, the skin cancer benchmark datasets are small and cover only a few classes. Three separate problems make it difficult to automatically classify melanoma from dermoscopy images: lesions belonging to different groups share very similar characteristics such as size, texture, colour, and form, which makes classification problematic; the benchmark datasets are small; and the lack of databases covering various skin colours makes the models difficult to apply globally.

One such model had the researcher Q. Zhou (2020) using Spiking Neural Networks (SNNs) instead of Convolutional Neural Networks, owing to benefits in run-time efficiency and stability in the accuracy achieved. Their model required fewer pre-trained components, and with fewer layers the model's complexity was also reduced significantly. The model's unique point was how it combined unsupervised learning with feature selection techniques to add to the model's accuracy, on top of the benefits of SNNs over CNNs.

Dividing the solution to the problem statement into two steps also proved to be a viable method. Used by L. Yu, H. Chen, and their team, the steps were to first perform segmentation and then perform classification on the dataset, constructing an automated framework. They use very deep Residual Networks that have skip connections, and due to their immense depth the architectures learn the data well. Following this, they also developed a novel Fully Convolutional Residual Network (FCRN) integrating a multi-scale contextual information integration scheme for more accurate skin lesion segmentation. Although they shared their concern regarding the size of the dataset, they were able to obtain decent results.
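For readers unfamiliar with the skip connections mentioned above, a generic residual block (a textbook illustration, not the authors' FCRN) looks like this in PyTorch: the input is added back onto the convolutional path, which is what allows very deep networks to train.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: two conv layers plus an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)   # skip connection: add the input back
```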

The researchers (Romero-Lopez 2017) made use of the VGG-16 CNN architecture to solve the problem. After relevant pre-processing and data augmentation, they use the VGG-16 architecture in three different ways and then compare the methods. The first method is to train the CNN from scratch. The second is to train with pre-trained weights (transfer learning). The last is selective training: freezing the lower layers and training only the top-most layers to draw out more distinct features. After the comparison, the second method displayed the best results. In conclusion, the researchers expressed their concern about the limited dataset.
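The third strategy (frozen lower layers) can be sketched as follows, assuming PyTorch/torchvision's VGG-16 and a two-class head; the optimiser settings are illustrative. Switching to the second strategy (full fine-tuning from pre-trained weights) simply means leaving requires_grad set to True for all parameters.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained VGG-16 and freeze the convolutional layers.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in vgg.features.parameters():
    p.requires_grad = False                    # lower layers stay fixed

# Replace the final classifier layer with a 2-class melanoma head.
vgg.classifier[6] = nn.Linear(4096, 2)

trainable = [p for p in vgg.parameters() if p.requires_grad]
optimiser = torch.optim.Adam(trainable, lr=1e-4)
```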

A transfer learning model using a CNN by R. Ashraf (2020) was also used to classify melanoma cells. However, their approach began with the k-means algorithm, tuned so that it could be used to obtain the region of interest, i.e. the region of melanoma cells. This includes removing the extra areas and finding optimal ways to get the exact size of the melanoma cell region from the image data. Then, using a CNN and transfer learning, they trained and tested the model. The approach helps to clean the image set, obtain the exact region required for image classification, and remove the extra clutter.
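A minimal region-of-interest step in that spirit (an assumption about the mechanics, not the paper's exact procedure) crops the image to the bounding box of a lesion mask, for example one produced by the k-means segmentation sketched earlier, before it is passed to the CNN.

```python
import numpy as np

def crop_roi(rgb, lesion_mask, margin=10):
    """Crop the image to the bounding box of a non-empty lesion mask
    (plus a small margin) so the classifier sees only the region of interest."""
    ys, xs = np.nonzero(lesion_mask)
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, rgb.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, rgb.shape[1])
    return rgb[y0:y1, x0:x1]
```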

The methodology proposed by L. Song and others (2020) is a complete workflow for melanoma classification, as the models can take input images of any size. Instead of cropping images for lesion analysis, they resize the images to a standard size, which prevents loss of data. They employ zero-centre normalization and data augmentation to process the images. Their deep learning architecture consists of an FPN (Feature Pyramid Network), an RPN (Region Proposal Network), and convolutional subnets. The FPN provides flexibility over a wide range of scales and provides high-detail features even on low-resolution images. The RPN is used to separate foreground objects from the background for focused learning. Lastly, they use convolutional subnets, namely a 'classify subnet' (calculates the probability of a specific melanoma kind), a 'detect subnet' (to find and localize the lesion), and a 'segmentation subnet' (for masking each lesion). Empirically, they found that the ResNet-101 architecture combined with a loss function based on Jaccard distance and focal loss gives the best results.
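The two loss terms mentioned above are simple to write down; the sketch below, assuming PyTorch, shows a standard focal loss for classification and a soft Jaccard (IoU) loss for segmentation masks. How the authors weight and combine the two terms is not described here, so any combination is an assumption.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Focal loss: down-weights easy examples so hard or rare
    lesion classes contribute more to the gradient."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)                        # probability of the true class
    return ((1.0 - pt) ** gamma * ce).mean()

def soft_jaccard_loss(probs, masks, eps=1e-6):
    """1 - soft IoU between predicted probabilities and binary masks,
    both shaped (N, H, W)."""
    inter = (probs * masks).sum(dim=(1, 2))
    union = (probs + masks - probs * masks).sum(dim=(1, 2))
    return (1.0 - (inter + eps) / (union + eps)).mean()
```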

An out-of-the-box approach by S. Albahli and team (2020) was seen in one research paper, where the researchers tackled the problem using the YOLO algorithm, an object detection algorithm, tuned for melanoma detection. The researchers used this algorithm to find the active melanoma cell region while avoiding false detections from hair, skin spots, and clinical or birth marks. With this they achieved the desired accuracies on the ISIC dataset they used. The method opened up the idea of taking algorithms that are fundamentally built for other functions and tuning them to the desired use.
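To give a rough sense of what such tuning looks like in practice, the sketch below fine-tunes a modern YOLO detector on a single 'melanoma' class using the ultralytics package; this is a stand-in for illustration only (the paper used YOLOv4-DarkNet), and the dataset config file melanoma.yaml is a hypothetical name.

```python
# pip install ultralytics  (stand-in framework; the paper used YOLOv4-DarkNet)
from ultralytics import YOLO

# Start from a small pretrained detector and fine-tune it on lesion boxes.
model = YOLO("yolov8n.pt")
model.train(data="melanoma.yaml",   # hypothetical dataset config with one class
            epochs=50, imgsz=640)

# Run detection on a dermoscopy image; the boxes localise the lesion region.
results = model.predict("lesion.jpg", conf=0.25)
```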

Materials:

The researchers and their teams have used various databases to train and test their models. Amongst these, some were large databases whereas some were small. Frequently used databases were ISIC 2016, ISIC 2017, the ISIC archive, and the PH2 database. ISIC 2016 consists of 900 + 350 images for training and testing, whereas ISIC 2017 has 2000 + 600 images for training and testing. The ISIC archive is a combination of older databases and has roughly 13,000 images. Another frequently used database was PH2, which contains clean images but only 200 of them to work with.

Conclusion:

These difficulties can be overcome through physical features that are not derived from the images but are instead recorded by humans from the patient. Such features will help reduce the false readings predicted by the computer. They would also increase the number of features available for training the model, improving accuracy and allowing neural networks to be trained better despite smaller databases.

References:

1. Hosny, K. M., Kassem, M. A., & Foaud, M. M. (2018, December). Skin Cancer Classification using Deep Learning and Transfer Learning. In 2018 9th Cairo International Biomedical Engineering Conference (CIBEC) (pp. 90-93). IEEE.

2. Pham, T. C., Luong, C. M., Visani, M., & Hoang, V. D. (2018, March). Deep CNN and data augmentation for skin lesion classification. In Asian Conference on Intelligent Information and Database Systems (pp. 573-582). Springer, Cham.

3. Warsi, F., Khanam, R., Kamya, S., & Suárez-Araujo, C. P. (2019). An efficient 3D colour-texture feature and neural network technique for melanoma detection. Informatics in Medicine Unlocked, 100176.

4. El-Khatib, H., Popescu, D., & Ichim, L. (2020). Deep Learning-Based Methods for Automatic Diagnosis of Skin Lesions. Sensors, 20(6), 1753.

5. Adegun, A. A., & Viriri, S. Deep Learning-Based System for Automatic Melanoma Detection.

6. Sadri, A. R., Azarianpour, S., Zekri, M., Celebi, M. E., & Sadri, S. WN-based approach to melanoma diagnosis from dermoscopy images.

7. Kassem, M. A., Hosny, K. M., & Fouad, M. M. Skin Lesions Classification Into Eight Classes for ISIC 2019 Using Deep Convolutional Neural Network and Transfer Learning.

8. Yu, Z., Jiang, X., Zhou, F., Qin, J., Ni, D., Chen, S., Lei, B., & Wang, T. Melanoma Recognition in Dermoscopy Images via Aggregated Deep Convolutional Features.

9. Khan, M. Q., et al. Classification of Melanoma and Nevus in Digital Images for Diagnosis of Skin Cancer. IEEE Access, vol. 7, pp. 90132-90144, 2019, doi: 10.1109/ACCESS.2019.2926837.

10. Sáez, A., Sánchez-Monedero, J., Gutiérrez, P. A., & Hervás-Martínez, C. Machine Learning Methods for Binary and Multiclass Classification of Melanoma Thickness From Dermoscopic Images. IEEE Transactions on Medical Imaging, vol. 35, no. 4, pp. 1036-1045, April 2016, doi: 10.1109/TMI.2015.2506270.

11. Sabbaghi Mahmouei, S., Aldeen, M., Stoecker, W. V., & Garnavi, R. Biologically Inspired QuadTree Color Detection in Dermoscopy Images of Melanoma. IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 2, pp. 570-577, March 2019.

12. Zhou, Q., Shi, Y., Xu, Z., Qu, R., & Xu, G. Classifying Melanoma Skin Lesions Using Convolutional Spiking Neural Networks With Unsupervised STDP Learning Rule. IEEE Access, vol. 8, pp. 101309-101319, 2020.

13. Ichim, L., & Popescu, D. Melanoma Detection Using an Objective System Based on Multiple Connected Neural Networks. IEEE Access, vol. 8, pp. 179189-179202, 2020.

14. Yu, L., Chen, H., Dou, Q., Qin, J., & Heng, P. Automated Melanoma Recognition in Dermoscopy Images via Very Deep Residual Networks. IEEE Transactions on Medical Imaging, vol. 36, no. 4, pp. 994-1004, April 2017.

15. Romero-Lopez, A., Giro-i-Nieto, X., Burdick, J., & Marques, O. (2017). Skin Lesion Classification from Dermoscopic Images Using Deep Learning Techniques. Biomedical Engineering.

16. Ashraf, R., et al. Region-of-Interest Based Transfer Learning Assisted Framework for Skin Cancer Detection. IEEE Access, vol. 8, pp. 147858-147871, 2020.

17. Song, L., Lin, J., Wang, Z. J., & Wang, H. An End-to-End Multi-Task Deep Learning Framework for Skin Lesion Analysis. IEEE Journal of Biomedical and Health Informatics, vol. 24, no. 10, pp. 2912-2921, Oct. 2020.

18. Albahli, S., Nida, N., Irtaza, A., Yousaf, M. H., & Mahmood, M. T. Melanoma Lesion Detection and Segmentation Using YOLOv4-DarkNet and Active Contour. IEEE.
