Deep Multiple Instance Learning for Automatic Detection of Diabetic Retinopathy in Retinal Images

Ms. C.L. Annapoorani1, Dr. J. Sofia Bobby2, B. Anandhi3 and P. Hema4

1Assistant Professor, 2Associate Professor, 3,4Students, Department of Biomedical Engineering,

Jerusalem College of Engineering, Chennai.

[email protected]

ABSTRACT:

Diabetic Retinopathy (DR) is an eye disease caused by prolonged high blood glucose levels and may lead to blindness. An automated system for the early detection of DR can save a patient's vision and can also help the ophthalmologist in screening for DR, which presents different types of lesion, i.e., microaneurysms, hemorrhages, and exudates.

Early diagnosis by regular screening and treatment is beneficial in preventing visual impairment and blindness. This project presents a method for the detection and classification of exudates in colored retinal images. It eliminates replicated exudate regions by removing the optic disc region. Several image processing techniques, including image enhancement, segmentation, classification, and registration, have been developed for the early detection of DR on the basis of features such as blood vessels, exudates, hemorrhages, and microaneurysms. This project also reviews recent work on the use of image processing techniques for DR feature detection, and the techniques are evaluated on the basis of their results.

Exudates are found using their high gray level variation, and the classification of exudates is done with exudate features and an SVM classifier.

INTRODUCTION

Diabetic retinopathy (DR) is a complication of diabetes which causes impairment of vision and even blindness. This project deals with the detection of microaneurysms and haemorrhages, which are signs of diabetic retinopathy. This chapter gives an overview and a brief explanation of the project.

1.1 DIGITAL IMAGE PROCESSING

Digital image processing is the manipulation of digital images through a digital computer.

The input to such a system is a digital image; the system processes that image using efficient algorithms and gives an image as output. The most common example is Adobe Photoshop, one of the widely used applications for processing digital images.

WORKING:

Fig 1.1 Working of image processing


In figure 1.1 above, an image captured by a camera is sent to a digital system that removes the extraneous details and focuses on the water drop by zooming in, in such a way that the quality of the image is preserved.

1.2 MATLAB

MATLAB is developed by MathWorks. It allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, Java, and Fortran; it can be used to analyse data, develop algorithms, and create models and applications. It has numerous built-in commands and math functions that help in mathematical calculations, generating plots, and performing numerical methods. MATLAB is used in every facet of computational mathematics.
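As a brief illustration of these capabilities, the following minimal MATLAB sketch manipulates a matrix and plots a function (all values are illustrative):

% Matrix manipulation: create a matrix and solve a linear system
A = [4 2; 1 3];          % 2x2 coefficient matrix
b = [6; 5];              % right-hand side
x = A \ b;               % solve A*x = b with the backslash operator

% Plotting a function over a range of values
t = linspace(0, 2*pi, 100);
y = sin(t);
plot(t, y);
xlabel('t'); ylabel('sin(t)');
title('Example plot generated in MATLAB');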

1.3 DIABETIC RETINOPATHY

Diabetic retinopathy is an eye condition that occurs due to diabetes. It can arise as a result of the high blood sugar levels that diabetes causes. The early stage is known as non-proliferative diabetic retinopathy. The eye may accumulate fluid during long periods of high blood sugar.

This fluid accumulation changes the shape and curve of the lens, causing changes in vision.

Once a person gets their blood sugar levels under control, the lens will usually return to its original shape, and vision will improve. More than 2 in 5 people with diabetes in the United States have some stage of diabetic retinopathy. Diabetes also increases a person’s risk of developing other eye problems, including cataracts and open-angle glaucoma.

1.3.1 SYMPTOMS

Diabetic retinopathy does not usually produce symptoms during the early stages. Symptoms typically become noticeable when the condition is more advanced. Diabetic retinopathy tends to affect both eyes. The signs and symptoms of this condition may include blurred vision, impaired color vision, and poor night vision.

1.3.2 RISK FACTOR

Anybody with diabetes is at risk of developing diabetic retinopathy. However, the risk is higher if the person: has uncontrolled blood sugar levels, has high blood pressure, has had diabetes for a long time.

1.3.3 MICROANEURYSMS

Microaneurysms are an eye condition that usually manifests in the form of tiny red dots within the eye, usually surrounded by yellow rings that are the result of vascular leakage.

Microaneurysms have no other signs or symptoms, and do not affect vision in any way.

Microaneurysms usually serve as the earliest signs of diabetic retinopathy.

1.3.4 EXUDATES

Exudate is produced from fluid that has leaked out of blood vessels and closely resembles blood plasma. Fluid leaks from capillaries into tissue at a rate that is determined by the permeability of the capillaries and the hydrostatic and osmotic pressures across the capillary walls.

1.3.5 HEMORRHAGE

A subconjunctival haemorrhage occurs when a tiny blood vessel breaks just underneath the clear surface of your eye (conjunctiva). The conjunctiva can't absorb blood very quickly, so the blood gets trapped. You may not even realize you have a subconjunctival haemorrhage until you look in the mirror and notice that the white part of your eye is bright red.

SYSTEM ANALYSIS AND REQUIREMENTS

3.1 EXISTING SYSTEM

A computer-aided screening and grading system relies on the automatic detection of lesions.

Fundus images with DR exhibit red lesions, such as microaneurysms (MA) and haemorrhages (HE), and bright lesions, such as exudates and cotton wool spots. The existing method takes as input a color fundus image together with the binary mask of its region of interest (ROI). The ROI is the circular area surrounded by a black background. It outputs a probability color map for red lesion detection. The method comprises six steps. First, spatial calibration is applied to support different image resolutions. Second, the input image is pre-processed via smoothing and normalization. Third, the optic disc (OD) is automatically detected, to discard this area from the lesion detection.

DISADVANTAGES OF EXISTING SYSTEM

⮚ The prediction of Retinopathy is quite difficult

⮚ Segmentation method may produce unwanted noise.

⮚ PSNR value is high

⮚ Image Assessment analysis provides poor performance.

3.2 PROPOSED SYSTEM

Diabetic retinopathy causes changes in the eye and damages the blood vessels. Each image undergoes a standard image processing pipeline that includes image acquisition, pre-processing such as filtering (Median/Wiener/Gaussian), contrast enhancement (Histogram Equalization/Adaptive Histogram Equalization), feature extraction such as GLCM and region properties, and image assessment techniques, followed by identification of the disease. We use a skin locus model and color histogram for classification of the retinal images into the Normal category.

The overall classification rate of the proposed system gives better efficiency and accuracy in identifying the disease compared with existing systems.
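A minimal MATLAB sketch of this processing chain is given below; the file name, filter size, and GLCM settings are illustrative assumptions rather than the exact parameters of the proposed system:

% Illustrative pre-processing and feature extraction pipeline (assumed settings)
I = imread('retina.jpg');               % image acquisition (hypothetical file)
G = rgb2gray(I);                        % RGB to grayscale conversion
F = wiener2(G, [5 5]);                  % noise removal with a 5x5 Wiener filter
E = adapthisteq(F);                     % adaptive histogram equalization (CLAHE)
glcm  = graycomatrix(E, 'Offset', [0 1], 'Symmetric', true);
feats = graycoprops(glcm, {'Contrast','Correlation','Energy','Homogeneity'});
BW    = imbinarize(E);                  % rough segmentation by thresholding
props = regionprops(BW, 'Area', 'Centroid');   % region properties of candidate lesions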

ADVANTAGES OF PROPOSED SYSTEM

⮚ Retinopathy prediction helps prevent vision loss through early detection.

⮚ Automated blood vessel extraction algorithms can save time, patients' vision, and medical costs.

⮚ Error probability is low.

⮚ PSNR value is very low when compared to the existing system.

⮚ Adaptive histogram equalization improves brightness and intensity so that eye disease can be segmented properly.

3.3 CONTRIBUTION AND SCOPE

This work develops the automatic detection of both microaneurysms and haemorrhages for computer-aided screening and grading of diabetic retinopathy.

3.4 ARCHITECTURE DIAGRAM

An architecture diagram is a graphical representation of the concepts, principles, elements, and components that are part of an architecture. The architecture consists of the steps that are used in the project.

Fig 3.1 Block diagram

The first stage of any vision system is the image acquisition stage. After the image has been obtained, various methods of processing can be applied to the image to perform the many different vision tasks required today. Digital imaging can be classified by the type of electromagnetic radiation or other waves whose variable attenuation, as they pass through or reflect off objects, conveys the information that constitutes the image. In all classes of digital imaging, the information is converted by image sensors into digital signals that are processed by a computer and output as a visible-light image. In digital photography, computer-generated imagery, and colorimetry, a grayscale or greyscale image is one in which the value of each pixel is a single sample representing only an amount of light, that is, it carries only intensity information.

Grayscale images, a kind of black-and-white or gray monochrome, are composed exclusively of shades of gray. The contrast ranges from black at the weakest intensity to white at the strongest. Grayscale images are distinct from one-bit bi-tonal black-and-white images which, in the context of computer imaging, are images with only two colors: black and white (also called bilevel or binary images). Grayscale images have many shades of gray in between. Grayscale images can be the result of measuring the intensity of light at each pixel according to a particular weighted combination of frequencies (or wavelengths), and in such cases they are monochromatic proper when only a single frequency (in practice, a narrow band of frequencies) is captured. The frequencies can in principle be from anywhere in the electromagnetic spectrum (e.g. infrared, visible light, ultraviolet, etc.).

A colorimetric (or more specifically photometric) grayscale image is an image that has a defined grayscale colorspace, which maps the stored numeric sample values to the achromatic channel of a standard colorspace, which itself is based on measured properties of human vision. If the original color image has no defined colorspace, or if the grayscale image is not intended to have the same human-perceived achromatic intensity as the color image, then there is no unique mapping from such a color image to a grayscale image. Colour images are often built of several stacked colour channels, each of them representing value levels of the given channel.

SYSTEM DESIGN

System design is the process of defining the architecture, modules, interfaces, and data for a system to satisfy specified requirements. Software design is a process through which the requirements are translated into a representation of software. Design provides a representation of software that can be assessed for quality. Design is the only way that we can accurately translate a customer's requirements into a finished software product.

4.2 MODULE DESCRIPTION

4.2.1 IMAGE ACQUISITION:

The first stage of any vision system is the image acquisition stage. After the image has been obtained, various methods of processing can be applied to the image to perform the many different vision tasks required today. However, if the image has not been acquired satisfactorily, then the intended tasks may not be achievable, even with the aid of some form of image enhancement. Digital imaging or digital image acquisition is the creation of a digitally encoded representation of the visual characteristics of an object, such as a physical scene or the interior structure of an object. The term is often assumed to imply or include the processing, compression, storage, printing, and display of such images. A key advantage of a digital image, versus an analog image such as a film photograph, is the ability to make copies and copies of copies digitally, indefinitely, without any loss of image quality.

Fig 4.1 Input Image

4.2.2 2D IMAGE INPUT

The basic two-dimensional image is a monochrome (greyscale) image which has been digitised. An image can be described as a two-dimensional light intensity function f(x, y), where x and y are spatial coordinates and the value of f at any point (x, y) is proportional to the brightness or grey value of the image at that point. A digitised image is one in which the spatial and grey scale values have been made discrete: intensity is measured across a regularly spaced grid in the x and y directions, and intensities are sampled to 8 bits (256 values).
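The following MATLAB sketch (with a hypothetical file name) shows how such a digitised image appears as a two-dimensional array of 8-bit intensity samples:

I = imread('fundus_gray.png');   % hypothetical 2-D grayscale image
[rows, cols] = size(I);          % spatial sampling grid in the x and y directions
class(I)                         % typically 'uint8', i.e. 8 bits per sample
minVal = min(I(:));              % intensities lie in the range 0..255
maxVal = max(I(:));
f_xy = I(50, 120);               % the sample f(x, y) at row 50, column 120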


4.2.3 GRAY IMAGE:

In digital photography, computer-generated imagery, and colorimetry, a grayscale or greyscale image is one in which the value of each pixel is a single sample representing only an amount of light, that is, it carries only intensity information. Grayscale images, a kind of black-and-white or gray monochrome, are composed exclusively of shades of gray.
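A minimal MATLAB sketch of this conversion (with a hypothetical file name) is shown below; the last three lines also isolate the individual colour channels discussed in the next subsection:

RGB  = imread('retina.jpg');     % hypothetical colour fundus image
gray = rgb2gray(RGB);            % weighted combination of R, G and B into a single intensity value
R = RGB(:, :, 1);                % red channel
G = RGB(:, :, 2);                % green channel
B = RGB(:, :, 3);                % blue channel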

4.2.4 Grayscale As Single Channels Of Multichannel Color Images

Colour images are often built of several stacked colour channels, each of them representing value levels of the given channel. For example, RGB images are composed of three independent channels for red, green and blue primary color components; CMYK images have four channels for cyan, magenta, yellow and black ink plates, etc. Here is an example of color channel splitting of a full RGB color image. The column at left shows the isolated color channels in natural colors, while at right there are their grayscale equivalences:

Fig 4.2 Conversion RGB to Gray

4.2.5 WIENER FILTER

The goal of the Wiener filter is to compute a statistical estimate of an unknown signal using a related signal as an input and filtering that known signal to produce the estimate as an output.

For example, the known signal might consist of an unknown signal of interest that has been corrupted by additive noise. The Wiener filter can be used to filter out the noise from the corrupted signal to provide an estimate of the underlying signal of interest. The Wiener filter is based on a statistical approach, and a more statistical account of the theory is given in the minimum mean square error (MMSE) estimator article.
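In MATLAB, such an estimate can be obtained with the wiener2 function, as in the following sketch (the neighbourhood size and input file are assumed):

G = rgb2gray(imread('retina.jpg'));   % hypothetical input image
J = wiener2(G, [5 5]);                % adaptive Wiener filtering over 5x5 neighbourhoods
% wiener2 estimates the local mean and variance around each pixel and
% attenuates noise more strongly where the local variance is small.
figure, imshowpair(G, J, 'montage');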

4.2.6 Gray Level Co-Occurrence Matrix (GLCM):

Feature extraction involves simplifying the amount of resources required to describe a large set of data accurately. When performing analysis of complex data, one of the major problems stems from the number of variables involved. Analysis with a large number of variables generally requires a large amount of memory and computation power, or a classification algorithm which overfits the training sample and generalizes poorly to new samples. Feature extraction is a general term for methods of constructing combinations of the variables to get around these problems while still describing the data with sufficient accuracy.

Texture is the tactile or visual characteristic of a surface. Texture analysis aims at finding a unique way of representing the underlying characteristics of textures in some simpler but unique form, so that they can be used for robust, accurate classification and segmentation of objects. Though texture plays a significant role in image analysis and pattern recognition, only a few architectures implement on-board textural feature extraction. In this paper, the gray level co-occurrence matrix is formulated to obtain statistical texture features. A number of texture features may be extracted from the GLCM; only four second-order features, namely angular second moment, correlation, inverse difference moment, and entropy, are computed. These four measures provide the high discrimination accuracy required for motion picture estimation.

Correlation:

It computes the correlation of a pixel and its neighbour over the whole image, that is, it measures the linear dependency of gray levels on those of neighbouring pixels.

Contrast:

It measures the local intensity variation between a pixel and its neighbour over the whole image; contrast is zero for a constant image.

Energy:

Energy measures the orderliness of the texture in an image. It is given by the sum of the squared elements of the GLCM, and it is distinct from entropy.

Homogeneity:

It is often abbreviated as HOM. It measures the closeness of the distribution of the elements of the GLCM to the GLCM diagonal.

σ² – the variance of the intensities of all reference pixels in the relationships that contributed to the GLCM, calculated as $\sigma^2 = \sum_{i,j} P(i,j)\,(i-\mu)^2$.

μ – the GLCM mean (an estimate of the intensity of all pixels in the relationships that contributed to the GLCM), calculated as $\mu = \sum_{i,j} i\,P(i,j)$.
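The following MATLAB sketch (with a hypothetical input file and assumed GLCM settings) computes these four features with graycomatrix and graycoprops:

E = adapthisteq(rgb2gray(imread('retina.jpg')));          % hypothetical pre-processed image
glcm = graycomatrix(E, 'Offset', [0 1], ...
                    'NumLevels', 8, 'Symmetric', true);   % co-occurrence of horizontal neighbours
stats = graycoprops(glcm, {'Correlation','Contrast','Energy','Homogeneity'});
stats.Correlation   % linear dependency of neighbouring grey levels
stats.Contrast      % local intensity variation
stats.Energy        % sum of squared GLCM elements
stats.Homogeneity   % closeness of the distribution to the GLCM diagonal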

4.2.7 HISTOGRAM EQUALIZATION:

This method usually increases the global contrast of many images, especially when the usable data of the image is represented by close contrast values. Through this adjustment, the intensities can be better distributed on the histogram. This allows for areas of lower local contrast to gain a higher contrast. Histogram equalization accomplishes this by effectively spreading out the most frequent intensity values.
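A minimal MATLAB sketch comparing global histogram equalization with the adaptive variant used here is given below; the input file is a placeholder:

G  = rgb2gray(imread('retina.jpg'));   % hypothetical grayscale input
Hg = histeq(G);                        % global histogram equalization
Ha = adapthisteq(G);                   % contrast-limited adaptive histogram equalization (CLAHE)
figure;
subplot(1,3,1), imhist(G),  title('Original histogram');
subplot(1,3,2), imhist(Hg), title('After histeq');
subplot(1,3,3), imhist(Ha), title('After adapthisteq');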

4.2.8 MORPHOLOGICAL OPERATION

Morphology is a technique of image processing based on shapes. The value of each pixel in the output image is based on a comparison of the corresponding pixel in the input image with its neighbors. By choosing the size and shape of the neighborhood, you can construct a morphological operation that is sensitive to specific shapes in the input image.

4.2.8.1 DILATION & EROSION:

Dilation and erosion are two fundamental morphological operations. Dilation adds pixels to the boundaries of objects in an image, while erosion removes pixels on object boundaries.

The number of pixels added or removed from the objects in an image depends on the size and shape of the structuring element used to process the image.
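The sketch below illustrates the two operations in MATLAB with an assumed disk-shaped structuring element and a placeholder input image:

G  = adapthisteq(rgb2gray(imread('retina.jpg')));   % hypothetical pre-processed image
BW = imbinarize(G);                                 % binary image of bright candidate regions
se = strel('disk', 3);                              % disk-shaped structuring element of radius 3
BWdil = imdilate(BW, se);   % dilation adds pixels to object boundaries
BWero = imerode(BW, se);    % erosion removes pixels from object boundaries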

SYSTEM IMPLEMENTATION AND RESULT

5.2 Data Analysis and Visualization

MATLAB provides tools to acquire, analyze, and visualize data, enabling you to gain insight into your data in a fraction of the time it would take using spreadsheets or traditional programming languages. You can also document and share your results through plots and reports or as published MATLAB code.

5.3 Acquiring Data

MATLAB lets you access data from files, other applications, databases, and external devices. You can read data from popular file formats such as Microsoft Excel; text or binary files; image, sound, and video files; and scientific files such as netCDF and HDF. File I/O functions let you work with data files in any format.

5.4 Analyzing Data

MATLAB lets you manage, filter, and preprocess your data. You can perform exploratory data analysis to uncover trends, test assumptions, and build descriptive models. MATLAB provides functions for filtering and smoothing, interpolation, convolution, and fast Fourier transforms (FFTs).


5.5 Visualizing Data

MATLAB provides built-in 2-D and 3-D plotting functions, as well as volume visualization functions. You can use these functions to visualize and understand data and communicate results. Plots can be customized either interactively or programmatically. The MATLAB plot gallery provides examples of many ways to display data graphically in MATLAB. For each example, you can view and download source code to use in your MATLAB application.

5.9 IMAGE QUALITY ASSESSMENT:

Measurement of image quality is important for many image processing applications. Image quality assessment is closely related to image similarity assessment in which quality is based on the differences (or similarity) between a degraded image and the original, unmodified image. There are two ways to measure image quality by subjective or objective assessment.

Subjective evaluations are expensive and time-consuming, and it is impossible to implement them in automatic real-time systems. Objective evaluations are automatic, mathematically defined algorithms. Subjective measurements can be used to validate the usefulness of objective measurements, so objective methods have attracted more attention in recent years. Well-known objective evaluation algorithms for measuring image quality include mean squared error (MSE) and peak signal-to-noise ratio (PSNR). MSE and PSNR are very simple and easy to use.

5.9.1 Mean Squared Error (MSE):

The mean squared error (MSE) is the simplest and most widely used full-reference image quality measurement. This metric is frequently used in signal processing and is defined as follows:

\[ \mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl[x(i,j)-y(i,j)\bigr]^{2} \]

where x is the reference image, y is the test image, and M × N is the image size.

5.9.2 Peak Signal to Noise Ratio (PSNR):

The PSNR is evaluated in decibels and is inversely related to the mean squared error. It is given by the equation

\[ \mathrm{PSNR} = 10\log_{10}\!\left(\frac{L^{2}}{\mathrm{MSE}}\right) \]

where L is the maximum possible pixel value (255 for 8-bit images).

5.9.3 Average Difference (AD):

AD is simply the average of the difference between the reference signal and the test image. It is given by the equation

\[ \mathrm{AD} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl[x(i,j)-y(i,j)\bigr] \]

5.9.4 Maximum Difference (MD):

MD is the maximum of the error signal (difference between the reference signal and test image).


5.9.5 Mean Absolute Error (MAE):

MAE is the average of the absolute difference between the reference signal and the test image. It is given by the equation

\[ \mathrm{MAE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl|x(i,j)-y(i,j)\bigr| \]

5.9.6 Normalized Cross-Correlation (NK):

The closeness between two digital images can also be quantified in terms of a correlation function. Normalized cross-correlation (NK) measures the similarity between two images and is given by the equation

\[ \mathrm{NK} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N} x(i,j)\,y(i,j)}{\sum_{i=1}^{M}\sum_{j=1}^{N} x(i,j)^{2}} \]

5.9.7 Structural Content (SC):

SC is also a correlation-based measure of the similarity between two images. Structural content (SC) is given by the equation

\[ \mathrm{SC} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N} x(i,j)^{2}}{\sum_{i=1}^{M}\sum_{j=1}^{N} y(i,j)^{2}} \]
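The following MATLAB sketch computes these quality measures for a reference image x and a test image y (file names are placeholders); immse and psnr are Image Processing Toolbox functions, and the remaining measures follow the definitions given above:

x = im2double(rgb2gray(imread('reference.png')));   % hypothetical reference image
y = im2double(rgb2gray(imread('test.png')));        % hypothetical degraded/test image

mseVal  = immse(y, x);                 % mean squared error
psnrVal = psnr(y, x);                  % peak signal-to-noise ratio in dB
adVal   = mean(x(:) - y(:));           % average difference (AD)
mdVal   = max(abs(x(:) - y(:)));       % maximum difference (MD)
maeVal  = mean(abs(x(:) - y(:)));      % mean absolute error (MAE)
nkVal   = sum(x(:).*y(:)) / sum(x(:).^2);           % normalized cross-correlation (NK)
scVal   = sum(x(:).^2)   / sum(y(:).^2);            % structural content (SC)
naeVal  = sum(abs(x(:) - y(:))) / sum(abs(x(:)));   % normalized absolute error (NAE), as reported in Table 5.1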

5.10 RESULT

The results obtained from the diagnosis of DR are shown below. One hundred ten images (normal and abnormal) have been taken from the DIABETDB1 database. Of these, fifty-eight eye images are used as training samples with fivefold validation and fifty-two images as testing samples. The testing samples shown in the results are accurately classified using a linear SVM classifier. Simulation has been performed in MATLAB R2018a or higher.

The accuracy of the proposed DR detection system is evaluated based on sensitivity and specificity.
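A hedged sketch of this classification step is shown below; the feature matrix, labels, and kernel settings are placeholders rather than the exact configuration used in this project:

% Placeholder feature matrix and labels standing in for the 110 database images
X = rand(110, 4);                                        % GLCM features: correlation, contrast, energy, homogeneity
Y = [repmat({'Normal'}, 55, 1); repmat({'Abnormal'}, 55, 1)];
svmModel = fitcsvm(X, Y, 'KernelFunction', 'linear', 'Standardize', true);
cvModel  = crossval(svmModel, 'KFold', 5);               % fivefold cross-validation
cvError  = kfoldLoss(cvModel);                           % estimated misclassification rate
labels   = predict(svmModel, X(1:6, :));                 % classify a few samples with the trained model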


Figure 5.1 Results of normal image 1: (a) original image, (b) RGB to gray converted, (c) filtered image, (d) contrast enhancement, (e) analysis, (f) segmentation, (g) output image indicating a normal eye, (h) histogram of the grayscale image, (i) histogram of the adaptive-histogram-equalised image.


Figure 5.2 Results of abnormal image: (a) original image, (b) RGB to gray converted, (c) filtered image, (d) contrast enhancement, (e) analysis, (f) segmentation, (g) output image indicating hard exudates, (h) histogram of the grayscale image, (i) histogram of the adaptive-histogram-equalised image.

Sl. No. | AD       | MD | MSE     | RMSE    | PSNR    | NAE      | NCC     | SC
1       | -20.6327 | 28 | 89.6739 | 24.058  | 20.5056 | 0.399514 | 1.35536 | 0.533986
2       | -18.5084 | 45 | 91.769  | 21.6927 | 21.4045 | 0.379634 | 1.33186 | 0.553122
3       | -15.0657 | 54 | 67.3629 | 21.0076 | 21.6833 | 0.227769 | 1.19005 | 0.693593
4       | -6.61545 | 89 | 42.6519 | 14.9501 | 24.6379 | 0.125343 | 1.03769 | 0.91626
5       | -19.5552 | 39 | 101.544 | 23.1415 | 20.843  | 0.488559 | 1.44737 | 0.468417

Table 5.1 Values of image quality assessment

Sl. No. | Status        | Correlation | Contrast  | Energy   | Homogeneity
1       | Hard Exudates | 0.935886    | 0.0402551 | 0.436612 | 0.979874
2       | Microaneurysm | 0.968451    | 0.0407786 | 0.332002 | 0.979611
3       | Soft Exudates | 0.983195    | 0.0399246 | 0.296226 | 0.980041
4       | Normal        | 0.991165    | 0.0608081 | 0.337223 | 0.973209
5       | Haemorrhages  | 0.961139    | 0.0402336 | 0.353138 | 0.979883

Table 5.2 Gray level co-occurrence matrix values

CONCLUSION AND FUTURE WORK:

Diabetic Retinopathy (DR) damages the blood vessels and causes irreversible loss of vision.

Blindness may appear as a result of unchecked and severe cases of diabetic retinopathy. Early detection and treatment of DR are essential public health interventions that can greatly reduce the probability of vision loss. This project proposed a method to extract various features for the early detection of DR. Deep learning approaches have recently provided a promising direction for automatic diabetic retinopathy screening because of their high sensitivity and specificity. Automated assessment is useful for early screening of DR. Such automatic screening systems will mainly benefit the patients in DR screening programmes or annual eye examinations, who may be unaware of the disease, as well as the ophthalmologists. In future, this system will be evolved further to detect new vessels (neovascularisation) and to allow automatic DR grading.

