**An Efficient Cervical Image Segmentation Method Using Lagrange Dual Dictionary Learning in Convolutional Neural Network**

**K. Shanthi**^{1}**, Dr. S. Manimekalai**^{2}

^{1}Research Scholar, Department of Computer Science, Thiruvalluvar University, India

^{2}Head, Department of Computer Science, Theivanai Ammal College for Women (Autonomous), Villupuram, India

^{1}[email protected], ^{2}[email protected]

*Abstract- Cervical cancer ranks fourth among the most prevalent cancers found in women, yet timely and accurate detection saves lives. Cancer diagnosis is complicated because the entire process has to be analyzed carefully in a random domain, a burden that Computer Assisted Diagnosis (CAD) can relieve. The critical step of a computer-assisted diagnostic system aimed at early detection of cervical cancer is segmenting the cancer cell precisely. This step offers several benefits: it improves diagnostic accuracy, reduces diagnosis time, and increases the uniformity of diagnostic results obtained from different laboratories. In this paper, the preprocessing and segmentation steps are discussed and a novel supervised dictionary learning method is proposed for the segmentation process. The algorithm, named the Lagrange Dual Dictionary Learning Algorithm (LDDLA) in a convolutional neural network, aims to minimize segmentation problems by building a specific dictionary for every type, so that the input image can be segmented using the dictionary yielding the sparsest representation. As a result, the proposed method achieves 98.78% accuracy, 94.7% recall, 79.3% precision, and 94.65% training accuracy in 1.8 s.*

**Keywords- Cervical cancer, Preprocessing, Segmentation, Dictionary method, Neural network **

**1. ** **INTRODUCTION **

Cervical cancer pushes women to death due to the severe health issues it causes, and hence treatment has to be provided in its early stage; however, symptoms are mostly observed only at advanced stages. This type of cancer arises from the cervix due to abnormal cell growth, and those cells are capable of invading or spreading to other parts of the human body. From the investigations made, it is observed that HPV is the major cause of most cervical cancers [1]. The cervical wall comprises four layers, namely the basal, parabasal, intermediate and superficial layers. Of these, the basal layer is the deepest, with the youngest cells. Squamous and columnar cells are located in the superficial layer. There are two forms of cervical cancer: the first, which occurs more frequently, affects the squamous cells, and the second affects the columnar cells and is called cervical adenocarcinoma [2].

Cervical cancer precursor lesions show a few distinct abnormal morphological features that can be recognized by a Computer Assisted Diagnosis (CAD) system. Characteristics such as marginal shape, blood vessel caliber, color or opacity, intercapillary distribution and spacing, and contour are measured by doctors for clinical analysis. Cervical Intraepithelial Neoplasia 3 (CIN3) can be automatically discriminated from normal epithelium and immature metaplasia [3]. Since the intercapillary distances correlate with the seriousness of the condition, these distances are measured automatically to help manage such cervical neoplasms.

Fig 1: Schematic representation of cervix

Figure 1 shows the CAD examination involving systematic visual estimation of the lower genital tract (vulva, cervix and vagina), focusing on the structure of the metaplastic epithelium, the squamous epithelium and the Squamo-Columnar Junction (SCJ), including the transformation zone of the cervix. By using a CAD system, the workload of diagnosticians is reduced, allowing them to concentrate on diagnosis and on identifying abnormal cells of the cervix. Thereby the detection accuracy of cervical cancer is increased, and both the incidence and the mortality of cervical cancer are reduced. In developing countries with few diagnosticians, CAD enhances the efficiency of the diagnostician; hence, CAD is very beneficial for detecting cervical cancer in the early stages.

Segmentation of the nucleus is the first process of a CAD system while screening cancer cells; it has to be done accurately so that it can be verified whether cells have lesions by extracting and classifying features.

Generally, the nucleus characterizes a cervical cell, and hence its detection has to be precise, or else it directly affects the successive tasks such as cytoplasm segmentation and recognition [4]. Other challenges faced while segmenting the cervical nucleus are overlapping cells, poor contrast, uneven staining, and the presence of neutrophils. The vast development in devices capable of capturing enormous numbers of images poses a huge challenge in analyzing those images.

Conventional machine learning methods widely assist automatic diagnosis but fail while handling huge volumes of data. The Convolutional Neural Network (CNN) framework performs very well on high-dimensional data, as it automatically learns the underlying complex function and provides remarkable performance compared with conventional machine learning methods [5].

Thus, CNN is employed for the classification of cervical cytological images, where the entire single-cell image is provided as input rather than manually extracted nucleus and cytoplasm features of the cervical cell images. Conventional machine learning approaches involve cell segmentation, feature extraction and classification, and their success depends on the accuracy of the segmentation. In CNN, by contrast, the key strength is the effective classification of high-dimensional data, which plays a major role in the success of image processing. These models have a great impact in the health sector precisely because of this property [6].

The motivation of this work is as follows: identifying cervical cells manually for detecting cancer is a time-consuming and difficult task. As numerous cells are found on the glass slide, the process is even more complex. The factors causing errors are poor contrast and unreliable staining. In its initial stage, cervical cancer cannot be diagnosed from any symptoms. Analyzing the microscopic images is difficult, as the degree of certainty in determining the impact of the cancer is low.

Detecting cervical cancer is comparatively easy and its diagnosis is accurate, but discriminating cervical cancer is highly observer-dependent and requires detailed discussion among experts. Moreover, in computer-aided screening, the step that needs the most attention is segmentation, as only accurately segmented images help in enhancing classification performance and reducing processing time. Due to all these factors, an efficient image segmentation procedure clearly has to be designed to segment a massive number of input cells in less time and to reduce errors while screening for cervical cancer.

The paper is arranged as follows: Section 1 gives an overview of cervical cancer, the role of Computer Aided Detection (CAD) in cancer detection, and the application of convolutional neural networks in cancer detection. Section 2 describes the existing techniques for cancer prediction along with their limitations. In Section 3 the proposed methodology is explained, with subsections on data preprocessing and segmentation with the newly built architecture. Section 4 gives a detailed experimental analysis, and Section 5 ends with the conclusion and future work.

**2. ** **LITERATURE REVIEW **

**Pin Wang et al. (2020)** suggested PsiNet-TAP for Pap smear image classification. As the number of images was restricted, transfer learning was adopted to obtain a pre-trained model. The convolution layers were then modified, and the approach was optimized by pruning a few of the convolution kernels that interact with the final classification process. For testing this PsiNet-TAP method, 389 cervical Pap smear images were utilized, and a remarkable accuracy of 94% was achieved. The drawback is that this approach minimized the local reconstruction error at the cost of increasing the global reconstruction error [7].

**Ramzan et al. (2020)** suggested an approach for segmenting multiple brain regions based on 3D-CNN, employing residual learning along with dilated convolution operations to efficiently learn an end-to-end mapping from MRI images to voxel-level brain segments. This work segmented up to nine brain regions, including cerebrospinal fluid, white and gray matter, and their sub-regions. For three and nine brain regions, mean Dice scores of 0.879 and 0.914 were achieved with data from three different data sources [8].

**Allehaibi et al. (2019)** developed a method to segment the entire cervical cell using a mask regional CNN, with classification performed by a smaller Visual Geometry Group-like network (VGG-like Net). Spatial information and prior knowledge were used in ResNet10 to support the Mask R-CNN, which was evaluated on the Herlev Pap smear dataset. During segmentation of the entire cell, Mask R-CNN outperformed previous approaches with an accuracy rate of 74.4%, precision of 0.92 and recall of 0.91. The limitation of this method is that higher processing power is required throughout the network [9].

**Xiang Li et al. (2019)** presented a novel discriminative dictionary learning approach that enforces a low-rank constraint on every class-specific shared dictionary, i.e., a dictionary spanning a low-rank subspace. A regularization term on the shared dictionary is employed to minimize the inter-class scattering of its related shared coefficients, so that the shared patterns of the image are learnt in the CNN. Optimization algorithms were devised to solve the issues faced during the learning process. The results focused on voxels, with values lying between 0.3 and 0.7 and 64.3% accuracy. The limitation is that the deep hierarchical features embedded in the cell are not extracted automatically, provided the center of the coarse nucleus is known. Consequently, a low accuracy rate results when segmentation is not performed appropriately and when the medical knowledge of cervical cytology is not explicitly utilized [10].

**Haoming Lin et al. (2019)** constructed a CNN-based approach that integrates cell image appearance with cell morphology to segment cervical cells in Pap smears. A cervical cell dataset consisting of re-sampled image patches centered on the nucleus was used for training. Various CNN models pre-trained on ImageNet were fine-tuned on the cervical dataset for comparison, and the Herlev cervical dataset was used to estimate the performance of the introduced method. Among the CNN models involved, GoogLeNet with both morphological and appearance information provided the highest classification accuracy. The drawback is that the finest segmentation of cervical cells is still challenging to analyze [11].

**Yupei Zhang et al. (2019)** performed multi-needle detection using an enhanced sparse dictionary learning approach that involved images with no needles. Using the learned dictionaries, residual pixels were obtained by reconstructing the target image and were clustered to produce the centers. These centers helped in constructing Regions of Interest (ROIs). This approach accurately detected 95% of needles and produced a tip location error of 1.01 mm. The limitation is that this method was not suitable for a convolutional neural network using low-dimensional data [12].

**Wu et al. (2019)** suggested a novel Skip Connection U-net (SC U-net) with an atlas-based approach for preprocessing, in which non-brain tissues were removed and segmentation accuracy was thereby improved. A dataset with 60 paired images was used to evaluate the efficiency of this method. The SC U-net approach produced a mean Dice score of 78.36%, which exceeded the U-net approach at 74.99% and another deep learning approach at 74.80%. The drawback is that this method consumes more time for convergence [13].

**Elayaraja et al. (2018)** suggested a novel method to screen cervical cancer with cervigram images. At first, OLHT was used to enhance the edges of the cervical image, followed by the Dual Tree Complex Wavelet Transform (DT-CWT) to obtain a multi-resolution image. Then, features such as wavelet, moment invariant, LBP and GLCM features were extracted from the multi-resolution cervical image. Morphological operations were applied to detect and segment the cancer region from the abnormal cervical image. The cervical cancer detection performance was 97.42% sensitivity, 99.36% specificity and 81.29% accuracy. The limitation is that this method detected cancer only on the external boundary region and hence ended with low accuracy [14].

**Haiyan Zheng et al. (2018)** introduced a 2D deep learning-based approach that incorporates uncertainties in iterative segmentation. This approach describes the regions of uncertainty in MRI images, and the results are fine-tuned by increasing the weights of the uncertain regions while training iteratively. From the experiments, it was observed that this approach outperformed the other methods used for comparison by achieving a Dice similarity coefficient of 73.88% on an MRI cancer image dataset. The limitation is the higher computational complexity in designing the deep learning model [15].

**Asha Das et al. (2018)** suggested sparse coding and dictionary learning on covariance-based CAD matrices, which form a Riemannian manifold, for classifying breast tumors. These matrices were represented as sparse combinations of Riemannian dictionary atoms, and the manifold non-linearity was dealt with by using a Reproducing Kernel Hilbert Space (RKHS). The results showed that the suggested method achieved 79% accuracy. It was challenging when classifying numerous tumors with poorly discriminated cells and typically hollow nuclei with broken membranes [16].

**Hou et al. (2018)** suggested a novel coarse-to-fine approach for segmenting CSF, WM and GM by employing two cascaded 3D CNNs. Initially, a Densely Connected Fully Convolutional Neural Network (DC-FCNN) was developed with feature reuse, which used spatial information and reduced the limitation on computer memory. Next, a 6-CNN was developed to correct boundary voxels, which reduced the computational cost and improved segmentation accuracy. The limitation is that, when the number of voxels was larger, designing the network faced computational complexity [17].

**Ling Zhang et al. (2017)** addressed various limitations of existing methods and constructed a new approach for classifying cervical cells directly on deep features using a ConvNet, where segmentation was involved. Initially, the ConvNet was pre-trained and then fine-tuned on a cervical cell dataset consisting of re-sampled image patches centered on the nucleus. During testing, an aggregation function was used that calculated the average prediction score of the set of image patches. This model produced an accuracy of 88.3%, an AUC of 0.99, and a specificity of 98.3% on the Herlev Pap smear dataset. The limitation is the greater processing time and complexity [18].

**Yiming Liu et al. (2017)** suggested a method to segment the cervical nucleus in which prior pixel-level information was used to deliver supervisory information for training a mask regional CNN (Mask-RCNN). This was used to extract multi-scale nuclei features, and the segmentation as well as the bounding box of nuclei were obtained by forward propagation of the Mask-RCNN. To refine the segmentation, an LFCCRF containing unary along with pairwise energy was used. With the Herlev Pap smear dataset, cervical nuclei achieved a similarity index of more than 0.95 with a low standard deviation, and an accuracy of 68% was obtained. Even though the performance of segmenting the cervical nucleus was improved, the accuracy of segmenting abnormal nuclei still has to be increased due to its clinical importance [19].

**3. ** **RESEARCH METHODOLOGY **

The major objective of this paper is to design an efficient system for automatically detecting cancer cells by applying learning techniques to CAD images. This is done by proposing a novel dictionary learning approach and applying it to segment images on a larger scale. The approach takes advantage of the Lagrange multiplier to minimize the issues during segmentation by creating specific dictionaries for every type, so that the input image is segmented using the dictionary related to the sparsest representation. As a result, it is expected to segment the nucleus, cytoplasm and background regions of superficial squamous epithelium cells, columnar epithelium cells and squamous non-keratinized epithelium cells.

Fig 2: Architecture diagram of proposed system

Figure 2 illustrates the architecture of the designed system. Preprocessing involves computing the histogram, identifying the number of peaks, and suppressing peaks that are irrelevant. From the intensity values of the peaks, threshold values are fixed, from which optimal multilevel thresholds are estimated using the Otsu method. Then, Lagrange dual dictionary learning segmentation is carried out. While handling numerous images, present supervised dictionary learning approaches are less satisfactory due to their high computational complexity.

**Data preprocessing **

During segmentation, training and testing are done by isolating the cervical cell from its mask. The dataset contains both the original image and the mask, which is linked with the type of cancer. The collected images are read by file name and then segregated into two image types: the original cervical cell and the mask. During preprocessing, when the mask image is read, it is converted into a binary image in which white pixels denote the portion of cervical cells (cell nuclei along with cytoplasm) and the remaining pixels are black. For the filtration process, a Wiener filter, which is a least-mean-square filter, is used: the overall mean square error (MSE) is minimized during inverse filtering and noise smoothing, and both additive and multiplicative noise are suppressed.
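The Wiener filtering step described above can be sketched in numpy; this is a minimal re-implementation of a locally adaptive least-mean-square filter, with the 3×3 window and the noise-variance estimate chosen here for illustration (the paper does not specify them):

```python
import numpy as np

def wiener_filter(img, win=3, noise_var=None):
    """Locally adaptive least-mean-square (Wiener-style) smoothing.

    Estimates the local mean/variance in a win x win window and shrinks
    each pixel toward the local mean in proportion to the estimated
    noise variance (an illustrative sketch, not the paper's code).
    """
    img = img.astype(float)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    # Stack all win*win shifted views to get per-pixel local statistics.
    windows = np.stack([
        padded[i:i + img.shape[0], j:j + img.shape[1]]
        for i in range(win) for j in range(win)
    ])
    local_mean = windows.mean(axis=0)
    local_var = windows.var(axis=0)
    if noise_var is None:                  # default: average local variance
        noise_var = local_var.mean()
    gain = np.maximum(local_var - noise_var, 0) / np.maximum(local_var, 1e-12)
    return local_mean + gain * (img - local_mean)

noisy = np.full((32, 32), 100.0) + np.random.default_rng(0).normal(0, 10, (32, 32))
smoothed = wiener_filter(noisy)
print(noisy.std(), smoothed.std())  # the filter reduces the spread
```

The shrinkage factor goes to zero where the local variance looks like pure noise, so flat regions are smoothed while high-variance structures (cell boundaries) are preserved.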

Fig 3: Block diagram representation of preprocessing stage
**Otsu-based preprocessing**

During preprocessing, the histogram is computed, the number of peaks is determined, and irrelevant peaks are suppressed. From the intensity values of the peaks, threshold values are obtained, from which optimal thresholds are estimated using the Otsu method for multilevel image thresholding. To calculate the initial threshold, the image is partitioned into two classes $C_1$ and $C_2$ at the image mean grey value $M_0$, such that $C_1 = \{0, 1, 2, \ldots, M_0\}$ and $C_2 = \{M_0 + 1, M_0 + 2, \ldots, Z-1\}$, where $Z$ is the total number of grey levels of the image:

$M_0 = \sum_{i=0}^{Z-1} i\,q(i)$ (1)

where

$q(i) = \frac{n(i)}{N}, \quad q(i) \ge 0, \quad \sum_{i=0}^{Z-1} q(i) = 1$ (2)

with $n(i)$ the number of pixels at grey level $i$ and $N$ the total number of pixels.

From the initial threshold, the lower and higher class statistics are calculated as follows. The probability and mean of $C_1$ are

$Q_{c1} = \sum_{i=0}^{M_0} q(i)$ (3)

$M_1 = \frac{1}{Q_{c1}} \sum_{i=0}^{M_0} i\,q(i)$ (4)

and the probability and mean of $C_2$ are

$Q_{c2} = 1 - Q_{c1}, \qquad M_2 = \frac{1}{Q_{c2}} \sum_{i=M_0+1}^{Z-1} i\,q(i)$ (5)

Once the higher and lower thresholds are found, a final threshold range can be selected that optimizes the algorithm by discarding grey values that are too low or too high. In this way, image noise is reduced, along with the time complexity of the algorithm. The foreground and background histogram limits are given as

$\varepsilon_1 = \frac{x + 0}{2} \quad \text{and} \quad \varepsilon_2 = \frac{x + 255}{2}$ (6)

where $\varepsilon_1$ is the mean between 0 and the median $x$, and $\varepsilon_2$ is the mean from the median to 255.

After fixing the threshold value, the standard variance of the two classes is found as

$\sigma_{c1}^2 = \sum_{i=\varepsilon_1}^{M_0} (i - M_1)^2\, q(i) / Q_{c1}$ (7)

$\sigma_{c2}^2 = \sum_{i=M_0+1}^{\varepsilon_2} (i - M_2)^2\, q(i) / Q_{c2}$ (8)

where $T \in [\varepsilon_1, \varepsilon_2]$. The between-class variance can now be computed from the class probabilities and class means as

$\sigma_B^2 = Q_{c1}\, Q_{c2}\, (M_1 - M_2)^2$ (9)

As a result, maximum intensity pixels in each resolution enhanced images are used to select the optimum pixel intensity to form the enhanced image. The pixels in enhanced cervical image are having higher pixel values than the source cervical image. The abnormal patterns are clearly visible in enhanced cervical image.

**Data segmentation **

Given an input image, data segmentation is carried out with three primary goals: obtaining the list of bounding boxes for the image, the class label connected to every bounding box, and the confidence score of every bounding box and class label. Rather than predicting only a bounding box, the better way is to predict a mask for every original image, providing a pixel-wise segmentation instead of a coarse, possibly unreliable bounding box.

Fig 4: Block diagram representation of segmentation stage

**Architecture of convolutional neural network (CNN) **

A ZFnet-like network is employed for deep learning training during segmentation to improve the recognition performance as the depth of the CNN increases.

The network comprises deep structures with 11 to 18 weight layers and uses 7×7 filters in the first layer to select the features of the image at a finer resolution. The activation maps were increased from (384, 384, 256) to (512, 1024, 512) in the 3rd, 4th and 5th convolutional layers, which in turn increased the ability of the network to detect several features. First, a 3×3 max-pooling layer with a dropout parameter of 0.8 was added; next, local response normalization was applied in the output layer. Finally, the batch size was fixed at 1000 while training the neural network. A 3×3 filter and the exponential linear unit activation function were used in the convolutional layer. The pooling layer reduced the feature dimensionality and avoided overfitting. A 4×4 filter was employed in the max-pool layer, the dropout layer used a parameter of 0.8, and a 7×7 filter was used in the upsampling layer. The structure of the network was flattened when the convolutional layers were consolidated, and the output of the final convolutional layer was passed through a sigmoid function.
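The pooling operation mentioned above can be sketched with a plain numpy max-pool; the stride and the toy feature-map size below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def maxpool2d(x, k=3, stride=2):
    """k x k max pooling with the given stride (valid padding), as used
    between the convolutional stages of a ZFnet-like network."""
    h = (x.shape[0] - k) // stride + 1
    w = (x.shape[1] - k) // stride + 1
    out = np.empty((h, w))
    for r in range(h):
        for c in range(w):
            out[r, c] = x[r*stride:r*stride+k, c*stride:c*stride+k].max()
    return out

fmap = np.arange(64.0).reshape(8, 8)    # toy 8x8 feature map
pooled = maxpool2d(fmap, k=3, stride=2)
print(pooled.shape)  # (3, 3): each output keeps the strongest local response
```

Each pooled value keeps only the strongest activation of its window, which is how the pooling layer reduces feature dimensionality while retaining the most salient responses.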

Fig 5: ZFnet model for proposed algorithm
**Back propagation **

Gradient descent is typically used by the backpropagation optimization algorithm to update the learnable parameters of the network, i.e., kernels and weights, so as to reduce the loss. The gradient of the loss function gives the direction of highest increase, and each learnable parameter is moved in the negative direction of the gradient with a step size governed by a hyperparameter called the learning rate. The gradient is the partial derivative of the loss with respect to each learned parameter, and a single parameter update is as follows:

$x^{(l)} = x^{(l)} - \beta\,\frac{\partial J(x,a)}{\partial x^{(l)}} = x^{(l)} - \beta\left[\frac{1}{n}\sum_{i=1}^{n}\frac{\partial J(x,a;\, r^{(i)}, s^{(i)})}{\partial x^{(l)}} - \Omega x^{(l)}\right]$ (10)

$y^{(l)} = y^{(l)} - \beta\,\frac{\partial J(y,a)}{\partial y^{(l)}} = y^{(l)} - \beta\left[\frac{1}{n}\sum_{i=1}^{n}\frac{\partial J(y,a;\, r^{(i)}, s^{(i)})}{\partial y^{(l)}}\right]$ (11)

where $\beta$ is the update rate of the parameters, $x$ and $y$ are the weight matrix and offset vector of every layer, and $(r^{(i)}, s^{(i)})$, $1 \le i \le n$, is the given set of samples.
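The update of Eq. (10) can be sketched on a toy quadratic loss; `sgd_step`, the learning rate and the regularization weight Ω are illustrative names, assuming the averaged-batch-gradient form of the equation:

```python
import numpy as np

def sgd_step(x, grads, beta=0.1, omega=0.0):
    """One update in the spirit of Eq. (10):
    x <- x - beta * (mean batch gradient - omega * x).
    beta is the learning rate, omega a regularization weight (assumed form)."""
    return x - beta * (np.mean(grads, axis=0) - omega * x)

# Toy problem: minimize J(x) = 0.5*||x - target||^2, so dJ/dx = x - target.
target = np.array([1.0, -2.0])
x = np.zeros(2)
for _ in range(100):
    grad = x - target          # gradient for the single "sample"
    x = sgd_step(x, [grad], beta=0.1)
print(x)  # converges toward the minimizer [1, -2]
```

Each step moves the parameter a fraction `beta` of the way along the negative averaged gradient, which is exactly the behavior the learning-rate hyperparameter controls in the network training described above.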

The feed-forward layer adjusts the activation values obtained from the inception model so that they finally lie closer to 0 and are necessarily less than 1. These values help the model operate more simply by reducing the computational complexity. The size of the output image obtained from the inception model is reduced using pooling, so that the model is trained efficiently and the computational complexity is reduced. In the dense layer, every node is connected to every other node in a feed-forward way; hence, while detecting features, redundancy is reduced and feature reusability is permitted. The number of parameters is reduced, but this is only appropriate for training small datasets.

**Region of Interest (ROI) **

The ROI extracted using ZFnet is aligned by a single layer for extracting the features. The single-layer features are then replaced by multi-layer features: every ROI is aligned using multi-layer features, and the features of the various layers are combined so that every feature carries multi-layer information.

**Lagrange Dual Dictionary Learning Algorithm (LDDLA) **

Hao and Fei utilized an inexact Augmented Lagrange Multiplier approach for updating two dictionaries. Let $v^{(low)}$ represent a low-quality image and let $Dic^{(l)} = [a_1, a_2, \ldots, a_M]$, $Dic^{(l)} \in \mathbb{R}^{n \times M}$, denote a low-resolution dictionary created from $v^{(low)}$. Likewise, consider $v^{(high)}$ as the high-quality counterpart of $v^{(low)}$, with $Dic^{(h)} = [b_1, b_2, \ldots, b_M]$, $Dic^{(h)} \in \mathbb{R}^{n \times M}$, constructed from $v^{(high)}$. As there is a corresponding relation between $v^{(low)}$ and $v^{(high)}$, they can be connected with the following general model:

$v^{(low)} = Q\, v^{(high)} + \mu^{(low)}$ (12)

where $\mu^{(low)}$ and $Q$ represent the noise and the transform operator, respectively. For a specific $v^{(high)}$, assume that every patch $v_i^{(high)}$ in $v^{(high)}$ can be represented as a linear combination of atoms of the dictionary $Dic^{(h)}$:

$v_i^{(high)} = Dic^{(h)} \alpha_i + \beta$ (13)

where $\beta$ is the error with $\|\beta\|^2 < \mu$, and $\alpha_i$ is the sparse coefficient with $\|\alpha_i\|_0 \ll M$. The combination of (12) and (13) gives

$\| v_i^{(low)} - Dic^{(l)} \alpha_i \|^2 = \| v_i^{(low)} - Q\, Dic^{(h)} \alpha_i \|^2 < \Omega$ (14)

In accordance with the derivations above, referred to as the Sparse-Land model, the low-quality patches $v_i^{(low)}$ are sparse-coded by the same vector $\alpha_i$ under the dictionary $Dic^{(l)} = Q\, Dic^{(h)}$. Thus, given the dictionaries $Dic^{(l)}$ and $Dic^{(h)}$ with a one-to-one mapping of atoms, $v_i^{(high)}$ can be approximately recovered simply by multiplying $Dic^{(h)}$ with the sparse representation obtained from $Dic^{(l)}$:

$v_i^{(high)} = Dic^{(h)} \alpha_i + \mu_i$ (15)
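A toy numpy sketch of the coupled-dictionary model of Eqs. (12)-(15): the sparse code is estimated against Dic(l) = Q·Dic(h), and the high-quality patch is then recovered with Dic(h). For simplicity the sparse-coding step is replaced here by least squares on a known support, which is not the paper's solver:

```python
import numpy as np

rng = np.random.default_rng(0)
n, M, k = 16, 32, 3                    # patch dimension, atoms, sparsity

D_high = rng.normal(size=(n, M))       # high-quality dictionary Dic(h)
Q = 0.5 * np.eye(n)                    # toy degradation operator (Eq. 12)
D_low = Q @ D_high                     # coupled dictionary Dic(l) = Q * Dic(h)

support = rng.choice(M, k, replace=False)
alpha = np.zeros(M)
alpha[support] = rng.normal(size=k)    # sparse code shared by both domains

v_high = D_high @ alpha                # Eq. (13), noise-free for illustration
v_low = Q @ v_high                     # observed low-quality patch

# Sparse coding step (Eq. 14), simplified: least squares on the known support.
sol = np.linalg.lstsq(D_low[:, support], v_low, rcond=None)[0]
a_hat = np.zeros(M)
a_hat[support] = sol

v_rec = D_high @ a_hat                 # recovery via Eq. (15)
print(np.allclose(v_rec, v_high))
```

Because the same sparse code represents the patch in both domains, the code found against the low-quality dictionary is enough to reconstruct the high-quality patch.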

Once the high-resolution and low-resolution dictionaries are calculated, they must be updated and optimized, which is done by a stochastic gradient descent algorithm. To initialize the optimization of the dual dictionaries, a set of unsupervised dictionaries is first learnt for the higher and lower resolutions, $D_{un}^{(low)} = \{D^{(low)}_l\}_{l=0}^{L-1}$ and $D_{un}^{(high)} = \{D^{(high)}_l\}_{l=0}^{L-1}$. Each dictionary $D^{(low)}$ and $D^{(high)}$ represents the descriptors at a specific scale. For each internal node $x$ at layer $l$, the model parameters $W_x$ are initially assigned 0. The dictionaries $D^{(low)}$ and $D^{(high)}$ comprise the inherited parts $D_{iv}^{(low)}$ and $D_{iv}^{(high)}$ from their hidden parts. For the training set of images $\{(a, b)\}_v$, several iterations $N_{it}$ are performed for updating $D_{sv}$ and $W_x$. At each iteration, a batch of samples is randomly selected from the data $samp(v)$, and the representations $p_v = \Delta(a, D_{iv})$ and $p_v = \Delta(b, D_{iv})$ are computed for every sample $a$ of the batch. Considering the cost of reconstructing the code at every iteration, the representation $z$ is approximated by the code fragments of the descriptors in every sample. Overall, optimization follows a top-down fashion, i.e., learning is performed sequentially from top to bottom, and the nodes at each layer are tackled independently.

**Algorithm:**

**INPUT:** $Dic^{(l)} = [a_1, a_2, \ldots, a_M]$, $Dic^{(h)} = [b_1, b_2, \ldots, b_M]$; initial sets $Dic^{(l)} \in \mathbb{R}^{n \times M}$ and $Dic^{(h)} \in \mathbb{R}^{n \times M}$; data $(a, b)$; parameters $Q, \beta, \alpha, \Delta, \mu, \Omega$

**OUTPUT:** optimized parameters with weights $D_{samp(v)}$ and $W_x$

**Compute:** $v^{(low)}$ and $v^{(high)}$

For $l = 0$ to $L-1$ do
  $v_i^{(high)} = Dic^{(h)} \alpha_i + \mu_i$
  For each node $v$ at layer $l$ do
    Initialize: $\{(a, b)\}_v \leftarrow \{a \in Dic^{(l)}$ and $b \in Dic^{(h)}\}$
    $W_x \leftarrow 0$, $D_{samp(v)} \leftarrow Dic^{(low)}$, $D_{samp(v)} \leftarrow \{D_{iv}, D_{sv}\}$
    For $k = 1$ to $N_{it}$ do
      choose a batch from $samp(v)$
      compute the representations $p_v = \Delta(a, D_{iv})$ and $p_v = \Delta(b, D_{iv})$
      Update dictionary:
      $D_{sv} \leftarrow \Omega\,(D_{sv} - \Delta_K \gamma W_x R_n)$
      $W_x \leftarrow \Omega\,(W_x - \Delta_K \gamma W_x R_n)$
    End
  End
End
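The inner batched update loop of the algorithm can be sketched as a stochastic gradient step on a reconstruction objective; the least-squares representation stands in for $p_v = \Delta(a, D_{iv})$, and the step size, iteration count and batch size below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, M, N = 12, 6, 200
D_true = rng.normal(size=(n, M))
X = D_true @ rng.normal(size=(M, N))     # patches drawn from a 6-dim subspace

def recon_error(D, X):
    """Relative error of reconstructing X with optimal codes under D."""
    A = np.linalg.lstsq(D, X, rcond=None)[0]
    return np.linalg.norm(D @ A - X) / np.linalg.norm(X)

D = rng.normal(size=(n, M))              # random initial dictionary D_sv
err_init = recon_error(D, X)

gamma, n_it, batch = 0.05, 300, 32
for _ in range(n_it):                     # "For k = 1 to N(it) do"
    idx = rng.choice(N, batch, replace=False)          # batch from samp(v)
    A = np.linalg.lstsq(D, X[:, idx], rcond=None)[0]   # representation p_v
    D -= gamma * (D @ A - X[:, idx]) @ A.T / batch     # step on ||DA - X||^2

err_final = recon_error(D, X)
print(err_init, err_final)  # error drops as the dictionary adapts to the data
```

Each iteration samples a batch, computes the codes for the current dictionary, and nudges the dictionary along the negative gradient of the reconstruction error, mirroring the "choose a batch / compute representation / update dictionary" steps of the pseudocode.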

**4. ** **PERFORMANCE MEASURES **

For estimating and validating the introduced segmentation approach for uncertainty regions in cervical images, annotated cervical cancer datasets are used to perform the experiments. The 3D cervical MRI image dataset was collected as a secondary dataset from an open source, with different types of squamous epithelial cells. The image size is 310 × 310 pixels, with large variations in shape as well as appearance. In every image, the cervix boundary is delineated manually by human experts consistent with domain knowledge and is used to measure the segmentation performance, implemented in MATLAB-2013.

Fig 6: Input cervical image

Figure 6 shows the model input cervical image, which is affected by squamous epithelium cell carcinoma; it is preprocessed by a Wiener filter to remove unwanted noise.

Fig 7: preprocessed image

Figure 7 shows the Otsu-threshold preprocessed image, for which the histogram is computed to find the number of available patterns before segmentation.

Fig 8: Segmented image

Figure 8 shows the segmented cancer image, in which the yellow part indicates the presence of cancer in the cervix.

Fig 9: Histogram values for input image

Figure 9 shows the histogram values for the input cervical image, where the peak values are attained at the 49th and 40th histogram bins.

Table I: Comparison of training accuracy and Dice score

| Method | Training Accuracy (%) | Dice score |
| --- | --- | --- |
| 3D Convolutional Neural Networks (3D-CNN) [8] | 91.4 | 87 |
| Skip Connection U-net (SC Unet) [13] | - | 78.36 |
| 2D deep learning-based method [15] | 99.86 | 73.88 |
| CNN [10] | 38.4 | 90.2 |
| Densely Connected Fully Convolutional Network (DC-FCN) [17] | - | 82.77 |
| Proposed ZFnet with LDDLA | 94.65 | 81.45 |

Table I compares the existing networks with the proposed Lagrange Dual Dictionary Learning Algorithm (LDDLA); the training accuracy of the proposed method is 94.65%, corresponding to less computational time over the whole network.

Table II: Segmentation time of LDDLA in ZFnet

| Dataset | Image Size (Pixels) | Time for Segmentation (sec) |
| --- | --- | --- |
| 1 | 170×270 | 14.6 |
| 2 | 200×420 | 12.7 |
| 3 | 150×380 | 16.7 |
| 4 | 600×360 | 7.3 |
| 5 | 100×500 | 11.5 |

Table II gives the segmentation time of the Lagrange Dual Dictionary Learning Algorithm (LDDLA) in ZFnet for images of different pixel sizes across the datasets. It is noticed that a 170×270 image takes 14.6 s, a 200×420 image 12.7 s, a 150×380 image 16.7 s, a 600×360 image 7.3 s, and a 100×500 image 11.5 s.

DSC (Dice similarity coefficient) measures the quantitative quality of a segmentation. It is the size of the overlap of the two segmentations divided by the total size of the two regions:

$DSC = \frac{2\,|R(s) \cap R(g)|}{|R(s)| + |R(g)|}$ (16)

where $R(s)$ and $R(g)$ denote the region segmented by the method and the manually segmented region of an image, respectively. Treating segmentation as a binary classification problem in which pixels are classified as foreground or background, accuracy, precision and recall are also considered; pixel classification is then done using decision rules.

Accuracy is the ratio of correct predictions to the total predictions made and is mathematically given as

$Accuracy = \frac{TP + TN}{TP + TN + FP + FN}$ (17)

Precision is the ratio of the total positive samples correctly classified to the total samples determined as positive by the classifier. It represents the proportion of the model's cancer predictions in which cancer is actually present and is mathematically defined by

$P = \frac{TP}{TP + FP}$ (18)

Recall is the ratio of the total positive samples classified correctly to the total truly positive samples. It represents the proportion of all cases of cancer that the model accurately predicted:

$Recall = \frac{TP}{TP + FN}$ (19)
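Eqs. (16)-(19) can be computed directly from binary masks; a small sketch follows (the helper name is illustrative):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-level metrics of Eqs. (16)-(19) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "dice": 2 * tp / (pred.sum() + truth.sum()),   # Eq. (16)
        "accuracy": (tp + tn) / (tp + tn + fp + fn),   # Eq. (17)
        "precision": tp / (tp + fp),                   # Eq. (18)
        "recall": tp / (tp + fn),                      # Eq. (19)
    }

truth = np.zeros((10, 10), dtype=int); truth[2:6, 2:6] = 1   # 16 true pixels
pred = np.zeros((10, 10), dtype=int);  pred[3:7, 3:7] = 1    # 16 predicted
m = segmentation_metrics(pred, truth)
print(m)  # overlap is a 3x3 block, so TP = 9
```

With the shifted square above, the two masks overlap in 9 pixels, giving Dice = 18/32 = 0.5625, precision = recall = 9/16, and accuracy = 86/100.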

Table III: Segmentation results of existing and proposed convolutional network models

| Model | Dice Coefficient | Accuracy | Precision | Recall |
| --- | --- | --- | --- | --- |
| 2D Unet | 78.13 | 98.14 | 79.00 | 80.71 |
| 2D Unet + SE | 78.28 | 97.89 | 75.87 | 82.40 |
| AlexNet 5C | - | 91.5 | 67.13 | 96.5 |
| ResNet 5C | - | 94.8 | 78.35 | 97.4 |
| DenseNet-5C | - | 93.3 | 73.45 | 93.3 |
| ZFnet with LDDLA | 81.45 | 98.78 | 79.34 | 94.7 |

Table III gives a detailed analysis of the accuracy, recall and precision of various traditional convolutional neural network architectures (2D Unet, 2D Unet+SE, AlexNet 5C, ResNet 5C, DenseNet 5C) against the proposed ZFnet with the Lagrange Dual Dictionary Learning Algorithm (LDDLA), which achieves 98.78% accuracy, 79.3% precision and 94.7% recall.

Fig 10: Comparison of accuracy, precision and recall

Figure 10 compares the overall accuracy, precision and recall: the accuracy of 2D Unet is 98.14%, 2D Unet+SE 97.89%, AlexNet 5C 91.5%, ResNet 5C 94.8%, DenseNet-5C 93.3%, and ZFnet with LDDLA 98.78%.

**5. ** **CONCLUSION **

In this paper, a Lagrange Dual Dictionary Learning Algorithm based segmentation method in a convolutional neural network is proposed, leading to higher accuracy. The method precisely segments the cancer-affected area and finds smoother, clearer boundaries of the cervix. Moreover, it covers the maximum tumor regions, so that tumor detection is easier in cervical images. Notably, high efficiency is observed during implementation: after training the model, segmentation was performed almost instantaneously for a large number of images, which meets the requirements of practical medical analysis. In future work, the constructed ZFnet model for segmentation will be optimized using an efficient optimization algorithm. The proposed method achieves 98.78% accuracy, 94.7% recall, 79.3% precision, and 94.65% training accuracy in 1.8 sec.

**REFERENCES **

[1] Rajendra A Kerkar, Yogesh V Kulkarni, “Screening for cervical cancer: an overview”, Journal of Obstet Gynecol, vol.56, no.2, pp.115-122, 2006.

[2] L. B. Mahanta, D. C. Nath and C. K. Nath, “Cervix cancer diagnosis from pap smear images using structure based segmentation and shape analysis”, Journal of Emerging Trends in Computing and Information Sciences, vol.3, no.2, pp.245-249, 2012.

[3] Plissiti ME, Nikou C, Charchanti A, “Automated detection of cell nuclei in Pap smear images using morphological reconstruction and clustering”, IEEE Transactions on Information Technology in Biomedicine, vol.15, no.2, pp.233-241, 2011.

[4] Lu Z, Carneiro G, Bradley AP, “An improved joint optimization of multiple level set functions for the segmentation of overlapping cervical cells”, IEEE Trans Image Process, vol.24, no.4, pp.1261-1272, 2015.

[5] Mundhra D, Cheluvaraju B, Rampure J, Dastidar TR, “Analyzing microscopic images of peripheral blood smear using deep learning”, Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pp.178-185, 2017.

[6] Plissiti ME, Nikou C, “A review of automated techniques for cervical cell image analysis and classification”, Biomedical imaging and computational modeling in biomechanics, pp.1-18, 2013.

[7] Wang, Pin, Jiaxin Wang, Yongming Li, Linyu Li and Hehua Zhang, “Adaptive Pruning of Transfer Learned Deep Convolutional Neural Network for Classification of Cervical Pap Smear Images”, IEEE Access, vol.8, pp.50674-50683, 2020.

[8] Ramzan, F., Khan, M. U. G., Iqbal, S., Saba, T. and Rehman, A., “Volumetric Segmentation of Brain Regions From MRI Scans Using 3D Convolutional Neural Networks”, IEEE Access, vol.8, pp.103697-103709, 2020.

[9] Allehaibi, Khalid Hamed S., Lukito Edi Nugroho, Lutfan Lazuardi, Anton Satria Prabuwono and Teddy Mantoro, “Segmentation and classification of cervical cells using deep learning”, IEEE Access, vol.7, pp.116925-116941, 2019.

[10] Li, Xiang, Ying Wei, Yunlong Zhou and Bin Hong, “Subcortical Brain Segmentation Based on a Novel Discriminative Dictionary Learning Method and Sparse Coding”, IEEE Access, vol.7, pp.149785-149796, 2019.

[11] Lin, Haoming, Yuyang Hu, Siping Chen, Jianhua Yao and Ling Zhang, “Fine-grained classification of cervical cells using morphological and appearance based convolutional neural networks”, IEEE Access, vol.7, pp.71541-71549, 2019.

[12] Zhang, Yupei, Xiuxiu He, Zhen Tian, Jiwoong Jason Jeong, Yang Lei, Tonghe Wang, Qiulan Zeng, “Multi-needle Detection in 3D Ultrasound Images Using Unsupervised Order-graph Regularized Sparse Dictionary Learning”, IEEE Transactions on Medical Imaging, 2019.

[13] Wu, J., Zhang, Y., Wang, K. and Tang, X., “Skip Connection U-Net for White Matter Hyperintensities Segmentation From MRI”, IEEE Access, vol.7, pp.155194-155202, 2019.

[14] Elayaraja P and Suganthi M, “Automatic approach for cervical cancer detection and segmentation using neural network classifier”, Asian Pacific Journal of Cancer Prevention: APJCP, vol.19, no.12, 2018.

[15] Zheng, Haiyan, Yufei Chen, Xiaodong Yue, Chao Ma, Xianhui Liu, Panpan Yang, and Jianping Lu, “Deep pancreas segmentation with uncertain regions of shadowed sets”, Magnetic Resonance Imaging, vol.6, no.8, pp.45-52, 2018.

[16] Das, Asha, Madhu S. Nair and S. David Peter, “Sparse representation over learned dictionaries on the riemannian manifold for automated grading of nuclear pleomorphism in breast cancer”, IEEE Transactions on Image Processing , vol.28, no.3, pp.1248-1260, 2018.

[17] Hou, B., Kang, G., Zhang, N. and Hu, C., “Robust 3D convolutional neural network with boundary correction for accurate brain tissue segmentation”, IEEE Access, vol.6, pp.75471-75481, 2018.

[18] Zhang, Ling, Le Lu, Isabella Nogues, Ronald M. Summers, Shaoxiong Liu and Jianhua Yao, “DeepPap: deep convolutional networks for cervical cell classification”, IEEE Journal of Biomedical and Health Informatics, vol.21, no.6, pp.1633-1643, 2017.

[19] Liu, Yiming, Pengcheng Zhang, Qingche Song, Andi Li, Peng Zhang and Zhiguo Gui, “Automatic segmentation of cervical nuclei based on deep learning and a conditional random field”, IEEE Access, vol.6, pp.53709-53721, 2017.