Academic year: 2022

Analysis of Medical Image Fusion Using Transform-Based Function and Neural Network

Mitesh Kumar Research Scholar SRK UNIVERSITY, Bhopal [email protected]

Dr. Nikhil Ranjan Associate Professor SRK UNIVERSITY, Bhopal [email protected]

Dr. Bharti Chourasia Associate Professor SRK UNIVERSITY, Bhopal [email protected]

Abstract

Medical image fusion plays a significant role in the computer-aided diagnosis of critical illness and disease. With the continuous support of computer vision and medical science, image fusion methods keep improving, and transform-based functions and neural networks are significant contributors to this improvement. Transform-based functions fall under the category of feature-based image fusion: transforms such as the DCT, DWT, CT (contourlet transform), and other transform variants are applied to extract features.

The transform-based process is dominated by texture features. Texture is an important feature of medical imagery; texture features cover roughly 75% of the whole image. Neural networks and image fusion have a long-standing relationship: neural network methods improve the fusion efficiency of medical images and produce good-quality results in terms of PSNR and SSIM. This paper presents an experimental analysis of various transform and neural network methods for medical image fusion. The study uses a standard medical image fusion dataset and measures standard parameters such as PSNR and SSIM. The analysis uses MATLAB, a well-known software environment for neural networks and image processing.

Keywords: medical image, fusion, CT, MRI, PET, ANN, wavelet, transform, MATLAB

INTRODUCTION

The visual impact and quality of an image play an important role in medical imagery diagnosis. Image fusion has great potential to cover all aspects of the computer-aided diagnosis of critical illness [1]. The fusion process takes two or more images, applies a set of fusion operations, and produces a fused image with better visual impact and quality than any single source image [1, 2, 3]. Image fusion operates in two modes: the spatial domain and the frequency domain. Spatial-domain methods operate directly on pixels, while frequency-domain methods apply transform-based functions [4]. Pixel-based fusion has certain limitations with respect to the resolution and dimensionality of medical images: most medical devices generate three-dimensional, high-intensity images [5], and such images face problems of dimensionality reduction and transformation of the imagery data. Various authors have reported that common medical image formats compromise quality and intensity [6, 7, 8]. The most commonly used medical imagery includes CT (computed tomography), MRI (magnetic resonance imaging), PET (positron emission tomography), and X-ray images. Some modalities capture the soft tissue of the human body, while others capture hard tissue that cannot easily be predicted or visualised for analysis [9, 10]. Different medical image formats carry different feature characteristics, and different imaging sensors obtain different information about the same part of the body.
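The spatial-domain mode described above can be made concrete with a minimal sketch. The function names and toy arrays below are illustrative, not from the paper; the sketch assumes the two source images are already registered and of equal size, and shows the two simplest pixel-based rules (averaging and maximum selection) using numpy:

```python
import numpy as np

def fuse_average(img_a, img_b):
    """Spatial-domain fusion: per-pixel average of two registered images."""
    return (img_a.astype(np.float64) + img_b.astype(np.float64)) / 2.0

def fuse_max(img_a, img_b):
    """Spatial-domain fusion: keep the stronger pixel from either source."""
    return np.maximum(img_a, img_b)

# Two toy 2x2 "modalities" standing in for registered CT/MRI slices.
a = np.array([[10, 20], [30, 40]], dtype=np.float64)
b = np.array([[40, 20], [10, 0]], dtype=np.float64)

print(fuse_average(a, b))  # [[25. 20.] [20. 20.]]
print(fuse_max(a, b))      # [[40. 20.] [30. 40.]]
```

The averaging rule illustrates the limitation noted in the text: it blends intensities and can wash out detail, which is why frequency-domain methods are preferred for high-intensity medical data.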

The objective of image fusion is to obtain better image quality and a better similarity measure index [11, 12]. Edge information should be retained even in the presence of noise and other artifacts. Beyond conventional fusion methods, various authors have proposed transform-based methods, which retain the structure of the image and minimise distortion; however, the transforms themselves can generate noise during fusion [13, 14, 15], so image denoising is also a challenging job for image fusion. Because the advantages of transform-based functions outweigh this noise limitation, most authors apply them for image fusion. The major transform functions applied are the DCT (discrete cosine transform), DWT (discrete wavelet transform), FFT (fast Fourier transform), and other derived transforms; several authors have also applied the contourlet transform [16, 17, 18]. The most dominant transform is the discrete wavelet transform, whose sampling separates the signal into high-frequency and low-frequency components. This sampling process retains the edge information of biomedical images and supports a better fusion process [19, 20].

Conventional and dynamic artificial neural network (ANN) models improve the performance of image fusion methods [21, 22]. Conventional neural network models applied to image fusion include ART, BP, RBF, and SOM; dynamic and advanced models include CNNs, deep learning, RNNs, and many derived architectures [23, 24, 25]. The efficacy and accuracy of training and pattern matching improve the quality of the fusion process. Image fusion can be described in three segments: pixel-based, feature-based, and decision-based fusion [26, 27, 28].

Neural networks are applied in feature-based and decision-based methods. To further enhance the quality of the fused image, swarm-intelligence-based optimization algorithms are applied: many authors have reported particle swarm optimization, ant colony optimization, and other swarm-derived methods [29, 30]. Swarm-based optimization algorithms reduce the unwanted feature coefficients of the source images. This paper analyses the performance of neural-network-based fusion algorithms on an image dataset drawn from a medical repository. The main contributions of this paper are: (1) a study of medical image fusion methods based on neural networks; (2) identification of bottleneck problems in image fusion; and (3) analysis of the applied methods in terms of PSNR and SSIM. The rest of the paper is organised as follows: Section II covers related work; Section III covers transform functions and neural networks; Section IV describes the experimental analysis; and Section V concludes.
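The PSNR metric used throughout this paper has a standard definition that can be sketched directly; the helper name and toy arrays below are illustrative, not from the paper, and the peak value of 255 assumes 8-bit images:

```python
import numpy as np

def psnr(reference, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a fused image."""
    mse = np.mean((reference.astype(np.float64) - fused.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 100.0)
degraded = ref + 10.0  # uniform error of 10 gray levels -> MSE = 100
print(round(psnr(ref, degraded), 2))  # 10*log10(255^2/100) ≈ 28.13
```

Higher PSNR indicates a fused image closer to the reference; SSIM complements it by comparing local structure rather than raw pixel error.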

II. RELATED WORK

The complex structure of human body organs cannot be visualized without a proper mode of fusion. For proper visualization, various devices are used, such as CT scanners, MRI, PET, and others; these devices take multiple images of a single location and collect additional information about a particular organ, and image fusion methods are then applied for visualization. The utility and diversity of image fusion methods improve the chances of detecting critical disease and planning treatment. Various authors have continuously contributed fusion methods with different approaches; some of these approaches and methods are described here.

Xiang, Tianyuan et al. [1] note that highly flexible object detection and tracking in anti-stealth and anti-interference environments is a challenging research topic. Since locality-constrained linear coding and collaborative representation are inherently linear models, the discriminative information they provide is insufficient for object tracking. They therefore study multi-scale feature fusion based on swarm-intelligence collaborative learning for full-stage anti-interference object tracking. The method combines multiple features to describe the object, improving on the representation power of any single feature; it uses locality-constrained linear coding to obtain better classification performance, and then uses a swarm-intelligence kernel function to extend the collaborative learning of local constraints into the kernel space, deriving a kernel sparse representation. Simulation results show that the improved algorithm has clear advantages in real-time performance, stability, and quantitative indices, and is suitable for high-performance, low-cost video surveillance. Xu, Lina et al. [2] observe that medical image fusion is an important class of clinical application that strongly affects final diagnostic results. They present a hybrid optimization strategy for building a high-efficiency method for fusing clinical images. Medical imaging is a significant issue in medicine owing to its great impact on, and sensitivity in, diagnosing various clinical problems. In general, assembling a comprehensive image that incorporates all the useful features of the clinical images helps specialists make an easier and more precise diagnosis of the illness; the process of combining the different information of the images is called image fusion.

Panigrahy, Chinmaya et al. [3] note that multi-focus image fusion combines the focused parts of multiple images of the same scene to produce a fully focused image. The DCPCNN is frequently used in image fusion frameworks because of characteristics such as global coupling, pulse synchronization of neurons, and simultaneous processing of two images. The performance of the discussed method is compared with fourteen state-of-the-art multi-focus fusion methods using six objective quality metrics; experimental results show that the method is competitive with the state of the art in both subjective and objective assessments. Dolly, D. Raveena Judie et al. [4] describe medical imaging as a boon to humankind. Processing of medical images is flourishing with the aid of technological advancement. Medical images taken from different modalities are stabilized to remove jittery artifacts and further processed to fuse the images, enabling physicians to visualize the combined features of CT and MRI. Alignment of both images is performed to ensure parametric registration, so that they can be effectively blended and overlaid for proper diagnosis. To guarantee image quality, essential enhancement is implemented, and subjective visualization and objective evaluation are performed. The discussed approach, image fusion with stabilization and registration, outperforms existing methods in both subjective and objective evaluation.

Huang, Bing et al. [5] describe medical image fusion as the process of merging multiple images from different imaging modalities to obtain a fused image with a large amount of information, thereby increasing the clinical applicability of medical images. The authors give an overview of multimodal medical image fusion techniques, placing emphasis on the most recent advances in the area, including deep-learning-based fusion, the imaging modalities involved, and the performance analysis of medical image fusion on benchmark datasets. They conclude that current multimodal medical image fusion research results are increasingly significant and the trend is upward, though many challenges remain in the field.

Azmi, Kamil Zakwan Mohd et al. [6] note that underwater imagery suffers severe degradation due to selective attenuation and scattering as light travels through the water medium. Such damage limits the capability of vision tasks and reduces image quality. There are many enhancement techniques for underwater images, but most produce distortion effects in the output. The discussed natural-based NUCE method consists of four steps. The first step introduces a new approach to neutralize the underwater colour cast: the inferior colour channels are enhanced using gain factors computed from the differences between the superior and inferior colour channels. In the second step, dual-intensity image fusion based on the average of mean and median values is used to produce lower-stretched and upper-stretched histograms, whose composition improves image contrast substantially. Next, swarm-intelligence-based mean equalization is applied to improve the naturalness of the output image: through a swarm-intelligence algorithm, the mean values of the inferior colour channels are adjusted to be close to the mean value of the superior colour channel. Finally, unsharp masking is applied to sharpen the overall image. Experiments on underwater images captured under various conditions show that the NUCE method delivers better output image quality, clearly outperforming other state-of-the-art methods. Bashir, Rabia et al. [7] describe image fusion as the process of combining two or more related images to produce a single output image containing more relevant information than any of the inputs. The fusion process depends on the application domain, the number of images undergoing fusion, and the type of imagery, for example whether it is multi-spectral or multi-modal. For clarity of presentation, this work takes two significant fusion techniques, SWT and PCA, and applies them to a variety of imagery. Results show that in multi-modal fusion, PCA appears to perform better for input images with different contrast/brightness levels, while SWT appears to give better performance when the input images are multi-modal and multi-sensor. A feature of the work is the number of objective functions used to evaluate the SWT and PCA techniques, allowing the utility of each to be judged. Chao, Zhen et al. [8] note that in clinical applications single-modality images do not provide sufficient diagnostic information, so it is necessary to combine the advantages or complementarities of different image modalities. Recently, neural network methods have been applied to medical image fusion by many researchers, but many shortcomings remain. The authors examine a novel fusion technique for combining multi-modality medical images based on the Fuzzy-RBFNN, which comprises five layers: input, fuzzy partition, front combination, inference, and output. They also examine a hybrid of the GSA and the EBPA to train the network and update its parameters. Two different sample images are used as inputs to the neural network, and the output is the fused image. A comparison with conventional fusion techniques and another neural-network method, through subjective observation and objective evaluation indices, reveals that the discussed technique effectively integrates the information of the input images and achieves better results. The authors additionally trained the network using the EBPA and the GSA separately; the results reveal that the EBP-GSA hybrid not only outperformed both EBPA and GSA but also trained the neural network more precisely under the same evaluation index.
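The PCA fusion rule compared in the surveyed work can be sketched briefly. This is a generic textbook version, not the exact implementation of any cited paper; the function name and random test images are illustrative. The idea is to weight each source image by the first principal component of the joint pixel covariance:

```python
import numpy as np

def pca_fuse(img_a, img_b):
    """PCA fusion rule: weight each source by the dominant eigenvector of
    the 2x2 covariance of their pixel values, normalised to sum to one."""
    data = np.stack([img_a.ravel(), img_b.ravel()]).astype(np.float64)
    cov = np.cov(data)                      # 2x2 covariance of the two sources
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    pc = np.abs(eigvecs[:, -1])             # dominant eigenvector
    w = pc / pc.sum()                       # normalised fusion weights
    return w[0] * img_a + w[1] * img_b

rng = np.random.default_rng(0)
a = rng.random((8, 8)) * 100           # stand-in for one modality
b = a + rng.normal(0, 1, (8, 8))       # correlated second modality
fused = pca_fuse(a, b)
print(fused.shape)  # (8, 8)
```

The source with higher variance receives the larger weight, which matches the reported observation that PCA favours the input with stronger contrast.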

El-Hoseny, Heba M. et al. [9] present an efficient medical image fusion framework based on the DT-CWT and the MCFO technique. The first step is histogram matching of one image to the other, giving both images the same dynamic range. The DT-CWT is then used to decompose the images to be fused into their coefficients. The MCFO technique determines the optimal decomposition level and the optimal gain parameters for the best fusion of coefficients subject to certain constraints. Finally, an additional contrast enhancement step is applied to the fused image to improve its visual quality and reinforce details. A comparative study between conventional spatial- and transform-domain fusion methods and the enhanced DT-CWT framework is presented. The framework is tested and evaluated subjectively and objectively with several fusion quality metrics, including average gradient, local contrast, standard deviation, edge intensity, entropy, peak signal-to-noise ratio (PSNR), mutual information, Qab/f, computational cost, and processing time.

Simulation results show that the enhanced DT-CWT medical image fusion framework based on MCFO and histogram matching achieves superior performance, with better image quality and far more detail; these qualities support a more accurate clinical diagnosis. Zeinab Z. El Kareh et al. [10] present an optimal solution for wavelet-based medical image fusion using different wavelet families and PCA, based on the MCFO technique. The main motivation of this work is to increase the quality of fused medical images so as to support correct diagnosis of diseases and optimal treatment. This is achieved by fusing medical images of different modalities using an MCFO-based optimization strategy. The MCFO technique provides the optimal gain parameters that achieve the best fused image quality. Histogram matching is applied to improve the overall PSNR, entropy, local contrast, and quality of the fused image. A comparative study is performed between the discussed algorithm, the conventional DWT, and PCA fusion using the maximum fusion rule. The algorithm is evaluated subjectively and objectively with several fusion quality metrics. Simulation results show that the MCFO-enhanced wavelet-based fusion algorithm using the Haar wavelet and histogram matching achieves superior performance, with the highest image quality and the clearest image detail in a short processing time.

He, Kangjian et al. [11] note that MST-based methods have recently become popular for multi-focus image fusion because of their superior performance, for example fused images that contain more edge and texture detail. However, most MST-based methods rely on pixel operations, which require a great deal of data processing, and different fusion strategies cannot completely preserve the clear information within the focused region of the source image. To address these issues, they propose a novel fusion method based on focused-region-level partition and PCNN in the NSCT domain. A clarity evaluation function is constructed to measure which regions of the source image are in focus. By removing the focused regions from the source images, the non-focused regions, which contain the edge pixels of the focused regions, are obtained. Next, the non-focused regions are decomposed into a series of sub-images using the NSCT, and the sub-images are fused using various strategies to obtain the fused non-focused regions. Finally, the result is obtained by combining the focused regions with the fused non-focused regions. Experimental results show that this fusion scheme retains more clear information from the two source images and preserves more detail in the non-focused regions, outperforming conventional methods in visual inspection and objective evaluation.

Huang, Chenxi et al. [12] note that recent research has reported the use of image fusion technology on medical images across a wide range of applications, for example in the diagnosis of brain diseases, the detection of glioma, and the diagnosis of Alzheimer's disease. In their study, a new fusion method based on the combination of the SFLA and the PCNN is discussed for fusing SPECT and CT images to improve the quality of fused brain images. First, the IHS components of a SPECT and a CT image are decomposed independently using the NSCT, yielding both low-frequency and high-frequency sub-images. The authors then use the combined SFLA and PCNN to fuse the high-frequency sub-band images and the low-frequency images, with the SFLA employed to optimize the PCNN network parameters. Finally, the fused image is produced via the inverse NSCT and inverse IHS transforms. The authors evaluated their algorithm against SD, G, SF, and E metrics using three different sets of brain images; the experimental results demonstrated the superior performance of the fusion method in significantly improving both precision and spatial resolution.

Jin, Xin et al. [13] present a novel image fusion method based on the S-PCNN, PSO, and a block-based image processing strategy. In general, the parameters of the S-PCNN are set manually, which is complex and time-consuming and often causes inconsistency; here they are set by a PSO algorithm to overcome these weaknesses and improve fusion performance. First, the source images are divided into several equal-size sub-blocks, and spatial frequency is computed as the characteristic factor of each sub-block to obtain the whole source image's CFM, effectively reducing the operand. Second, the S-PCNN is used to analyse the CFM and obtain its OFG. Third, the fused CFM is obtained from the OFG. Finally, the fused image is reconstructed according to the fused CFM and the block rule. Throughout this process the S-PCNN parameters are set by the PSO algorithm to obtain the best fusion effect; through the CFM and block strategy, the computational load of the method is effectively reduced. Experiments show that the multi-focus fusion algorithm is more efficient than other conventional fusion algorithms, and that the automatic parameter-setting strategy is effective as well.

Gao, Chen et al. [14] note that computational imaging plays a significant role in clinical treatment by providing more comprehensive medical images. The fused medical image is obtained via the inverse HSV and NSST transforms in sequence. Experimental results show that the scheme is effective and can fuse more information into the final images than conventional methods: the fused images not only provide the structural information of CT and MRI images but also preserve the functional information of PET images. It can be inferred that the scheme is an effective multimodal medical image fusion method, and that fusing three or more kinds of medical image can provide more information about human tissue than conventional approaches. The next step in their research is to focus on improving the performance of multimodal medical image fusion; the work mainly deals with CT, MRI, and PET brain medical images.

Kanmani, Madheswari et al. [15] describe multimodal medical image fusion as a technique that combines two or more images into a single output image to improve the accuracy of clinical diagnosis. An NSCT fusion framework combining CT and MRI images is discussed. The method decomposes the source images into low- and high-frequency bands using the NSCT, and the information across bands is combined using a weighted-averaging fusion rule. The weights are optimized by PSO with an objective function that jointly maximizes entropy and minimizes root mean square error, yielding improved image quality and distinguishing the method from existing NSCT-domain fusion techniques. The performance of the framework is illustrated on five sets of CT and MRI images, and various performance metrics show that the method is highly efficient and suitable for clinical applications requiring better decision-making.

Kong, Weiwei et al. [16] note that medical imaging sensors such as positron emission tomography and single-photon emission computed tomography can provide rich information, but each has its inherent drawbacks; multimodal sensor medical image fusion becomes an effective solution. The central objective is to extract as much superior and complementary information as possible from the sources into a single output that can play a critical role in clinical diagnosis and operations. A novel fusion method is presented for multimodal sensor medical images, based on LD in the non-subsampled domain. The source medical images are first decomposed into low-frequency and high-frequency sub-images via non-subsampled schemes; the coefficients of the sub-bands are then fused by an operator called LD. The final fused image is reconstructed through the inverse non-subsampled schemes with all composite coefficients. The method was applied in several clinical studies, and the results show it to be a clearer and more effective method than some state-of-the-art approaches in both subjective visual performance and objective evaluation. Its performance was also compared under two non-subsampled schemes, namely the non-subsampled contourlet transform and the non-subsampled shearlet transform.

Kou, Liang et al. [17] present a method named RMLP to fuse multi-focus images captured by microscope. First, the sum-modified Laplacian is applied to measure the focus of the multi-focus images; then a density-based region-growing algorithm is used to segment the focused-region mask of each image. RMLP is insensitive to noise and reduces the colour distortion of the fused images on two datasets. Li, Weisheng et al. [18] note that image fusion can provide more comprehensive information because it combines two or more different images. The cloud model is a recently studied theory in artificial intelligence with the advantage of accounting for both randomness and fuzziness. The authors present a novel multimodal medical image fusion method based on cloud model theory. The method first fits the histograms of the input images using a high-order spline function and then separates intervals according to the valley points of the fitted curve. On this basis, cloud models are generated adaptively through the inverse cloud generator, and finally cloud reasoning rules are designed to produce the fused image. Experimental results show that the fused images produced by this method display more image detail and lesion regions than existing methods, and the objective image quality assessment metrics on the fused images likewise show its superiority. Li, Yi et al. [19] note that medical image fusion has attracted much attention in recent years; it aims to fuse different medical images into a more informative and clearer one, and the fused image can help doctors diagnose diseases rapidly and effectively. Among the various fusion methods, sparse-representation-based image fusion is a newer idea that has emerged over recent years. However, the high-frequency components of the low-resolution image and the high-frequency components of the source images are treated equally, and the sparse coefficients are solved by an L0-norm minimization problem; this ignores the correlation between the high-frequency components of the low-resolution image and those of the source images. To address these issues, the authors present a new fusion method based on histogram similarity and multi-view weighted sparse representation. By introducing a histogram similarity, different weights are assigned to the high-frequency components of the low-resolution image and of the source images, efficiently harnessing the complementary information; furthermore, sparse coefficients solved via the L1-norm minimization problem are more accurate. The technique is additionally incorporated into medical image fusion.

Experimental results show that the method achieves state-of-the-art performance in both visual quality and quantitative evaluation metrics. Liu, Yu et al. [20] observe that the use of deep learning (DL) techniques for pixel-level image fusion has emerged as an active topic over the last three years, and provide a systematic survey of the DL-based pixel-level image fusion literature. In particular, the authors first summarize the main challenges in conventional image fusion research and discuss the advantages DL can offer in addressing each of them; then, the recent achievements in DL-based image fusion are reviewed in detail. More than a dozen recently studied image fusion methods based on DL techniques, including CNNs, CSR, and SAEs, are presented.

III. TRANSFORM FUNCTION & NEURAL NETWORK

Transform-based functions improve the quality of medical imagery fusion. The second category of image fusion methods is feature-based: the transform function implements the feature-extraction step with multiresolution coefficients in terms of high and low frequencies [7, 8, 9]. The efficacy and efficiency of transform functions, measured by distortion and preservation of edge information, is very high compared with other methods. Selecting a transform function is difficult because of the quality factor of the function and the requirements of the algorithm [10]. Surveys of medical image fusion over the last two decades have mainly focused on wavelet transform methods; beyond these, authors have also used contourlet and shearlet transform functions. Different variants and derivatives of the wavelet transform, in terms of the energy and entropy of information processing, are utilised in the feature extraction process [15, 18].

(a) WAVELET TRANSFORMS

The wavelet transform is a collection of finite series of high-frequency and low-frequency components. All wavelet processing derives from a mother wavelet [2], which provides two basic forms, continuous and discrete, giving rise to the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT) [10]. The linearity and dimension of the transform are derived from the signal length M: the value of M decides the dimension of the transform in terms of the decomposition level. In the case of non-linear processing of the transform function, M = 2 [21, 24].

Since the nature of medical imagery is discrete, the applied transform is the discrete wavelet transform (DWT) in two dimensions. The dominant component of the discrete wavelet transform is the texture feature coefficient, and texture is the dominant feature component of medical imagery data. The processing and description of the transform function is given here.

The discrete wavelet transform applies a high-pass and a low-pass filter to a time series with a down-sampling rate of 2. The high-pass filter f(n) derives from the mother wavelet, and the low-pass filter h(n) is its mirror counterpart. The outputs of the low-pass and high-pass filters are called the approximation and detail coefficients, respectively [16, 17, 18]. Both the scaling function 𝜑_{j,k}(n) and the wavelet function 𝜓_{j,k}(n) depend on the low-pass and high-pass filters, and are represented as

𝜑_{j,k}(n) = 2^{j/2} h(2^{−j}n − k) … (1)

𝜓_{j,k}(n) = 2^{j/2} f(2^{−j}n − k) … (2)

where n = 0, 1, 2, …, M−1; j = 0, 1, 2, …, J−1; k = 0, 1, 2, …, 2^j − 1; and J = 5. M is the length of the signal.
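The filter-and-downsample step above can be sketched for the simplest mother wavelet. This is a generic illustration, not the authors' MATLAB code: it uses the Haar pair (rather than db4) so the filters can be written exactly, and the function names are hypothetical. h is the low-pass filter, f its high-pass quadrature-mirror counterpart, matching the h(n)/f(n) notation in the text:

```python
import numpy as np

# Haar quadrature-mirror pair: h is the low-pass (scaling) filter,
# f the high-pass (wavelet) filter.
h = np.array([1.0, 1.0]) / np.sqrt(2.0)
f = np.array([1.0, -1.0]) / np.sqrt(2.0)

def haar_analysis(x):
    """One DWT level: filter adjacent pairs, i.e. filter + downsample by 2.
    M (the signal length) must be even."""
    x = np.asarray(x, dtype=np.float64)
    pairs = x.reshape(-1, 2)
    approx = pairs @ h    # low-frequency (approximation) coefficients
    detail = pairs @ f    # high-frequency (detail) coefficients
    return approx, detail

def haar_synthesis(approx, detail):
    """Invert one level: perfect reconstruction from the two sub-bands."""
    even = (approx + detail) / np.sqrt(2.0)
    odd = (approx - detail) / np.sqrt(2.0)
    return np.stack([even, odd], axis=1).ravel()

x = np.array([4.0, 2.0, 5.0, 7.0])
a, d = haar_analysis(x)
print(np.allclose(haar_synthesis(a, d), x))  # True: no information is lost
```

The perfect-reconstruction property is what lets fusion operate on the coefficients and still invert back to an image.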

The decomposition process of the wavelet transform produces approximation and detail parts. The approximation part carries the low-frequency feature components of the medical image, while the detail part preserves the image in terms of high frequency. The detail part yields three feature components: the vertical, horizontal and diagonal components; the approximation part can be decomposed further at the next level. In this role the wavelet transform acts as a feature extractor, as shown in the figure below [2, 21].

Figure 1: Wavelet transform process.
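As a concrete illustration of the filtering and down-sampling steps described above, the sketch below performs a single-level 2-D wavelet decomposition. For brevity it uses unnormalized Haar (averaging/differencing) filters rather than the db4 wavelet named in the text; libraries such as PyWavelets (`pywt.dwt2`) provide db4 and multi-level decompositions.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar DWT: returns the approximation sub-band (LL)
    and the horizontal, vertical and diagonal detail sub-bands (LH, HL, HH).
    Uses unnormalized averaging/differencing filters for readability."""
    img = img.astype(float)
    # Row transform: low-pass (average) and high-pass (difference), downsampled by 2
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Column transform applied to each row-filtered result
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0   # approximation (low-low)
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0   # horizontal detail
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0   # vertical detail
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0   # diagonal detail
    return LL, LH, HL, HH

img = np.arange(16, dtype=float).reshape(4, 4)
LL, LH, HL, HH = haar_dwt2(img)   # each sub-band is half the size per axis
```

At the next decomposition level the same function would be applied to `LL` again, exactly as described for the approximation part above.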

(b) CONTOURLET TRANSFORM

Some salient feature components of medical imagery, such as edges, curves and contours, cannot be extracted by wavelets because of their directional intensity; for these the contourlet transform is applied. The contourlet transform offers a rich set of basis functions for directional and sub-band decomposition [22]. It is formed by composing a double filter bank: a Laplacian pyramid followed by a directional filter bank (DFB), jointly called the pyramidal directional filter bank (PDFB) [26].

The contourlet transform proceeds in two stages: sub-band decomposition and directional transform. The Laplacian pyramid is applied to capture point discontinuities, and the directional filter bank links those point discontinuities into linear structures. The framework of the contourlet transform is shown in the figure below [22, 26].

Figure 2: Contourlet transform process.
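The Laplacian-pyramid stage of the PDFB can be sketched as below. This is a minimal, assumption-laden version: a crude 2×2 averaging filter stands in for the Gaussian low-pass, and the directional filter bank stage is omitted entirely. Real contourlet implementations use proper filters, but the key property already holds here: the band-pass residuals isolate point discontinuities while the pyramid remains perfectly invertible.

```python
import numpy as np

def down2(img):
    # Reduce: average each 2x2 block (crude low-pass + downsample by 2)
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def up2(img):
    # Expand: nearest-neighbour upsampling back to double size
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels=3):
    """Build a Laplacian pyramid: the sub-band decomposition stage
    of the contourlet transform (directional stage not included)."""
    pyr = []
    cur = img.astype(float)
    for _ in range(levels):
        small = down2(cur)
        pyr.append(cur - up2(small))   # band-pass residual: edges, point features
        cur = small
    pyr.append(cur)                    # coarsest approximation
    return pyr

def reconstruct(pyr):
    """Invert the pyramid by expanding and adding back each residual."""
    cur = pyr[-1]
    for band in reversed(pyr[:-1]):
        cur = up2(cur) + band
    return cur

img = np.random.rand(16, 16)
pyr = laplacian_pyramid(img, levels=2)
```

In the full contourlet transform, each band-pass level of this pyramid would then be fed through the directional filter bank to group the point discontinuities into contour segments.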


(c) SHEARLET TRANSFORM

The shearlet transform overcomes limitations of the wavelet transform. It is constructed from an affine system with composite dilations, a principle introduced by Guo and Easley for wavelet systems. These transforms remove the bottleneck of selecting between low- and high-frequency bands during fusion [14, 25], thereby resolving a key issue of fusion methods. The processing of the shearlet transform is shown in the figure below.

Figure 3: Shearlet transform process.

Feature extraction is a major challenge in medical image fusion systems. The diverse space of transform functions provides better ways to extract features at different levels of the transform. The wavelet transform makes the major contribution to feature extraction through its variants such as DWT and LWT, extended further in the form of the contourlet and shearlet transforms [25].

NEURAL NETWORK

Recognition and classification are sub-variants of feature fusion. The medical image fusion process also depends on artificial neural networks, which have contributed greatly to enhancing medical image fusion. Both conventional and dynamic neural network models have been applied to the fusion process. The models described here were used in the experimental analysis [8].

1. DEEP NEURAL NETWORK (DNN)

A deep neural network shows great potential for medical image fusion, and its handling of image data accounts for its wide acceptance in this task. It is a feed-forward artificial neural network with multiple layers between input and output [21]. For a hidden unit j, a nonlinear activation function f(·) maps the total input from the lower layer, x_j, to the scalar state y_j, which is then passed to the upper layer:

y_j = f(x_j) ……… (1)

where

x_j = b_j + Σ_i y_i w_ij ……… (2)

and b_j is the bias of unit j, i indexes the units of the lower layer, and w_ij is the weight of the connection between unit i and unit j. The activation function f(·) is usually chosen to be the sigmoid function


f(x_j) = 1 / (1 + e^{−x_j}) ……… (3)

2. MULTI-LAYER PERCEPTRON (MLP)

The MLP is a multi-layer feed-forward neural network trained by the back-propagation (BP) method. It is the most widely applied artificial neural network, owing to the efficiency with which it maps input data. An MLP consists of an input layer, one or more hidden layers and an output layer; a simple MLP with one hidden layer is shown in the figure [8]. Each layer is connected to the next through weights, thresholds and a transfer function that carry the data forward to the output layer. Training continues until the desired output set by the hypothesis is achieved: whenever the error exceeds the tolerance, the training algorithm adjusts the weights toward the desired output [25].

Figure 4: Show the layer connection of one hidden layer of MLP.
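Equations (1)-(3) amount to the short forward pass sketched below; the layer sizes and random weights are purely illustrative assumptions, and training (back-propagation) is not shown.

```python
import numpy as np

def sigmoid(x):
    # Eq. (3): f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def forward(inputs, layers):
    """Feed-forward pass. Each layer is a (W, b) pair; unit activation is
    y_j = f(b_j + sum_i y_i * w_ij), i.e. Eqs. (1)-(2) with sigmoid f."""
    y = np.asarray(inputs, dtype=float)
    for W, b in layers:
        y = sigmoid(W @ y + b)   # Eq. (2) then Eq. (1) for the whole layer
    return y

# Toy network: 3 inputs -> 4 hidden units -> 2 outputs (weights illustrative)
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),
          (rng.normal(size=(2, 4)), np.zeros(2))]
out = forward([0.2, 0.5, 0.1], layers)
```

The same forward pass applies whether the network is the DNN of the previous subsection or the single-hidden-layer MLP of this one; only the number of (W, b) pairs changes.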

3. SELF-ORGANIZED MAP NETWORK (SOM)

The SOM organizes unknown data into groups of similar patterns according to a similarity criterion (e.g. Euclidean distance). Such networks can learn to detect regularities and correlations in their input and adapt their future responses accordingly. An important feature of this neural network is its ability to process noisy data [28]. The map preserves topological relationships between inputs, so that neighbouring inputs in the input space are mapped to neighbouring neurons in the map space [11]. A graphical representation of the SOM architecture is shown in the figure below.

Figure 5: Shows SOM architecture.


The SOM method follows two basic equations: matching, which finds the winner neuron as the one at minimum Euclidean distance from the input (1), and the update of the positions of the neurons inside the winner's neighbourhood (2):

d_ij = min_i ‖x(t) − w_ij(t)‖ ……… (1)

w_ij(t+1) = w_ij(t) + α(t)[x(t) − w_ij(t)],  i ∈ N_c
w_ij(t+1) = w_ij(t),  i ∉ N_c ……… (2)

where, for time t and a network with n neurons: x is the input; N_c is the neighbourhood of the winner, 1 < N_c < n; α is the gain sequence, 0 < α < 1; w_ij is any node weight, 1 ≤ i ≤ n; and d_ij is the Euclidean distance.
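The matching and update rules (1)-(2) can be sketched for a small one-dimensional map as follows; the three-neuron map, learning rate and neighbourhood radius are illustrative assumptions.

```python
import numpy as np

def som_step(weights, x, alpha, radius=1):
    """One SOM update. Matching (Eq. 1): the winner is the neuron at
    minimum Euclidean distance from input x. Update (Eq. 2): the winner
    and its neighbours within `radius` map positions move toward x;
    all other weights are unchanged. weights: (n, d), one row per neuron."""
    d = np.linalg.norm(weights - x, axis=1)   # Eq. (1): distances to input
    winner = int(np.argmin(d))
    for i in range(len(weights)):
        if abs(i - winner) <= radius:          # i in N_c (1-D neighbourhood)
            weights[i] += alpha * (x - weights[i])   # Eq. (2), first case
    return winner

# Three neurons in 2-D input space; present one input and update once
weights = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
winner = som_step(weights, np.array([0.9, 1.1]), alpha=0.5)
```

Repeating this step over many inputs, while shrinking alpha and the radius, is what makes neighbouring neurons specialise on neighbouring regions of the input space.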

4. CONVOLUTIONAL NEURAL NETWORK (CNN)

The CNN algorithm belongs to deep learning and is very well suited to the classification of image and signal data. The multiple stages of a CNN propagate a feature set through successive layers, and its feed-forward architecture processes sequences of feature subsets. Combined with an ensemble classifier, a CNN can further enhance the classification rate [8, 28].

5. PULSE COUPLED NEURAL NETWORK (PCNN)

The PCNN can be simplified into two forms: the intersecting cortex model (ICM) and the minimal system. Both proceed effectively for image fusion processing. The network is expressed as follows [3, 12, 13, 14].

The ICM is described by

F_ij[n+1] = f·F_ij[n] + S_ij + W{Y}_ij

Y_ij[n+1] = 1 if F_ij[n+1] > θ_ij[n], and 0 otherwise

θ_ij[n+1] = g·θ_ij[n] + h·Y_ij[n+1]

where S_ij is the stimulus, θ_ij is the threshold of the neuron, Y_ij is the output, and f, g and h are scalars. Reducing the parameters and making the linking inputs uniform yields another model known as the Unit-Linking model. PCNN processing has been used, for example, to detect breast cancer cells.
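One iteration of the ICM equations above can be sketched as follows. The linking kernel W{Y} is not specified in the text, so a 4-neighbour sum is assumed here, and the scalar values of f, g and h are illustrative.

```python
import numpy as np

def icm_step(F, theta, S, Y, f=0.9, g=0.8, h=20.0):
    """One Intersecting Cortex Model iteration (a simplified PCNN):
    F[n+1] = f*F[n] + S + W{Y};  Y fires (=1) where F[n+1] exceeds theta;
    theta[n+1] = g*theta[n] + h*Y[n+1].  W{Y} is modelled as the sum of
    the 4-neighbour outputs (an assumed linking kernel)."""
    # Linking term: sum of up/down/left/right neighbours' previous outputs
    WY = (np.roll(Y, 1, 0) + np.roll(Y, -1, 0) +
          np.roll(Y, 1, 1) + np.roll(Y, -1, 1))
    F_new = f * F + S + WY
    Y_new = (F_new > theta).astype(float)   # pulse where input beats threshold
    theta_new = g * theta + h * Y_new       # firing raises the threshold sharply
    return F_new, theta_new, Y_new

S = np.array([[0.2, 0.9], [0.1, 0.8]])     # stimulus (pixel intensities)
F = np.zeros_like(S)
theta = np.full_like(S, 0.5)
Y = np.zeros_like(S)
F, theta, Y = icm_step(F, theta, S, Y)
```

The firing map Y, accumulated over iterations, is the signature that fusion rules compare pixel-by-pixel across the source images.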

All of these neural network models were applied in the analysis of medical image fusion; they operate within the feature-based image fusion framework.

IV EXPERIMENTAL ANALYSIS

This section describes the medical image fusion experiments using different transform functions, namely the DWT, contourlet transform and shearlet transform, applied for feature extraction. Image fusion was performed with different neural network models: DNN, PCNN, CNN, SOM and MLP [2, 10, 21, 25, 26]. The fusion process used various types of medical images, including CT, MRI and PET, obtained from http://www.med.harvard.edu/AANLIB/home.html. This dataset contains a variety of images for fusion, at resolutions of 512×512 and 256×256 pixels. MATLAB was used to implement and analyse the fusion methods. Performance was evaluated with two parameters: peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) [3, 4, 5].
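The two evaluation metrics can be computed as sketched below. The PSNR follows the standard definition; the SSIM shown is a simplified single-window (global) version for illustration, whereas reported values would normally come from the sliding-window form (e.g. MATLAB's `ssim` or scikit-image's `structural_similarity`).

```python
import numpy as np

def psnr(ref, fused, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a fused image:
    PSNR = 10*log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(float) - fused.astype(float)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=255.0):
    """Global (single-window) SSIM using the standard constants
    C1=(0.01*MAX)^2, C2=(0.03*MAX)^2; illustrative simplification."""
    x = x.astype(float)
    y = y.astype(float)
    C1, C2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

# Toy example: a gradient image and a uniformly shifted copy of it
ref = np.tile(np.arange(8, dtype=float) * 30, (8, 1))
noisy = ref + 5.0
```

In the experiments the reference is the source image and the second argument is the fused output, exactly as for the PSNR and SSIM columns of Table 1.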


DATASET OF MRI IMAGES


RESULT ANALYSIS

Dataset |               PSNR                  |               SSIM
Images  | DWT[2] ANN[8] DNN[20] DWT-ANN[11]   | DWT[2] ANN[8] DNN[20] DWT-ANN[11]
MRI_1   | 40.68  42.42  48.04   49.11         | 0.60   0.62   0.68    0.69
MRI_2   | 56.42  52.55  55.61   57.67         | 0.66   0.62   0.65    0.67
MRI_3   | 80.21  78.71  78.38   84.25         | 0.70   0.78   0.78    0.89
MRI_4   | 81.14  88.76  87.50   91.94         | 0.71   0.78   0.77    0.81
MRI_5   | 66.86  67.55  65.43   70.93         | 0.86   0.87   0.85    0.90
MRI_6   | 72.34  75.26  78.91   83.24         | 0.82   0.85   0.88    0.93
MRI_7   | 53.25  54.94  58.32   62.61         | 0.83   0.84   0.88    0.92
MRI_8   | 69.52  65.35  76.89   90.37         | 0.89   0.95   0.96    0.97
MRI_9   | 82.47  85.41  85.64   86.95         | 0.62   0.65   0.65    0.76

Table 1: Result analysis of the DWT, ANN, DNN and DWT-ANN techniques on the PSNR and SSIM parameters.

Figure 6: Performance analysis of PSNR using the DWT[2], ANN[8], DNN[20] and DWT-ANN[11] techniques on the MRI_1 to MRI_9 dataset images.



Figure 7: Performance analysis of SSIM using the DWT[2], ANN[8], DNN[20] and DWT-ANN[11] techniques on the MRI_1 to MRI_9 dataset images.

V CONCLUSION & FUTURE WORK

This paper evaluated the performance of medical image fusion based on transform functions and neural network models. The different neural network models vary in their results in terms of PSNR and SSIM. This variation indicates that the feature extraction process carries noise; noise blurs the fused image and degrades the SSIM value. Among the neural network methods, CNN and PCNN perform much better than the other models: CNN supports image processing tasks such as classification and recognition, and the dynamic nature of the PCNN model benefits medical image fusion. The MLP also produces useful fusion results, but increasing the number of hidden layers increases the complexity of the fusion process; judged on complexity, MLP nevertheless remains a reasonable choice for medical image fusion. The effectiveness of the neural network models depends on the feature extraction process. Wavelet-based feature extraction suffers from a common problem of noise and from mapping low-range to high-range data, which can create a non-fusion state during fusion and decrease the PSNR and SSIM values. Better options for feature extraction are the contourlet and shearlet transforms: contourlet-based extraction provides promising feature components for the fusion process, and the shearlet transform likewise advances feature extraction for medical image fusion.

The experimental results show that neural-network-based medical image fusion still has scope for improvement in noise handling, feature selection and feature-data mapping; these directions are left for future work.

References

[1]. Xiang, Tianyuan. "Multi-scale feature fusion based on swarm intelligence collaborative learning for full-stage anti-interference object tracking." Journal of Ambient Intelligence and Humanized Computing (2020): 1-10.

[2]. Xu, Lina, Yujuan Si, Saibiao Jiang, Ying Sun, and Homayoun Ebrahimian. "Medical image fusion using a modified shark smell optimization algorithm and hybrid wavelet-homomorphic filter." Biomedical Signal Processing and Control 59 (2020): 101885.

[3]. Panigrahy, Chinmaya, Ayan Seal, and Nihar Kumar Mahato. "Fractal dimension-based parameter adaptive dual channel PCNN for multi-focus image fusion." Optics and Lasers in Engineering 133 (2020): 106141.

[4]. Dolly, D. Raveena Judie, J. Dinesh Peter, G. JoseminBala, and D. J. Jagannath. "Image fusion for stabilized medical video sequence using multimodal parametric registration." Pattern Recognition Letters (2020).

[5]. Huang, Bing, Feng Yang, Mengxiao Yin, Xiaoying Mo, and Cheng Zhong. "A Review of Multimodal Medical Image Fusion Techniques." Computational and Mathematical Methods in Medicine 2020 (2020).



[6]. Azmi, Kamil ZakwanMohd, Ahmad Shahrizan Abdul Ghani, Zulkifli Md Yusof, and Zuwairie Ibrahim. "Natural-based underwater image color enhancement through fusion of swarm-intelligence algorithm." Applied Soft Computing 85 (2019): 105810.

[7]. Bashir, Rabia, Riaz Junejo, Nadia N. Qadri, Martin Fleury, and Muhammad Yasir Qadri. "SWT and PCA image fusion methods for multi-modal imagery." Multimedia Tools and Applications 78, no. 2 (2019): 1235-1263.

[8]. Chao, Zhen, Dohyeon Kim, and Hee-Joung Kim. "Multi-modality image fusion based on enhanced fuzzy radial basis function neural networks." Physica Medica 48 (2018): 11-20.

[9]. El-Hoseny, Heba M., Wael Abd El-Rahman, El-Sayed M. El-Rabaie, Fathi E. Abd El-Samie, and Osama S. Faragallah. "An efficient DT-CWT medical image fusion system based on modified central force optimization and histogram matching." Infrared Physics & Technology 94 (2018): 223-231.

[10]. El-Hoseny, Heba M., Zeinab Z. El Kareh, Wael A. Mohamed, Ghada M. El Banby, Korany R. Mahmoud, Osama S. Faragallah, S. El-Rabaie, Essam El-Madbouly, and Fathi E. Abd El-Samie. "An optimal wavelet- based multi-modality medical image fusion approach based on modified central force optimization and histogram matching." Multimedia Tools and Applications 78, no. 18 (2019): 26373-26397.

[11]. He, Kangjian, Dongming Zhou, Xuejie Zhang, RencanNie, and Xin Jin. "Multi-focus image fusion combining focus-region-level partition and pulse-coupled neural network." Soft Computing 23, no. 13 (2019): 4685-4699.

[12]. Huang, Chenxi, Ganxun Tian, Yisha Lan, Yonghong Peng, Eddie Yin Kwee Ng, Yongtao Hao, Yongqiang Cheng, and Wenliang Che. "A new pulse coupled neural network (PCNN) for brain medical image fusion empowered by shuffled frog leaping algorithm." Frontiers in neuroscience 13 (2019): 210.

[13]. Jin, Xin, Dongming Zhou, Shaowen Yao, RencanNie, Qian Jiang, Kangjian He, and Quan Wang. "Multi- focus image fusion method using S-PCNN optimized by particle swarm optimization." Soft Computing 22, no. 19 (2018): 6395-6407.

[14]. Jin, Xin, Gao Chen, Jingyu Hou, Qian Jiang, Dongming Zhou, and Shaowen Yao. "Multimodal sensor medical image fusion based on nonsubsampled shearlet transform and S-PCNNs in HSV space." Signal Processing 153 (2018): 379-395.

[15]. Kanmani, Madheswari, and Venkateswaran Narasimhan. "Particle swarm optimisation aided weighted averaging fusion strategy for CT and MRI medical images." International Journal of Biomedical Engineering and Technology 31, no. 3 (2019): 278-291.

[16]. Kong, Weiwei, Qiguang Miao, and Yang Lei. "Multimodal sensor medical image fusion based on local difference in non-subsampled domain." IEEE Transactions on Instrumentation and Measurement 68, no. 4 (2018): 938-951.

[17]. Kou, Liang, Liguo Zhang, Kejia Zhang, Jianguo Sun, Qilong Han, and ZilongJin. "A multi-focus image fusion method via region mosaicking on Laplacian pyramids." PloS one 13, no. 5 (2018): e0191085.

[18]. Li, Weisheng, Jia Zhao, and Bin Xiao. "Multimodal medical image fusion by cloud model theory." Signal, Image and Video Processing 12, no. 3 (2018): 437-444.

[19]. Li, Yi, ZhihanLv, Junli Zhao, and Zhenkuan Pan. "Improving performance of medical image fusion using histogram, dictionary learning and sparse representation." Multimedia Tools and Applications 78, no. 24 (2019): 34459-34482.

[20]. Liu, Yu, Xun Chen, Zengfu Wang, Z. Jane Wang, Rabab K. Ward, and Xuesong Wang. "Deep learning for pixel-level image fusion: Recent advances and future prospects." Information Fusion 42 (2018): 158-173.

[21]. Singh, Rajiv, Swati Nigam, Amit Kumar Singh, and Mohamed Elhoseny. "An Overview of Medical Image Fusion in Complex Wavelet Domain." In Intelligent Wavelet Based Techniques for Advanced Multimedia Applications, pp. 31-50. Springer, Cham, 2020.

[22]. Sharma, ApooravMaulik, RenuVig, Ayush Dogra, Bhawna Goyal, and Sunil Agrawal. "A Comparative Analysis of Transforms for Infrared and Visible Image Fusion." In Intelligent Communication, Control and Devices, pp. 85-93. Springer, Singapore, 2020.

[23]. Patel, Ami, and Jayesh Chaudhary. "A Review on Infrared and Visible Image Fusion Techniques." In Intelligent Communication Technologies and Virtual Mobile Networks, pp. 127-144. Springer, Cham, 2019.

[24]. Bruni, Vittoria, Alessandra Salvi, and Domenico Vitulano. "A wavelet-based image fusion method using local multiscale image regularity." In International Conference on Advanced Concepts for Intelligent Vision Systems, pp. 534-546. Springer, Cham, 2018.

[25]. Kong, Weiwei, and Jing Ma. "Medical image fusion using non-subsampled shearlet transform and improved PCNN." In International Conference on Intelligent Science and Big Data Engineering, pp. 635- 645. Springer, Cham, 2018.

[26]. Paramanandham, Nirmala, and Kishore Rajendiran. "Infrared and visible image fusion using discrete cosine transform and swarm intelligence for surveillance applications." Infrared Physics & Technology 88 (2018): 13-22.


[27]. Paramanandham, Nirmala, and Kishore Rajendiran. "Swarm intelligence-based image fusion for noisy images using consecutive pixel intensity." Multimedia Tools and Applications 77, no. 24 (2018): 32133- 32151.

[28]. Paramanandham, Nirmala, and Kishore Rajendiran. "Multi-focus image fusion using self-resemblance measure." Computers & Electrical Engineering 71 (2018): 13-27.

[29]. Parvathy, Velmurugan Subbiah, and Sivakumar Pothiraj. "Multi-modality medical image fusion using hybridization of binary crow search optimization." Health Care Management Science (2019): 1-9.

[30]. Pawar, Meenakshi M., and Sanjay N. Talbar. "Local entropy maximization-based image fusion for contrast enhancement of mammogram." Journal of King Saud University-Computer and Information Sciences (2018).
