
A Fusion of SAR and Optical Remote Sensing Images for Urban Object Extraction Using Multiple Discrete Wavelet Families

J. Thrisul Kumar1, G. Venu Ratna Kumari2, M. Satish Kumar2, K. Pradeep Vinaik3, P. Raju4

1Associate Professor, Vignan’s Nirula institute of tech and science for women, Guntur, A.P.

2Assistant Professor, Dept of Civil Engg, Siddharta Institute of Technology, Vijayawada, A.P.

2 Professor, Dept of Civil Engg, Kallam Haranatha Reddy Inst of Tech, Guntur, A.P.

3,4Assistant Professor, EECE Department, GITAM (Deemed to be university), Visakhapatnam, A.P.

Corresponding author: [email protected]

Abstract: Remote sensing is defined as a technology for acquiring measurements and information about a given phenomenon without being in physical contact with it. Owing to its capability to provide information about the Earth's surface through passive and active sensors mounted on airborne or space-borne platforms, this technology has received constantly growing interest from public and private institutions over the last decades. Due to the large amount of data to process, the extraction of useful information from remotely sensed data requires the definition of automatic and possibly unsupervised analysis techniques. Sensors and satellites play a major role in remote sensing. In the last few decades, change detection in SAR (Synthetic Aperture Radar) images has gained huge importance alongside that in optical images. SAR images suffer from difficulties such as the presence of speckle noise; moreover, SAR images have low spectral resolution, whereas optical images have low spatial resolution. These two kinds of images are fused with the DWT (Discrete Wavelet Transform) to increase their quality. The advantage of the DWT is that it can represent the signal in both the frequency domain and the time domain. Consequently, various wavelet families have been applied in the fusion process, and the resulting images are compared with the conventional Daubechies2 wavelet in terms of Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR).

Keywords: SAR, Optical image, DWT, Daubechies2, MSE, PSNR

1. Introduction

It is a well-known fact that the Earth's surface can be observed with either optical sensors or SAR sensors, but both have their own advantages as well as disadvantages.

SAR can capture objects irrespective of weather conditions, whereas optical sensors can capture images in daytime only. Consequently, SAR images have high spatial resolution and low spectral resolution, while optical images have high spectral resolution and low spatial resolution. Therefore, to reach a compromise between spectral resolution and spatial resolution, image fusion is needed to increase the quality of the images by extracting all of the information from both images.

1.1 Framework of the Basic SAR System Model

In the SAR system, the radar is installed on a moving platform that illuminates the objects and receives the backscattered EM waves in the form of return signals. The SAR performs transmission in a continuous manner, and the suitably combined backscattered signals effectively merge the actual antenna length into a longer (synthetic) aperture length. The outline of a SAR model is shown in Fig. 1.

Fig. 1: Outline of a SAR model

The satellite sensors sense the backscattered waveforms and transmit them to ground stations for further processing. A resolution cell of the SAR therefore involves numerous backscattered waveforms, owing to the signals received from the distributed targets in the scene. The received scatterers form a scene composed of incoherent signals, as shown in Fig. 2.

[Fig. 1 block diagram components: chirp pulse signal generator; timing and control; SAR system (transmitter/receiver); analog/digital converter unit; signal processing unit; process control unit; operating interface; ground station; remote sensing data archived for future use; remote sensing applications.]


Figure 2: Model of Incoherent Backscattered Signal

In SAR images, signal modelling is exploited for eliminating the speckle noise; still, it is complex to resolve the issues related to the frequency range of the satellite sensor. This process can be carried out on the conventional ("time-honoured") signal, which is described by means of Maxwell's equations. For processing these kinds of functions, the impedance is considered, which offers important data to be monitored from the Earth's surface. In addition, destructive or constructive interference occurs depending on the amplitude of the randomly varying received signals. Accordingly, the reflective field appears with a "granular noise effect" throughout the satellite image. The existence of granular noise affects applications such as pre-processing and mapping, in which the visual quality of the SAR image is significant. The unusual variation of the Earth-observed signals may be considered a disturbing influence and is usually termed speckle noise. Therefore, speckle is regarded as the noise acquired from an incoherent imaging system. In practice, it has to be handled as a radiometric process so that the data content is displayed without interruption.

2. Discrete Wavelet Transform

Typically, the DWT [1-4] is any wavelet transform, in numerical or functional analysis, in which the wavelets are discretely sampled. Among all wavelet transforms, the DWT is significant because it captures both frequency and location information. In this research work, the images are processed by passing them through an analysis filter bank (AFB) followed by a decimation step. The AFB contains a high-pass filter (HPF) and a low-pass filter (LPF), which are commonly utilized for image compression. The image is divided into two bands as it passes through the AFB. The HPF is coupled with a differencing function, which extracts detailed information about the image, whereas the LPF is associated with an averaging function that captures the overall content of the image. The output of each filter is then decimated by 2. Eq. (1) expresses the mother wavelet, in which v signifies the shifting (translation) coefficient and u specifies the scaling coefficient. All other basis functions are obtained as dilations and translations of the mother wavelet [5-10].

\[ \Gamma_{u,v}(m) = \frac{1}{\sqrt{u}}\, \Gamma\!\left(\frac{m - v}{u}\right) \tag{1} \]
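The following minimal sketch illustrates the analysis filter bank just described (a low-pass and a high-pass filter, each followed by decimation by 2). It assumes the NumPy and PyWavelets libraries and a hypothetical 1-D test signal, none of which are prescribed by the text above.

```python
import numpy as np
import pywt

# Hypothetical 1-D test signal; any real-valued array would do.
signal = np.sin(np.linspace(0, 8 * np.pi, 64))

# Daubechies-2 analysis filters: dec_lo is the averaging (low-pass) filter,
# dec_hi is the differencing (high-pass) filter.
w = pywt.Wavelet('db2')
lo, hi = np.array(w.dec_lo), np.array(w.dec_hi)

# Analysis filter bank: filter, then decimate by 2 (keep every other sample).
approx = np.convolve(signal, lo, mode='full')[1::2]   # low-frequency band
detail = np.convolve(signal, hi, mode='full')[1::2]   # high-frequency band

# pywt.dwt applies the same filter bank with its own boundary handling,
# so the subband lengths agree even though edge samples may differ.
cA, cD = pywt.dwt(signal, 'db2')
print(len(approx), len(cA), len(detail), len(cD))
```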

Generally, a 2-D scaling function ψ(m, n) and three 2-D wavelets Γ^A(m, n), Γ^B(m, n) and Γ^C(m, n) are required in the 2-D DWT process. All of these components are separable products of the 1-D scaling function ψ and its corresponding wavelet Γ. Excluding the products that produce 1-D results, such as Γ(m)ψ(m), the four remaining products produce the separable scaling function defined in Eq. (2) and the separable, directionally sensitive wavelets stated in Eqs. (3)-(5).

\[ \psi(m, n) = \psi(m)\,\psi(n) \tag{2} \]

\[ \Gamma^{B}(m, n) = \psi(m)\,\Gamma(n) \tag{3} \]

\[ \Gamma^{A}(m, n) = \Gamma(m)\,\psi(n) \tag{4} \]

\[ \Gamma^{C}(m, n) = \Gamma(m)\,\Gamma(n) \tag{5} \]

Typically, the above wavelets measure functional differences, gray-level deviations, or image-intensity variations in different directions: Γ^B measures variations along columns (horizontal edges), Γ^A relates to variations along rows (vertical edges), and Γ^C signifies variations along the diagonals. In addition, the scaled and translated basis functions are stated in Eqs. (6) and (7), respectively, where the index a identifies the directional wavelets and b specifies the scale. Eqs. (8) and (9) define the DWT of an image g_i(m, n) of size X × X.

\[ \psi_{b,x,x}(m, n) = 2^{b/2}\, \psi\!\left(2^{b} m - x,\; 2^{b} n - x\right) \tag{6} \]

\[ \Gamma^{a}_{b,x,x}(m, n) = 2^{b/2}\, \Gamma^{a}\!\left(2^{b} m - x,\; 2^{b} n - x\right), \quad a = B, A, C \tag{7} \]

\[ \Gamma_{\psi}(b_0, x, x) = \frac{1}{\sqrt{XX}} \sum_{m=0}^{X-1} \sum_{n=0}^{X-1} g_i(m, n)\, \psi_{b_0,x,x}(m, n) \tag{8} \]

\[ \Gamma^{a}_{\psi}(b, x, x) = \frac{1}{\sqrt{XX}} \sum_{m=0}^{X-1} \sum_{n=0}^{X-1} g_i(m, n)\, \Gamma^{a}_{b,x,x}(m, n) \tag{9} \]
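A minimal sketch of one level of this 2-D decomposition, assuming the PyWavelets library and a hypothetical X × X image (neither of which is specified above), is:

```python
import numpy as np
import pywt

# Hypothetical X-by-X image standing in for one co-registered band.
img = np.random.rand(256, 256)

# One level of the 2-D DWT: the scaling function yields the low-frequency
# (ll) approximation, and the three directional wavelets yield the detail
# bands, named lh, hl and hh following the convention used in the text.
ll, (lh, hl, hh) = pywt.dwt2(img, 'db2')

print(ll.shape, lh.shape, hl.shape, hh.shape)  # each roughly X/2 by X/2
```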


Eq. (10) shows the wavelet Z_D(g_i) applied to the input image in the DWT, where M_c refers to the optimized filter coefficients. Eqs. (9) and (10) are utilized to analyse the inverse DWT, and its execution can be expressed by Eq. (11).

\[ Z_D(g_i) = M_{c}^{\,j}\, \psi\!\left(2 g_i - D\right) \tag{10} \]

\[ g_i(m, n) = \frac{1}{\sqrt{XX}} \sum_{x} \sum_{x} \Gamma_{\psi}(b_0, x, x)\, \psi_{b_0,x,x}(m, n) \;+\; \frac{1}{\sqrt{XX}} \sum_{a = B,A,C} \; \sum_{b = b_0}^{\infty} \sum_{x} \sum_{x} \Gamma^{a}_{\psi}(b, x, x)\, \Gamma^{a}_{b,x,x}(m, n) \tag{11} \]
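As an illustration of Eq. (11), the following sketch (again assuming PyWavelets and a hypothetical test image) performs a forward transform followed by the inverse transform and checks that the image is recovered up to floating-point error:

```python
import numpy as np
import pywt

img = np.random.rand(256, 256)          # hypothetical test image

coeffs = pywt.dwt2(img, 'db2')          # forward 2-D DWT, Eqs. (8)-(9)
recon = pywt.idwt2(coeffs, 'db2')       # inverse 2-D DWT, Eq. (11)

print(np.allclose(img, recon))          # True: the reconstruction is exact
```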

Fig. 3 specifies the overall architecture of the DWT-based image fusion model, which takes two source images and applies average-selection and maximum-selection rules. The overall procedure of the DWT-based image fusion model is explained in the following steps (a brief code sketch follows the list).

• Obtain the two source images i and j for the fusion procedure.

• Perform wavelet decomposition on the input images i and j; consequently, the high-frequency components (lh, hl, hh) as well as the low-frequency component (ll) are obtained for each image.

Fig. 3: Overall Architecture of DWT-based Image Fusion Model

[Fig. 3 components: source images i and j; 2-level wavelet decompositions yielding (lli, lhi, hli, hhi) and (llj, lhj, hlj, hhj); average selection of the ll bands; maximum selection of the detail bands; fused transform (llf, lhf, hlf, hhf); inverse wavelet transform; fused image.]

• Compute the average-selection procedure, in which the low-frequency (ll) components of the two images are averaged.

• Compute the minimum or maximum selection model, in which a selection is performed for each corresponding pixel of the input images i and j, and the pixel having the lower or higher intensity is selected, respectively.

• Execute the inverse DWT on the fused llf, lhf, hlf and hhf coefficients to obtain the fused intensity image.

• Eventually, the new intensity coordinates of the image are obtained.
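A minimal sketch of this fusion rule, assuming the NumPy and PyWavelets libraries and two hypothetical co-registered input arrays, is given below. For brevity a single decomposition level is used (Fig. 3 shows two levels), and the detail bands are combined by maximum-absolute-coefficient selection, a common variant of the maximum selection described above.

```python
import numpy as np
import pywt

def dwt_fuse(img_i, img_j, wavelet='db2'):
    """Fuse two co-registered, equally sized 2-D arrays with a single-level
    DWT: average the ll bands, keep the stronger detail coefficient."""
    ll_i, (lh_i, hl_i, hh_i) = pywt.dwt2(img_i, wavelet)
    ll_j, (lh_j, hl_j, hh_j) = pywt.dwt2(img_j, wavelet)

    ll_f = (ll_i + ll_j) / 2.0                                   # average selection
    lh_f = np.where(np.abs(lh_i) >= np.abs(lh_j), lh_i, lh_j)    # maximum selection
    hl_f = np.where(np.abs(hl_i) >= np.abs(hl_j), hl_i, hl_j)
    hh_f = np.where(np.abs(hh_i) >= np.abs(hh_j), hh_i, hh_j)

    # Inverse DWT of the fused coefficients gives the fused intensity image.
    return pywt.idwt2((ll_f, (lh_f, hl_f, hh_f)), wavelet)

# Hypothetical stand-ins for the co-registered SAR and optical bands.
sar = np.random.rand(256, 256)
optical = np.random.rand(256, 256)
fused = dwt_fuse(sar, optical, wavelet='db2')
```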

3. Results and discussions

To perform the image fusion, two images have been taken: one is a SAR image and the second is an optical image. The input dataset is composed of an ENVISAT SAR image of the urban area of Wuhan, Landsat-5 TM images (bands 3, 4, 5) and a SPOT Pan image. All the data have been accurately co-registered to the SPOT Pan image. To extract the urban information from these two images, fusion is performed using multiple wavelet families, namely Daubechies2, Daubechies4, the Haar wavelet and Symlets. The performance of Daubechies2 is compared with that of the other wavelet families, and the performance of each family is measured by MSE and PSNR. Fig. 4 illustrates the input images.

Fig. 4: Input data (a) Landsat-5 TM images (bands 3, 4, 5) (b) ENVISAT SAR image



Fig. 5: Fused images (a) DWT_db_2 (b) DWT_db_4 (c) DWT_Haar (d) DWT_Symlet

S.No   Parameter    Db2 vs Db4   Db2 vs Symlet   Db2 vs Haar
1      MSE          5.7472       23.1816         28.6419
2      PSNR (dB)    40.5702      34.5134         33.5948

Table 1: Performance of Daubechies2 vs other wavelet families

As shown in Table 1, the Daubechies2 wavelet is efficient when compared with the other wavelet families, especially when it is considered as the reference wavelet. The image fusion is implemented using multiple wavelet families, and the corresponding results are shown in Table 1 in terms of MSE and PSNR.
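The two metrics are straightforward to compute; the sketch below (assuming NumPy and 8-bit image data, so a peak value of 255) evaluates MSE and PSNR = 10·log10(peak²/MSE) between a reference fused image and another fused image.

```python
import numpy as np

def mse(ref, img):
    """Mean square error between two equally sized images."""
    ref = ref.astype(np.float64)
    img = img.astype(np.float64)
    return np.mean((ref - img) ** 2)

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB; peak=255 assumes 8-bit data."""
    err = mse(ref, img)
    return float('inf') if err == 0 else 10.0 * np.log10(peak ** 2 / err)

# e.g. comparing a hypothetical db2-fused reference with a db4-fused image:
# print(mse(fused_db2, fused_db4), psnr(fused_db2, fused_db4))
```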

Conclusion:

In this work, image fusion is performed between two remote sensing images using various wavelet families, namely the Daubechies2, Daubechies4, Haar and Symlet wavelet families. Db2 is taken as the reference wavelet family, and the MSE and PSNR between the db2-fused image and each of the other fused images are calculated. Db4 showed an MSE of 5.7472 and a PSNR of 40.5702 dB, Symlet showed an MSE of 23.1816 and a PSNR of 34.5134 dB, and the Haar wavelet showed an MSE of 28.6419 and a PSNR of 33.5948 dB. Finally, the Daubechies wavelets gave better PSNR and lower MSE with respect to the remaining wavelet families.

References

[1] G. Pajares and J. M. de la Cruz, "A wavelet-based image fusion tutorial", Pattern Recognition, vol. 37, no. 9, pp. 1855–1872, 2004.

[2] H. Li, B. S. Manjunath and S. K. Mitra, "Multisensor image fusion using the wavelet transform", Graphical Models and Image Processing, vol. 57, no. 3, pp. 235–245, 1995.

[3] I. De and B. Chanda, "A simple and efficient algorithm for multifocus image fusion using morphological wavelets", Signal Processing, vol. 86, no. 5, pp. 924–936, 2006.

[4] G. Ramesh Babu and K. Veera Swamy, "Image Fusion using various Transforms", IPASJ International Journal of Computer Science (IIJCS), vol. 2, no. 1, January 2014.

[5] Thrisul Kumar Jakka, Y. Mallikarjuna Reddy and B. Prabhakara Rao, "GWDWT-FCM: Change Detection in SAR Images Using Adaptive Discrete Wavelet Transform with Fuzzy C-Mean Clustering", Journal of the Indian Society of Remote Sensing, vol. 47, no. 3, pp. 379–390, March 2019.

[6] J. Thrisul Kumar, Y. Mallikarjuna Reddy and B. Prabhakara Rao, "Image Fusion of Remote Sensing Images using ADWT with ABC Optimization Algorithm", International Journal of Innovative Technology and Exploring Engineering (IJITEE), ISSN: 2278-3075, vol. 8, no. 11, September 2019.


[7] J. Thrisul Kumar, Y. Mallikarjuna Reddy and B. Prabhakara Rao, "Change Detection in SAR Images Based on Artificial Bee Colony Optimization with Fuzzy C-Means Clustering", International Journal

[8] J. Thrisul Kumar, Y. Mallikarjuna Reddy and B. Prabhakara Rao, "WHDA-FCM: Wolf Hunting-Based Dragonfly with Fuzzy C-Mean Clustering for Change Detection in SAR Images", Section B: Computer and Communications Networks and Systems.

[9] J. Thrisul Kumar, Y. Mallikarjuna Reddy and B. Prabhakara Rao, "Change Detection in SAR Images Based on Artificial Bee Colony Optimization with Fuzzy C-Means Clustering", International Journal of Recent Technology and Engineering (IJRTE), ISSN: 2277-3878, vol. 7, no. 4, pp. 156–160, Nov. 2018.

[10] J. Thrisul Kumar, N. Durgarao, E. T. Praveen and M. Kranthi Kumar, "Modified Image Fusion Technique for Dual-Tree Complex Wavelet Transform", International Journal of Advanced Science and Technology, vol. 29, no. 5s, pp. 895–901, 2020.
