Academic year: 2022

Fixed Angle Video Frame Diminution Technique for Vehicle Speed Detection

Vinay Jha Pillai1*, Kukatlapalli Pradeep Kumar2, Boppuru Rudra Prathap3, Sarath Chandra4, Rajanish M5

1Department of Electronics and Communication Engineering, School of Engineering and Technology, CHRIST (Deemed to be University), Bangalore.

2Department of Computer Science and Engineering, School of Engineering and Technology, CHRIST (Deemed to be University), Bangalore

3Department of Computer Science and Engineering, School of Engineering and Technology, CHRIST (Deemed to be University), Bangalore

4Department of Civil Engineering, School of Engineering and Technology, CHRIST (Deemed to be University), Bangalore.

5Department of Mechanical Engineering, DSATM, Bangalore

*Corresponding author: [email protected]

(Received: Day Month Year; Accepted: Day Month Year; Published on-line: Day Month Year)

ABSTRACT: Objectives: To estimate vehicle speed using a novel video image processing technique based on the background subtraction method. Method: In this study, a background subtraction method called the Video Frame Diminution Technique (VFDT) is used to estimate vehicle speed from a stationary camera. It is a recursive technique that calculates speed by determining the area property of the moving object and comparing it with a calibrated speed. An experimental setup using a fixed camera was deployed and five test trials were carried out. Findings: Vehicle speeds of 10, 15, 20 and 25 km/hour were successfully estimated with 93% accuracy, which is comparable with other studies reported so far. The new technique has a 23% lower computational cost than pre-existing techniques, and the setup is easy to implement, especially in closed areas such as apartments, malls and factories.

KEY WORDS: video frame, vehicle speed detection, video processing, frame diminution, background subtraction

1. INTRODUCTION

The development of a foolproof traffic system has been one of the main concerns of most developing countries, considering the hazards involved in day-to-day transport. The most common methods used in modern speed monitoring systems are RADAR and LIDAR.

RADAR transmits a signal of a particular frequency and wavelength towards a vehicle; the signal bounces off the vehicle and the reflected signal is examined for changes in its properties to determine the speed of the moving vehicle. Unfortunately, it has several limitations, such as beam displacement, a restricted region of view and the effects of gaps. It also struggles with measuring the speed of multiple vehicles, and with vehicle detection and classification.

The modern solution to this is an IT-enabled monitoring and video information collection system. Various methods and techniques exist for background subtraction in video sequences.

One popular technique is the deep convolutional neural network (CNN), used mainly for segmentation of video frames. By training the CNN on several video scenes, manual feature engineering and parameter tuning become needless. Randomly picked video frames and their ground-truth segmentations are used to build the model proposed by Babaee et al. 2018 [1]. As deep learning has seen considerable success in computer vision, it has also been applied to background subtraction in videos. Applying 3D convolutions can fetch accurate results for tracking temporal changes, reducing the reliance on a background model and scene-specific fine tuning [2]. An open-source framework is available for benchmarking video scene-based background modeling methods [3]; it is used in particular for moving object detection in RGBD videos.

Color and depth information in the video inputs is analyzed for background subtraction,


focusing mainly on moving object detection scenarios. Convolutional neural networks, trained to learn spatial features, provide a marked improvement in background subtraction. Background models focusing on a single background image per scene-specific dataset are used [4]. This method was evaluated on the ChangeDetection.net dataset and showed significantly better results than existing algorithms. The Gaussian Mixture Model (GMM) is a popular technique for foreground object detection in video surveillance [5]. An effort was made to compare existing GMM-based algorithms using quantitative assessment measures, including performance analysis, in order to identify the most suitable background subtraction algorithm for given application parameters in real-world scenarios. In some video surveillance applications, analysis is a vital component, with variations in both foreground and background. A mixture of Gaussians (MoG) distribution is used to model the foreground features of objects; it evolves by learning from the foreground/background information in preceding frames.

This has proved effective for background subtraction in various applications. An affine transformation operator is embedded into the proposed model [6], which can accurately adapt to the camera movements associated with a wide array of video background transformations. Multiple issues related to video change or movement detection are addressed through multimode background subtraction [7]. All the above studies address various models for background subtraction and foreground enhancement; however, such techniques require huge training datasets and heavy computational loads.

This paragraph discusses some of the studies that influenced and paved the way for the present study. In [8], the object is first detected with high accuracy, avoiding false negatives as much as possible, and the pixels of the object are extracted while avoiding the detection of static objects, shadows or noise of any kind. The collected data are then subjected to statistical correlation, giving this approach the name SAKBOT, which stands for Statistical And Knowledge-Based Object Tracker (Figure 1 of [8]). The approach underlines the importance of a static background, which reduces the probability of noise and any residual ghost imagery. In [9], image segmentation and object detection are described as crucial parts of video image processing.

The video sequence is first given as input, after which it is pre-processed frame by frame and background subtraction is conducted. Background subtraction here means subtracting each frame from the first background frame, which is taken to be a static background. The subtraction is done pixel by pixel until the moving object is tracked and extracted. The study mostly stresses the need for a static, less reactive background. It also discusses image segmentation, which in Figure 3 of [9] is obtained through two methods, namely thresholding and edge detection.

However, once extracted, the boundaries of the object appear unclear, showing the need for a more precise set of boundaries for the background. Poonam Kumari et al. briefly discussed the importance of morphology in digital image processing [10]. In their study, the basic theory of image morphology is introduced, followed by the different morphological operations involved.

In our present study, we define morphology as a process in which a structuring element is applied to an input image, creating an output image of the same size. It is used to analyse the shape and form of objects in images. The base technique proposed for the present system is background subtraction.

It involves taking a calibrated video of a vehicle moving at a particular speed, recorded from a camera whose view is perpendicular to the direction of motion. Then a real-time video of a speeding vehicle is taken. Both videos undergo background subtraction, after which they are converted to binary and undergo processes such as edge detection, blob detection and thresholding to obtain the threshold value, i.e., the pixel values of both videos. The videos are then split frame by frame and the threshold values are correlated, giving the number of frames in which the object appears and for how long, in accordance with the frames-per-second rating of the recording camera. Combined with the appropriate formulae, this gives the speed of the moving vehicle.


2. METHOD, RESULTS AND DISCUSSION

In this section we present the novel background subtraction technique used in the present study. We outline the algorithm and the experimental setup, and discuss the results.

2.1. Novel Video Frame Diminution Technique (VFDT) and Experimental Setup

In this study, we propose a new methodology for vehicle speed detection called the Video Frame Diminution Technique (VFDT). The basic algorithm of the proposed model is shown in Figure 1. The steps are as follows:

[Block diagram of the processes involved: Acquire Video → Frame Extraction → Background Subtraction → Convert to Binary → Frames with Object → Speed Calculations]

Fig. 1. Video Frame Diminution Technique (VFDT) Algorithm.

2.1.1. Video acquisition

First, we take a calibrated camera with a defined frames-per-second (fps) rating; 30 fps was used for the experiment. The camera is mounted on a stand and the field of view is measured between two fixed points, shown in Figure 2 as points 'a' and 'b'. The line of sight of the camera must be perpendicular to the direction of motion of the vehicle. The distance 'd' between the vehicle's path and the camera is kept constant across trials.

Fig. 2. Camera setup for video recording used in the present study [11].


2.1.2. Frame Extraction

This step is the most crucial part of the experiment. An ideal reference frame must be selected from the video in order to perform an accurate background subtraction. Consider the following steps.

1. Initialise the background frame to 0.

2. Loop k from 1 to N frames in steps of the downsample value.

3. Divide each sampled frame by the quotient of the number of frames N and the downsample value.

4. Add the result to the background frame to accumulate the reference frame.

The MATLAB pseudo code for this step is given by:

background_frame = frame * 0;
for k = 1 : bk_downsample : n_frames
    background_frame = background_frame + read(vob, k) / (n_frames / bk_downsample);
    disp(k / n_frames * 100);   % progress in percent
end

The above code shows the construction of the reference frame. Downsampling is used to group a certain set of frames together, reducing the computing time and making the subtraction quicker when the video is long. In this case the downsample value was set to 20. The video is read, each sampled frame is divided by the quotient of the number of frames and the downsample value, and the result is added to the previously initialised zero frame to give the desired low-intensity reference frame.
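The reference-frame construction can also be sketched in Python. This is an illustrative translation of the MATLAB loop, not the original code: the in-memory frames list and the function name are assumptions.

```python
import numpy as np

def reference_frame(frames, bk_downsample=20):
    """Accumulate every bk_downsample-th frame, each divided by
    (number of frames / bk_downsample), to build the low-intensity
    background reference frame."""
    n_frames = len(frames)
    background = np.zeros_like(frames[0], dtype=float)
    for k in range(0, n_frames, bk_downsample):
        background += frames[k] / (n_frames / bk_downsample)
    return background
```

For example, with 40 frames and a downsample of 20, frames 0 and 20 are sampled and each contributes half of its intensity to the reference frame.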

2.1.3. Background Subtraction

In this step the image with the static background is loaded. An image is a set of rows and columns of pixels; we let a variable [ i ] index the M rows and a variable [ j ] index the N columns of pixels in the image.

X(i,j) = F(i,j) – B(i,j) (1)

Equation (1) represents the background subtraction operation, where F is the current frame and B is the background frame. The operation is looped over i and j, from 0 up to their maximum values M and N respectively. After the subtraction is complete, the image is displayed.
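A minimal vectorised sketch of Eq. (1) in Python, assuming NumPy arrays; the signed cast and the absolute value are practical additions of this sketch, not part of the original formulation:

```python
import numpy as np

def subtract_background(frame, background):
    # Eq. (1): X(i,j) = F(i,j) - B(i,j), applied to all M x N pixels at once.
    # Casting to a signed type avoids uint8 wrap-around; the absolute
    # difference is what is typically thresholded afterwards.
    return np.abs(frame.astype(np.int16) - background.astype(np.int16))
```

Vectorising over the whole array replaces the explicit double loop over i and j while computing the same per-pixel differences.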

2.1.4. Conversion to Binary

The conversion to binary is not mandatory; however, working with RGB or gray-scale images proved complicated in this scenario, so the RGB image is first converted to gray scale and the resulting image is then converted to binary. The conversions were done using standard MATLAB functions.
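A rough Python equivalent of that MATLAB conversion pipeline might look as follows; the ITU-R BT.601 luma weights match those used by MATLAB's rgb2gray, while the fixed threshold value is an assumption of this sketch:

```python
import numpy as np

def to_binary(rgb, threshold=0.5):
    """RGB frame -> gray scale (BT.601 weights) -> 0/1 binary image."""
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return (gray / 255.0 > threshold).astype(np.uint8)
```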

2.1.5. Number of Frames displaying the object

Using a counter, the number of frames in which the object is present is detected using the area property. This works on the premise that, for a given area, if the threshold number of pixel variations (1 or 0) is met, then the object is present in that particular frame. This number is vital to calculating the speed of the object when comparing it with the calibrated speed.
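The counter described above amounts to a threshold on the foreground area of each binary frame. A hypothetical sketch (function name and threshold value are assumptions):

```python
def count_object_frames(binary_frames, area_threshold):
    """Count frames whose number of foreground (1) pixels meets the
    area threshold, i.e. frames in which the vehicle is present."""
    return sum(1 for frame in binary_frames
               if sum(sum(row) for row in frame) >= area_threshold)
```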

2.1.6. Calculation of Speed

Speed = (Total number of frames at calibrated speed × Calibrated speed) / (Number of frames in which the object is present at the uncalibrated speed)

This formula is the theoretical relation for the speed of a moving object when compared with a calibrated speed. As the speed of an object increases, the number of frames in which the object is present decreases, which gives rise to the above relation. Results are discussed in the next section.
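The relation can be written directly in code. Using the calibrated values from the experiment (123 frames at 5 km/hour), it reproduces the estimates reported in the results:

```python
def estimate_speed(calibrated_frames, calibrated_speed, object_frames):
    """Speed = (frames at calibrated speed * calibrated speed)
             / (frames in which the object is present in the test video)."""
    return calibrated_frames * calibrated_speed / object_frames

# 123 frames at the calibrated 5 km/hour; 65 object frames in the test video
print(round(estimate_speed(123, 5, 65), 2))  # -> 9.46
```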

2.2. Experimental Results and Discussion

In this section, the experimental results and observations are recorded and discussed. We chose a motorcycle as the test vehicle. The vehicle moved perpendicular to the camera's line of sight at a distance of around 8 meters. At a speed of 5 km/hour, the vehicle was detected in 123 video frames; this value was used as the calibrated measurement against which the other test trials were compared. The experiment gave a close estimate of the speed of the vehicle, as shown in Table 1.

Sl No   Actual Speed from Speedometer (km/hour)   Object frames for test video   Speed Estimated through VFDT (km/hour)
1       5 (calibrated)                            123 (calibrated)               5
2       10                                        65                             9.46
3       15                                        45                             13.667
4       20                                        34                             18.088
5       25                                        26                             23.653

Table 1: Results of Speed Estimation using the VFDT Technique

Upon giving an input video with an uncalibrated speed of 10 km/hour, the object was detected in 65 frames, and hence the calculated speed was 9.46 km/hour.

Similarly, the validity of VFDT was checked for the other speeds. The vehicle speed was detected with an average accuracy of 93%. A mean absolute error of 1.283 km/hour and an average relative error of 7.3% were obtained using the VFDT method, which is comparable with existing algorithms [12,14,15]. We used an Intel Core i5-4210U processor with 8 GB RAM. The new technique has a 23% lower computational cost than pre-existing techniques. Vehicle speeds beyond 25 km/hour led to a high percentage of error due to the limited frames-per-second (FPS) capacity of the camera used. This issue could be overcome by using a more sophisticated camera with a higher FPS.
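The reported error figures can be reproduced from the values in Table 1 with a quick check, using the speeds and estimates exactly as listed:

```python
actual = [10, 15, 20, 25]                    # speedometer readings (km/hour)
estimated = [9.46, 13.667, 18.088, 23.653]   # VFDT estimates from Table 1

abs_errors = [abs(a - e) for a, e in zip(actual, estimated)]
mae = sum(abs_errors) / len(abs_errors)               # mean absolute error
mre = sum(err / a for err, a in zip(abs_errors, actual)) / len(actual)

print(round(mae, 3), round(mre * 100, 1))  # -> 1.283 7.3
```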

3. CONCLUDING REMARKS

The present study demonstrates a novel VFDT algorithm for estimating vehicle speed, which could be implemented using existing cameras on roads and in apartments, malls, factories, etc. The results obtained with the novel algorithm show that the technique is a potentially viable replacement for current speed estimation techniques owing to its low computational complexity and easy implementation. In the present study, the camera was fixed and the vehicle's movement was strictly restricted to the set path. Moreover, the current program works only for vehicles moving perpendicular to the view of the camera. Improvements can be made in estimating the speed of a vehicle moving head-on towards the camera and, in time, even for multiple vehicles. The future scope of this work involves:

 Automatic segmentation of each vehicle from the background and from other vehicles so that all vehicles are detected.

 Correctly detect all types of road vehicles - motorcycles, passenger cars, buses, construction equipment, trucks, etc.


 Function under a wide range of traffic conditions - light traffic, congestion, varying speeds in different lanes.

 Function under a wide variety of lighting conditions - sunny, overcast, twilight, night, rainy, etc. and

 Finally, to operate in real-time.

REFERENCES

[1] Babaee, Mohammadreza, Duc Tung Dinh, and Gerhard Rigoll. "A deep convolutional neural network for video sequence background subtraction." Pattern Recognition 76 (2018): 635-649. DOI: https://doi.org/10.1016/j.patcog.2017.09.040

[2] Sakkos, Dimitrios, Heng Liu, Jungong Han, and Ling Shao. "End-to-end video background subtraction with 3d convolutional neural networks." Multimedia Tools and Applications 77, no. 17 (2018): 23023-23041. DOI: https://doi.org/10.1007/s11042-017-5460-9

[3] Camplani, Massimo, Lucia Maddalena, Gabriel Moyá Alcover, Alfredo Petrosino, and Luis Salgado. "A benchmarking framework for background subtraction in RGBD videos." In International Conference on Image Analysis and Processing, pp. 219-229. Springer, Cham, 2017. DOI: https://doi.org/10.1007/978-3-319-70742-6_21

[4] Braham, Marc, and Marc Van Droogenbroeck. "Deep background subtraction with scene-specific convolutional neural networks." 2016 International Conference on Systems, Signals and Image Processing (IWSSIP). IEEE, 2016. DOI: 10.1109/IWSSIP.2016.7502717

[5] Goyal, Kalpana, and Jyoti Singhai. "Review of background subtraction methods using Gaussian mixture model for video surveillance systems." Artificial Intelligence Review 50.2 (2018): 241-259. DOI: https://doi.org/10.1007/s10462-017-9542-x

[6] H. Yong, D. Meng, W. Zuo and L. Zhang, "Robust Online Matrix Factorization for Dynamic Background Subtraction," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 7, pp. 1726-1740, 1 July 2018. DOI: 10.1109/TPAMI.2017.2732350

[7] Sajid, Hasan, and Sen-Ching Samson Cheung. "Universal multimode background subtraction." IEEE Transactions on Image Processing 26.7 (2017): 3249-3260. DOI: 10.1109/TIP.2017.2695882

[8] Cucchiara, Rita, Costantino Grana, Gianni Neri, Massimo Piccardi, and Andrea Prati. "The Sakbot system for moving object detection and tracking." In Video-Based Surveillance Systems, pp. 145-157. Springer, Boston, MA, 2002. DOI: https://doi.org/10.1007/978-1-4615-0913-4_12

[9] Mohan, Anaswara S., and R. Resmi. "Video image processing for moving object detection and segmentation using background subtraction." In 2014 First International Conference on Computational Systems and Communications (ICCSC), pp. 288-292. IEEE, 2014. DOI: 10.1109/COMPSC.2014.7032664

[10] Kumari, Poonam, and Sanjeev Kumar Gupta. "Morphological Image Processing GUI using MATLAB." Trends Journal of Sciences Research 2, no. 3 (2015): 90-94. DOI: 10.31586/ImageProcesses.0203.02

[11] Vinay Jha Pillai, “Vehicle Speed Estimation using Video-Image Processing: State of the Art and Challenges”, Working Paper, Centre for Publications, CHRIST (Deemed to be University), 2014, ISBN 978-93-82305-51-4.

[12] Vakili, Elnaz, Maryam Shoaran, and Mohammad R. Sarmadi. "Single-camera vehicle speed measurement using the geometry of the imaging system." Multimedia Tools and Applications (2020): 1-21. DOI: https://doi.org/10.1007/s11042-020-08761-5

[13] Meng, Qiao, Huansheng Song, Yu'an Zhang, Xiangqing Zhang, Gang Li, and Yanni Yang. "Video-Based Vehicle Counting for Expressway: A Novel Approach Based on Vehicle Detection and Correlation-Matched Tracking Using Image Data from PTZ Cameras." Mathematical Problems in Engineering 2020 (2020). DOI: https://doi.org/10.1155/2020/1969408

[14] Tourani, Ali, Asadollah Shahbahrami, Alireza Akoushideh, Saeed Khazaee, and Ching Y. Suen. "Motion-based Vehicle Speed Measurement for Intelligent Transportation Systems." International Journal of Image, Graphics & Signal Processing 11, no. 4 (2019). DOI: 10.5815/ijigsp.2019.04.04

[15] Moazzam, Md Golam, Mohammad Reduanul Haque, and Mohammad Shorif Uddin. "Image-based vehicle speed estimation." Journal of Computer and Communications 7, no. 6 (2019): 1-5. DOI: 10.4236/jcc.2019.76001

[16] Li, Jing, Shuo Chen, Fangbing Zhang, Erkang Li, Tao Yang, and Zhaoyang Lu. "An adaptive framework for multi-vehicle ground speed estimation in airborne videos." Remote Sensing 11, no. 10 (2019): 1241. DOI: https://doi.org/10.3390/rs11101241

[17] Cheng, Genyuan, Yubin Guo, Xiaochun Cheng, Dongliang Wang, and Jiandong Zhao. "Real-Time Detection of Vehicle Speed Based on Video Image." In 2020 12th International Conference on Measuring Technology and Mechatronics Automation (ICMTMA), pp. 313-317. IEEE, 2020. DOI: 10.1109/ICMTMA50254.2020.00076

[18] Sonth, Akash, Harshavardhan Settibhaktini, and Ankush Jahagirdar. "Vehicle speed determination and license plate localization from monocular video streams." In Proceedings of 3rd International Conference on Computer Vision and Image Processing, pp. 267-277. Springer, Singapore, 2020. DOI: https://doi.org/10.1007/978-981-32-9088-4_23

[19] Kim, HyungJun. "Multiple vehicle tracking and classification system with a convolutional neural network." Journal of Ambient Intelligence and Humanized Computing (2019): 1-12. DOI: https://doi.org/10.1007/s12652-019-01429-5

[20] Eom, Jung Hum. "Apparatus and method for image processing according to vehicle speed." U.S. Patent 10,783,665, issued September 22, 2020. URL: https://patents.google.com/patent/US10783665B2/en

[21] Manikandan, R., and R. Latha. "A literature survey of existing map matching algorithms for navigation technology." International Journal of Engineering Sciences & Research Technology 6, no. 9 (2017): 326-331.

[22] Barani, A.M., R. Latha, and R. Manikandan. "Implementation of Artificial Fish Swarm Optimization for Cardiovascular Heart Disease." International Journal of Recent Technology and Engineering (IJRTE) 8, no. 4S5 (2019): 134-136.

[23] Manikandan, R., R. Latha, and C. Ambethraj. "An Analysis of Map Matching Algorithm for Recent Intelligent Transport System." Asian Journal of Applied Sciences 5, no. 1. Retrieved from https://www.ajouronline.com/index.php/AJAS/article/view/4642

[24] Sathish, R., R. Manikandan, S. Silvia Priscila, B. V. Sara, and R. Mahaveerakannan. "A Report on the Impact of Information Technology and Social Media on Covid-19." 2020 3rd International Conference on Intelligent Sustainable Systems (ICISS), Thoothukudi, India, 2020, pp. 224-230. DOI: 10.1109/ICISS49785.2020.9316046

[25] Manikandan, R., and R. Latha. "Map Matching Algorithm Based on a Hidden Markov Model for Vehicle Navigation." International Journal of Advanced Technology in Engineering and Science 6, no. 6 (2018): 36-42.

[26] Manikandan, R., and R. Latha. "Global Positioning System for Vehicle Navigation." International Journal of Advances in Arts, Sciences and Engineering (IJOAASE) 6, no. 13 (2018): 1-9.
