Retracted Article: Road Traffic Monitoring System with Self-Learning Function using the Raspberry Pi Platform

This paper presents the principle of a traffic management and road monitoring application using the latest generation of IT and mobile telecommunication systems, based on an intelligent system with a self-learning function for urban traffic junctions. This system allows automatic adjustment of green times depending on traffic at road intersections. For the implementation of this IoT project, we use a Raspberry Pi, a webcam and a ThingSpeak server to analyse traffic on a busy highway using image processing. With Simulink we design and deploy a traffic monitoring algorithm to the Raspberry Pi, and we analyse and visualize the traffic patterns using ThingSpeak, an IoT analytics platform. A remote road monitoring system principle is also described. This system uses modern communications equipment to periodically read and transmit parameters such as road temperature, humidity, wind intensity and vehicle weight using different types of sensors.


Introduction
The increase in vehicle traffic has caused many problems, such as traffic accidents, traffic congestion and traffic-related air pollution. Traffic congestion in particular has been a significant problem. It has been widely noticed that the growth of basic transport infrastructure, additional sidewalks and wider roads has failed to reduce congestion in cities. As a result, many researchers have turned their attention to the Intelligent Transport System (ITS), which can use traffic flow data, obtained by monitoring traffic junctions, for congestion detection. To process the information, monitor the results and better understand the flow of traffic, the increasing reliance on traffic surveillance requires better detection of vehicles over an extended area. Automated detection of vehicles in video surveillance data is a topical and extremely important issue in computer vision, with important practical applications such as traffic analysis and security. [1][2] Vehicle detection and counting are important in calculating traffic congestion on highways. The main purpose of the vehicle detection and counting project for traffic video is to develop a methodology for automatic vehicle detection and counting on motorways. A system for efficient detection and counting of dynamic vehicles has been developed. Intelligent visual surveillance of road vehicles is a key component in the development of intelligent transport systems. The entropy masking method does not require prior knowledge to extract road features from static images. Vehicle detection and tracking in video surveillance uses segmentation with an initial background subtraction step, applying a morphological operator to determine the visible regions in a sequence of video frames.
Edges are counted to show how many regions are of a certain size, especially in the areas where the vehicles are located; these points support vehicle counting in the field of highway traffic monitoring. [2][3] Automated detection and tracking in video surveillance data is a very difficult issue in computer vision, with important practical applications such as traffic analysis and security. Cameras are a relatively inexpensive surveillance tool, but manual review of the large amount of data they generate is often impossible. Thus, video analysis algorithms that require little or no human involvement are a good solution. Video surveillance systems focus on background modeling, vehicle classification and tracking. The increasing availability of video sensors and high-performance video processing hardware offers great possibilities to address many issues of video understanding, among which vehicle highlighting and target classification are very important. A vehicle tracking and classification system is described as one that can detect moving vehicles and further classify them into different classes. [4]

Proposed system architecture
The structure of the system presented in this paper is computationally efficient and can work in real time, while maintaining very respectable detection rates. However, these types of systems contain some inevitable problems caused by object occlusion: a larger vehicle together with a smaller, partially hidden vehicle is usually treated as one object, since foreground detection methods are not inherently designed to separate multiple vehicles. In other cases, the appearance of a larger vehicle, or a vehicle shadow that crosses into adjacent lanes, is also known to trigger false detections. Consequently, the merit of using computer vision as a surveillance tool has been limited, and the focus has been strictly on building systems that are reliable over time.
The system uses an existing video sequence. The entire process is described in Figure 1. The first frame is considered the reference frame. Subsequent frames are taken as input frames; they are compared with the reference and the background is removed. If a vehicle is present in the input frame, it is retained. The detected vehicle is then tracked by various techniques, namely the adaptive background method and the block analysis method. This algorithm achieves superior accuracy when using above-average performance cameras and a customized computing processor. The major advantage of such a system is portability, but among the main disadvantages we can enumerate the environmental factors that can influence the performance of the system (atmospheric conditions). In some locations it is also necessary to create an infrastructure suitable for the entire system in terms of power supply from a single energy source, as well as to provide effective mounting of the entire system so that the data can be safely recorded.
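The frame-comparison step above can be sketched in a few lines. This is a minimal, illustrative Python version of reference-frame differencing (the deployed algorithm in this paper is a Simulink model; the function name, frame sizes and threshold here are assumptions for demonstration only):

```python
def detect_foreground(reference, frame, threshold=30):
    """Compare an input frame against the reference (background) frame:
    pixels whose absolute difference exceeds the threshold are foreground."""
    return [[1 if abs(f - r) > threshold else 0
             for f, r in zip(f_row, r_row)]
            for f_row, r_row in zip(frame, reference)]

# Synthetic 5x5 grayscale frames: empty road vs. a bright 2x2 "vehicle".
reference = [[10] * 5 for _ in range(5)]
frame = [row[:] for row in reference]
frame[1][1] = frame[1][2] = frame[2][1] = frame[2][2] = 200

mask = detect_foreground(reference, frame)
print(sum(map(sum, mask)))   # 4 foreground pixels belong to the vehicle
```

In practice the reference frame would be updated over time (the adaptive background method) so that lighting changes are not flagged as vehicles.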

Internal detection
OpenCV is an open source project; the core of the library implements the data structures and computer vision algorithms it exposes, and the source tutorials are part of the library. Computer vision is a rapidly growing field, partly due to cheaper and more capable cameras, partly due to affordable processing power, and partly due to the fact that vision algorithms are beginning to mature. OpenCV itself has played a role in the growth of computer vision, allowing thousands of people to do more productive work in vision. Focusing on real-time vision, OpenCV helps students and professionals efficiently implement projects and initiate research, giving them a computer vision and machine learning infrastructure that was previously available only in a few mature research labs. [1,5,8] A general detection approach is to extract the characteristic regions from the video clip using a learned background modeling technique. This involves subtracting each image from the background scene. The first frame is considered the initial background, and the resulting difference image is thresholded to determine the foreground image. A vehicle is a group of pixels that move in a coherent manner, either as a lighter region on a darker background or vice versa. Often, the vehicle may have the same color as the background, or part of it may merge with the background, which makes vehicle detection difficult. This leads to a wrong count of vehicles. [7] Detected information can be used to refine the vehicle type and also to correct errors caused by occlusions. Once the static vehicles are recorded, the background image is subtracted from the video frames to obtain the dynamic vehicles in the foreground. Post-processing is performed on the detected dynamic vehicles to reduce noise interference. [4] Image segmentation is done as follows:
• Segmentation of vehicle regions of interest. At this stage, regions that may contain an unknown object must be detected.
• Feature extraction and vehicle extraction. The primary purpose of feature extraction is to reduce the data by measuring certain features that distinguish input patterns.
• Final classification. This step assigns a label to a vehicle based on the information provided by its descriptors. The analysis is performed with mathematical morphology operators for segmentation of a gray-level image.
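The morphological post-processing mentioned above can be illustrated with a small sketch. This is a plain-Python stand-in for the 3×3 binary opening (erosion followed by dilation) that such pipelines typically apply to remove noise from the foreground mask; the function names and mask sizes are illustrative assumptions, not the paper's implementation:

```python
def erode(mask):
    """3x3 erosion: a pixel survives only if its whole neighbourhood is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(mask):
    """3x3 dilation: a pixel is set if any neighbour is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out

def opening(mask):
    """Erosion followed by dilation removes isolated noise pixels
    while preserving larger, vehicle-sized blobs."""
    return dilate(erode(mask))

# A 4x4 vehicle blob plus one isolated noise pixel.
mask = [[0] * 10 for _ in range(10)]
for y in range(3, 7):
    for x in range(3, 7):
        mask[y][x] = 1
mask[0][0] = 1                   # noise
cleaned = opening(mask)
print(sum(map(sum, cleaned)))    # noise removed, 16-pixel blob preserved
```

The same operation is available in OpenCV as a single morphology call; the sketch above only makes the mechanics explicit.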

Detection of vehicles
The adaptive background method uses the current frame and a reference image. A region where the difference between the current frame and the reference frame exceeds the threshold is considered to be a moving vehicle. The optical flow method can detect a moving vehicle even when the camera moves, but it takes more time because of its computational complexity and is very sensitive to noise. The motion area usually appears rather noisy in real images, and the optical flow estimate involves only local computation; thus, the optical flow method cannot detect the exact contour of the moving vehicle. By using OpenCV's specific functions and procedures, it is possible to classify each element that may be of interest in the entire algorithm. So, after removing the background, elements such as traffic lanes, sidewalks, cars and road markings can be catalogued. Finally, the entire image analysis process can result in the visual marking of a vehicle crossing a certain point of a traffic road. This visual marking is made by placing a border of a certain color over the area in the image where a vehicle was detected. This technique uses predefined detection functions. [8] Figure 2 shows the vehicle detection algorithm using OpenCV image processing techniques.

Fig. 2. The algorithm of the vehicle detection process

By introducing self-learning elements, we can create a system that is continuously adapted and builds its own database of images, which can then be compared with newly captured frames in order to reduce the effective response time of the system to a given computational demand.
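The visual marking described above amounts to grouping foreground pixels into connected components and drawing one box per component. The following self-contained Python sketch shows that grouping step (in OpenCV this would be done with contour or connected-component functions; the function name, mask layout and blob positions here are illustrative assumptions):

```python
def bounding_boxes(mask):
    """Group 4-connected foreground pixels into components and return one
    (top, left, bottom, right) box per detected vehicle, in scan order."""
    h, w = len(mask), len(mask[0])
    visited = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not visited[y][x]:
                stack, visited[y][x] = [(y, x)], True
                t, l, b, r = y, x, y, x
                while stack:
                    cy, cx = stack.pop()
                    t, l = min(t, cy), min(l, cx)
                    b, r = max(b, cy), max(r, cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not visited[ny][nx]):
                            visited[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((t, l, b, r))
    return boxes

# Two separate blobs stand in for two detected vehicles.
mask = [[0] * 10 for _ in range(8)]
for y in range(1, 3):
    for x in range(1, 3):
        mask[y][x] = 1
for y in range(4, 7):
    for x in range(5, 9):
        mask[y][x] = 1
print(bounding_boxes(mask))   # [(1, 1, 2, 2), (4, 5, 6, 8)]
```

Each returned box is then drawn over the original frame as the colored border the text describes.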

Tracking of vehicles
Vehicle tracking involves the continuous identification of the vehicle detected in the video sequence and is achieved by specifically marking the limit around the detected vehicle.
Tracking vehicles is a difficult problem. Difficulties can occur due to abrupt vehicle movement, changes in the appearance of the vehicle and the scene, non-rigid vehicle structures, occlusions between vehicles or between a vehicle and the scene, and camera shake.
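One simple way to maintain vehicle identity between frames, under the difficulties listed above, is nearest-neighbour association of detection centroids. The paper does not specify its association method, so the following is only a hedged illustration of the idea (the function name, the distance gate of 50 pixels and the greedy matching strategy are all assumptions):

```python
import math

def associate(prev, curr, max_dist=50.0):
    """Greedy nearest-neighbour association of detections between frames:
    each current centroid is matched to the closest unmatched previous one,
    provided the distance stays under a plausibility gate."""
    matches, used = {}, set()
    for j, c in enumerate(curr):
        best, best_d = None, max_dist
        for i, p in enumerate(prev):
            if i in used:
                continue
            d = math.dist(p, c)      # Euclidean distance between centroids
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            matches[j] = best
            used.add(best)
    return matches

# Two vehicles move slightly between frames; identities are preserved.
print(associate([(10, 10), (100, 100)], [(12, 11), (98, 103)]))
```

A current detection with no match inside the gate would start a new track; a previous track with no match would be a candidate for deletion.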

Counting of vehicles
This system captures vehicle images via the webcam connected to the microcontroller through the USB host, and the images are processed by image processing techniques. Here we use the OpenCV library to detect a frontal vehicle image using a Haar cascade function. If the geometry of a vehicle is recognized, a rectangular box appears on the monitor and a counter is incremented. The identified images are sent to the Raspberry Pi, where various operations can be performed on them. In this way, we implement a web monitoring system using a video camera and image processing algorithms to monitor traffic fluency by counting the number of vehicles, as described in Figure 4.
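A common way to turn per-frame detections into a running count, and to avoid counting the same vehicle twice, is a virtual counting line: the counter increments only when a tracked centroid crosses it. The paper does not detail its counter logic, so this short Python sketch is only one plausible implementation (the function name, line position and the assumption that the two centroid lists are already in tracked correspondence are illustrative):

```python
def update_count(count, prev_centroids, centroids, line_y=100):
    """Increment the counter for every tracked vehicle whose centroid
    crosses the virtual counting line between two consecutive frames.
    Assumes prev_centroids[i] and centroids[i] refer to the same vehicle."""
    for (_, py), (_, cy) in zip(prev_centroids, centroids):
        if py < line_y <= cy:      # crossed the line moving downward
            count += 1
    return count

# One of two tracked vehicles crosses the line at y = 100 this frame.
prev = [(50, 90), (60, 40)]
curr = [(52, 105), (61, 60)]
print(update_count(0, prev, curr))   # 1
```

A second line, or a check in the opposite direction, would count traffic on the other side of the road separately.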

System implementation and data analysis
In this IoT project, we use a Raspberry Pi, a webcam and ThingSpeak to analyze traffic on a busy highway. We deploy a traffic monitoring algorithm to the Raspberry Pi device, and we analyze and visualize the traffic patterns with ThingSpeak, a cloud data aggregator. This project stores data in channel 48629 on ThingSpeak. [9] For this project, we show how to develop analytics for the edge device and how to perform exploratory analysis on data collected in the cloud. We also illustrate a simple example of how to perform automated online analysis in the cloud. The example uses ThingSpeak and MATLAB to perform the analyses. We constructed the traffic monitor using a Raspberry Pi 3 and a USB webcam. The webcam, a Tracer HD Pro, was mounted on a flexible mini tripod. We placed the camera near a window on the 2nd floor of our university building that overlooks the street and angled it to have a clear view of both sides of the street. The camera was connected to one USB port of the Raspberry Pi, and the Raspberry Pi was connected to the wireless network in the building. The complete parts list is shown below:
 Raspberry Pi 3 Model B ARM v7 with 1 GB RAM
 5 V 3 A switching power supply with MicroUSB cable
 Tracer HD Pro webcam
 Monoprice flexible mini tripod
 WiFi for internet connectivity
Because we did not want to send high-bandwidth video images to the cloud, we chose to detect the vehicles at the edge using the processor on the Raspberry Pi 3. We then send the count value to the data aggregator at an update rate of once every 15 seconds, the maximum data rate allowed by ThingSpeak. To develop the traffic monitoring algorithm, we used Simulink, Image Processing Toolbox, Computer Vision System Toolbox and the Simulink Support Package for Raspberry Pi Hardware. Simulink is a modeling environment that can automatically generate code that runs on an embedded controller. In this example, Simulink generates code that runs on the Raspberry Pi 3.
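The 15-second channel update described above is, on the wire, a single HTTP request to ThingSpeak's update endpoint carrying the write API key and the count in `field1`. The sketch below builds that request URL in Python for illustration (the deployed system generates this traffic from Simulink; the function name and the placeholder key are assumptions):

```python
from urllib.parse import urlencode

THINGSPEAK_UPDATE = "https://api.thingspeak.com/update"

def build_update_url(write_api_key, vehicle_count):
    """Build the channel-update request the edge node issues once
    every 15 seconds (ThingSpeak's minimum update interval)."""
    query = urlencode({"api_key": write_api_key, "field1": vehicle_count})
    return f"{THINGSPEAK_UPDATE}?{query}"

# The device would fetch this URL (e.g. with urllib.request.urlopen)
# after each counting window; the key below is a placeholder.
url = build_update_url("YOUR_WRITE_KEY", 7)
print(url)
```

Keeping only the count, not the video, in these requests is what makes the low-bandwidth edge-processing design workable.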
The Simulink model for the traffic monitoring algorithm is shown below in Figure 5.

Fig. 5. Simulink traffic monitoring algorithm
To develop the algorithm, we used the external mode capability of Simulink. In this mode, Simulink gathers the video stream from the Raspberry Pi, and the user can view the video on an external monitor using the SDL Video Display block while the algorithm is running.
In the context of efforts to reduce the impact of new technologies on the environment, it is proposed to use regenerative energy as a power source. Figure 6 shows the entire vehicle counting system using alternative energy (solar energy in this case) as the power source of the Raspberry Pi system. The block diagram of the system prototype shows a Raspberry Pi development board, powered by a photovoltaic source, connected to a webcam through a USB port. Using the classic internet connection infrastructure, this development board connects to a ThingSpeak server and transmits data in real time. The data can be viewed as graphs on another device connected anywhere in the world, as in Figure 7. This detector will not be 100% accurate, so a classifier based on a convolutional neural network can be introduced to improve the quality of detection, due to the ability of this class of algorithms to capture strong nonlinear relationships between inputs and outputs. The network will have an empirically chosen architecture and will be trained with images extracted from data sets published by researchers who have studied the problem of vehicle detection in real environmental and real-time conditions. The decision component for the detection of a vehicle will use camera images, which will be subjected to a series of transformations so that the information in the image reaches the classifier. The image resulting from the transformation chain will be provided to a convolutional neural network for classification and, finally, the decision is made to mark a particular vehicle in the image. In order to facilitate the testing of various architectures for the convolutional neural network, a TensorFlow-based micro-infrastructure can be built to train such networks using a proprietary data set.
The part that effectively triggers the testing process defines the neural network using two native Python dictionaries, without making any changes to the Python code responsible for defining the network architecture. Based on these two dictionaries, the network is actually built for training. This micro-infrastructure is capable of generating graphs with reports from the training process. [10-12] In this paper, we demonstrate how to develop a traffic monitoring algorithm that can be deployed onto a Raspberry Pi edge node and sends data to ThingSpeak. We showed how to use MATLAB for offline analysis by retrieving data from ThingSpeak and analyzing and visualizing daily and weekly traffic patterns. We also showed how to perform custom online visualizations inside the ThingSpeak web service by using the MATLAB Visualizations App to create a color-coded live traffic indicator that updates continuously as traffic data arrives. Data retrieved from the ThingSpeak server can be used to create a knowledge base for the self-learning system. Due to the integration of all MATLAB functionalities within this service, various simulations can be made and a complex database can be managed. Tests confirm that the Raspberry Pi development platform is suitable for a self-learning system, offering enough computational power and versatility. [13]
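The dictionary-driven network definition described above can be illustrated with a minimal sketch. The paper does not publish its dictionary schema, so everything below (key names, layer types, the `build_network` helper) is a hypothetical reconstruction of the idea: the architecture lives in data, and the builder code never changes when a new architecture is tested:

```python
# Hypothetical schema: one dict maps layer names to their parameters,
# a second structure fixes the order in which layers are stacked.
layers_def = {
    "conv1": {"type": "conv", "filters": 16, "kernel": 3},
    "pool1": {"type": "maxpool", "size": 2},
    "fc1":   {"type": "dense", "units": 2},   # vehicle / non-vehicle
}
order = ["conv1", "pool1", "fc1"]

def build_network(layers_def, order):
    """Expand the two dictionaries into an ordered layer-specification list,
    the kind of structure a TensorFlow-based builder could then realize."""
    return [dict(name=name, **layers_def[name]) for name in order]

net = build_network(layers_def, order)
print([layer["type"] for layer in net])
```

Trying a new architecture then means editing only the two dictionaries, which is what makes such a micro-infrastructure convenient for architecture search.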

Conclusion
A Raspberry Pi system and a USB camera are used to detect, track and count vehicles in real time. The density of vehicles circulating on a particular road is determined in real time.
The results of the proposed method, with regard to accuracy and required time, are better than those of other methods. Due to the static IP address assigned to the Raspberry Pi system, we can communicate with other remote computers. The performance of the proposed system is 5% to 10% better than that of other methods or systems in use, and it has been found that the cost of the proposed system is much lower than that of existing systems. Vehicle detection and tracking by the system are reliable. The proposed method considers only the color characteristics of the vehicle, which makes it suitable to replace existing systems. The number of vehicles present in the video is calculated in real time. The experimental results, demonstrated on a pre-existing data set with different camera angles, rear views of the vehicles and different camera heights, show the flexibility and good precision of the proposed method. This system uses modern communications equipment to periodically read and transmit parameters such as road temperature, humidity, wind intensity and vehicle weight using different types of sensors. Introducing all the functionalities specific to self-learning will have the main effect of improving the results of the entire system, offering adaptability to possible future changes.