Data-Fusion-Based Detection Equipment for the Tank Active Defense System

With changes in the battlefield and developments in armor technology, the detection ability of the tank active defense system faces a serious challenge. Applying multiple sensors can improve the detection ability of the detection equipment, but the heterogeneous data produced by different sensors cannot be handled by the processor unless data-fusion technology is applied to them. A multi-sensor detection device can therefore improve the anti-jamming ability of the tank active defense system.


Introduction
With the constant changes in the battlefield and the continuous development of anti-tank weapons, tank armored vehicles relying solely on passive protection systems have been greatly challenged. To enhance protection, active protection has become a new research direction for tank armored vehicles. According to the manner of counterattack, active protection systems can be divided into two categories: soft-kill systems and hard-kill systems. The soft-kill system derails incoming missiles by means of interference, decoys, and the like, but it cannot damage missiles, especially kinetic-energy missiles. The hard-kill system damages or destroys incoming ammunition so that its power is greatly reduced or completely lost. Compared with the soft-kill system, the hard-kill system offers more reliable protection and represents the development direction of future active protection systems.

Target information identification
The main types of munitions used to attack tanks on the modern battlefield include kinetic-energy and armor-piercing rounds fired by tank guns, various types of anti-tank missiles, rockets, and terminally guided shells. For the active protection system, these munitions are the incoming targets. Since the hard-kill system uses a protective round to attack the incoming target directly, its detection device must be able to detect, identify, and track the incoming target, measure its speed, diameter, and impact point, and then transmit the target's information to the control device for the counterattack. Identifying this target information is thus a prerequisite for active protection and an important task of the detection device.

Target speed identification
The flight speed of the protective round fired by the active protection system is fixed at the scheduled detonation point, while the speed of the incoming target varies with the munition type and the attack distance.
The detection distance of the detection device can be written as l = l1 + v·t, where l1 is the interception distance, v is the speed of the incoming target, and t is the detection advance time. The detection advance time t is the sum of the signal processing time, the time during which the protective round moves in the launch tube or pod, the flight time of the protective round along its trajectory, its detonation time, and the flight time before the warhead collides with the incoming target; it is therefore a fixed value. Since the interception distance l1 must be kept within the range specified by the combat technical indicators, the detector should adjust the detection distance immediately after measuring the speed of the incoming target.
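This adjustment can be sketched as follows, assuming the relation l = l1 + v·t between detection distance, interception distance, target speed, and the fixed advance time. The function name and the numeric speeds and times are illustrative, not values from the paper.

```python
def detection_distance(intercept_dist_m, target_speed_mps, advance_time_s):
    """Detection distance = fixed interception distance l1 plus the
    distance the incoming target covers during the fixed detection
    advance time t (l = l1 + v*t)."""
    return intercept_dist_m + target_speed_mps * advance_time_s

# Illustrative only: a slow anti-tank missile versus a fast kinetic
# energy round, with the same l1 and t.
slow = detection_distance(10.0, 300.0, 0.05)   # 25.0 m
fast = detection_distance(10.0, 1700.0, 0.05)  # 95.0 m
```

A faster target forces a proportionally longer detection distance, which is why the detector must measure target speed before setting its range gate.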
At present, most detectors used in active protection systems are millimeter wave radars, which usually respond only to typical anti-tank missiles and cannot detect high-speed kinetic energy missiles.

Target bullet diameter identification
At present, the diameters of the various munitions that threaten main battle tanks range from 45 to 300 mm, a relatively large spread. The most common bullet diameters are 45 to 60 mm (kinetic energy missiles) and 100 to 170 mm (armor piercing missiles). For armor piercing missiles, the bullet diameter is related to penetration and detonation strength: with a small diameter and small penetration and detonation power, the interception distance is reduced to increase the sectional density of the interception and thereby raise the interception probability; with a large diameter and large penetration and detonation power, the interception distance should be increased. Although the interception density is then reduced, the target engagement surface becomes larger, so the effect on the interception probability is not significant.
The millimeter wave radar detectors currently in use can identify the bullet diameter of an armor piercing missile. However, the strength of the target's reflected signal does not depend entirely on the bullet diameter; it is also affected by the shape and material of the missile, its incident angle, and the nature of the coating on its surface. This makes bullet diameter identification very difficult, so the target cannot always be intercepted at the optimal distance. For kinetic energy missiles, the radar echo is weak because of the small bullet diameter, so the target may not be detected at all and active protection cannot be achieved.

Target incident angle identification
Identification of the target incident angle affects the reliability of the active protection system. When an incoming target passes through the effective detection area on a trajectory that does not hit the tank, it poses no threat, yet the detector may still produce a strong target signal. To avoid unnecessary losses, it is therefore essential that the detector recognize, judge, calculate, and process the incident angle so that the system does not launch protective munitions in such cases.
The items of target identification information above constrain and affect one another. A single millimeter wave radar sensor has limited accuracy in detecting, identifying, and tracking the target, and it is difficult to optimize the detection effect with it alone. At the same time, it is strongly affected by clouds, rain, and fog, so its all-weather capability is poor. Using a multi-sensor composite can therefore significantly improve the system's resistance to interference. In addition, the redundant information of different sensors improves the reliability of the detection device: a signal error in one sensor does not cause the entire device to fail. Multi-sensor data fusion has thus become an important topic in research on target identification and tracking technology.

Data fusion technology
Data fusion technology, also known as multi-sensor information fusion technology, originated from the needs of C3I system construction. The definition recommended by the US Department of Defense Data Fusion Test Team is: data fusion is a multi-level, multi-faceted data processing process that mainly completes the automatic detection, correlation, estimation, and combination of data from multiple information sources. Owing to the redundancy, complementarity, and timeliness of multi-sensor information, a multi-sensor information fusion system is highly robust. According to the level of information abstraction, data fusion can be divided into three levels: data layer fusion, feature layer fusion, and decision-making layer fusion.

Data layer fusion
Data layer fusion, also known as pixel layer fusion, fuses the acquired raw data directly. Synthesis and analysis are performed before the multi-source data are otherwise processed, making this the lowest level of fusion. Its main advantage is that the raw information is rich and can provide details that the other two fusion levels cannot, so its accuracy is the highest. However, the rich raw information also means that the amount of sensor data to be processed is huge, the processing cost is high, processing is time-consuming, and real-time performance is poor. Data layer fusion is usually used for multi-source image compositing, image analysis and understanding, and direct synthesis of similar radar waveforms. Its main methods are the IHS (intensity-hue-saturation) transform, the PCA transform, and the wavelet transform.
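As a minimal sketch of one of these methods, PCA-based fusion weights two registered source images by the leading eigenvector of their 2x2 covariance matrix. The pure-Python implementation below (flattened images as plain lists of equal length, assumed positively correlated) is an illustration, not the paper's implementation.

```python
import math

def pca_fusion_weights(a, b):
    """Fusion weights from the leading eigenvector of the 2x2
    covariance matrix of two flattened source images.
    Assumes the sources are positively correlated (cab > 0)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    caa = sum((x - ma) ** 2 for x in a) / n
    cbb = sum((x - mb) ** 2 for x in b) / n
    cab = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    # Larger eigenvalue of [[caa, cab], [cab, cbb]]
    lam = (caa + cbb + math.sqrt((caa - cbb) ** 2 + 4 * cab * cab)) / 2
    vx, vy = cab, lam - caa          # corresponding eigenvector
    s = vx + vy
    return vx / s, vy / s            # normalized so the weights sum to 1

def fuse(a, b):
    """Pixel-wise weighted average using the PCA weights."""
    wa, wb = pca_fusion_weights(a, b)
    return [wa * x + wb * y for x, y in zip(a, b)]
```

The source with the larger variance (more scene detail) receives the larger weight, which is the usual rationale for the PCA fusion rule.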

Feature layer fusion
Feature layer fusion belongs to the intermediate level. It first extracts feature information from the raw sensor data and then analyzes and processes that feature information synthetically. In general, the extracted feature information should be a sufficient representation or sufficient statistic of the original data. The advantages are that considerable information compression is achieved, which facilitates real-time processing, and that, because the extracted features relate directly to decision analysis, the results give the maximum information required for it. At present, most data fusion research on C3I systems is carried out at this level. Feature layer fusion methods include the D-S (Dempster-Shafer) evidence method, the voting method, and neural network methods.
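Of these, the D-S evidence method can be illustrated concretely. The sketch below combines two basic probability assignments with Dempster's rule of combination; the radar and infrared mass values are hypothetical, chosen only to show how fused belief in the "target" hypothesis can exceed either sensor's belief alone.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments, given as dicts keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2   # mass assigned to disjoint hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    k = 1.0 - conflict                # normalization constant
    return {a: w / k for a, w in combined.items()}

# Hypothetical masses over the frame {target, decoy}:
T, D = frozenset({"target"}), frozenset({"decoy"})
radar = {T: 0.6, D: 0.1, T | D: 0.3}   # T | D = ignorance ("either")
ir    = {T: 0.7, D: 0.2, T | D: 0.1}
fused = dempster_combine(radar, ir)
```

With these numbers the fused mass on "target" rises above both 0.6 and 0.7, which is exactly the redundancy benefit the text attributes to multi-sensor fusion.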

Decision-making layer fusion
Decision-making layer fusion is the highest level of fusion, and its results provide a basis for command, control, and decision making. It therefore starts from the needs of a specific decision problem, makes full use of the various kinds of feature information extracted by feature layer fusion, and adopts appropriate fusion techniques to achieve it. Decision-making layer fusion yields the final result of the three-layer fusion; it is aimed directly at specific decision objectives, and its result directly affects the quality of decision making. The main methods currently used at this layer are Bayesian estimation, expert systems, neural networks, fuzzy set theory, reliability theory, and the logical template method.

Feature layer data fusion for millimeter wave radar / infrared imaging detector
For the vast majority of radar detection systems, the data layer information can be regarded as the Doppler signal of the target and carries no imaging information. The data layer information of an infrared imaging sensor consists of its response band and the gray-level data sequence of the target. The information obtained by the two sensors at the data layer therefore does not satisfy the basic conditions of complementarity and comparability required for fusion, so fusion processing at the data layer is impossible.
At the feature layer, the information of the radar system can be characterized as target position information, distance information, line-of-sight rate, and target tracking information. Except for the distance information, the feature layer information of the infrared imaging system is essentially the same set of characteristic quantities as that of the radar system. At the decision-making layer, both represent line-of-sight-rate target tracking information. By this analysis, signal fusion for a radar / infrared imaging dual-mode sensor system satisfies the basic conditions of complementarity and comparability only at the feature layer and the decision-making layer. This paper discusses only feature layer data fusion.
Feature layer data fusion processing has the following characteristics:

The target identification and tracking feature information obtained by the radar detection system is used to improve the identification and tracking ability of the infrared imaging system
The radar detection system can obtain the predicted trajectory signal, the distance signal, and the elevation and azimuth angles of the target. The infrared imaging system can use these signals to improve the efficiency of target search considerably: the target in the infrared image is identified according to the predicted trajectory, and any object that does not conform to the prediction is considered an interference source; from the distance signal, the number of pixels the target should occupy in the infrared image is estimated, and the identification model of the infrared imaging analysis module distinguishes the target from other interference sources; from the elevation and azimuth signals, the position of the incoming projectile relative to the tank can be identified and the best attack point determined.

The target identification and tracking information obtained by the infrared imaging system is used to improve the target identification and tracking ability of the radar system
Transferring the target feature signal from the infrared imaging sensor system to the radar target identification and tracking module provides heuristic information for radar target search and identification. The velocity of the target is obtained from its trajectory in the infrared image sequence, so when the radar loses the target, the velocity signal obtained by the infrared system can still be used.
When the radar and infrared imaging sensors are tracking the target simultaneously and the target releases countermeasures, both sensors can detect the target and the interference; but because the radar receives only point-target information, it has difficulty distinguishing true from false targets in a short time. The infrared imaging sensor can quickly distinguish true from false targets from the image features of the real target and can then guide the radar to identify and track the target.

Target identification method based on feature fusion
Target identification methods based on feature fusion include a method based on an intelligent model and a method based on an artificial neural network. Both use the feature information provided by the radar and infrared imaging sensors. The whole identification process consists of three parts: signal preprocessing, image segmentation, and identification of the segmented objects.
Image segmentation extracts the bright areas of the infrared image. The methods can be roughly divided into (1) gray-threshold methods, (2) region-growing methods, (3) edge-detection methods such as the Roberts operator, (4) relaxation methods, and (5) algorithms based on mathematical morphology. Identification of the segmented objects is divided into point-target and surface-target identification according to the pixel area of the segmented object: an object whose pixel area is no larger than 3 * 3 is regarded as a point target; otherwise it is regarded as a surface target.
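The point/surface split can be sketched as follows: label the 4-connected bright regions of a binarized infrared frame and partition them by the 3 * 3 (nine-pixel) area rule. The flood-fill labeling and the 0/1 list-of-lists image format are implementation choices made for illustration, not taken from the paper.

```python
from collections import deque

POINT_AREA_MAX = 3 * 3   # up to 3x3 pixels counts as a point target

def classify_segments(img):
    """Flood-fill 4-connected bright pixels (value 1) in a 2-D list and
    split the resulting segments by pixel area into point targets and
    surface targets. Returns (point_areas, surface_areas)."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    points, surfaces = [], []
    for r0 in range(h):
        for c0 in range(w):
            if img[r0][c0] and not seen[r0][c0]:
                area, q = 0, deque([(r0, c0)])
                seen[r0][c0] = True
                while q:                       # BFS over one bright region
                    r, c = q.popleft()
                    area += 1
                    for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                        if 0 <= rr < h and 0 <= cc < w and img[rr][cc] and not seen[rr][cc]:
                            seen[rr][cc] = True
                            q.append((rr, cc))
                (points if area <= POINT_AREA_MAX else surfaces).append(area)
    return points, surfaces
```

Each returned list then feeds the corresponding identification path: the point-target rules of the intelligent model, or the surface-target classifier.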

Point target identification method
For point targets, because the feature information obtainable from image analysis is limited, target identification relies mainly on intelligent models. The intelligent model mainly comprises the empirical relationship between target distance and imaged area, the prediction of the target's motion trajectory, and the continuity of the target trajectory.
When the target is far away, there is a definite relationship between the target distance and the number of imaged pixels; this empirical relationship can be obtained through field tests. Therefore, when the target distance is known, if the pixel area of the segmented object exceeds the empirical threshold, the object is judged to be a false target; otherwise tracking and identification continue.
For a real target, the trajectory estimate obtained from the radar tracking module should be consistent with the trajectory predicted by the infrared imaging tracking module. Considering the complexity of spatial coordinate conversion and calculation, the motion-trajectory prediction of the segmented object in the infrared image is simplified to a cross division (upper left, lower left, upper right, and lower right). From the direction information of radar target tracking and the angular relations between the projectile axis, the radar electrical axis, and the optical axis, the trajectory of the target in the infrared image can be estimated. In addition, the trajectory can also be estimated from the position change of the segmented object between two successive infrared frames. If the two predicted directions are inconsistent (for example, one is upper right and the other upper left), the segmented object is judged to be a false target; otherwise tracking and identification continue.
Given the sampling frequency of the infrared imaging sensor (100 frames per second) and the continuity of the target's motion, the position change of the segmented object between two successive frames should be small (below a certain threshold). If it exceeds the threshold, the object is judged to be a false target (signal noise); otherwise tracking and identification continue.
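The three point-target rejection rules above can be sketched as one screening function. The area-limit curve, the displacement threshold, and the image-coordinate convention for the cross division are hypothetical placeholders, not values from the paper.

```python
import math

MAX_STEP_PIX = 8.0   # illustrative max plausible inter-frame displacement

def area_limit(distance_m):
    """Hypothetical empirical distance-to-pixel-area bound (rule 1);
    a real system would fit this curve from field tests."""
    return max(1.0, 4.0e5 / (distance_m * distance_m))

def quadrant(dx, dy):
    """Cross division of motion direction; assumes y grows upward."""
    return ("upper " if dy > 0 else "lower ") + ("right" if dx > 0 else "left")

def is_false_point_target(distance_m, pixel_area, radar_dir, prev_pos, cur_pos):
    """Apply the three point-target rejection rules in sequence."""
    if pixel_area > area_limit(distance_m):
        return True                    # rule 1: too large for this distance
    dx, dy = cur_pos[0] - prev_pos[0], cur_pos[1] - prev_pos[1]
    if quadrant(dx, dy) != radar_dir:
        return True                    # rule 2: trajectory quadrant mismatch
    if math.hypot(dx, dy) > MAX_STEP_PIX:
        return True                    # rule 3: inter-frame jump -> noise
    return False
```

A segment passes only if it survives all three checks; otherwise it is dropped and the tracker moves to the next candidate.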

Surface target identification method
When the target is relatively close to the infrared imaging sensor, its relative distance and motion attitude (head-on, tail-on, or side-on) strongly influence the topological shape and pixel area of the target in the infrared image, as well as the magnitude and direction of its position change between successive frames, and the relationships among these quantities are complex. It is therefore very difficult to construct an intelligent model for surface-target identification by hand.
Instead, a fault-tolerant surface-target classifier is realized by training a multi-layer feed-forward network on infrared images of targets at different relative distances and motion attitudes. The relative distance, the pixel area of the segmented object, its position change between two successive frames, and its shape-topology information form the input of a training sample, with the identification result as the expected output. The multi-layer feed-forward network learns the statistical features of the training samples; once training is complete, the network computes the identification result from the actual feature description of a segmented object.
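A minimal stand-in for such a classifier is sketched below: a one-hidden-layer feed-forward network trained by on-line backpropagation on a squared-error loss. The four-feature input layout follows the text, but the toy training data, network size, learning rate, and epoch count are all illustrative.

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mlp_init(n_in, n_hidden):
    """Small one-hidden-layer network with sigmoid units."""
    w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    return (w1, w2)

def forward(net, x):
    w1, w2 = net
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return sigmoid(sum(w * hj for w, hj in zip(w2, h))), h

def train_step(net, x, y, lr=0.5):
    """One on-line backpropagation step on the squared-error loss."""
    w1, w2 = net
    out, h = forward(net, x)
    d_out = (out - y) * out * (1.0 - out)       # output-layer delta
    for j, hj in enumerate(h):
        d_h = d_out * w2[j] * hj * (1.0 - hj)   # hidden-layer delta
        w2[j] -= lr * d_out * hj
        for i, xi in enumerate(x):
            w1[j][i] -= lr * d_h * xi
    return (out - y) ** 2

# Toy samples: [relative distance, pixel area, inter-frame shift,
# shape-topology index] -> 1 = real surface target, 0 = decoy.
# Normalized, hypothetical feature values for illustration only.
data = [([0.2, 0.8, 0.1, 0.9], 1), ([0.9, 0.1, 0.8, 0.2], 0),
        ([0.3, 0.7, 0.2, 0.8], 1), ([0.8, 0.2, 0.9, 0.1], 0)]
net = mlp_init(4, 3)
losses = [sum(train_step(net, x, y) for x, y in data) for _ in range(500)]
```

After training, `forward(net, features)[0]` gives a score in (0, 1) that can be thresholded to label a segmented object as a real surface target or a decoy.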

Conclusion
Compared with a detection device using only a single sensor, the millimeter wave/infrared imaging detection device using data fusion technology has the following features:

Reduce the implementation difficulty and workload of the target identification and tracking system
The detection device based on feature layer fusion can identify the position of the target from the combined radar and infrared information and so determine the best attack point.
Improve the target detection probability and reduce the false alarm probability
Feature layer fusion of the radar and infrared sensor signals can improve the target detection probability of each sensor module and of the detection device as a whole. Targets detected correctly by both the radar and infrared modules are of course detected correctly by the device; for targets that the radar or infrared module fails to detect because of interference, the device exploits the complementary and redundant information of the different sensors, and feature layer fusion can still detect the target correctly in most cases. The false alarm probability of each sensor module and of the device is thereby reduced, and the accuracy of tracking decisions is improved.

Improve the anti-interference capability and reliability of the detection device
Decision-making layer fusion of the radar and infrared sensor signals can improve the target tracking accuracy of the detection device and correct the servo tracking loop of a sensor module that has lost its target tracking capability because of interference, restoring that capability. When one sensor is disturbed and loses its target identification and tracking capability, the fusion decision controller can still track the target correctly using the identification and tracking decision signal of the other sensor, and it guides the disturbed sensor back onto the target so that its identification and tracking ability is restored once the interference disappears.