Improved step tracking algorithm based on gradient method

The basic principle of a satcom-on-the-move antenna is to detect the error angle between the antenna beam pointing and the satellite direction from the received satellite signal and to convert it into a corresponding error voltage. The controller uses this error signal to drive the antenna in the direction in which the error decreases. Traditional step tracking has limited accuracy and slow response, and fluctuations in signal amplitude further degrade the tracking accuracy. Since the antenna gain pattern can be regarded as a nonlinear objective function, an improved step tracking algorithm based on the gradient method is proposed, which combines step tracking with an optimization algorithm to achieve precise tracking. The algorithm takes the antenna gain pattern as the objective function and searches for the azimuth and elevation coordinates corresponding to the maximum of the gain function. The gain-maximization problem is transformed into an objective function minimization problem; the minimum found by the optimization algorithm corresponds to the antenna direction when the satellite is accurately aligned. The gradient method is selected for the optimization because it is stable overall and its iterative process is simple. Theoretical analysis and simulation results show that the algorithm achieves higher tracking accuracy while keeping program complexity low.


Introduction
Step tracking is divided into two types: "searching" and "scanning". In "searching" stepping, the controller applies the control voltage to the antenna at a fixed step interval on the azimuth and elevation axes in turn, so that the antenna moves step by step until the maximum of the antenna beam is aligned with the target satellite [1]. In "scanning" stepping, the antenna performs a rectangular scan around its current position within the antenna beamwidth and records the signal level received at the four corner positions. By comparing these levels, the relative position of the beam maximum with respect to the target direction is obtained, and the direction of the next scanning rectangle is determined. Once the signal levels received at the four corners of the rectangle are equal, the target lies at the center of the rectangle; the antenna beam is then moved to the center of the rectangle to complete one cycle of tracking and angle measurement [2]. Both methods are simple and easy to implement, but the search path is cumbersome, and the antenna does not carry out the step-by-step search along the optimal path. In this paper we propose an improved step tracking algorithm based on the gradient method, which combines step tracking with an optimization algorithm to realize precise tracking.
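The "scanning" stepping described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the gain pattern `gain_dB`, its peak location, the rectangle half-width, and the shrink factor are all made-up assumptions.

```python
def gain_dB(az, el):
    """Hypothetical antenna gain pattern in dB, peaked at a made-up
    satellite direction az = 2.0, el = 1.0 degrees."""
    return 30.0 - 3.0 * ((az - 2.0) ** 2 + (el - 1.0) ** 2)

def scan_step_track(az, el, half_width=0.5, shrink=0.9, tol=1e-3):
    """Sketch of "scanning" stepping: sample the four corners of a
    rectangle around the current pointing, move to the strongest
    corner, and shrink the rectangle; equal corner levels mean the
    target is at the rectangle center."""
    steps = 0
    while half_width > tol:
        corners = [(az + dx, el + dy)
                   for dx in (-half_width, half_width)
                   for dy in (-half_width, half_width)]
        levels = [gain_dB(a, e) for a, e in corners]
        if max(levels) - min(levels) < 1e-9:
            break  # four equal corner levels: beam center on target
        az, el = corners[levels.index(max(levels))]
        steps += 1
        half_width *= shrink  # finer rectangle for the next scan
    return az, el, steps
```

Because each move goes to the best of four fixed corners rather than along the steepest direction, the path is longer than necessary, which is the inefficiency the gradient-based algorithm addresses.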

Problem formulation
The AGC voltage output by the satellite beacon receiver reflects the received signal strength, and the antenna gain pattern describes how this strength varies with the pointing error. The pointing error can be decomposed into two orthogonal angular components, the azimuth error Az and the elevation error El, so the total pointing error of the antenna can be expressed as

\Delta = \sqrt{Az^2 + El^2} \quad (1)

The azimuth and elevation form a rectangular coordinate system in which the antenna gain pattern is a nonlinear function of the two pointing error components. Taking the antenna gain pattern as the objective function, optimization amounts to finding the azimuth and elevation coordinates corresponding to the maximum of the gain function. By a suitable design of the objective function, the gain-maximization problem is transformed into a minimization problem; the minimum of the objective function, found by the gradient method, corresponds to the antenna pointing precisely at the satellite [3].
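The error decomposition of Eq. (1) and the maximization-to-minimization transform can be illustrated in a few lines of Python. The gain pattern `example_gain` is a made-up stand-in, not the real antenna pattern.

```python
import math

def total_pointing_error(az_err, el_err):
    """Total pointing error Delta from the two orthogonal components, Eq. (1)."""
    return math.hypot(az_err, el_err)

def objective(az, el, gain):
    """Transform gain maximization into minimization by negating the gain:
    the minimum of -gain is at the same (az, el) as the maximum of gain."""
    return -gain(az, el)

def example_gain(az, el):
    """Hypothetical gain pattern, peaked when both error components are zero."""
    return 30.0 - 3.0 * (az ** 2 + el ** 2)
```

Negating the gain is the simplest objective-function design; any strictly decreasing transform of the gain would serve equally well.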
A nonlinear function optimization algorithm is iterative: it moves from one point to the next in n-dimensional space until the program terminates. If the objective function has n independent variables, then any point in the n-dimensional space is an n-dimensional position vector. In general, an optimization algorithm must satisfy

F_{k+1} < F_k

where F_k is the objective function value at the k-th iteration. The inequality states that each iteration of the optimization algorithm must produce a strictly decreasing function value. A nonlinear objective function optimization algorithm has the iterative formula

x_{k+1} = x_k + \alpha_k p_k

where x_k is the current position, x_{k+1} is the next point at which the objective function is evaluated, p_k is the search direction for the next step, and \alpha_k is the step length along p_k, subject to the constraint

p_k = -g_k

where g_k = \nabla F(x_k), so that p_k is the negative gradient (first-derivative) direction of the function, and G_k denotes the matrix of second-order partial derivatives of the objective function at the k-th trial point.
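The descent condition F_{k+1} < F_k can be checked numerically. The sketch below runs the generic iteration x_{k+1} = x_k + alpha * p_k with p_k = -g_k on a hypothetical two-variable quadratic (the test function, start point, and step length are assumptions for illustration only).

```python
def F(x, y):
    """Made-up 2-D quadratic test function, minimum at (1, -2)."""
    return (x - 1.0) ** 2 + 2.0 * (y + 2.0) ** 2

def grad(x, y):
    """Analytic gradient of F."""
    return 2.0 * (x - 1.0), 4.0 * (y + 2.0)

def descent(x, y, alpha=0.1, iters=50):
    """Generic descent iteration x_{k+1} = x_k + alpha * p_k, p_k = -g_k,
    recording the objective value F_k at every iteration."""
    values = [F(x, y)]
    for _ in range(iters):
        gx, gy = grad(x, y)
        x, y = x - alpha * gx, y - alpha * gy   # p_k = -g_k
        values.append(F(x, y))
    return (x, y), values
```

For a fixed step length small enough relative to the curvature, the recorded sequence F_0, F_1, ... is non-increasing, which is exactly the descent condition above.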
To find the direction of fastest asymptotic convergence, the nonlinear objective function tracking algorithm should satisfy

\lim_{k \to \infty} \frac{\|x_{k+1} - x^*\|}{\|x_k - x^*\|^{o}} = c

where x^* is the minimum point of the objective function, c is a finite constant, and o is the order of convergence. In this system the best achievable behaviour is quadratic convergence, so the value of o is 2.

Algorithm description
The gradient method is also called the steepest descent method. The objective function considered here is a quadratic function of the general form

F(x) = \frac{1}{2} x^T A x + b^T x + c

where A is a symmetric matrix. The gradient and the second derivative of F(x) are

g = \nabla F(x) = Ax + b, \qquad G = \nabla^2 F(x) = A

where g is the gradient vector and G is the second-derivative (Hessian) matrix.
The gradient g_k of the objective function at x_k defines the steepest descent iteration

x_{k+1} = x_k - \alpha_k g_k

This establishes an iterative algorithm: starting from x_0 it generates a sequence x_0, x_1, x_2, ..., and it can be proved that, under certain conditions, this sequence converges to a solution x^* that minimizes F(x). This is the iterative formula of the gradient method.
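The iterative formula above can be demonstrated on a small quadratic objective. The matrix A and vector b below are made-up example values (A symmetric positive definite), not parameters from the paper; the iterate is compared against the analytic minimizer Ax* + b = 0.

```python
import numpy as np

# Example quadratic F(x) = 0.5 x^T A x + b^T x with assumed values
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([-1.0, -2.0])

def F(x):
    return 0.5 * x @ A @ x + b @ x

def grad(x):
    return A @ x + b          # g = Ax + b

x = np.zeros(2)
alpha = 0.2                   # fixed step length, below 2 / lambda_max(A)
for _ in range(200):
    x = x - alpha * grad(x)   # x_{k+1} = x_k - alpha_k g_k

x_star = np.linalg.solve(A, -b)   # analytic minimizer: A x* + b = 0
```

With the step length below 2 divided by the largest eigenvalue of A, the iterates contract toward x* at a geometric rate, so 200 iterations reach machine precision on this example.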
One of the key problems of the step tracking algorithm based on the gradient method is the selection of the step length \alpha_k. In the gradient method it is best to move from x_k along the direction p_k exactly to the minimum of F(x) along that line; in a mathematical sense, this means choosing \alpha_k such that the derivative of F(x_k + \alpha_k p_k) with respect to \alpha_k is zero, which is called a one-dimensional search along the direction p_k.
It can be deduced that the optimal step length is

\alpha_k^* = \frac{g_k^T g_k}{g_k^T G g_k}

However, computing the second-order partial derivative matrix G of F(x) is expensive, so whether the optimal step length \alpha_k^* is computed depends on the situation. When the optimal step length is not computed, the magnitude of g_k in the iterative formula only rescales \alpha_k, and \alpha_k is in practice chosen by the designer, so the magnitude of g_k can be removed by normalizing the gradient. The iteration formula of the gradient method then becomes

x_{k+1} = x_k - \alpha_k \frac{g_k}{\|g_k\|}
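The optimality of \alpha_k^* can be verified numerically: along the line x - \alpha g, no other step length yields a lower objective value. The quadratic below uses made-up example values for A and b (so G = A); the start point is also an assumption.

```python
import numpy as np

# Assumed quadratic test objective F(x) = 0.5 x^T A x + b^T x, so G = A
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([-1.0, -2.0])

def F(x):
    return 0.5 * x @ A @ x + b @ x

def optimal_step(x):
    """Exact line-search step along -g: alpha* = (g^T g) / (g^T G g)."""
    g = A @ x + b
    return (g @ g) / (g @ A @ g)

x = np.array([2.0, 2.0])
g = A @ x + b
a_star = optimal_step(x)
# The cheaper variant avoids G entirely: pick alpha by hand and use the
# normalized gradient, x_next = x - alpha * g / np.linalg.norm(g)
```

This is exactly the trade-off stated above: the exact step needs the second-derivative matrix, while the normalized-gradient variant replaces it with a designer-chosen step length.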
Mathematical software was used to program and simulate the model and the corresponding algorithm established in this paper. Figure 1 and Figure 2 are the side view and top view of the stepping path of the azimuth and elevation axes in searching-stepping automatic tracking; Figure 3 and Figure 4 are the side view and top view of the stepping path of the elevation axis in gradient-method stepping automatic tracking; Table 1 compares the number of steps and the level values of the two methods after tracking is complete (under the same decision condition). The simulation results show that the step tracking algorithm based on the gradient method searches for the target along the best path with fewer steps, and that the algorithm is simple and easy to implement.

Conclusion
When the gradient method is applied to the tracking program, the step length along the search direction must be determined by an accurate one-dimensional search. Completing such a precise line search at every iteration greatly increases the number of function evaluations per step, which conflicts with the requirement for fast convergence. The path by which the gradient method approaches the minimum point of the function is a zigzag ("Z" shape), and the closer it gets to the minimum, the smaller the step length becomes, so the iteration point moves more and more slowly [4]. Although this disadvantage exists, it is only significant in the vicinity of the minimum; overall the gradient method is stable, its iterative process is simple, its program complexity is low, and it converges to a good result even from a poor initial point.