Research on software credibility algorithm based on deep convolutional sparse coding

Building on the author's research, this paper studies a software credibility algorithm based on deep convolutional sparse coding. It first summarizes convolutional sparse coding and the trust classification system, and then constructs the algorithm from two aspects: factor processing based on a deep convolutional neural network, and trust classification based on sparse representation.


Introduction to related technologies

Convolutional sparse coding
Convolutional sparse coding is an unsupervised learning method built on the convolutional neural network structure, and it can be divided into two parts: sparse coding and the convolutional network itself.

The first part is sparse coding. Its main feature is to map the original feature space to a sparse representation, which improves performance in computer-vision tasks. When sparse coding is applied to software credibility, however, a specific sample must be supplied so that it can be expressed as a linear combination of atoms from an overcomplete dictionary; this requires the convolutional neural network to provide the corresponding spatial location information of the input data.

The second part is the convolutional network structure, which contains three types of layers: pooling layers, convolutional layers and fully connected layers. A convolutional network handles the spatial location information of the input data well, and its three layer types carry out the three steps of representing the input sample's features, translating that representation, and extracting abstract features. When applied to factor processing, it also gains speed from the rectified linear units (ReLU) of the deep network. The convolutional layers can additionally construct two different modes of nonlinear mapping to complete feature conversion, and the notion of time can be introduced so that data from multiple platforms are processed by the network, letting the time domain and the spatial domain complement each other.
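The dictionary-coding step described above can be illustrated with a minimal sketch. The example below solves the standard l1-regularized sparse coding problem with ISTA (iterative soft thresholding); the dictionary, signal, and all parameter values are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.05, n_iter=200):
    """Solve min_z 0.5*||x - D z||^2 + lam*||z||_1 by iterative soft thresholding."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)               # gradient of the data-fit term
        z = z - grad / L                       # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return z

# Overcomplete dictionary: 8-dimensional signals, 16 unit-norm atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 16))
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 3] - 1.5 * D[:, 10]             # signal built from two atoms
z = ista_sparse_code(D, x)
```

With a small penalty the recovered code concentrates on the two atoms that generated the signal, which is the "linear combination over an overcomplete dictionary" behaviour the text relies on.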

Trust classification system
The trust classification system can be divided into a detection module, an alignment module and an identification module. The detection module tests reliability and collects basic reliability information, laying the foundation for the subsequent alignment and identification. The alignment module works on the reliability factor and the implementation template, preventing the reliability factor from being distorted by a series of spatial-variation effects. The identification module then recognizes the reliability factor to complete identification. From these three modules it can be seen that the tasks of trust classification are mainly reliability identification and reliability verification, and a trust classification system based on convolutional sparse coding handles such tasks well. The test library used in this experiment is an international open test library, which guarantees comparability between different algorithms and therefore an intuitive comparison. The main test platform is the AR data set, on which the robustness and recognition performance of the trust classification algorithm based on deep convolutional sparse coding are studied.
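The three-module pipeline can be sketched as three chained functions. Everything below is a hypothetical illustration of the detect / align / identify flow: the statistics collected, the template format, and the threshold are all invented for the example.

```python
import numpy as np

def detect(sample):
    """Detection: collect basic reliability information (here, summary statistics)."""
    return {"mean": float(np.mean(sample)), "std": float(np.std(sample))}

def align(info, template):
    """Alignment: normalise the factor against a reference template so that
    spatial-variation effects do not distort the later comparison."""
    return (info["mean"] - template["mean"]) / (template["std"] + 1e-9)

def identify(score, threshold=2.0):
    """Identification: accept the factor as trusted if it stays within
    `threshold` template standard deviations (hypothetical rule)."""
    return abs(score) <= threshold

template = {"mean": 0.0, "std": 1.0}            # assumed reference template
sample = np.array([0.1, -0.2, 0.05, 0.0])
trusted = identify(align(detect(sample), template))
```

The point of the sketch is the data flow: detection produces raw reliability information, alignment maps it into the template's frame, and identification makes the final trusted/untrusted decision.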

Factor processing based on deep convolution neural network
The typical convolutional neural network can be divided into three parts: feature extraction, class-label prediction, and a cross-entropy supervision function. In a traditional classification task, the category of the sample under test does not exceed the scope of the training data and the label-prediction part is available, so it is a closed-set test. The typical convolutional network structure is therefore particularly suitable for classification tasks and belongs to the family of end-to-end models, in which the learned deep features are separable and can be distinguished by classifiers.
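The three-part structure can be made concrete with a toy forward pass: convolution extracts features, pooling and ReLU reduce them, and a fully connected layer produces the closed-set class scores. The 1-D input, kernel, and fully connected weights are all made-up values for illustration only.

```python
import numpy as np

def conv1d(x, w):
    """Valid 1-D convolution (correlation): the feature-extraction stage."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy end-to-end pass: conv -> ReLU -> pool -> fully connected -> class scores.
x = np.array([0.0, 1.0, 2.0, 1.0, 0.0, -1.0])
w = np.array([1.0, 0.0, -1.0])                  # edge-detecting kernel
features = max_pool(relu(conv1d(x, w)))
W_fc = np.array([[1.0, -1.0], [-1.0, 1.0]])     # hypothetical FC weights, 2 classes
probs = softmax(features @ W_fc)                # closed-set class probabilities
```

In a real network the convolution is 2-D and repeated over many channels, but the division of labour between the three parts is exactly the one described above.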

Trustworthiness rating
All factors pass through a reliability test and reliability key-point detection, after which five key points are calibrated by a rigid-body transformation. If detection fails, a training factor is simply discarded, while a test factor is used with the key points provided. For the trust classification task, the class-label prediction module is no longer available, so the discriminability of the deep features learned by the convolutional network becomes very important; however, little existing work states this concept explicitly.

Separability and discriminability
The separability of features means that features of different classes can be distinguished by a classifier, which may be linear or nonlinear. The discriminability of features means that class membership can be judged from relationships between the features themselves, such as distance. In the ideal case, when every intra-class distance is smaller than every inter-class distance, a nearest-neighbor classifier alone suffices to distinguish the features.
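The ideal case at the end of the paragraph can be demonstrated directly: with tight, well-separated clusters (every intra-class distance smaller than every inter-class distance), a plain 1-NN rule is enough. The gallery coordinates below are invented for the demonstration.

```python
import numpy as np

def nearest_neighbor(x, gallery, labels):
    """1-NN: assign the label of the closest gallery feature."""
    d = np.linalg.norm(gallery - x, axis=1)
    return int(labels[int(np.argmin(d))])

# Two tight, well-separated clusters: intra-class distances (~0.1) are all
# smaller than inter-class distances (~7), so discriminable features need
# nothing more than nearest neighbor.
gallery = np.array([[0.0, 0.0], [0.1, 0.0],     # class 0
                    [5.0, 5.0], [5.1, 5.0]])    # class 1
labels = np.array([0, 0, 1, 1])
pred = nearest_neighbor(np.array([0.05, 0.1]), gallery, labels)
```

When features are merely separable but not discriminable, the clusters overlap in distance and a trained (possibly nonlinear) classifier is needed instead.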

Trust classification based on sparse representation
Recognition based on sparse representation is popular mainly because of its good performance in feature extraction and its robustness to occlusion. This has led to increasingly deep discussion of trust classification with sparse representation, and this study summarizes and analyzes that line of research.

Identification process
Trust classification based on sparse representation requires a corresponding reliability gray factor, which is arranged into vectors. The vectors cover N training sets and k classes; the training samples of each class are combined into the corresponding vectors to obtain a class-specific sub-dictionary, and all sub-dictionaries arranged together form the global dictionary. The object model of human factors is special in that it can be expressed as a linear combination of test factors. Reliability presents different states in different time periods, so attention must be paid to the choice of construction method when selecting algorithms; commonly used construction methods include the Delphi method, neural networks, time series, machine learning, regression analysis, the elasticity coefficient method, and grey prediction. Because sparse representation performs well in feature extraction and is robust to occlusion, it is used here to build the model of the trust classification process.
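The sub-dictionary scheme above can be sketched as follows. This is a simplified stand-in: it scores each class by the least-squares residual against its sub-dictionary rather than solving the full l1 coding problem over the global dictionary, and the dictionaries and test vector are synthetic.

```python
import numpy as np

def src_classify(sub_dicts, y):
    """Classify y as the class whose sub-dictionary reconstructs it with the
    smallest residual (least-squares stand-in for the l1 coding step)."""
    residuals = []
    for D_k in sub_dicts:
        coef, *_ = np.linalg.lstsq(D_k, y, rcond=None)
        residuals.append(float(np.linalg.norm(y - D_k @ coef)))
    return int(np.argmin(residuals))

rng = np.random.default_rng(1)
# Two classes, each with 3 training vectors of dimension 10 stacked as the
# columns of its sub-dictionary; together they form the global dictionary.
D0 = rng.standard_normal((10, 3))
D1 = rng.standard_normal((10, 3))
y = D1 @ np.array([0.5, -1.0, 2.0])   # test factor: combination of class-1 atoms
label = src_classify([D0, D1], y)
```

Because the test factor lies exactly in the span of the class-1 sub-dictionary, its residual for that class is zero and the classifier returns class 1; full sparse-representation classification additionally enforces sparsity across classes, which is what gives it occlusion robustness.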

Feature extraction and construction
The experimental data come from a large-scale international open reliability test set, and the work mainly concerns the operations performed after reducing the dimensionality of the high-dimensional feature space of the reliability factor. Concretely, the feature vectors are formed into a projection matrix, and the sample time gives the recognition performance on the AR reliability database at different factor resolutions. The operation layers include fully connected layers, convolutional layers, nonlinear unit layers, a penalty function layer, local response normalization layers, and pooling layers. The network used is AlexNet, which comprises five convolutional layers, three fully connected layers, and three pooling layers, together with the associated ReLU nonlinearity and local response normalization layers.
Based on the above operation layers, the trust classification steps with sparse representation in this experiment are as follows. First, the topological structure of the convolutional-network prediction model is analyzed and set among the candidate construction models; its structural characteristic is that the input layer, hidden layer, and output layer together complete a mapping from N dimensions to M dimensions. Second, the number of hidden-layer neurons and the maximum number of iterations are set: following the reliability prediction model for environmental pollution, the maximum number of iterations is set to 300 and the number of hidden-layer neurons to 150. For numerical optimization, the L-BFGS algorithm, which performs well on large-scale problems, is selected. Finally, the parameters are substituted according to the relevant topological structure, where Xn denotes the corresponding characteristic variable, Y the gray factor of sparse recognition, and Wn the relevant threshold.
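The training setup described above (single hidden layer, L-BFGS optimizer, iteration cap) can be sketched with `scipy.optimize.minimize`. The data, target function, and hidden width are toy assumptions; the paper's configuration of 150 neurons and 300 iterations is scaled down here except for the iteration cap.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(40, 1))
y = np.sin(2.0 * X[:, 0])                 # toy regression target

H = 10  # hidden neurons (150 in the configuration described in the text)

def unpack(theta):
    W1 = theta[:H].reshape(1, H)
    b1 = theta[H:2 * H]
    W2 = theta[2 * H:3 * H].reshape(H, 1)
    b2 = theta[3 * H]
    return W1, b1, W2, b2

def predict(theta, X):
    W1, b1, W2, b2 = unpack(theta)
    hidden = np.tanh(X @ W1 + b1)         # N-dim input -> hidden layer
    return (hidden @ W2)[:, 0] + b2       # hidden layer -> M-dim output

def loss(theta):
    return float(np.mean((predict(theta, X) - y) ** 2))

theta0 = 0.1 * rng.standard_normal(3 * H + 1)
res = minimize(loss, theta0, method="L-BFGS-B",
               options={"maxiter": 300})  # iteration cap from the text
mse = loss(res.x)
```

L-BFGS-B is the bound-constrained variant available in SciPy; it approximates the Hessian from a short history of gradients, which is why it scales well to the larger parameter counts implied by 150 hidden neurons.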

Forecast results
The activation function maps a neuron's input to its output. Commonly used activation functions include the Tanh, Logistic, ReLU, and Identity functions. In this study, the four activation functions are used in turn to evaluate the models, in order to find the most suitable prediction model. Four measures are used to evaluate the models: RMSE (root mean square error), MSE (mean square error), MAE (mean absolute error), and R2 (goodness of fit). RMSE measures the deviation between the observed values and their true values, or between the observed and simulated values.
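The four evaluation measures have standard definitions and can be computed directly; the observed and predicted values below are invented to show the calculation.

```python
import numpy as np

def rmse(y, yhat):
    """Root mean square error: sqrt of the mean squared deviation."""
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mse(y, yhat):
    """Mean square error."""
    return float(np.mean((y - yhat) ** 2))

def mae(y, yhat):
    """Mean absolute error."""
    return float(np.mean(np.abs(y - yhat)))

def r2(y, yhat):
    """Goodness of fit: 1 - residual sum of squares / total sum of squares."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)

y    = np.array([1.0, 2.0, 3.0, 4.0])     # observed values (illustrative)
yhat = np.array([1.1, 1.9, 3.2, 3.8])     # model predictions (illustrative)
scores = {"RMSE": rmse(y, yhat), "MSE": mse(y, yhat),
          "MAE": mae(y, yhat), "R2": r2(y, yhat)}
```

RMSE and MSE penalize large deviations more heavily than MAE, while R2 is scale-free, which is why the four are typically reported together when comparing activation functions.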

Summary
In summary, sparse representation and convolutional networks are important tools in the field of trust classification. Combining them can not only mine the latent relationships among existing data during trust classification, but also construct the related prediction models quickly. Of course, this experiment has many shortcomings: although the problems of applying sparse representation and convolutional neural networks to trust classification are analyzed, the analysis is still not deep enough. We hope to continue studying the open problems in the field of trust classification and to contribute to the development of related fields in China.