A new method for collaborative filtering recommendation based on DBN and HMM

The main problems of collaborative filtering are initial rating, data sparsity and real-time recommendation. A recommendation approach based on an HMM, which builds the nearest-neighbour set by modelling users' web-browsing behaviour, is a good way to solve these problems. However, the HMM and its parameters must vary constantly with customers' changing preferences, and when a new type of data arrives the HMM can only be adapted by relearning, which harms the real-time performance of recommendation. Therefore a recommendation approach based on DBN and HMM is proposed. The approach improves real-time recommendation, and experiments show that it achieves high recommendation quality.


Introduction
The main problems of collaborative filtering [1] are data sparsity, initial rating, real-time recommendation and the expansion of the data space. To solve these problems, a collaborative filtering recommendation approach based on HMM is one good solution [2].

2 A collaborative filtering prediction model based on HMM [3]

The HMM collaborative filtering model that incorporates user preferences greatly improves both the efficiency and the accuracy of the recommendation results. The HMM collaborative filtering model is defined as

P_aj = Σ_i P_i(O|λ) · A_ij,

where A_ij is the preference of user i for commodity j; commodity j is what path j represents; P_aj is the preference probability of target user a for commodity j; and P_i(O|λ) is the similarity of the nearest neighbour i.
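The HMM-based prediction described above can be sketched as follows: P_i(O|λ) is computed with the standard forward algorithm, and the weighted-sum scoring P_aj = Σ_i P_i(O|λ)·A_ij is assumed from the definitions given. All numbers (state counts, probabilities, preferences) are illustrative, not taken from the paper.

```python
def forward_likelihood(pi, A, B, obs):
    """P(O|lambda) for an HMM via the forward algorithm.
    pi: initial state distribution, A: state transitions,
    B: emission probabilities B[state][symbol], obs: observation sequence."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [
            sum(alpha[s] * A[s][t] for s in range(len(pi))) * B[t][o]
            for t in range(len(pi))
        ]
    return sum(alpha)

def predict_preferences(similarities, pref):
    """P_aj = sum_i sim_i * A_ij for each commodity j (pref[i][j] = A_ij)."""
    n_items = len(pref[0])
    return [
        sum(similarities[i] * pref[i][j] for i in range(len(pref)))
        for j in range(n_items)
    ]

# Two nearest-neighbour users, each described by a 2-state HMM over 2 symbols.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B1 = [[0.9, 0.1], [0.2, 0.8]]   # emissions of neighbour 1's HMM
B2 = [[0.5, 0.5], [0.5, 0.5]]   # emissions of neighbour 2's HMM
target_obs = [0, 1, 0]          # target user's browsing sequence

sims = [forward_likelihood(pi, A, B, target_obs) for B in (B1, B2)]
prefs = [[0.8, 0.2], [0.1, 0.9]]   # A_ij: neighbour i's preference for item j
scores = predict_preferences(sims, prefs)
print(scores.index(max(scores)))   # → 1 (the recommended commodity)
```

A real system would train one HMM per neighbour on browsing logs; here the parameters are fixed by hand to keep the sketch self-contained.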

A collaborative filtering recommendation model based on DBN
3.1 Definition of DBN model [5,6]

The probability equation of observation: Y_t = μ_t + v_t.
The equation of state: μ_t = μ_{t-1} + β_{t-1} + w_{1,t}; β_t = β_{t-1} + w_{2,t}.
The initial prior: (μ_0, β_0 | D_0) ~ N(m_0, C_0).

Here μ_t is the level of the sequence at time t, β_t is the growth of the sequence level from time t-1 to time t, v_t is a term of observation error, which follows a normal distribution with zero mean and variance V_t, and w_t is the evolution error whose covariance W_t is set through the discount factor. A first-order polynomial model with the discount factor is called a first-order polynomial discount model; in its implementation the prior covariance at each step is discounted as R_t = C_{t-1}/δ.
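The recursions of the discount model can be sketched as follows. The update equations (discounting the prior covariance as R_t = G C_{t-1} G'/δ, then applying the standard Kalman-style correction) follow the usual dynamic-linear-model treatment; the data and hyperparameters below are illustrative assumptions.

```python
# Level mu_t and growth beta_t evolve as mu_t = mu_{t-1} + beta_{t-1} + w1,
# beta_t = beta_{t-1} + w2; the evolution covariance is set implicitly by
# the discount factor delta.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def discount_dlm(ys, m0, C0, V, delta):
    G = [[1.0, 1.0], [0.0, 1.0]]      # state transition (level + growth)
    m, C = [list(m0)], C0             # posterior mean kept as a 1x2 row
    for y in ys:
        a = transpose(mat_mul(G, transpose(m)))               # prior mean
        R = [[x / delta for x in row]
             for row in mat_mul(mat_mul(G, C), transpose(G))] # discounted
        f = a[0][0]                   # one-step forecast of the level
        Q = R[0][0] + V               # forecast variance
        A = [R[0][0] / Q, R[1][0] / Q]                        # gain R F'/Q
        e = y - f                     # forecast error
        m = [[a[0][0] + A[0] * e, a[0][1] + A[1] * e]]        # posterior mean
        C = [[R[i][j] - A[i] * A[j] * Q for j in range(2)]
             for i in range(2)]       # posterior covariance
    return m[0], C

m, C = discount_dlm([10.2, 10.9, 11.5, 12.3], m0=(10.0, 0.5),
                    C0=[[1.0, 0.0], [0.0, 0.5]], V=0.5, delta=0.9)
print(m)
```

Smaller δ makes the model discount old information faster, which is what lets the recommendation model track changing user behaviour.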

The model of reasoning and learning algorithm
1) Model reasoning algorithm [7]
In the process of model reasoning we mainly calculate two parameters, λ and π, which serve the same roles as β and α do in the HMM.
Parameters λ and π can be calculated by the following formulas (assume the observation vector is e): λ(x) = P(e⁻ | X = x) and π(x) = P(X = x | e⁺), where e⁻ and e⁺ denote the evidence below and above node X in the network. The specific reasoning algorithm is as follows (Algorithm 1). For each variable X_i (post-order traversal): if j are the values of the observation vectors,

For each variable X_i (pre-order traversal), where X_p is the parent of X_i.
2) Model learning algorithm [8,9]
When the data vectors are incomplete, for any given network structure the classical EM algorithm is suitable. The training process of the algorithm is (Algorithm 2): Step E: assume the observation vector is e; the conditional probability of node C_i is given in equation (10). The probability of every solidification node can be calculated according to equation (10), using the inference algorithm derived from the junction-tree (connected-tree) algorithm.
Then calculate N_ijk, the number of times each variable X_i appears with X_i = k and parents(X_i) = j. First choose one solidification node that contains both X_i and its parent node.

Let V_ijk be the collection of solidification nodes that satisfy X_i = k and parents(X_i) = j; then N_ijk increases as training samples are added.
Step M: after calculating N_ijk, the conditional probability of each variable can be re-evaluated as shown in formula (12): P(X_i = k | parents(X_i) = j) = N_ijk / Σ_{k'} N_ijk'.
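Algorithm 2 can be sketched on a toy network with one hidden parent H and one observed child X: the E step converts each incomplete sample into expected counts N_ijk via the posterior P(H | X = x), and the M step re-normalizes them as in formula (12). The network, data and starting parameters are illustrative assumptions.

```python
p_h = {0: 0.5, 1: 0.5}                         # current estimate of P(H)
p_x_given_h = {0: {0: 0.8, 1: 0.2},            # current P(X | H)
               1: {0: 0.3, 1: 0.7}}
data = [0, 0, 1, 0, 1, 1, 1]                   # observed values of X only

for _ in range(20):                            # EM iterations
    # Step E: expected counts from the posterior P(H | X = x).
    n_h = {0: 0.0, 1: 0.0}
    n_xh = {0: {0: 0.0, 1: 0.0}, 1: {0: 0.0, 1: 0.0}}
    for x in data:
        z = sum(p_h[h] * p_x_given_h[h][x] for h in (0, 1))
        for h in (0, 1):
            w = p_h[h] * p_x_given_h[h][x] / z   # P(H=h | X=x)
            n_h[h] += w
            n_xh[h][x] += w
    # Step M: re-estimate each conditional probability from the counts.
    p_h = {h: n_h[h] / len(data) for h in (0, 1)}
    p_x_given_h = {h: {x: n_xh[h][x] / n_h[h] for x in (0, 1)}
                   for h in (0, 1)}

print(p_h, p_x_given_h)
```

In the full model each solidification node in V_ijk contributes such fractional counts, and the junction-tree pass supplies the posteriors used as weights.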

The update model based on DBN
Where A_ij is the preference of user i for commodity j; commodity j is what path j represents; P_aj is the probability that target user a likes commodity j; S_i is the similarity of the nearest neighbour i; and ψ and ζ are balance coefficients that define how much the HMM and DBN training results influence the model.
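A minimal sketch of the updated prediction, assuming it blends the two similarities linearly as P_aj = Σ_i (ψ·P_i(O|λ) + ζ·S_i)·A_ij; the exact combination rule is an assumption here, and the balance coefficients (whose symbols are garbled in the source) are written psi and zeta.

```python
def updated_scores(hmm_sim, dbn_sim, pref, psi=0.5, zeta=0.5):
    """Blend HMM similarity P_i(O|lambda) and DBN similarity S_i, then
    weight each neighbour's preferences A_ij (pref[i][j])."""
    n_items = len(pref[0])
    return [
        sum((psi * hmm_sim[i] + zeta * dbn_sim[i]) * pref[i][j]
            for i in range(len(pref)))
        for j in range(n_items)
    ]

hmm_sim = [0.11, 0.13]           # P_i(O|lambda) for two neighbours
dbn_sim = [0.30, 0.10]           # S_i from the DBN update
pref = [[0.8, 0.2], [0.1, 0.9]]  # A_ij
print(updated_scores(hmm_sim, dbn_sim, pref))
```

Raising ψ makes the dynamic (HMM) evidence dominate; raising ζ favours the DBN-updated static characteristics.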

Nearest neighbour collaborative filtering method after updating
The updated collaborative filtering recommendation model has four parts: data pre-processing, HMM filtration, the DBN updating model, and the collaborative filtering prediction model.
Figure 1 depicts its structure. In the nearest-neighbour recommendation phase, we calculate P_i(O|λ) and P_i(X_T|Y_1:T) separately, and choose X_T to maximize P(X_T|Y_1:T) and P_i(O|λ). S_i can be considered the similarity between user i and the target user, where λ is the HMM parameter set of the target user.
The N users with the highest similarity join the nearest-neighbour set. Second, the recommendation process for user-preferred commodities after updating the model:
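The neighbour-selection step can be sketched as picking the N users with the highest similarity; the similarity values and user ids below are illustrative.

```python
def nearest_neighbours(similarities, n):
    """Return the ids of the n most similar users (similarities: id -> S_i)."""
    ranked = sorted(similarities, key=similarities.get, reverse=True)
    return ranked[:n]

sims = {"user2": 0.91, "user3": 0.85, "user4": 0.40, "user5": 0.62}
print(nearest_neighbours(sims, 2))   # → ['user2', 'user3']
```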

Evaluation of experiment
We find that commodity 1001 best matches the preferences of user3, and the recommendation probability for the target user is P_user1,1001 = 0.9894. After adding user2 to the nearest-neighbour set and recalculating formula (14), commodity 1001 still best matches the preferences of user3.

Conclusion
We have mainly introduced a method of updating the collaborative filtering recommendation model based on users' access paths, which takes users' other characteristics, such as static ratings, into consideration through the high integration capability of dynamic Bayesian networks.
In this way we update the model, combine dynamic behaviour data with static characteristic data, and obtain more accurate recommendations.
constantly with the diversity of online consumer products, and the model or the model parameters should be modified accordingly. Once a previous recommendation model is formed, its parameters cannot be changed arbitrarily; thus, when a new type of data joins, the HMM can only be adapted by relearning, which harms the real-time performance of user-behaviour recommendation. It is necessary to study how to improve the adaptive capability of the recommendation model, on the basis of the previous model, by updating its structure. The dynamic Bayesian network (DBN) model, because of its flexibility in modelling, is widely used in many fusion algorithms. We can create a recommendation model based on DBN to implement network-structure learning, adding new features on the basis of the previous collaborative filtering recommendation and combining all previous training sets with new samples for learning, which both saves time and optimizes the network structure, making the recommendation model meet users' needs well.

DOI: 10.1051/ © Owned by the authors, published by EDP Sciences, 201

Step 2: Perform the collaborative filtering prediction model based on HMM (Model 1), training the model on the standard characteristics; then combine the previous model with the nearest-neighbour similarity recommended by the HMM, updating the original collaborative filtering forecast model to formula (14).

Figure 1. The updated collaborative filtering process

The experimental process is:
(1) Calculate the similarity between users on the training set and find the nearest neighbours of the users in various scenarios.
(2) Predict all items for all users, then obtain the collection of recommended results.
(3) Using the recommendation results and the real records in the test set, calculate the recommendation efficiency according to the evaluation standards.
Five parameters are used in the experiment: the access time of queries, the IP address, the URL, the browse number and the page load time. We run the experiment with data from five users (user1, user2, user3, user5). User1 is a target user with 20000 randomly generated original feature vectors (the feature vectors have been pre-processed, so there are few obvious mistakes in the random data), 10000 of which are used for training the network. The format is as follows: (2012-01-22 22:10:15 192.168.1.23 http://www.
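Pre-processing one raw record into the five features used in the experiment (access time, IP address, URL, browse number, page load time) could look like the sketch below; the record layout and the completed example URL are assumptions, since the original sample is truncated.

```python
import re

def parse_record(line):
    """Split one log line into the five experiment features."""
    m = re.match(
        r"(\S+ \S+)\s+(\d+\.\d+\.\d+\.\d+)\s+(\S+)\s+(\d+)\s+([\d.]+)", line)
    if not m:
        return None   # malformed record, skipped during pre-processing
    t, ip, url, views, load = m.groups()
    return {"access_time": t, "ip": ip, "url": url,
            "browse_number": int(views), "load_time": float(load)}

# Hypothetical record in the assumed format (URL completed for illustration).
rec = parse_record("2012-01-22 22:10:15 192.168.1.23 http://www.example.com 5 0.8")
print(rec)
```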

Figure 2. Results of experiment

Table 1. The similarity of users after model updating