Title: Relational Markov Models and their Application to Adaptive Web Navigation
1. Relational Markov Models and their Application to Adaptive Web Navigation
- Corin Anderson, Pedro Domingos, Daniel Weld
2. Review: Hidden Markov Models
- Sequence of States
- Transition distribution to the next state depends only on the current state (in the first-order case)
- Each state emits an evidence variable whose distribution depends only on that state
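A minimal sketch of the first-order assumption in code; the states and transition probabilities here are invented for illustration and are not from the paper:

```python
import random

# First-order Markov assumption: the distribution over the next state
# depends only on the current state, not on earlier history.
# Hypothetical states and probabilities, for illustration only.
transitions = {
    "main_page": {"product_page": 0.7, "main_page": 0.3},
    "product_page": {"main_page": 0.4, "product_page": 0.6},
}

def next_state(current):
    """Sample the next state given only the current state."""
    states = list(transitions[current])
    weights = list(transitions[current].values())
    return random.choices(states, weights=weights, k=1)[0]
```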
3. DBNs: A Step Up
- Multiple nodes at each time step in the sequence
- Object-oriented view
4. Complaint 1: Too General!
- Absolute restriction imposed on the structure of the state
- Could individual state structure be exploited?
- Adopt a polymorphic view
5. Complaint 2: Not General Enough!
- Accurate probability estimation is difficult with sparse data
  - P(Mac_instock | main_page) = 0
  - P(apple | main_page) = 0.375
Web log counts for pages linked from main_page.html:

  Ipod_instock.html      1
  Ipod_backorder.html    1
  Mac_instock.html       0
  Mac_backorder.html     1
  Dell_instock.html      2
  Dell_backorder.html    0
  Compaq_instock.html    2
  Compaq_backorder.html  1
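The sparse-data estimates above can be reproduced directly from the counts; here the apple abstraction is taken to cover the iPod and Mac pages, as the hierarchy on the following slides suggests:

```python
from collections import Counter

# Transition counts out of main_page.html, copied from the web log above.
counts = Counter({
    "Ipod_instock.html": 1, "Ipod_backorder.html": 1,
    "Mac_instock.html": 0, "Mac_backorder.html": 1,
    "Dell_instock.html": 2, "Dell_backorder.html": 0,
    "Compaq_instock.html": 2, "Compaq_backorder.html": 1,
})
total = sum(counts.values())  # 8 observed transitions

# Ground-state estimate: never observed, so it comes out exactly 0.
p_mac_instock = counts["Mac_instock.html"] / total

# Abstraction-level estimate: all iPod and Mac pages together.
p_apple = sum(n for page, n in counts.items()
              if page.startswith(("Ipod", "Mac"))) / total  # 3/8
```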
6. Solution: RMM
- Represent sets of states as a relation (predicate), with instantiations of the relation defining specific states
  - E.g. product_page(mac, in_stock), main_page()
- Transitions can now occur from sets to sets and from sets to states
7. Ta-Da!
8. Going further . . .
- Using predicates defines a basic hierarchy over the state space
- Why not impose further structure within each predicate argument?
- Tree leaves correspond to unique arguments
9. Again, Ta-Da!
10. Sets of sets
- Define the set of abstractions of a given state q = R(δ_1, ..., δ_n) to be the set of states in which each argument to R is an ancestor (in its abstraction tree) of the corresponding argument in q
- More formally (scientists love fancy math): R is the relation, Q is the set of all possible instantiations of R, the d_i are the possible abstracted arguments, D_i is the abstraction tree for the i-th argument type, and the δ_i are the arguments to the predicate for the given state q
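A small sketch of enumerating the abstractions of a ground state, assuming hypothetical abstraction trees over products and stock status (the paper's actual trees may differ). Each tree is a child-to-parent map, and an abstraction replaces each argument with one of its ancestors:

```python
from itertools import product

# Hypothetical abstraction trees, written as child -> parent maps.
product_tree = {"mac": "apple", "ipod": "apple", "apple": "ANY_PRODUCT",
                "dell": "pc", "compaq": "pc", "pc": "ANY_PRODUCT"}
status_tree = {"in_stock": "ANY_STATUS", "backorder": "ANY_STATUS"}

def ancestors(node, tree):
    """The node itself plus every ancestor up to the tree root."""
    chain = [node]
    while chain[-1] in tree:
        chain.append(tree[chain[-1]])
    return chain

def abstractions(relation, args, trees):
    """All states R(d_1, ..., d_n) where each d_i is an ancestor of the
    corresponding argument of the ground state."""
    return [(relation,) + combo
            for combo in product(*(ancestors(a, t)
                                   for a, t in zip(args, trees)))]

abs_set = abstractions("product_page", ("mac", "in_stock"),
                       (product_tree, status_tree))
# 3 ancestors of mac x 2 ancestors of in_stock -> 6 abstractions
```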
11. Learning with Abstractions
- Transitions between a state and an abstraction
- In practice, we can just count instances in the data:
  a(α, β) = N(α → β) / N(α → anything)
  where a is the transition matrix, q is a ground (individual) state, and α and β are abstractions of the source and destination states
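The counting estimate can be sketched as follows; `covers`, the membership test for whether a ground state falls under an abstraction, is a placeholder supplied by the caller:

```python
def estimate(transitions, alpha, beta, covers):
    """Estimate P(beta | alpha) as N(alpha -> beta) / N(alpha -> anything)."""
    from_alpha = [(s, d) for s, d in transitions if covers(s, alpha)]
    if not from_alpha:
        return 0.0
    to_beta = [d for _, d in from_alpha if covers(d, beta)]
    return len(to_beta) / len(from_alpha)

# Toy data: 4 transitions out of "main"; covers() here is a stand-in
# that treats an abstraction as a name prefix.
trans = [("main", "mac_in"), ("main", "ipod_in"),
         ("main", "dell_in"), ("main", "mac_back")]
covers = lambda state, abstraction: state.startswith(abstraction)
p = estimate(trans, "main", "mac", covers)  # 2 of 4 land under "mac"
```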
12. Defining a mixture model
- Estimating the transition probability between ground states q_s and q_d:
  P(q_d | q_s) = Σ λ_{α,β} P(β | α)
  where each λ_{α,β} is a mixing coefficient in the range [0, 1], one per abstraction pair, such that the sum of all lambdas is 1
- Note that choosing lambda properly is crucial
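A sketch of the mixture combination; the per-level estimates and lambda values below are illustrative numbers only, not taken from the paper:

```python
def mixture_estimate(level_estimates, lambdas):
    """P(q_d | q_s) as sum_i of lambda_i * P(beta_i | alpha_i)."""
    assert abs(sum(lambdas) - 1.0) < 1e-9, "lambdas must sum to 1"
    return sum(l * p for l, p in zip(lambdas, level_estimates))

# Ground level estimates 0.0 (sparse data); coarser abstraction
# levels pull the combined estimate away from zero.
p = mixture_estimate([0.0, 0.25, 0.375], [0.2, 0.3, 0.5])
```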
13. Choosing Lambda
- There is a bias-variance tradeoff
  - Possibly high variance at the lowest abstraction level
  - High bias at the highest abstraction level
- A good choice will have two properties
  - Gets higher as abstraction depth increases
  - Gets lower as available training data decreases
- Here n is the number of transitions from α to β observed in the data, k is a design parameter that penalizes lack of data, and rank(α) is computed from the d_i, where each d_i is an argument to the relation defining α
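One way such a lambda weighting could look in code. The exact formula is not recoverable from the slides, so the shape below (rank times n / (n + k), then normalized) is an assumption that merely satisfies the two stated properties:

```python
def rank(arg_depths):
    """Assumed rank(alpha): sum of the tree depths of alpha's arguments,
    so more specific abstractions get a larger rank."""
    return sum(arg_depths)

def lambdas(levels, k=5.0):
    """levels: list of (arg_depths, n) pairs, where n is the number of
    transitions observed from alpha to beta at that abstraction level.
    Assumed weight = rank * n / (n + k), normalized to sum to 1."""
    weights = [rank(depths) * n / (n + k) for depths, n in levels]
    total = sum(weights)
    return [w / total for w in weights]
```

With equal data, the deeper abstraction gets the larger lambda; with equal depth, the level with more data dominates.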
14. Probability Estimation Tree
- With deep hierarchies and lots of arguments, considering every possible abstraction may not be feasible
- Learn a decision tree over possible abstractions
- To the left is the tree for the page product_page(mac, in_stock)
15. Empirical Evaluation
- How well do the various flavors of RMM (RMM-uniform, RMM-rank, RMM-PET) compare to traditional MMs?
- Analyzed several web logs in various domains
- Tried to predict transition probabilities between pages using varied amounts of training data
16. Results 1: KDDCup 2000
17. Results 2: CSE 142
18. Related Work: The Cube
- Exploiting structure within RMM states (DPRM)
19. Fin.