Title: Information Retrieval and Data Mining (AT71.07), Comp. Sc. and Inf. Mgmt., Asian Institute of Technology

Slide 1: Information Retrieval and Data Mining (AT71.07)
Comp. Sc. and Inf. Mgmt., Asian Institute of Technology
Instructor: Dr. Sumanta Guha
Slide sources: Introduction to Information Retrieval book slides from Stanford University, adapted and supplemented
Chapter 6: Scoring, term weighting, and the vector space model
Slide 2: CS276: Information Retrieval and Web Search
Christopher Manning and Prabhakar Raghavan
Lecture 6: Scoring, term weighting, and the vector space model
Slide 3: This lecture; IIR Sections 6.2 - 6.4.3
- Ranked retrieval
- Scoring documents
- Term frequency
- Collection statistics
- Weighting schemes
- Vector space scoring
Slide 4: Ranked retrieval (Ch. 6)
- Thus far, our queries have all been Boolean.
  - Documents either match or don't.
- Good for expert users with a precise understanding of their needs and the collection.
- Also good for applications: applications can easily consume 1000s of results.
- Not good for the majority of users.
  - Most users are incapable of writing Boolean queries (or they are, but they think it's too much work).
  - Most users don't want to wade through 1000s of results.
    - This is particularly true of web search.
Slide 5: Problem with Boolean search: feast or famine (Ch. 6)
- Boolean queries often result in either too few (= 0) or too many (1000s) results.
- Query 1: "standard user dlink 650" → 200,000 hits
- Query 2: "standard user dlink 650 no card found" → 0 hits
- It takes a lot of skill to come up with a query that produces a manageable number of hits.
  - AND gives too few; OR gives too many.
Slide 6: Ranked retrieval models
- Rather than a set of documents satisfying a query expression, in ranked retrieval models the system returns an ordering over the (top) documents in the collection with respect to a query.
- Free text queries: rather than a query language of operators and expressions, the user's query is just one or more words in a human language.
- In principle, these are two separate choices, but in practice ranked retrieval models have normally been associated with free text queries, and vice versa.
Slide 7: Feast or famine: not a problem in ranked retrieval (Ch. 6)
- When a system produces a ranked result set, large result sets are not an issue.
  - Indeed, the size of the result set is not an issue.
  - We just show the top k (≈ 10) results.
  - We don't overwhelm the user.
- Premise: the ranking algorithm works.
Slide 8: Scoring as the basis of ranked retrieval (Ch. 6)
- We wish to return, in order, the documents most likely to be useful to the searcher.
- How can we rank-order the documents in the collection with respect to a query?
- Assign a score, say in [0, 1], to each document.
- This score measures how well document and query match.
Slide 9: Parametric and zone indexes
- Consider the query: find docs authored by Shakespeare in 1601 containing the phrase "alas poor Yorick".
- Fields can have a well-defined set of values, e.g., numeric, or character strings of fixed maximum length:
  - date field
  - author field
- Parametric search: interface to enter parameter values (field values).
Slide 10: Parametric and zone indexes
- Zones are similar to fields, except the contents of a zone can be arbitrary free text. E.g., title zone, abstract zone, body zone, ...
- Indexes for fields and zones can be:
  - Separate (drawback: larger dictionary), e.g.:

      william.author → 11 → 177 → 244 → 255
      william.title  → 11 → 134 → 244 → 255
      william.body   → 4 → 134 → 213 → 255

  - Combined (drawback: larger postings, but the dictionary is not enlarged; preferable to get a compact dictionary, e.g., to fit in main memory), e.g.:

      william → 4.body → 11.author.title → 134.title.body
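As a minimal sketch (not from the slides), the two index styles above could be held in memory roughly as follows; the dict-of-lists layout is an assumption for illustration only:

    # Separate zone indexes: one postings list per (term, zone) pair -> larger dictionary.
    separate = {
        "william.author": [11, 177, 244, 255],
        "william.title":  [11, 134, 244, 255],
        "william.body":   [4, 134, 213, 255],
    }

    # Combined index: one postings list per term; each posting also records the zones
    # in which the term occurs -> compact dictionary, larger postings.
    combined = {
        "william": [(4, {"body"}), (11, {"author", "title"}), (134, {"title", "body"})],
    }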
Slide 11: Weighted zone scoring
- Given a Boolean query q and a doc d, weighted zone scoring assigns each pair (q, d) a score between 0 and 1 by computing a linear combination of zone scores.
- Suppose docs each have l zones.
  - Let g1, g2, ..., gl be the respective zone weights, such that Σ_{i=1..l} gi = 1.
  - Let s1, s2, ..., sl be the Boolean scores for the respective zones, si being 1/0 according as q occurs/does not occur in the i-th zone of doc d.
- Then the weighted zone score is
    Σ_{i=1..l} gi·si
- E.g., for the indexes of the previous slide, if the zone weights of author, title and body are, resp., g1 = 0.2, g2 = 0.3, g3 = 0.5, then
    WEIGHTEDZONE(william, 11) = 1·0.2 + 1·0.3 + 0·0.5 = 0.5
- Here and in the following, by "zone" we mean zones and fields.
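A minimal Python sketch of the weighted zone score Σ gi·si just defined; the function name and the dict representation of zone hits are assumptions, and the zone names and weights follow the example above:

    def weighted_zone_score(zone_hits, g):
        """Weighted zone score: sum of g[z] * s[z], where s[z] is the Boolean
        score (1/0) for zone z and the weights g[z] sum to 1."""
        return sum(g[z] * (1 if zone_hits.get(z) else 0) for z in g)

    # Example from the slide: doc 11 matches "william" in author and title, not body.
    g = {"author": 0.2, "title": 0.3, "body": 0.5}
    print(round(weighted_zone_score({"author": 1, "title": 1, "body": 0}, g), 2))  # 0.5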
Slide 12: Weighted zone scoring
- Algorithm to compute the weighted zone score from two postings lists, given an AND query (Fig. 6.4):

    ZONESCORE(q1, q2)                      // Score for query q1 AND q2
        float scores[N] = [0]              // scores[] has one score entry per document, initialized to zero
        constant g[l]                      // assume g[] is initialized to the respective zone weights
        p1 ← postings(q1)                  // p1 and p2 point to the beginning of their respective postings
        p2 ← postings(q2)
        while p1 ≠ NIL and p2 ≠ NIL
        do if docID(p1) = docID(p2)
             then scores[docID(p1)] ← WEIGHTEDZONE(p1, p2, g)
                  p1 ← next(p1)
                  p2 ← next(p2)
             else if docID(p1) < docID(p2)
                  then p1 ← next(p1)
                  else p2 ← next(p2)
        return scores

- WEIGHTEDZONE(p1, p2, g) computes Σ_{i=1..l} gi·si, where si is 1 if both query terms are present in zone i, and 0 otherwise.
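The following is a runnable Python sketch of the same merge, an illustrative re-implementation rather than the book's code; the representation of a posting as (docID, set of zones) and the name zone_score are assumptions:

    def zone_score(postings1, postings2, g):
        """Merge two sorted postings lists of (docID, zones) and score docs containing
        both query terms: score = sum of g[z] over zones z that contain both terms."""
        scores = {}
        i = j = 0
        while i < len(postings1) and j < len(postings2):
            doc1, zones1 = postings1[i]
            doc2, zones2 = postings2[j]
            if doc1 == doc2:
                scores[doc1] = sum(g[z] for z in zones1 & zones2)
                i += 1
                j += 1
            elif doc1 < doc2:
                i += 1
            else:
                j += 1
        return scores

    # Hypothetical postings, sorted by docID; zone weights as in the earlier example.
    g = {"author": 0.2, "title": 0.3, "body": 0.5}
    p_william = [(4, {"body"}), (11, {"author", "title"}), (134, {"title", "body"})]
    p_hamlet  = [(11, {"title", "body"}), (134, {"body"})]
    print(zone_score(p_william, p_hamlet, g))  # {11: 0.3, 134: 0.5}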
Slide 13: Learning weights (simple machine learning)
- Assume only two zones, title (T) and body (B), with zone weights g and 1 − g, respectively.
- Example training set of human relevance judgments:

    Example  DocID d  Query    sT  sB  Judgment (human expert)  r (quantized judgment)
    f1       37       linux    1   1   Relevant                 1
    f2       37       penguin  0   1   Nonrelevant              0
    f3       238      system   0   1   Relevant                 1
    f4       238      penguin  0   0   Nonrelevant              0
    f5       1741     kernel   1   1   Relevant                 1
    f6       2094     driver   0   1   Relevant                 1
    f7       3191     driver   1   0   Nonrelevant              0

- The four possible combinations of sT and sB and the corresponding score(d, q) = g·sT(d, q) + (1 − g)·sB(d, q):

    sT  sB  Score
    0   0   0
    0   1   1 − g
    1   0   g
    1   1   1
Slide 14: Learning weights (simple machine learning)
- The squared error of the scoring function with weight g on example f is
    e(g, f) = ( r(d, q) − score(d, q) )²

    Example  d     Query    sT  sB  Score  r  e         e assuming g = 0.4
    f1       37    linux    1   1   1      1  0         0
    f2       37    penguin  0   1   1 − g  0  (1 − g)²  0.36
    f3       238   system   0   1   1 − g  1  g²        0.16
    f4       238   penguin  0   0   0      0  0         0
    f5       1741  kernel   1   1   1      1  0         0
    f6       2094  driver   0   1   1 − g  1  g²        0.16
    f7       3191  driver   1   0   g      0  g²        0.16
                                              Tot_e     0.84

    sT  sB  Score  Score assuming g = 0.4
    0   0   0      0
    0   1   1 − g  0.6
    1   0   g      0.4
    1   1   1      1
Slide 15: Learning weights (simple machine learning)
- Total error over a set of training examples:
    Tot_e = Σ_j e(g, f_j) = Σ_j ( r(d_j, q_j) − score(d_j, q_j) )²
- Goal: choose g to minimize the total error.
- Note: our example has only two zones with weights g and 1 − g, respectively! Generally, there will be l zones with weights g1, ..., gl. Same principles!

    sT  sB  Score  r  No. of examples  e
    0   0   0      0  n00n             0
    0   0   0      1  n00r             1
    0   1   1 − g  0  n01n             (1 − g)²
    0   1   1 − g  1  n01r             g²
    1   0   g      0  n10n             g²
    1   0   g      1  n10r             (1 − g)²
    1   1   1      0  n11n             1
    1   1   1      1  n11r             0

- Total error: Tot_e = (n01r + n10n) g² + (n10r + n01n)(1 − g)² + n00r + n11n
Slide 16: Learning weights (simple machine learning)
- We want to minimize the total error
    Tot_e = (n01r + n10n) g² + (n10r + n01n)(1 − g)² + n00r + n11n
- Differentiating w.r.t. g:
    d(Tot_e)/dg = 2(n01r + n10n) g − 2(n10r + n01n)(1 − g)
- Find the minimum by solving
    2(n01r + n10n) g − 2(n10r + n01n)(1 − g) = 0
  ⇒ (n10r + n10n + n01r + n01n) g = n10r + n01n
  ⇒ g = (n10r + n01n) / (n10r + n10n + n01r + n01n)
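As a minimal sketch, the closed-form estimate of g above can be computed directly from counts of the training examples; the function name optimal_g, the (sT, sB, r) triple format, and the small data set below are illustrative assumptions (they are not the training set of Fig. 6.5, which Exercise 6.5 asks you to work through):

    def optimal_g(examples):
        """Closed-form optimal title weight g = (n10r + n01n) / (n10r + n10n + n01r + n01n),
        counting training examples (sT, sB, r) in which the two zone scores differ.
        Examples with sT == sB do not involve g and are ignored."""
        n10r = sum(1 for sT, sB, r in examples if sT == 1 and sB == 0 and r == 1)
        n10n = sum(1 for sT, sB, r in examples if sT == 1 and sB == 0 and r == 0)
        n01r = sum(1 for sT, sB, r in examples if sT == 0 and sB == 1 and r == 1)
        n01n = sum(1 for sT, sB, r in examples if sT == 0 and sB == 1 and r == 0)
        return (n10r + n01n) / (n10r + n10n + n01r + n01n)

    # Hypothetical training set of (sT, sB, r) triples.
    print(optimal_g([(1, 0, 1), (1, 0, 0), (0, 1, 1), (0, 1, 0), (1, 1, 1)]))  # 0.5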
Slide 17: Exercises
- Exercise 6.1: When using weighted zone scoring, is it necessary for all zones to use the same match function?
- Exercise 6.2: If the author, title and body zones have weights g1 = 0.2, g2 = 0.31 and g3 = 0.49, what are all the distinct score values a doc may get?
- Exercise 6.3: Rewrite the algorithm in Fig. 6.4 for the case of more than two query terms, viz., q1 AND q2 AND ... AND qm.
- Exercise 6.4: Write pseudocode for the function WEIGHTEDZONE for the case of two postings lists in Fig. 6.4.
- Exercise 6.5: Apply Eq. 6.6 to the sample training set in Fig. 6.5 to estimate the best value of g for this example.
- Exercise 6.6: For the value of g estimated in Exercise 6.5, compute the weighted zone score of each (query, doc) example. How do these scores relate to the relevance judgments of Fig. 6.5 (quantized to 0/1)?
- Exercise 6.7: Why does the expression for g in Eq. 6.6 not involve the training examples in which sT(d, q) and sB(d, q) have the same value?
Slide 18: Query-document matching scores (Ch. 6)
- We need a way of assigning a score to a query/document pair.
- Let's start with a one-term query.
- If the query term does not occur in the document, the score should be 0.
- The more frequent the query term in the document, the higher the score (should be).
- We will look at a number of alternatives for this.
Slide 19: Take 1: Jaccard coefficient (Ch. 6)
- A commonly used measure of the overlap of two sets A and B:
    jaccard(A, B) = |A ∩ B| / |A ∪ B|
- jaccard(A, A) = 1
- jaccard(A, B) = 0 if A ∩ B = ∅
- jaccard(B, A) = jaccard(A, B)
- A and B don't have to be the same size.
- Always assigns a number between 0 and 1.
- Exercise: A = {1, 2, 3, 4}, B = {1, 2, 4}, C = {1, 2, 4, 5}. Calculate jaccard(A, B), jaccard(B, C), jaccard(A, C).
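A minimal Python sketch of the Jaccard coefficient as defined above (set inputs assumed; the example values are illustrative, not the exercise's answer):

    def jaccard(a, b):
        """Jaccard coefficient |A ∩ B| / |A ∪ B| of two sets."""
        if not a and not b:
            return 0.0
        return len(a & b) / len(a | b)

    # For query-document matching, query and document are each treated as a set of terms.
    print(jaccard({1, 2, 3}, {2, 3, 4}))  # 2/4 = 0.5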
Slide 20: Jaccard coefficient: scoring example (Ch. 6)
- Exercise: What is the query-document match score that the Jaccard coefficient computes for each of the two documents below?
  - Query: ides of march
  - Document 1: caesar died in march
  - Document 2: the long march
Slide 21: Issues with Jaccard for scoring (Ch. 6)
- It doesn't consider term frequency (how many times a term occurs in a document).
- Rare terms in a collection are more informative than frequent terms. Jaccard doesn't consider this information.
- We need a more sophisticated way of normalizing for length.
Slide 22: Recall (Lecture 1): Binary term-document incidence matrix (Sec. 6.2)
- Each document is represented by a binary vector ∈ {0, 1}^|V|.
Slide 23: Term-document count matrices (Sec. 6.2)
- Consider the number of occurrences of a term in a document.
- Each document is represented by a count vector ∈ N^|V|.
- Recall from Ch. 1 that term frequency can be stored with a document in the inverted index.
Slide 24: Bag of words model
- The vector representation doesn't consider the ordering of words in a document.
- "John is quicker than Mary" and "Mary is quicker than John" have the same vectors.
- This is called the bag of words model.
- In a sense, this is a step back: the positional index was able to distinguish these two documents.
- We will look at recovering positional information later in this course.
- For now: bag of words model.
Slide 25: Term frequency tf
- The term frequency tf_{t,d} of term t in document d is defined as the number of times that t occurs in d.
- We want to use tf when computing query-document match scores. But how?
- Raw term frequency is not what we want:
  - A document with 10 occurrences of the term is more relevant than a document with 1 occurrence of the term.
  - But not 10 times more relevant!
- Relevance does not increase proportionally with term frequency.
Slide 26: Log-frequency weighting (Sec. 6.2)
- The log-frequency weight of term t in d is
    w_{t,d} = 1 + log10(tf_{t,d})  if tf_{t,d} > 0,  and 0 otherwise
- 0 → 0, 1 → 1, 2 → 1.3, 10 → 2, 1000 → 4, etc.
- Score for a document-query pair: sum over terms t in both q and d:
    score(q, d) = Σ_{t ∈ q ∩ d} (1 + log10 tf_{t,d})
- The score is 0 if none of the query terms is present in the document.
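A minimal Python sketch of the log-frequency weight just defined, reproducing the slide's examples:

    import math

    def log_tf_weight(tf):
        """Log-frequency weight: 1 + log10(tf) if tf > 0, else 0."""
        return 1 + math.log10(tf) if tf > 0 else 0.0

    # Matches the slide: 0 -> 0, 1 -> 1, 2 -> 1.3, 10 -> 2, 1000 -> 4.
    print([round(log_tf_weight(tf), 1) for tf in (0, 1, 2, 10, 1000)])  # [0.0, 1.0, 1.3, 2.0, 4.0]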
Slide 27: Document frequency (Sec. 6.2.1)
- Rare terms are more informative than frequent terms.
  - Recall stop words.
- Consider a term in the query that is rare in the collection (e.g., capricious) vs. a term that is frequent (e.g., person).
- A document containing the term capricious is very likely to be relevant to a query containing capricious.
- → We want a high weight for rare terms like capricious.
Slide 28: Document frequency, continued (Sec. 6.2.1)
- Frequent terms are less informative than rare terms.
- Consider a query term that is frequent in the collection (e.g., high, increase, line).
- A document containing such a term is more likely to be relevant than a document that doesn't.
  - But it's not a sure indicator of relevance.
- → For frequent terms, we want high positive weights for words like high, increase, and line,
  - but lower weights than for rare terms.
- We will use document frequency (df) to capture this.
Slide 29: idf weight (Sec. 6.2.1)
- df_t is the document frequency of t: the number of documents that contain t.
  - df_t is an inverse measure of the informativeness of t.
  - df_t ≤ N
- We define the idf (inverse document frequency) of t by
    idf_t = log10( N / df_t )
- We use log(N/df_t) instead of N/df_t to dampen the effect of idf.
- It will turn out that the base of the log is immaterial.
Slide 30: idf example, suppose N = 1 million (Sec. 6.2.1)

    term       df_t       idf_t
    calpurnia  1          6
    animal     100        4
    sunday     1,000      3
    fly        10,000     2
    under      100,000    1
    the        1,000,000  0

- There is one idf value for each term t in a collection.
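A minimal Python sketch reproducing the table above, assuming the log10 idf definition from the previous slide:

    import math

    def idf(N, df):
        """Inverse document frequency: log10(N / df)."""
        return math.log10(N / df)

    N = 1_000_000
    for term, df in [("calpurnia", 1), ("animal", 100), ("sunday", 1_000),
                     ("fly", 10_000), ("under", 100_000), ("the", 1_000_000)]:
        print(term, round(idf(N, df), 2))  # 6.0, 4.0, 3.0, 2.0, 1.0, 0.0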
Slide 31: Effect of idf on ranking
- Question: Does idf have an effect on ranking for one-term queries, like "iPhone"?
- idf has no effect on the ranking of documents for one-term queries.
- idf affects the ranking of documents for queries with at least two terms.
  - For the query capricious person, idf weighting makes occurrences of capricious count for much more in the final document ranking than occurrences of person.
Slide 32: Collection vs. document frequency (Sec. 6.2.1)
- The collection frequency of t is the number of occurrences of t in the collection, counting multiple occurrences.
- Example:

    Word       Collection frequency  Document frequency
    insurance  10440                 3997
    try        10422                 8760

- Which word is a better search term (and should get a higher weight)?
Slide 33: tf-idf weighting (Sec. 6.2.2)
- The tf-idf weight of a term is the product of its tf weight and its idf weight:
    w_{t,d} = (1 + log10 tf_{t,d}) × log10( N / df_t )
- Best known weighting scheme in information retrieval.
  - Note: the "-" in tf-idf is a hyphen, not a minus sign!
  - Alternative names: tf.idf, tf x idf
- Increases with the number of occurrences within a document.
- Increases with the rarity of the term in the collection.
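A minimal Python sketch of the tf-idf weight as the product of the two weights defined on the previous slides:

    import math

    def tf_idf(tf, df, N):
        """tf-idf weight: log-frequency tf weight times idf weight."""
        if tf == 0:
            return 0.0
        return (1 + math.log10(tf)) * math.log10(N / df)

    # E.g., a term occurring 10 times in a doc and in 1,000 of N = 1,000,000 docs:
    print(round(tf_idf(10, 1_000, 1_000_000), 2))  # 2 * 3 = 6.0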
Slide 34: Final ranking of documents for a query (Sec. 6.2.2)
- Score(q, d) = Σ_{t ∈ q ∩ d} tf-idf_{t,d}
Slide 35: Binary → count → weight matrix (Sec. 6.3)
- Each document is now represented by a real-valued vector of tf-idf weights ∈ R^|V|.
Slide 36: Exercises
- Exercise 6.8: Why is the idf of a term always finite?
- Exercise 6.9: What is the idf of a term that occurs in every document? Compare this with the use of stop word lists.
- Exercise 6.10: Consider the following table of term frequencies for 3 docs:

    term       df_t    idf_t  Doc1  Doc2  Doc3
    car        18,165  1.65   27    4     24
    auto       6,723   2.08   3     33    0
    insurance  19,241  1.62   0     33    29
    best       25,235  1.50   14    0     17

  Compute the tf-idf weights for the terms car, auto, insurance and best for each doc, using the given idf values.
- Exercise 6.11: Can the tf-idf weight of a term in a doc exceed 1?
- Exercise 6.12: How does the base of the logarithm in the idf definition affect the score calculated for a query-document pair? How does it affect the relative scores of two docs on a given query?
- Exercise 6.13: If the logarithm in the idf definition is computed base 2, suggest a simple approximation of the idf of a term.
Slide 37: Documents as vectors (Sec. 6.3)
- So we have a |V|-dimensional vector space.
- Terms are the axes of the space.
- Documents are points or vectors in this space.
- Very high-dimensional: tens of millions of dimensions when you apply this to a web search engine.
- These are very sparse vectors: most entries are zero.
Slide 38: Queries as vectors (Sec. 6.3)
- Key idea 1: Do the same for queries: represent them as vectors in the space.
- Key idea 2: Rank documents according to their proximity to the query in this space.
  - proximity = similarity of vectors
  - proximity ≈ inverse of distance
- Recall: we do this because we want to get away from the you're-either-in-or-out Boolean model.
- Instead: rank more relevant documents higher than less relevant documents.
Slide 39: Formalizing vector space proximity (Sec. 6.3)
- First cut: distance between two points
  - (= distance between the end points of the two vectors)
- Euclidean distance?
- Euclidean distance is a bad idea . . .
- . . . because Euclidean distance is large for vectors of different lengths.
Slide 40: Why distance is a bad idea (Sec. 6.3)
- The Euclidean distance between q and d2 is large even though the distribution of terms in the query q and the distribution of terms in the document d2 are very similar.
Slide 41: Use angle instead of distance (Sec. 6.3)
- Thought experiment: take a document d and append it to itself. Call this document d′.
- Semantically, d and d′ have the same content.
- The Euclidean distance between the two documents can be quite large.
- The angle between the two documents is 0, corresponding to maximal similarity.
- Key idea: rank documents according to angle with the query.
Slide 42: From angles to cosines (Sec. 6.3)
- The following two notions are equivalent:
  - Rank documents in decreasing order of the angle between query and document.
  - Rank documents in increasing order of cosine(query, document).
- Cosine is a monotonically decreasing function on the interval [0°, 180°].
Slide 43: From angles to cosines (Sec. 6.3)
- But how, and why, should we be computing cosines?
Slide 44: Length normalization (Sec. 6.3)
- A vector can be (length-) normalized by dividing each of its components by its length; for this we use the L2 norm:
    ||x||₂ = sqrt( Σ_i x_i² )
- Dividing a vector by its L2 norm makes it a unit (length) vector (on the surface of the unit hypersphere).
- Effect on the two documents d and d′ (d appended to itself) from the earlier slide: they have identical vectors after length-normalization.
- Long and short documents now have comparable weights.
Slide 45: cosine(query, document) (Sec. 6.3)
- Dot product:
    cos(q, d) = (q · d) / ( |q| |d| ) = ( Σ_{i=1..|V|} q_i d_i ) / ( sqrt(Σ_{i=1..|V|} q_i²) · sqrt(Σ_{i=1..|V|} d_i²) )
- q_i is the tf-idf weight of term i in the query; d_i is the tf-idf weight of term i in the document.
- cos(q, d) is the cosine similarity of q and d or, equivalently, the cosine of the angle between q and d.
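A minimal Python sketch of the cosine formula above; the example vectors are made up for illustration:

    import math

    def cosine(q, d):
        """Cosine similarity of two equal-length weight vectors."""
        dot = sum(qi * di for qi, di in zip(q, d))
        norm_q = math.sqrt(sum(qi * qi for qi in q))
        norm_d = math.sqrt(sum(di * di for di in d))
        return dot / (norm_q * norm_d)

    print(round(cosine([1.0, 2.0, 0.0], [2.0, 1.0, 1.0]), 2))  # 0.73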
Slide 46: Cosine for length-normalized vectors
- For length-normalized vectors, cosine similarity is simply the dot product (or scalar product):
    cos(q, d) = q · d = Σ_{i=1..|V|} q_i d_i    for q, d length-normalized.
Slide 47: Cosine similarity illustrated
Slide 48: Cosine similarity amongst 3 documents (Sec. 6.3)
- How similar are the novels SaS: Sense and Sensibility, PaP: Pride and Prejudice, and WH: Wuthering Heights?
- Term frequencies (counts):

    term       SaS  PaP  WH
    affection  115  58   20
    jealous    10   7    11
    gossip     2    0    6
    wuthering  0    0    38

- Note: to simplify this example, we don't do idf weighting.
Slide 49: 3 documents example contd. (Sec. 6.3)
- Log frequency weighting:

    term       SaS   PaP   WH
    affection  3.06  2.76  2.30
    jealous    2.00  1.85  2.04
    gossip     1.30  0     1.78
    wuthering  0     0     2.58

- After length normalization:

    term       SaS    PaP    WH
    affection  0.789  0.832  0.524
    jealous    0.515  0.555  0.465
    gossip     0.335  0      0.405
    wuthering  0      0      0.588

- cos(SaS, PaP) ≈ 0.789·0.832 + 0.515·0.555 + 0.335·0.0 + 0.0·0.0 ≈ 0.94
- cos(SaS, WH) ≈ 0.79
- cos(PaP, WH) ≈ 0.69
- Why do we have cos(SaS, PaP) > cos(SaS, WH)? SaS and PaP were both written by Jane Austen, WH by Emily Brontë.
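A minimal Python sketch reproducing the worked numbers above (log-frequency weighting, length normalization, then dot products); the helper names are assumptions:

    import math

    def log_tf(tf):
        return 1 + math.log10(tf) if tf > 0 else 0.0

    def normalize(v):
        length = math.sqrt(sum(x * x for x in v))
        return [x / length for x in v]

    counts = {  # term frequencies from the slide (no idf weighting)
        "SaS": [115, 10, 2, 0],   # affection, jealous, gossip, wuthering
        "PaP": [58, 7, 0, 0],
        "WH":  [20, 11, 6, 38],
    }
    vecs = {name: normalize([log_tf(tf) for tf in tfs]) for name, tfs in counts.items()}

    def cos(a, b):
        return sum(x * y for x, y in zip(vecs[a], vecs[b]))

    print(round(cos("SaS", "PaP"), 2), round(cos("SaS", "WH"), 2), round(cos("PaP", "WH"), 2))
    # 0.94 0.79 0.69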
Slide 50: Computing cosine scores (Sec. 6.3)
- Can traverse the postings lists one term at a time, which is called term-at-a-time scoring. Or can traverse them concurrently, as in the INTERSECT algorithm of Ch. 1, which is called document-at-a-time scoring.
- No need to store the term weights per doc in each postings list: they can be computed on the fly from the df_t value at the head of the postings list and the tf_{t,d} value in the doc's posting.
- Selecting the top K scores: priority queue / heap!
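The following is a sketch of term-at-a-time scoring with a heap for the top K, under stated assumptions: the index layout (term → list of (docID, tf)), the precomputed doc_length table, and the idf-weighted-query / log-tf-document weighting are illustrative choices, not the book's CosineScore pseudocode verbatim.

    import heapq
    import math

    def cosine_scores(query_terms, index, doc_length, N, k=10):
        """Term-at-a-time scoring: accumulate per-document scores one query term's
        postings list at a time, then normalize by document length and take the
        top k with a heap (no need to fully sort all scored documents)."""
        scores = {}  # docID -> accumulated score
        for t in query_terms:
            postings = index.get(t, [])
            df = len(postings)
            if df == 0:
                continue
            w_tq = math.log10(N / df)          # idf weight for the query term
            for doc_id, tf in postings:
                w_td = 1 + math.log10(tf)      # log-frequency weight for the document
                scores[doc_id] = scores.get(doc_id, 0.0) + w_tq * w_td
        normalized = ((s / doc_length[d], d) for d, s in scores.items())
        return heapq.nlargest(k, normalized)

    # Hypothetical toy index: term -> list of (docID, tf); doc_length holds each doc's vector length.
    index = {"car": [(1, 2), (2, 1)], "insurance": [(1, 1), (3, 3)]}
    doc_length = {1: 2.0, 2: 1.5, 3: 2.5}
    print(cosine_scores(["car", "insurance"], index, doc_length, N=10, k=2))  # top-2 (score, docID) pairs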
Slide 51: tf-idf weighting has many variants (Sec. 6.4)
- (Table of SMART weighting variants for term frequency, document frequency, and normalization.)
- Why is the base of the log in idf immaterial?
Slide 52: Weighting may differ in queries vs. documents (Sec. 6.4)
- Many search engines allow for different weightings for queries vs. documents.
- SMART notation: denotes the combination in use in an engine with the notation ddd.qqq, using the acronyms from the previous table.
- A very standard weighting scheme is lnc.ltc:
  - Document: logarithmic tf (l as first character), no idf (n as second character), cosine normalization (c as third character).
  - Query: logarithmic tf (l as first character), idf (t as second character), cosine normalization (c as third character).
- (No idf for document terms: a bad idea?)
Slide 53: tf-idf example: lnc.ltc (Sec. 6.4)
- Document: "car insurance auto insurance". Query: "best car insurance". N, the number of docs, = 1,000,000.

    Term       Query: tf-raw, tf-wt, df, idf, wt, n'lize    Document: tf-raw, tf-wt, wt, n'lize    Prod
    auto       0, 0, 5000, 2.3, 0, 0                         1, 1, 1, 0.52                          0
    best       1, 1, 50000, 1.3, 1.3, 0.34                   0, 0, 0, 0                             0
    car        1, 1, 10000, 2.0, 2.0, 0.52                   1, 1, 1, 0.52                          0.27
    insurance  1, 1, 1000, 3.0, 3.0, 0.78                    2, 1.3, 1.3, 0.68                      0.53

- Score = 0 + 0 + 0.27 + 0.53 ≈ 0.8
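A minimal Python sketch reproducing the lnc.ltc score above; the helper names are assumptions:

    import math

    def log_tf(tf):
        return 1 + math.log10(tf) if tf > 0 else 0.0

    def normalize(v):
        length = math.sqrt(sum(x * x for x in v))
        return [x / length for x in v] if length else v

    N = 1_000_000
    terms = ["auto", "best", "car", "insurance"]
    df = {"auto": 5000, "best": 50000, "car": 10000, "insurance": 1000}
    query_tf = {"auto": 0, "best": 1, "car": 1, "insurance": 1}     # "best car insurance"
    doc_tf = {"auto": 1, "best": 0, "car": 1, "insurance": 2}       # "car insurance auto insurance"

    # ltc query: log tf * idf, cosine-normalized; lnc document: log tf, no idf, cosine-normalized.
    q = normalize([log_tf(query_tf[t]) * math.log10(N / df[t]) for t in terms])
    d = normalize([log_tf(doc_tf[t]) for t in terms])
    print(round(sum(qi * di for qi, di in zip(q, d)), 2))  # ≈ 0.8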
Slide 54: Exercises
- Exercise 6.14: If we were to stem jealous and jealousy to a common stem before setting up the vector space, detail how the definitions of tf and idf should be modified.
- Exercise 6.15: Recall the tf-idf weights computed in Exercise 6.10. Compute the Euclidean-normalized document vectors for each of the docs, where each vector has four components, one for each of the four terms.
- Exercise 6.16: Verify that the sum of the squares of the components of each of the document vectors in Exercise 6.15 is 1 (to within rounding error). Why?
- Exercise 6.17: With term weights as computed in Exercise 6.15, rank the three documents by computed score for the query car insurance, for each of the following cases of term weighting in the query:
  - The weight of a term is 1 if present in the query, 0 otherwise.
  - Euclidean-normalized idf.
- Exercise 6.18: One measure of the similarity of two vectors x and y is the Euclidean (or L2) distance between them:
    |x − y| = sqrt( Σ_{i=1..M} (x_i − y_i)² )
  Given a query q and docs d1, d2, ..., we may rank the docs d_i in order of increasing distance from q. Show that if q and the d_i are all normalized to unit vectors, then the rank ordering produced by Euclidean distance is identical to that produced by cosine similarity.
Slide 55: Exercises
- Exercise 6.19: Compare the vector space similarity between the query "digital cameras" and the document "digital cameras and video cameras" by filling out the empty columns in the table below.

    word     Query: tf  wf  df       idf  qi = wf·idf    Document: tf  wf  di = normalized wf    qi·di
    digital                 10,000
    video                   100,000
    cameras                 50,000

  Assume N = 10,000,000, logarithmic term weighting (wf columns) for query and document, idf weighting for the query only, and cosine normalization for the document only. Treat "and" as a stop word. Enter term counts in the tf columns. What is the final similarity score?
Slide 56: Exercises
- Exercise 6.20: Show that for the query affection, the relative ordering of the scores of the three docs in the table below is the reverse of the ordering of the scores for the query jealous gossip.

    term       SaS    PaP    WH
    affection  0.996  0.993  0.847
    jealous    0.087  0.120  0.466
    gossip     0.017  0      0.254

- Exercise 6.21: In turning a query into a unit vector in the table above, we assigned equal weights to each of the query terms. What other principled approaches are plausible?
- Exercise 6.22: Consider the case of a query term that is not in the set of M indexed terms; thus, our standard construction of the query vector results in V(q) not being in the vector space created from the collection. How would one adapt the vector space construction to handle this case?
- Exercise 6.23: Refer to the tf and idf values for the four terms and three docs in Exercise 6.10. Compute the two top-scoring docs on the query best car insurance for each of the following weighting schemes: (i) nnn.atc; (ii) ntc.atc.
- Exercise 6.24: Suppose the word coyote does not occur in the collection used in Exercises 6.10 and 6.23. How would one compute ntc.atc scores for the query coyote insurance?
Slide 57: Summary: vector space ranking
- Represent the query as a weighted tf-idf vector.
- Represent each document as a weighted tf-idf vector.
- Compute the cosine similarity score for the query vector and each document vector.
- Rank documents with respect to the query by score.
- Return the top K (e.g., K = 10) to the user.
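Putting the steps of the summary together, here is a self-contained end-to-end sketch on a hypothetical toy collection; the weighting follows the tf-idf and cosine definitions from earlier slides, while the data, names, and sparse-dict representation are assumptions for illustration:

    import math
    from collections import Counter

    def tfidf_vector(term_counts, df, N):
        """Weighted tf-idf vector: (1 + log10 tf) * log10(N / df) per term."""
        return {t: (1 + math.log10(tf)) * math.log10(N / df[t])
                for t, tf in term_counts.items() if tf > 0 and df[t] > 0}

    def cosine(u, v):
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    # Hypothetical toy collection.
    docs = {
        "d1": "car insurance auto insurance",
        "d2": "best auto repair",
        "d3": "car repair and car rental",
    }
    N = len(docs)
    counts = {d: Counter(text.split()) for d, text in docs.items()}
    df = Counter(t for c in counts.values() for t in c)

    doc_vecs = {d: tfidf_vector(c, df, N) for d, c in counts.items()}
    query_vec = tfidf_vector(Counter("best car insurance".split()), df, N)

    K = 2
    ranking = sorted(doc_vecs, key=lambda d: cosine(query_vec, doc_vecs[d]), reverse=True)[:K]
    print(ranking)  # ['d1', 'd2']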
Slide 58: Resources for today's lecture (Ch. 6)
- IIR 6.2 - 6.4.3
- http://www.miislita.com/information-retrieval-tutorial/cosine-similarity-tutorial.html
  - Term weighting and cosine similarity tutorial for SEO folk!