PV211: Introduction to Information Retrieval
https://www.fi.muni.cz/~sojka/PV211
IIR 11: Probabilistic Information Retrieval
Handout version
Petr Sojka, Hinrich Schütze et al.
Faculty of Informatics, Masaryk University, Brno
Center for Information and Language Processing, University of Munich
2023-05-?? (compiled on 2023-03-14 22:04:19)

Overview
1 Recap
2 Probabilistic Approach to IR
3 Basic Probability Theory
4 Probability Ranking Principle
5 Appraisal & Extensions

Take-away today
Probabilistically grounded approach to IR
Probability Ranking Principle
Models: BIM, BM25
Assumptions these models make

Relevance feedback from last lecture
Previous lecture: in relevance feedback, the user marks documents as relevant or irrelevant.
Given some known relevant and irrelevant documents, we compute weights for non-query terms that indicate how likely they are to occur in relevant documents.
Today: develop a probabilistic approach to relevance feedback and also a general probabilistic model for IR.

Probabilistic Approach to Retrieval
Given a user information need (represented as a query) and a collection of documents (transformed into document representations), a system must determine how well the documents satisfy the query.
An IR system has an uncertain understanding of the user query and makes an uncertain guess as to whether a document satisfies the query.
Probability theory provides a principled foundation for such reasoning under uncertainty.
Probabilistic models exploit this foundation to estimate how likely it is that a document is relevant to a query.

Probabilistic IR Models at a Glance
Classical probabilistic retrieval model: Probability Ranking Principle; Binary Independence Model, BestMatch25 (Okapi)
Bayesian networks for text retrieval
Language model approach to IR: important recent work, covered in the next lecture
Probabilistic methods are one of the oldest, but also one of the currently hottest, topics in IR.

Exercise: Probabilistic model vs. other models
Boolean model: probabilistic models support ranking and thus are better than the simple Boolean model.
Vector space model: the vector space model is also a formally defined model that supports ranking. Why would we want to look for an alternative to the vector space model?
Probabilistic vs. vector space model
Vector space model: rank documents according to their similarity to the query.
The notion of similarity does not translate directly into an assessment of "is the document a good document to give to the user or not?"
The most similar document can be highly relevant or completely irrelevant.
Probability theory is arguably a cleaner formalization of what we really want an IR system to do: give relevant documents to the user.

Basic Probability Theory
For events A and B:
Joint probability $P(A \cap B)$ of both events occurring
Conditional probability $P(A|B)$ of event A occurring given that event B has occurred
The chain rule gives the fundamental relationship between joint and conditional probabilities:
$P(AB) = P(A \cap B) = P(A|B)P(B) = P(B|A)P(A)$
Similarly for the complement $\bar{A}$ of an event A: $P(\bar{A}B) = P(B|\bar{A})P(\bar{A})$
Partition rule: if B can be divided into an exhaustive set of disjoint subcases, then P(B) is the sum of the probabilities of the subcases. A special case of this rule gives:
$P(B) = P(AB) + P(\bar{A}B)$

Basic Probability Theory
Bayes' rule for inverting conditional probabilities:
$P(A|B) = \frac{P(B|A)P(A)}{P(B)} = \frac{P(B|A)}{\sum_{X \in \{A,\bar{A}\}} P(B|X)P(X)}\, P(A)$
It can be thought of as a way of updating probabilities:
Start off with the prior probability P(A) (initial estimate of how likely event A is in the absence of any other information).
Derive a posterior probability P(A|B) after having seen the evidence B, based on the likelihood of B occurring in the two cases that A does or does not hold.
The odds of an event provide a kind of multiplier for how probabilities change:
$O(A) = \frac{P(A)}{P(\bar{A})} = \frac{P(A)}{1 - P(A)}$

The Document Ranking Problem
Ranked retrieval setup: given a collection of documents, the user issues a query, and an ordered list of documents is returned.
Assume a binary notion of relevance: $R_{d,q}$ is a random dichotomous variable such that
$R_{d,q} = 1$ if document d is relevant w.r.t. query q
$R_{d,q} = 0$ otherwise
Probabilistic ranking orders documents by decreasing estimated probability of relevance w.r.t. the query: $P(R = 1|d, q)$
Assume that the relevance of each document is independent of the relevance of other documents.
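The prior-to-posterior update and the odds transform above can be made concrete in a few lines of code. A minimal sketch, with invented event names and numbers purely for illustration:

```python
def posterior(prior_a, p_b_given_a, p_b_given_not_a):
    """Bayes' rule: P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|~A)P(~A)]."""
    evidence = p_b_given_a * prior_a + p_b_given_not_a * (1 - prior_a)
    return p_b_given_a * prior_a / evidence

def odds(p):
    """O(A) = P(A) / (1 - P(A))."""
    return p / (1 - p)

# Hypothetical numbers: prior P(relevant) = 0.1, and an observed term that occurs
# with probability 0.6 in relevant documents and 0.2 in irrelevant ones.
p = posterior(0.1, 0.6, 0.2)
print(p)        # posterior P(relevant | evidence) = 0.25
print(odds(p))  # corresponding odds, roughly 0.33
```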
Probability Ranking Principle (PRP)
PRP in brief: if the retrieved documents (w.r.t. a query) are ranked decreasingly by their probability of relevance, then the effectiveness of the system will be the best that is obtainable.
PRP in full: "If [the IR] system's response to each [query] is a ranking of the documents [...] in order of decreasing probability of relevance to the [query], where the probabilities are estimated as accurately as possible on the basis of whatever data have been made available to the system for this purpose, the overall effectiveness of the system to its user will be the best that is obtainable on the basis of those data."

Binary Independence Model (BIM)
Traditionally used with the PRP.
Assumptions:
"Binary" (equivalent to Boolean): documents and queries are represented as binary term incidence vectors. E.g., document d is represented by the vector $\vec{x} = (x_1, \ldots, x_M)$, where $x_t = 1$ if term t occurs in d and $x_t = 0$ otherwise. Different documents may have the same vector representation.
"Independence": no association between terms (not true, but it works in practice: the "naive" assumption of Naive Bayes models).

Binary incidence matrix
              Anthony and  Julius  The      Hamlet  Othello  Macbeth  ...
              Cleopatra    Caesar  Tempest
Anthony       1            1       0        0       0        1
Brutus        1            1       0        1       0        0
Caesar        1            1       0        1       1        1
Calpurnia     0            1       0        0       0        0
Cleopatra     1            0       0        0       0        0
mercy         1            0       1        1       1        1
worser        1            0       1        1       1        0
...
Each document is represented as a binary vector $\in \{0, 1\}^{|V|}$.

Binary Independence Model
To make a probabilistic retrieval strategy precise, we need to estimate how terms in documents contribute to relevance.
Find measurable statistics (term frequency, document frequency, document length) that affect judgments about document relevance.
Combine these statistics to estimate the probability $P(R|d, q)$ of document relevance.
Next: how exactly we can do this.

Binary Independence Model
$P(R|d, q)$ is modeled using term incidence vectors as $P(R|\vec{x}, \vec{q})$:
$P(R = 1|\vec{x}, \vec{q}) = \frac{P(\vec{x}|R = 1, \vec{q})\, P(R = 1|\vec{q})}{P(\vec{x}|\vec{q})}$
$P(R = 0|\vec{x}, \vec{q}) = \frac{P(\vec{x}|R = 0, \vec{q})\, P(R = 0|\vec{q})}{P(\vec{x}|\vec{q})}$
$P(\vec{x}|R = 1, \vec{q})$ and $P(\vec{x}|R = 0, \vec{q})$: probability that if a relevant or irrelevant document is retrieved, then that document's representation is $\vec{x}$.
Use statistics about the document collection to estimate these probabilities.
$P(R = 1|\vec{q})$ and $P(R = 0|\vec{q})$: prior probability of retrieving a relevant or irrelevant document for a query q.
Estimate $P(R = 1|\vec{q})$ and $P(R = 0|\vec{q})$ from the percentage of relevant documents in the collection.
Since a document is either relevant or irrelevant to a query, we must have:
$P(R = 1|\vec{x}, \vec{q}) + P(R = 0|\vec{x}, \vec{q}) = 1$
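A small sketch of how such binary term incidence vectors can be built, reusing two of the documents from the exercise later in this handout:

```python
docs = {
    "Doc1": "Obama rejects allegations about his own bad health",
    "Doc2": "The plan is to visit Obama",
}

# Vocabulary: all terms occurring in the collection, in a fixed order.
vocab = sorted({t for text in docs.values() for t in text.lower().split()})

def incidence_vector(text, vocab):
    """Binary vector x with x_t = 1 iff term t occurs in the document."""
    terms = set(text.lower().split())
    return [1 if t in terms else 0 for t in vocab]

vectors = {d: incidence_vector(text, vocab) for d, text in docs.items()}
print(vocab)
print(vectors["Doc2"])
```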
Deriving a Ranking Function for Query Terms (1)
Given a query q, ranking documents by $P(R = 1|d, q)$ is modeled under BIM as ranking them by $P(R = 1|\vec{x}, \vec{q})$.
Easier: rank documents by their odds of relevance (gives the same ranking):
$O(R|\vec{x}, \vec{q}) = \frac{P(R = 1|\vec{x}, \vec{q})}{P(R = 0|\vec{x}, \vec{q})} = \frac{\frac{P(R = 1|\vec{q})\, P(\vec{x}|R = 1, \vec{q})}{P(\vec{x}|\vec{q})}}{\frac{P(R = 0|\vec{q})\, P(\vec{x}|R = 0, \vec{q})}{P(\vec{x}|\vec{q})}} = \frac{P(R = 1|\vec{q})}{P(R = 0|\vec{q})} \cdot \frac{P(\vec{x}|R = 1, \vec{q})}{P(\vec{x}|R = 0, \vec{q})}$
$\frac{P(R = 1|\vec{q})}{P(R = 0|\vec{q})}$ is a constant for a given query and can be ignored.

Deriving a Ranking Function for Query Terms (2)
It is at this point that we make the Naive Bayes conditional independence assumption that the presence or absence of a word in a document is independent of the presence or absence of any other word (given the query):
$\frac{P(\vec{x}|R = 1, \vec{q})}{P(\vec{x}|R = 0, \vec{q})} = \prod_{t=1}^{M} \frac{P(x_t|R = 1, \vec{q})}{P(x_t|R = 0, \vec{q})}$
So:
$O(R|\vec{x}, \vec{q}) = O(R|\vec{q}) \cdot \prod_{t=1}^{M} \frac{P(x_t|R = 1, \vec{q})}{P(x_t|R = 0, \vec{q})}$

Exercise
Naive Bayes conditional independence assumption: the presence or absence of a word in a document is independent of the presence or absence of any other word (given the query). Why is this wrong? Good example?
The PRP assumes that the relevance of each document is independent of the relevance of other documents. Why is this wrong? Good example?

Deriving a Ranking Function for Query Terms (3)
Since each $x_t$ is either 0 or 1, we can separate the terms:
$O(R|\vec{x}, \vec{q}) = O(R|\vec{q}) \cdot \prod_{t: x_t = 1} \frac{P(x_t = 1|R = 1, \vec{q})}{P(x_t = 1|R = 0, \vec{q})} \cdot \prod_{t: x_t = 0} \frac{P(x_t = 0|R = 1, \vec{q})}{P(x_t = 0|R = 0, \vec{q})}$

Deriving a Ranking Function for Query Terms (4)
Let $p_t = P(x_t = 1|R = 1, \vec{q})$ be the probability of a term appearing in a relevant document.
Let $u_t = P(x_t = 1|R = 0, \vec{q})$ be the probability of a term appearing in an irrelevant document.
This can be displayed as a contingency table:
                        relevant (R = 1)   irrelevant (R = 0)
Term present  x_t = 1   p_t                u_t
Term absent   x_t = 0   1 - p_t            1 - u_t

Deriving a Ranking Function for Query Terms
Additional simplifying assumption: terms not occurring in the query are equally likely to occur in relevant and irrelevant documents: if $q_t = 0$, then $p_t = u_t$.
Now we need only consider terms in the products that appear in the query:
$O(R|\vec{x}, \vec{q}) = O(R|\vec{q}) \cdot \prod_{t: x_t = q_t = 1} \frac{p_t}{u_t} \cdot \prod_{t: x_t = 0,\, q_t = 1} \frac{1 - p_t}{1 - u_t}$
The left product is over query terms found in the document and the right product is over query terms not found in the document.
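To make the last formula concrete, a short sketch that evaluates the two products for one document, given hypothetical estimates of $p_t$ and $u_t$ for a two-term query:

```python
def relevance_odds_factor(query_terms, doc_terms, p, u):
    """Product over query terms: p_t/u_t if the term occurs in the document,
    (1 - p_t)/(1 - u_t) if it does not. Multiplying O(R|q) by this gives O(R|x,q)."""
    factor = 1.0
    for t in query_terms:
        if t in doc_terms:
            factor *= p[t] / u[t]
        else:
            factor *= (1 - p[t]) / (1 - u[t])
    return factor

# Hypothetical estimates for the query terms "obama" and "health".
p = {"obama": 0.9, "health": 0.6}   # P(term occurs | relevant)
u = {"obama": 0.3, "health": 0.2}   # P(term occurs | irrelevant)
print(relevance_odds_factor(p.keys(), {"obama", "plan", "visit"}, p, u))  # 3.0 * 0.5 = 1.5
```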
Deriving a Ranking Function for Query Terms
Including the query terms found in the document in the right product, while simultaneously dividing by them in the left product, gives:
$O(R|\vec{x}, \vec{q}) = O(R|\vec{q}) \cdot \prod_{t: x_t = q_t = 1} \frac{p_t(1 - u_t)}{u_t(1 - p_t)} \cdot \prod_{t: q_t = 1} \frac{1 - p_t}{1 - u_t}$
The left product is still over query terms found in the document, but the right product is now over all query terms, hence constant for a particular query, and can be ignored.
→ The only quantity that needs to be estimated to rank documents w.r.t. a query is the left product.
Hence the Retrieval Status Value (RSV) in this model:
$RSV_d = \log \prod_{t: x_t = q_t = 1} \frac{p_t(1 - u_t)}{u_t(1 - p_t)} = \sum_{t: x_t = q_t = 1} \log \frac{p_t(1 - u_t)}{u_t(1 - p_t)}$

Deriving a Ranking Function for Query Terms
Equivalently: rank documents using the log odds ratios $c_t$ for the terms in the query:
$c_t = \log \frac{p_t(1 - u_t)}{u_t(1 - p_t)} = \log \frac{p_t}{1 - p_t} - \log \frac{u_t}{1 - u_t}$
The odds ratio is the ratio of two odds: (i) the odds of the term appearing if the document is relevant ($p_t/(1 - p_t)$), and (ii) the odds of the term appearing if the document is irrelevant ($u_t/(1 - u_t)$).
$c_t = 0$: the term has equal odds of appearing in relevant and irrelevant documents
$c_t$ positive: higher odds of appearing in relevant documents
$c_t$ negative: higher odds of appearing in irrelevant documents

Term weight $c_t$ in BIM
$c_t = \log \frac{p_t}{1 - p_t} - \log \frac{u_t}{1 - u_t}$ functions as a term weight.
Retrieval status value for document d: $RSV_d = \sum_{t: x_t = q_t = 1} c_t$.
So BIM and the vector space model are identical on an operational level . . .
. . . except that the term weights are different.
In particular: we can use the same data structures (inverted index, etc.) for the two models.

How to compute probability estimates
For each term t in a query, estimate $c_t$ in the whole collection using a contingency table of counts of documents in the collection, where $df_t$ is the number of documents that contain term t:
                        relevant   irrelevant             Total
Term present  x_t = 1   s          df_t - s               df_t
Term absent   x_t = 0   S - s      (N - df_t) - (S - s)   N - df_t
Total                   S          N - S                  N
$p_t = s/S$
$u_t = (df_t - s)/(N - S)$
$c_t = K(N, df_t, S, s) = \log \frac{s/(S - s)}{(df_t - s)/((N - df_t) - (S - s))}$

Avoiding zeros
If any of the counts is zero, then the term weight is not well-defined.
Maximum likelihood estimates do not work for rare events.
To avoid zeros: add 0.5 to each count (expected likelihood estimation = ELE).
For example, use $S - s + 0.5$ in the formula for $S - s$.
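A minimal sketch of this estimate with the 0.5 (ELE) smoothing applied to every cell of the contingency table; the counts in the example call are hypothetical:

```python
import math

def term_weight(N, df_t, S, s):
    """Smoothed BIM term weight c_t = log of the odds ratio estimated from
    a contingency table with 0.5 added to each count (ELE)."""
    odds_relevant = (s + 0.5) / (S - s + 0.5)
    odds_irrelevant = (df_t - s + 0.5) / ((N - df_t) - (S - s) + 0.5)
    return math.log(odds_relevant / odds_irrelevant)

# Hypothetical counts: 1,000 documents, the term occurs in 100 of them,
# 10 documents are known to be relevant, 6 of those contain the term.
print(term_weight(N=1000, df_t=100, S=10, s=6))
```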
Exercise
Query: Obama health plan
Doc1: Obama rejects allegations about his own bad health
Doc2: The plan is to visit Obama
Doc3: Obama raises concerns with US health plan reforms
Estimate the probability that the above documents are relevant to the query. Use a contingency table. These are the only three documents in the collection.

Simplifying assumption
Assuming that relevant documents are a very small percentage of the collection, approximate statistics for irrelevant documents by statistics from the whole collection.
Hence, $u_t$ (the probability of term occurrence in irrelevant documents for a query) is $df_t/N$ and
$\log[(1 - u_t)/u_t] = \log[(N - df_t)/df_t] \approx \log(N/df_t)$
This should look familiar to you . . .
The above approximation cannot easily be extended to relevant documents.

Probability estimates in relevance feedback
Statistics of relevant documents ($p_t$) in relevance feedback can be estimated using maximum likelihood estimation or ELE (add 0.5): use the frequency of term occurrence in known relevant documents.
This is the basis of probabilistic approaches to relevance feedback weighting in a feedback loop.
The exercise we just did was a probabilistic relevance feedback exercise, since we were assuming the availability of relevance judgments.

Probability estimates in ad hoc retrieval
Ad hoc retrieval: no user-supplied relevance judgments are available.
In this case: assume that $p_t$ is constant over all terms $x_t$ in the query and that $p_t = 0.5$.
Each term is then equally likely to occur in a relevant document, so the $p_t$ and $(1 - p_t)$ factors cancel out in the expression for RSV.
This is a weak estimate, but it does not disagree violently with the expectation that query terms appear in many but not all relevant documents.
Combining this method with the earlier approximation for $u_t$, the document ranking is determined simply by which query terms occur in documents, scaled by their idf weighting.
For short documents (titles or abstracts) in one-pass retrieval situations, this estimate can be quite satisfactory.

History and summary of assumptions
Among the oldest formal models in IR: Maron & Kuhns, 1960. Since an IR system cannot predict with certainty which document is relevant, we should deal with probabilities.
Assumptions for getting reasonable approximations of the needed probabilities (in the BIM):
Boolean representation of documents/queries/relevance
Term independence
Out-of-query terms do not affect retrieval
Document relevance values are independent

How different are vector space and BIM?
They are not that different. In either case you build an information retrieval scheme in the exact same way.
For probabilistic IR, at the end, you score queries not by cosine similarity and tf-idf in a vector space, but by a slightly different formula motivated by probability theory.
Next: how to add term frequency and length normalization to the probabilistic model.
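Putting the two ad hoc estimates together ($p_t = 0.5$, $u_t = df_t/N$), the RSV reduces to an idf-like sum over the query terms present in the document. A brief sketch of that reduced scoring rule, with made-up document frequencies:

```python
import math

def rsv_adhoc(query_terms, doc_terms, df, N):
    """BIM RSV with p_t = 0.5 and u_t = df_t/N: the p_t factors cancel,
    leaving the sum of log((N - df_t)/df_t) over query terms found in the document."""
    return sum(math.log((N - df[t]) / df[t])
               for t in query_terms if t in doc_terms)

# Hypothetical document frequencies in a 10-document collection.
df = {"obama": 4, "health": 3, "plan": 5}
print(rsv_adhoc({"obama", "health", "plan"}, {"obama", "plan", "visit"}, df, N=10))
```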
Okapi BM25: Overview
Okapi BM25 is a probabilistic model that incorporates term frequency (i.e., it is nonbinary) and length normalization.
BIM was originally designed for short catalog records of fairly consistent length, and it works reasonably well in those contexts.
For modern full-text search collections, a model should pay attention to term frequency and document length.
BestMatch25 (a.k.a. BM25 or Okapi) is sensitive to these quantities.
BM25 is one of the most widely used and robust retrieval models.

Okapi BM25: Starting point
The simplest score for document d is just idf weighting of the query terms present in the document:
$RSV_d = \sum_{t \in q} \log \frac{N}{df_t}$

Okapi BM25 basic weighting
Improve the idf term $\log(N/df_t)$ by factoring in term frequency and document length:
$RSV_d = \sum_{t \in q} \log \frac{N}{df_t} \cdot \frac{(k_1 + 1)\, tf_{td}}{k_1((1 - b) + b \cdot (L_d/L_{ave})) + tf_{td}}$
$tf_{td}$: term frequency in document d
$L_d$ ($L_{ave}$): length of document d (average document length in the whole collection)
$k_1$: tuning parameter controlling the document term frequency scaling
$b$: tuning parameter controlling the scaling by document length

Exercise
Interpret the BM25 weighting formula for $k_1 = 0$.
Interpret the BM25 weighting formula for $k_1 = 1$ and $b = 0$.
Interpret the BM25 weighting formula for $k_1 \to \infty$ and $b = 0$.
Interpret the BM25 weighting formula for $k_1 \to \infty$ and $b = 1$.

Okapi BM25 weighting for long queries
For long queries, use similar weighting for query terms:
$RSV_d = \sum_{t \in q} \log \frac{N}{df_t} \cdot \frac{(k_1 + 1)\, tf_{td}}{k_1((1 - b) + b \cdot (L_d/L_{ave})) + tf_{td}} \cdot \frac{(k_3 + 1)\, tf_{tq}}{k_3 + tf_{tq}}$
$tf_{tq}$: term frequency in the query q
$k_3$: tuning parameter controlling term frequency scaling of the query
No length normalization of queries (because retrieval is done with respect to a single fixed query).
The tuning parameters should ideally be set to optimize performance on a development test collection. In the absence of such optimization, experiments have shown that reasonable values are $k_1$ and $k_3$ between 1.2 and 2, and $b = 0.75$.
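A compact sketch of the basic (short-query) BM25 score above; the collection statistics passed in the example are invented for illustration:

```python
import math

def bm25(query_terms, tf_d, df, N, L_d, L_ave, k1=1.2, b=0.75):
    """RSV_d = sum over query terms of idf * (k1 + 1) tf / (k1 ((1 - b) + b L_d/L_ave) + tf)."""
    score = 0.0
    for t in query_terms:
        tf = tf_d.get(t, 0)
        if tf == 0 or t not in df:
            continue  # term absent from the document (or unseen in the collection)
        idf = math.log(N / df[t])
        norm = k1 * ((1 - b) + b * (L_d / L_ave)) + tf
        score += idf * (k1 + 1) * tf / norm
    return score

# Hypothetical statistics: a 12-term document in a 1,000-document collection, average length 10.
tf_d = {"obama": 2, "health": 1}
df = {"obama": 300, "health": 100, "plan": 150}
print(bm25(["obama", "health", "plan"], tf_d, df, N=1000, L_d=12, L_ave=10))
```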
Which ranking model should I use?
I want something basic and simple → use the vector space model with tf-idf weighting.
I want a state-of-the-art ranking model with excellent performance → use language models or BM25 with tuned parameters.
In between: BM25 or language models with no or just one tuned parameter.

Take-away today
Probabilistically grounded approach to IR
Probability Ranking Principle
Models: BIM, BM25
Assumptions these models make

Resources
Chapter 11 of IIR
Resources at https://www.fi.muni.cz/~sojka/PV211/ and http://cislmu.org, materials in MU IS and FI MU library
Weka: a data mining software package that includes an implementation of Naive Bayes
Reuters-21578: the most famous text classification evaluation set (but now too small for realistic experiments)