PV211: Introduction to Information Retrieval
https://www.fi.muni.cz/~sojka/PV211
IIR 16: Flat Clustering
Handout version
Petr Sojka, Hinrich Schütze et al.
Faculty of Informatics, Masaryk University, Brno
Center for Information and Language Processing, University of Munich
2024-05-01

Overview
1 Recap
2 Clustering: Introduction
3 Clustering in IR
4 K-means
5 Evaluation
6 How many clusters?

Support vector machines
Binary classification problem.
Simple SVMs are linear classifiers.
Criterion: being maximally far away from any data point → determines the classifier margin.
The position of the linear separator is defined by the support vectors.
[figure: maximum margin decision hyperplane, with the support vectors and the maximized margin]

Optimization problem solved by SVMs
Find $\vec{w}$ and $b$ such that:
$\frac{1}{2}\vec{w}^T\vec{w}$ is minimized (because $|\vec{w}| = \sqrt{\vec{w}^T\vec{w}}$), and
for all $\{(\vec{x}_i, y_i)\}$: $y_i(\vec{w}^T\vec{x}_i + b) \geq 1$.

Which machine learning method to choose?
Is there a learning method that is optimal for all text classification problems?
No, because there is a tradeoff between bias and variance (the bias-variance dilemma).
Factors to take into account:
How much training data is available?
How simple/complex is the problem? (linear vs. nonlinear decision boundary)
How noisy is the problem?
How stable is the problem over time? For an unstable problem, it is better to use a simple and robust classifier.
See Fig. 15 in Geman et al.

Take-away today
What is clustering?
Applications of clustering in information retrieval
K-means algorithm
Evaluation of clustering
How many clusters?

Clustering: Definition
(Document) clustering is the process of grouping a set of documents into clusters of similar documents.
Documents within a cluster should be similar.
Documents from different clusters should be dissimilar.
Clustering is the most common form of unsupervised learning.
Unsupervised = there are no labeled or annotated data.
Distinctions: hard clustering vs. soft clustering; cardinality of the clustering.

Exercise: Data set with clear cluster structure
[figure: scatter plot of points (x: 0.0–2.0, y: 0.0–2.5) forming clearly separated groups]
Propose an algorithm for finding the cluster structure in this example.

Classification vs. Clustering
Classification: supervised learning.
Clustering: unsupervised learning.
Classification: Classes are human-defined and part of the input to the learning algorithm.
Clustering: Clusters are inferred from the data without human input.
However, there are many ways of influencing the outcome of clustering: number of clusters, similarity measure, representation of documents, ...

The cluster hypothesis
Cluster hypothesis. Documents in the same cluster behave similarly with respect to relevance to information needs.
All applications of clustering in IR are based (directly or indirectly) on the cluster hypothesis.
Van Rijsbergen's original wording (1979): "closely associated documents tend to be relevant to the same requests".

Applications of clustering in IR

application               what is clustered?       benefit
search result clustering  search results           more effective information presentation to user
Scatter-Gather            (subsets of) collection  alternative user interface: "search without typing"
collection clustering     collection               effective information presentation for exploratory browsing
cluster-based retrieval   collection               higher efficiency: faster search

Search result clustering for better navigation
[figure: clustered search results]

Scatter-Gather
[figure: Scatter-Gather interface]

Global navigation: Yahoo
[figure: Yahoo directory hierarchy]

Global navigation: Medical Subject Headings (MeSH)
[figures: MeSH hierarchy, upper level and lower level]

Navigational hierarchies: Manual vs. automatic creation
Note: Yahoo/MeSH are not examples of clustering, but they are well-known examples of using a global hierarchy for navigation.
Some examples of global navigation/exploration based on clustering:
Arxiv's LDAExplore: https://arxiv.lateral.io/
Cartia Themescapes
Google News

Global navigation combined with visualization
[figures: two examples of cluster maps with visual navigation]

Global clustering for navigation: Google News
http://news.google.com
[figure: Google News front page with clustered stories]
Clustering for improving recall
To improve search recall:
Cluster docs in the collection a priori.
When a query matches a doc d, also return other docs in the cluster containing d.
Hope: the query "car" will then also return docs containing "automobile",
because the clustering algorithm groups together docs containing "car" with those containing "automobile":
both types of documents contain words like "parts", "dealer", "mercedes", "road trip".

Exercise: Data set with clear cluster structure
[figure: the scatter plot from above, repeated]
Propose an algorithm for finding the cluster structure in this example.

Desiderata for clustering
General goal: put related docs in the same cluster, put unrelated docs in different clusters. We'll see different ways of formalizing this.
The number of clusters should be appropriate for the data set we are clustering.
Initially, we will assume the number of clusters K is given. Later: semiautomatic methods for determining K.
Secondary goals in clustering:
Avoid very small and very large clusters.
Define clusters that are easy to explain to the user.
Many others ...

Flat vs. hierarchical clustering
Flat algorithms:
Usually start with a random (partial) partitioning of docs into groups.
Refine iteratively.
Main algorithm: K-means.
Hierarchical algorithms:
Create a hierarchy.
Bottom-up, agglomerative.
Top-down, divisive.

Hard vs. soft clustering
Hard clustering: Each document belongs to exactly one cluster. More common and easier to do.
Soft clustering: A document can belong to more than one cluster. Makes more sense for applications like creating browsable hierarchies.
You may want to put sneakers in two clusters: sports apparel and shoes. You can only do that with a soft clustering approach.
This class: flat, hard clustering. Next: hierarchical, hard clustering. Then: latent semantic indexing, a form of soft clustering.
We won't have time for soft clustering. See IIR 16.5, IIR 18.
Non-exhaustive clustering: some docs are not assigned to any cluster. See references in IIR 16.

Flat algorithms
Flat algorithms compute a partition of N documents into a set of K clusters.
Given: a set of documents and the number K.
Find: a partition into K clusters that optimizes the chosen partitioning criterion.
Global optimization: exhaustively enumerate partitions, pick the optimal one. Not tractable.
Effective heuristic method: the K-means algorithm.

K-means
Perhaps the best known clustering algorithm.
Simple, works well in many cases.
Use as default/baseline for clustering documents.

Document representations in clustering
Vector space model.
As in vector space classification, we measure relatedness between vectors by Euclidean distance ...
... which is almost equivalent to cosine similarity. Almost: centroids are not length-normalized.
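To see why Euclidean distance and cosine similarity are "almost equivalent": for length-normalized vectors, $|\vec{a} - \vec{b}|^2 = 2(1 - \cos(\vec{a}, \vec{b}))$, so ranking by one is ranking by the other. A minimal Python check of this identity (illustrative only, not part of the original handout):

import numpy as np

rng = np.random.default_rng(0)
a, b = rng.random(5), rng.random(5)
a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)  # length-normalize

cos = a @ b                     # cosine similarity of unit vectors
sq_dist = np.sum((a - b) ** 2)  # squared Euclidean distance
# |a - b|^2 = |a|^2 + |b|^2 - 2 a·b = 2(1 - cos) for unit vectors
assert np.isclose(sq_dist, 2 * (1 - cos))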
K-means: Basic idea
Each cluster in K-means is defined by a centroid.
Objective/partitioning criterion: minimize the average squared difference from the centroid.
Recall the definition of the centroid:
$\vec{\mu}(\omega) = \frac{1}{|\omega|} \sum_{\vec{x} \in \omega} \vec{x}$
where we use ω to denote a cluster.
We try to find the minimum average squared difference by iterating two steps:
reassignment: assign each vector to its closest centroid;
recomputation: recompute each centroid as the average of the vectors that were assigned to it in reassignment.

K-means pseudocode (µk is the centroid of ωk)
K-means({x1, ..., xN}, K)
 1 (s1, s2, ..., sK) ← SelectRandomSeeds({x1, ..., xN}, K)
 2 for k ← 1 to K
 3 do µk ← sk
 4 while stopping criterion has not been met
 5 do for k ← 1 to K
 6    do ωk ← {}
 7    for n ← 1 to N
 8    do j ← arg min_j' |µ_j' − x_n|
 9       ω_j ← ω_j ∪ {x_n}              (reassignment of vectors)
10    for k ← 1 to K
11    do µk ← (1/|ωk|) Σ_{x∈ωk} x       (recomputation of centroids)
12 return {µ1, ..., µK}
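The pseudocode translates almost line by line into Python. The following NumPy sketch is one possible transcription (the function and variable names are ours; it stops when assignments no longer change, which is one common choice of stopping criterion):

import numpy as np

def kmeans(X, K, max_iter=100, seed=0):
    """Cluster the rows of the N x M matrix X into K clusters."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    # SelectRandomSeeds: K distinct documents as initial centroids
    mu = X[rng.choice(len(X), size=K, replace=False)].copy()
    assignment = np.full(len(X), -1)
    for _ in range(max_iter):
        # Reassignment: each vector goes to its closest centroid.
        dists = np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2)
        new_assignment = dists.argmin(axis=1)
        if np.array_equal(new_assignment, assignment):
            break                      # fixed point: clustering has converged
        assignment = new_assignment
        # Recomputation: each centroid becomes the mean of its members.
        for k in range(K):
            members = X[assignment == k]
            if len(members):           # guard against an empty cluster
                mu[k] = members.mean(axis=0)
    return mu, assignment

For real document collections one would use sparse tf-idf matrices and avoid materializing the full N × K × M distance tensor; the dense version above is only meant to mirror the pseudocode.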
Worked example
Exercise: (i) Guess what the optimal clustering into two clusters is in this case; (ii) compute the centroids of the clusters.
[figures: a set of points to be clustered; random selection of two initial centroids; then alternating "assign points to closest centroid" and "recompute cluster centroids" steps, with each point labeled 1 or 2 by its current cluster; a few points near the boundary switch labels from iteration to iteration, and the final figure shows the centroids and assignments after convergence]

K-means is guaranteed to converge: Proof
RSS (residual sum of squares) = the sum of all squared distances between each document vector and its closest centroid.
RSS decreases during each reassignment step, because each vector is moved to a closer centroid.
RSS decreases during each recomputation step (see the derivation below).
There is only a finite number of clusterings.
Thus: we must reach a fixed point. (Assumption: ties are broken consistently.)
Finite set & monotonically decreasing → convergence.
Recomputation decreases average distance
$RSS = \sum_{k=1}^{K} RSS_k$ – the residual sum of squares (the "goodness" measure)
$RSS_k(\vec{v}) = \sum_{\vec{x} \in \omega_k} \|\vec{v} - \vec{x}\|^2 = \sum_{\vec{x} \in \omega_k} \sum_{m=1}^{M} (v_m - x_m)^2$
$\frac{\partial RSS_k(\vec{v})}{\partial v_m} = \sum_{\vec{x} \in \omega_k} 2(v_m - x_m) = 0$
$v_m = \frac{1}{|\omega_k|} \sum_{\vec{x} \in \omega_k} x_m$
The last line is the componentwise definition of the centroid! We minimize RSS_k when the old centroid is replaced with the new centroid. RSS, the sum of the RSS_k, must then also decrease during recomputation.

K-means is guaranteed to converge
But we don't know how long convergence will take!
If we don't care about a few docs switching back and forth, then convergence is usually fast (< 10–20 iterations).
However, complete convergence can take many more iterations.

Optimality of K-means
Convergence ≠ optimality: convergence does not mean that we converge to the optimal clustering!
This is the great weakness of K-means.
If we start with a bad set of seeds, the resulting clustering can be horrible.

Exercise: Suboptimal clustering
[figure: six points d1, ..., d6 arranged on a grid (x: 0–4, y: 0–3)]
What is the optimal clustering for K = 2?
Do we converge on this clustering for arbitrary seeds d_i, d_j?

Initialization of K-means
Random seed selection is just one of many ways K-means can be initialized.
Random seed selection is not very robust: it is easy to get a suboptimal clustering.
Better ways of computing initial centroids:
Select seeds not randomly, but using some heuristic (e.g., filter out outliers or find a set of seeds that has "good coverage" of the document space).
Use hierarchical clustering to find good seeds.
Select i (e.g., i = 10) different random sets of seeds, do a K-means clustering for each, and select the clustering with the lowest RSS (see the sketch below).
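The last strategy in the list above is easy to express in code. A sketch, reusing the kmeans function from the earlier sketch and the RSS objective defined in the convergence proof (i and the helper names are our choices):

import numpy as np

def rss(X, mu, assignment):
    """Residual sum of squares: total squared distance to the assigned centroid."""
    X = np.asarray(X, dtype=float)
    return float(((X - mu[assignment]) ** 2).sum())

def kmeans_restarts(X, K, i=10):
    """Run K-means with i random seed sets; keep the lowest-RSS clustering."""
    best_cost, best = np.inf, None
    for seed in range(i):
        mu, assignment = kmeans(X, K, seed=seed)
        cost = rss(X, mu, assignment)
        if cost < best_cost:
            best_cost, best = cost, (mu, assignment)
    return best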
Time complexity of K-means
Computing one distance between two vectors is O(M).
Reassignment step: O(KNM) (we need to compute KN document–centroid distances).
Recomputation step: O(NM) (we need to add each of the document's < M values to one of the centroids).
Assume the number of iterations is bounded by I.
Overall complexity: O(IKNM) – linear in all important dimensions.
However: this is not a real worst-case analysis. In pathological cases, complexity can be worse than linear.

What is a good clustering?
Internal criteria. Example of an internal criterion: RSS in K-means.
But an internal criterion often does not evaluate the actual utility of a clustering in the application.
Alternative: external criteria. Evaluate with respect to a human-defined classification.

External criteria for clustering quality
Based on a gold standard data set, e.g., the Reuters collection we also used for the evaluation of classification.
Goal: the clustering should reproduce the classes in the gold standard.
(But we only want to reproduce how documents are divided into groups, not the class labels.)
First measure for how well we were able to reproduce the classes: purity.

External criterion: Purity
$\mathrm{purity}(\Omega, C) = \frac{1}{N} \sum_{k} \max_{j} |\omega_k \cap c_j|$
$\Omega = \{\omega_1, \omega_2, \ldots, \omega_K\}$ is the set of clusters and $C = \{c_1, c_2, \ldots, c_J\}$ is the set of classes.
For each cluster ωk: find the class cj with the most members nkj in ωk.
Sum all nkj and divide by the total number of points.

Example for computing purity
[figure: 17 points in three clusters; cluster 1 holds five x's and one o, cluster 2 holds four o's, one x, and one ⋄, cluster 3 holds three ⋄'s and two x's]
To compute purity: 5 = max_j |ω1 ∩ c_j| (class x, cluster 1); 4 = max_j |ω2 ∩ c_j| (class o, cluster 2); and 3 = max_j |ω3 ∩ c_j| (class ⋄, cluster 3).
Purity is (1/17) × (5 + 4 + 3) ≈ 0.71.

Another external criterion: Rand index
Purity can be increased easily by increasing K. A measure that does not have this problem: the Rand index.
Definition: $RI = \frac{TP + TN}{TP + FP + FN + TN}$
Based on a 2×2 contingency table of all pairs of documents:

                   same cluster          different clusters
same class         true positives (TP)   false negatives (FN)
different classes  false positives (FP)  true negatives (TN)

TP+FN+FP+TN is the total number of pairs; it equals $\binom{N}{2}$ for N documents. Example: $\binom{17}{2} = 136$ in the o/⋄/x example.
Each pair is either positive or negative (the clustering puts the two documents in the same or in different clusters) ...
... and either "true" (correct) or "false" (incorrect): the clustering decision is correct or incorrect.

Rand index: Example
As an example, we compute RI for the o/⋄/x example. We first compute TP + FP. The three clusters contain 6, 6, and 5 points, respectively, so the total number of "positives", i.e., pairs of documents that are in the same cluster, is:
$TP + FP = \binom{6}{2} + \binom{6}{2} + \binom{5}{2} = 40$
Of these, the x pairs in cluster 1, the o pairs in cluster 2, the ⋄ pairs in cluster 3, and the x pair in cluster 3 are true positives:
$TP = \binom{5}{2} + \binom{4}{2} + \binom{3}{2} + \binom{2}{2} = 20$
Thus, FP = 40 − 20 = 20. FN and TN are computed similarly.

Rand measure for the o/⋄/x example

                   same cluster  different clusters
same class         TP = 20       FN = 24
different classes  FP = 20       TN = 72

RI is then (20 + 72)/(20 + 20 + 24 + 72) ≈ 0.68.
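Both purity and the Rand index reduce to simple counting, so they are easy to verify in code. The following sketch reproduces the o/⋄/x numbers above (the label encoding is our choice, with 'd' standing in for ⋄; math.comb computes the binomial coefficients used in the worked example):

from collections import Counter
from math import comb

def purity(clusters, classes):
    """Sum each cluster's majority-class count, divide by the number of docs."""
    total = 0
    for k in set(clusters):
        members = Counter(c for w, c in zip(clusters, classes) if w == k)
        total += members.most_common(1)[0][1]
    return total / len(clusters)

def rand_index(clusters, classes):
    """RI = (TP + TN) / number of document pairs."""
    pairs = comb(len(clusters), 2)
    same_cluster = sum(comb(n, 2) for n in Counter(clusters).values())  # TP + FP
    same_class = sum(comb(n, 2) for n in Counter(classes).values())     # TP + FN
    tp = sum(comb(n, 2) for n in Counter(zip(clusters, classes)).values())
    tn = pairs - same_cluster - same_class + tp  # inclusion-exclusion
    return (tp + tn) / pairs

# The o/⋄/x example: clusters of size 6, 6, 5 with the class mix shown above.
clusters = [1]*6 + [2]*6 + [3]*5
classes = ['x']*5 + ['o'] + ['o']*4 + ['x', 'd'] + ['d']*3 + ['x']*2
print(round(purity(clusters, classes), 2))      # 0.71
print(round(rand_index(clusters, classes), 2))  # 0.68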
Two other external evaluation measures
Normalized mutual information (NMI): how much information does the clustering contain about the classification?
Singleton clusters (number of clusters = number of docs) have maximum MI.
Therefore: normalize by the entropy of clusters and classes.
F measure: like Rand, but "precision" and "recall" can be weighted.

Evaluation results for the o/⋄/x example

                    purity  NMI   RI    F5
lower bound         0.0     0.0   0.0   0.0
maximum             1.0     1.0   1.0   1.0
value for example   0.71    0.36  0.68  0.46

All four measures range from 0 (really bad clustering) to 1 (perfect clustering).

How many clusters?
The number of clusters K is given in many applications; e.g., there may be an external constraint on K. Example: in the case of Scatter-Gather, it was hard to show more than 10–20 clusters on a monitor in the 90s.
What if there is no external constraint? Is there a "right" number of clusters?
One way to go: define an optimization criterion; given the docs, find the K for which the optimum is reached.
What optimization criterion can we use? We can't use RSS or average squared distance from the centroid as the criterion: that would always choose K = N clusters.

Exercise
Your job is to develop the clustering algorithms for a competitor to news.google.com.
You want to use K-means clustering. How would you determine K?

Simple objective function for K: Basic idea
Start with 1 cluster (K = 1).
Keep adding clusters (= keep increasing K).
Add a penalty for each new cluster.
Then trade off cluster penalties against average squared distance from the centroid.
Choose the value of K with the best tradeoff.

Simple objective function for K: Formalization
Given a clustering, define the cost of a document as its (squared) distance to the centroid.
Define the total distortion RSS(K) as the sum of all individual document costs (corresponds to average distance).
Then: penalize each cluster with a cost λ. For a clustering with K clusters, the total cluster penalty is thus Kλ.
Define the total cost of a clustering as distortion plus total cluster penalty: RSS(K) + Kλ.
Select the K that minimizes RSS(K) + Kλ (see the sketch below).
We still need to determine a good value for λ ...

Finding the "knee" in the curve
[plot: residual sum of squares (about 1950 down to 1750) against the number of clusters (2–10)]
Pick the number of clusters where the curve "flattens". Here: 4 or 9.
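A sketch of the penalized selection of K, reusing the kmeans and rss helpers from the earlier sketches (λ remains the open parameter noted above, so it is passed in explicitly):

def choose_k(X, k_max, lam):
    """Return the K in 1..k_max minimizing total cost RSS(K) + K*lambda."""
    costs = {}
    for K in range(1, k_max + 1):
        mu, assignment = kmeans(X, K)          # from the earlier sketch
        costs[K] = rss(X, mu, assignment) + K * lam
    return min(costs, key=costs.get)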
Take-away today
What is clustering?
Applications of clustering in information retrieval
K-means algorithm
Evaluation of clustering
How many clusters?

Resources
Chapter 16 of IIR
Resources at https://www.fi.muni.cz/~sojka/PV211/ and http://cislmu.org, materials in MU IS and the FI MU library
Keith van Rijsbergen on the cluster hypothesis (he was one of the originators)
Bing/Carrot2/Clusty: search result clustering systems
Stirling number: the number of distinct k-clusterings of n items