PV211: Introduction to Information Retrieval
https://www.fi.muni.cz/~sojka/PV211

IIR 17: Hierarchical clustering
Handout version

Petr Sojka, Hinrich Schütze et al.
Faculty of Informatics, Masaryk University, Brno
Center for Information and Language Processing, University of Munich
2024-05-01 (compiled on 2024-06-10 08:55:35)

Overview
1 Introduction
2 Single-link/Complete-link
3 Centroid/GAAC
4 Labeling clusters
5 Variants

Take-away today
Introduction to hierarchical clustering
Single-link and complete-link clustering
Centroid and group-average agglomerative clustering (GAAC)
Bisecting K-means
How to label clusters automatically

Hierarchical clustering
Our goal in hierarchical clustering is to create a hierarchy like the one we saw earlier in Reuters:
[Figure: a hierarchy with root TOP branching into "regions" (e.g., UK, China, Kenya, France) and "industries" (e.g., coffee, poultry, oil & gas).]
We want to create this hierarchy automatically.
We can do this either top-down or bottom-up.
The best known bottom-up method is hierarchical agglomerative clustering.

Hierarchical agglomerative clustering (HAC)
HAC creates a hierarchy in the form of a binary tree.
It assumes a similarity measure for determining the similarity of two clusters.
Up to now, our similarity measures were for documents.
We will look at four different cluster similarity measures.

HAC: Basic algorithm
Start with each document in a separate cluster.
Then repeatedly merge the two clusters that are most similar, until there is only one cluster.
The history of merging is a hierarchy in the form of a binary tree.
The standard way of depicting this history is a dendrogram.

A dendrogram
[Figure: dendrogram of a clustering of 30 Reuters-RCV1 documents (headlines such as "Ag trade reform.", "Back-to-school spending is up", ..., "Fed keeps interest rates steady"), with the similarity scale running from 1.0 to 0.0.]
The history of mergers can be read off from bottom to top.
The horizontal line of each merger tells us what the similarity of the merger was.
We can cut the dendrogram at a particular point (e.g., at 0.1 or 0.4) to get a flat clustering.
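As a concrete illustration of building a merge history and then cutting it to obtain a flat clustering, here is a minimal sketch using SciPy (not part of the original slides; the toy vectors and the cutting threshold are made up):

```python
# Minimal sketch: HAC with SciPy, then cutting the dendrogram for a flat clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.random((30, 5))                            # 30 toy "documents", 5 features
X /= np.linalg.norm(X, axis=1, keepdims=True)      # length-normalize, as in cosine space

# SciPy works with distances; for unit vectors, cosine distance = 1 - cosine similarity.
Z = linkage(X, method="single", metric="cosine")   # the merge history (the dendrogram)

# Cutting the dendrogram at similarity 0.4 corresponds to distance 1 - 0.4 = 0.6.
labels = fcluster(Z, t=0.6, criterion="distance")
print(labels)                                      # one cluster id per document
```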
Divisive clustering
Divisive clustering is top-down.
It is an alternative to HAC (which is bottom-up).
Divisive clustering:
Start with all docs in one big cluster.
Then recursively split clusters.
Eventually each node forms a cluster on its own.
→ Bisecting K-means at the end
For now: HAC (= bottom-up)

Naive HAC algorithm
SimpleHAC(d1, ..., dN)
  for n ← 1 to N
    do for i ← 1 to N
         do C[n][i] ← Sim(dn, di)
       I[n] ← 1                          (keeps track of active clusters)
  A ← []                                 (collects clustering as a sequence of merges)
  for k ← 1 to N − 1
    do ⟨i, m⟩ ← arg max{⟨i,m⟩ : i ≠ m ∧ I[i] = 1 ∧ I[m] = 1} C[i][m]
       A.Append(⟨i, m⟩)                  (store merge)
       for j ← 1 to N
         do                              (use i as representative for ⟨i, m⟩)
            C[i][j] ← Sim(⟨i, m⟩, j)
            C[j][i] ← Sim(⟨i, m⟩, j)
       I[m] ← 0                          (deactivate cluster)
  return A

Computational complexity of the naive algorithm
First, we compute the similarity of all N × N pairs of documents.
Then, in each of N iterations:
We scan the O(N × N) similarities to find the maximum similarity.
We merge the two clusters with maximum similarity.
We compute the similarity of the new cluster with all other (surviving) clusters.
There are O(N) iterations, each performing an O(N × N) "scan" operation.
Overall complexity is O(N³).
We'll look at more efficient algorithms later.
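The SimpleHAC pseudocode above translates almost line for line into Python. The following sketch is my own transcription (not from the slides); it fixes the cluster similarity to the single-link rule so that the update step is concrete, and uses dot products of unit vectors as document similarity:

```python
# Naive O(N^3) HAC, following SimpleHAC; single-link is used as the
# cluster similarity so that the similarity-matrix update is concrete.
import numpy as np

def simple_hac(docs):
    N = len(docs)
    C = docs @ docs.T                  # C[n][i] = Sim(dn, di), dot products
    I = np.ones(N, dtype=bool)         # which clusters are still active
    A = []                             # the sequence of merges
    for _ in range(N - 1):
        # scan all active pairs for the maximum similarity
        best, bi, bm = -np.inf, -1, -1
        for i in range(N):
            for m in range(N):
                if i != m and I[i] and I[m] and C[i, m] > best:
                    best, bi, bm = C[i, m], i, m
        A.append((bi, bm))
        # bi becomes the representative of the merged cluster; single-link update
        C[bi, :] = np.maximum(C[bi, :], C[bm, :])
        C[:, bi] = C[bi, :]
        I[bm] = False                  # deactivate cluster bm
    return A

docs = np.random.default_rng(1).random((6, 4))
docs /= np.linalg.norm(docs, axis=1, keepdims=True)
print(simple_hac(docs))                # five merges for six documents
```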
Key question: How to define cluster similarity
Single-link: Maximum similarity
Maximum similarity of any two documents
Complete-link: Minimum similarity
Minimum similarity of any two documents
Centroid: Average "intersimilarity"
Average similarity of all document pairs (but excluding pairs of docs in the same cluster)
This is equivalent to the similarity of the centroids.
Group-average: Average "intrasimilarity"
Average similarity of all document pairs, including pairs of docs in the same cluster

Cluster similarity: Example
[Figure: a small set of points in the plane, used in the next four slides to show which document pairs each measure considers.]

Single-link: Maximum similarity
[Figure: the same points; the measure is determined by the single closest pair, one document from each cluster.]

Complete-link: Minimum similarity
[Figure: the same points; the measure is determined by the single most distant pair, one document from each cluster.]

Centroid: Average intersimilarity
intersimilarity = similarity of two documents in different clusters
[Figure: the same points; all pairs with one document from each cluster contribute.]

Group average: Average intrasimilarity
intrasimilarity = similarity of any pair, including cases where the two documents are in the same cluster
[Figure: the same points; all pairs contribute, including pairs within the same cluster.]

Cluster similarity: Larger example
[Figures: the same four measures illustrated again on a larger set of points.]

Single link HAC
The similarity of two clusters is the maximum intersimilarity – the maximum similarity of a document from the first cluster and a document from the second cluster.
Once we have merged two clusters, how do we update the similarity matrix?
This is simple for single link:
sim(ωi, (ωk1 ∪ ωk2)) = max(sim(ωi, ωk1), sim(ωi, ωk2))
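This update rule is what makes single-link cheap to maintain: after a merge, an entire row and column of the similarity matrix can be refreshed with element-wise maxima. A small sketch of just that step (my own illustration, with a made-up cluster-similarity matrix):

```python
# After merging clusters k1 and k2, single-link updates the similarity of the
# merged cluster to every other cluster i as max(sim(i, k1), sim(i, k2)).
import numpy as np

def single_link_merge(C, active, k1, k2):
    C[k1, :] = np.maximum(C[k1, :], C[k2, :])   # sim(i, k1 ∪ k2) for all i
    C[:, k1] = C[k1, :]                         # keep the matrix symmetric
    active[k2] = False                          # k1 now represents the merged cluster
    return C, active

C = np.array([[1.0, 0.8, 0.2, 0.1],             # toy cluster-similarity matrix
              [0.8, 1.0, 0.4, 0.3],
              [0.2, 0.4, 1.0, 0.9],
              [0.1, 0.3, 0.9, 1.0]])
active = np.ones(4, dtype=bool)
C, active = single_link_merge(C, active, 0, 1)  # merge clusters 0 and 1
print(C[0])                                     # [1.  1.  0.4 0.3]
```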
This dendrogram was produced by single-link
[Figure: single-link dendrogram of the 30 Reuters-RCV1 documents from before, similarity scale 1.0 to 0.0.]
Notice: many small clusters (1 or 2 members) being added to the main cluster.
There is no balanced 2-cluster or 3-cluster clustering that can be derived by cutting the dendrogram.

Complete link HAC
The similarity of two clusters is the minimum intersimilarity – the minimum similarity of a document from the first cluster and a document from the second cluster.
Once we have merged two clusters, how do we update the similarity matrix?
Again, this is simple:
sim(ωi, (ωk1 ∪ ωk2)) = min(sim(ωi, ωk1), sim(ωi, ωk2))
We measure the similarity of two clusters by computing the diameter of the cluster that we would get if we merged them.

Complete-link dendrogram
[Figure: complete-link dendrogram of the same 30 Reuters-RCV1 documents.]
Notice that this dendrogram is much more balanced than the single-link one.
We can create a 2-cluster clustering with two clusters of about the same size.

Exercise: Compute single and complete link clusterings
[Figure: eight points d1–d8 in the plane.]

Single-link clustering
[Figure: the single-link clustering of d1–d8.]

Complete link clustering
[Figure: the complete-link clustering of d1–d8.]
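Exercises like this one are easy to check mechanically. The sketch below is my own (the coordinates are made up and are not the ones from the figure); it computes 2-cluster single-link and complete-link clusterings of eight points with SciPy:

```python
# Comparing single-link and complete-link flat clusterings on a toy point set.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two rows of four points; the two criteria disagree on this data:
# single-link groups the rows, complete-link groups the left and right halves.
X = np.array([[0.0, 0.0], [1.0, 0.0], [2.2, 0.0], [3.2, 0.0],
              [0.0, 2.0], [1.0, 2.0], [2.2, 2.0], [3.2, 2.0]])

for method in ("single", "complete"):
    Z = linkage(X, method=method, metric="euclidean")
    labels = fcluster(Z, t=2, criterion="maxclust")  # cut to get 2 clusters
    print(method, labels)
```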
Single-link vs. complete-link clustering
[Figure: the single-link and complete-link clusterings of d1–d8 side by side.]

Single-link: Chaining
[Figure: a set of points forming a long horizontal chain.]
Single-link clustering often produces long, straggly clusters.
For most applications, these are undesirable.

What 2-cluster clustering will complete-link produce?
[Figure: five points d1–d5 on a line.]
Coordinates: 1 + 2 × ε, 4, 5 + 2 × ε, 6, 7 − ε.

Complete-link: Sensitivity to outliers
[Figure: the same five points d1–d5.]
The complete-link clustering of this set splits d2 from its right neighbors – clearly undesirable.
The reason is the outlier d1.
This shows that a single outlier can negatively affect the outcome of complete-link clustering.
Single-link clustering does better in this case.

Centroid HAC
The similarity of two clusters is the average intersimilarity – the average similarity of documents from the first cluster with documents from the second cluster.
A naive implementation of this definition is inefficient (O(N²)), but the definition is equivalent to computing the similarity of the centroids:
sim-cent(ωi, ωj) = µ(ωi) · µ(ωj)
Hence the name: centroid HAC.
Note: this is the dot product, not cosine similarity!

Exercise: Compute centroid clustering
[Figure: six points d1–d6 in the plane.]

Centroid clustering
[Figure: the centroid clustering of d1–d6, with cluster centroids µ1, µ2, µ3.]

Inversion in centroid clustering
In an inversion, the similarity increases during a merge sequence.
Results in an "inverted" dendrogram.
Below: similarity of the first merger (d1 ∪ d2) is −4.0, similarity of the second merger ((d1 ∪ d2) ∪ d3) is ≈ −3.5.
[Figure: three points d1, d2, d3 and the resulting inverted dendrogram.]

Inversions
Hierarchical clustering algorithms that allow inversions are inferior.
The rationale for hierarchical clustering is that at any given point, we've found the most coherent clustering for a given K.
Intuitively: smaller clusterings should be more coherent than larger clusterings.
An inversion contradicts this intuition: we have a large cluster that is more coherent than one of its subclusters.
The fact that inversions can occur in centroid clustering is a reason not to use it.
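Inversions are easy to reproduce. The slide's example is stated in terms of dot-product similarities; the sketch below is an analogous demonstration of my own, using made-up points and Euclidean centroid linkage in SciPy. The second merge happens at a smaller distance (i.e., higher similarity) than the first, which is exactly an inversion:

```python
# Demonstrating an inversion with centroid linkage: three points forming a
# near-equilateral triangle.  The first merge is at distance 2.0; the merged
# cluster's centroid is then only 1.8 away from the third point.
import numpy as np
from scipy.cluster.hierarchy import linkage

X = np.array([[0.0, 0.0],
              [2.0, 0.0],
              [1.0, 1.8]])

Z = linkage(X, method="centroid")    # centroid linkage uses Euclidean distance
print(Z[:, 2])                       # merge distances: [2.0, 1.8] -- not monotone
```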
Group-average agglomerative clustering (GAAC)
GAAC also has an "average-similarity" criterion, but does not have inversions.
The similarity of two clusters is the average intrasimilarity – the average similarity of all document pairs (including those from the same cluster).
But we exclude self-similarities.

Group-average agglomerative clustering (GAAC)
Again, a naive implementation is inefficient (O(N²)) and there is an equivalent, more efficient, centroid-based definition:
sim-ga(ωi, ωj) = [ (Σ_{dm ∈ ωi ∪ ωj} dm)² − (Ni + Nj) ] / [ (Ni + Nj)(Ni + Nj − 1) ]
Again, this is the dot product, not cosine similarity.
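The centroid-based definition relies on the documents being length-normalized: each self-similarity dm · dm is then 1, so subtracting Ni + Nj from the squared vector sum removes exactly the self-pairs. A small check of this equivalence (my own sketch, not from the slides; function names are mine):

```python
# Verifying that the GAAC formula equals the brute-force average of all
# pairwise dot products in the merged cluster (self-similarities excluded),
# assuming unit-length document vectors.
import numpy as np

def sim_ga(omega_i, omega_j):
    V = np.vstack([omega_i, omega_j])
    M = len(V)                                   # M = Ni + Nj
    s = V.sum(axis=0)                            # vector sum of the merged cluster
    return (s @ s - M) / (M * (M - 1))

def sim_ga_bruteforce(omega_i, omega_j):
    V = np.vstack([omega_i, omega_j])
    M = len(V)
    total = sum(V[a] @ V[b] for a in range(M) for b in range(M) if a != b)
    return total / (M * (M - 1))

rng = np.random.default_rng(2)
A = rng.random((3, 4)); A /= np.linalg.norm(A, axis=1, keepdims=True)
B = rng.random((2, 4)); B /= np.linalg.norm(B, axis=1, keepdims=True)
print(sim_ga(A, B), sim_ga_bruteforce(A, B))     # the two values agree
```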
Which HAC clustering should I use?
Don't use centroid HAC because of inversions.
In most cases: GAAC is best since it isn't subject to chaining and sensitivity to outliers.
However, we can only use GAAC for vector representations.
For other types of document representations (or if only pairwise similarities for documents are available): use complete-link.
There are also some applications for single-link (e.g., duplicate detection in web search).

Flat or hierarchical clustering?
For high efficiency, use flat clustering (or perhaps bisecting K-means).
For deterministic results: HAC.
When a hierarchical structure is desired: use a hierarchical algorithm.
HAC can also be applied if K cannot be predetermined (we can start without knowing K).

Major issue in clustering – labeling
After a clustering algorithm finds a set of clusters: how can they be useful to the end user?
We need a pithy label for each cluster.
For example, in search result clustering for "jaguar", the labels of the three clusters could be "animal", "car", and "operating system".
Topic of this section: How can we automatically find good labels for clusters?

Exercise
Come up with an algorithm for labeling clusters.
Input: a set of documents, partitioned into K clusters (flat clustering)
Output: a label for each cluster
Part of the exercise: What types of labels should we consider? Words?

Discriminative labeling
To label cluster ω, compare ω with all other clusters.
Find terms or phrases that distinguish ω from the other clusters.
We can use any of the feature selection criteria we introduced in text classification to identify discriminating terms: mutual information, χ², and frequency (but the latter is actually not discriminative).

Non-discriminative labeling
Select terms or phrases based solely on information from the cluster itself.
E.g., select terms with high weights in the centroid (if we are using a vector space model).
Non-discriminative methods sometimes select frequent terms that do not distinguish clusters, e.g., Monday, Tuesday, . . . in newspaper text.

Using titles for labeling clusters
Terms and phrases are hard to scan and condense into a holistic idea of what the cluster is about.
Alternative: titles
For example, the titles of two or three documents that are closest to the centroid.
Titles are easier to scan than a list of phrases.

Cluster labeling: Example
Three labeling methods: the most prominent terms in the centroid, differential labeling using mutual information (MI), and the title of the document closest to the centroid.

Cluster 4 (622 docs)
  centroid: oil plant mexico production crude power 000 refinery gas bpd
  mutual information: plant oil production barrels crude bpd mexico dolly capacity petroleum
  title: MEXICO: Hurricane Dolly heads for Mexico coast

Cluster 9 (1017 docs)
  centroid: police security russian people military peace killed told grozny court
  mutual information: police killed military security peace told troops forces rebels people
  title: RUSSIA: Russia's Lebed meets rebel chief in Chechnya

Cluster 10 (1259 docs)
  centroid: 00 000 tonnes traders futures wheat prices cents september tonne
  mutual information: delivery traders futures tonne tonnes desk wheat prices 000 00
  title: USA: Export Business - Grain/oilseeds complex

All three methods do a pretty good job.

Bisecting K-means: A top-down algorithm
Start with all documents in one cluster.
Split the cluster into 2 using K-means.
Of the clusters produced so far, select one to split (e.g., select the largest one).
Repeat until we have produced the desired number of clusters.

Bisecting K-means
BisectingKMeans(d1, ..., dN)
  ω0 ← {d1, ..., dN}
  leaves ← {ω0}
  for k ← 1 to K − 1
    do ωk ← PickClusterFrom(leaves)
       {ωi, ωj} ← KMeans(ωk, 2)
       leaves ← leaves \ {ωk} ∪ {ωi, ωj}
  return leaves

Bisecting K-means
If we don't generate a complete hierarchy, then a top-down algorithm like bisecting K-means is much more efficient than HAC algorithms.
But bisecting K-means is not deterministic.
There are deterministic versions of bisecting K-means (see the resources at the end), but they are much less efficient.
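A compact way to see the algorithm in action is to drive scikit-learn's KMeans from a small loop. This is a minimal sketch under my own assumptions (always splitting the largest remaining leaf, toy data), not the course's reference implementation:

```python
# Bisecting K-means: repeatedly split the largest remaining cluster with 2-means.
import numpy as np
from sklearn.cluster import KMeans

def bisecting_kmeans(X, K, seed=0):
    leaves = [np.arange(len(X))]                 # start: one cluster with all docs
    while len(leaves) < K:
        leaves.sort(key=len)
        omega = leaves.pop()                     # pick the largest cluster to split
        split = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(X[omega])
        leaves += [omega[split == 0], omega[split == 1]]
    return leaves

X = np.random.default_rng(3).random((20, 5))
for i, cluster in enumerate(bisecting_kmeans(X, K=4)):
    print(i, sorted(cluster.tolist()))
```

Recent scikit-learn releases also ship a ready-made sklearn.cluster.BisectingKMeans estimator that packages the same idea.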
Efficient single link clustering
SingleLinkClustering(d1, ..., dN, K)
  for n ← 1 to N
    do for i ← 1 to N
         do C[n][i].sim ← Sim(dn, di)
            C[n][i].index ← i
       I[n] ← n
       NBM[n] ← arg max_{X ∈ {C[n][i] : n ≠ i}} X.sim
  A ← []
  for n ← 1 to N − 1
    do i1 ← arg max_{i : I[i] = i} NBM[i].sim
       i2 ← I[NBM[i1].index]
       A.Append(⟨i1, i2⟩)
       for i ← 1 to N
         do if I[i] = i ∧ i ≠ i1 ∧ i ≠ i2
              then C[i1][i].sim ← C[i][i1].sim ← max(C[i1][i].sim, C[i2][i].sim)
            if I[i] = i2
              then I[i] ← i1
       NBM[i1] ← arg max_{X ∈ {C[i1][i] : I[i] = i ∧ i ≠ i1}} X.sim
  return A

Time complexity of HAC
The single-link algorithm we just saw is O(N²).
Much more efficient than the O(N³) algorithm we looked at earlier!
There are also efficient algorithms for complete-link, centroid and GAAC (Θ(N² log N); see the comparison below).

Combination similarities of the four algorithms
sim(ℓ, k1, k2), the similarity of cluster ℓ to the merge of clusters k1 and k2 (vm and vℓ are the vector sums of the documents in the two clusters, Nm and Nℓ their sizes):
single-link: max(sim(ℓ, k1), sim(ℓ, k2))
complete-link: min(sim(ℓ, k1), sim(ℓ, k2))
centroid: ((1/Nm) vm) · ((1/Nℓ) vℓ)
group-average: [ (vm + vℓ)² − (Nm + Nℓ) ] / [ (Nm + Nℓ)(Nm + Nℓ − 1) ]

Comparison of HAC algorithms
single-link: combination similarity = max intersimilarity of any 2 docs; time complexity Θ(N²); optimal: yes; comment: chaining effect
complete-link: combination similarity = min intersimilarity of any 2 docs; time complexity Θ(N² log N); optimal: no; comment: sensitive to outliers
group-average: combination similarity = average of all sims; time complexity Θ(N² log N); optimal: no; comment: best choice for most applications
centroid: combination similarity = average intersimilarity; time complexity Θ(N² log N); optimal: no; comment: inversions can occur

What to do with the hierarchy?
Use as is (e.g., for browsing as in the Yahoo hierarchy).
Cut at a predetermined threshold.
Cut to get a predetermined number of clusters K.
Cutting ignores the hierarchy below and above the cutting line.

Take-away today
Introduction to hierarchical clustering
Single-link and complete-link clustering
Centroid and group-average agglomerative clustering (GAAC)
Bisecting K-means
How to label clusters automatically

Resources
Chapter 17 of IIR
Resources at https://www.fi.muni.cz/~sojka/PV211/ and http://cislmu.org, materials in MU IS and FI MU library
Columbia Newsblaster (a precursor of Google News): McKeown et al. (2002)
Bisecting K-means clustering: Steinbach et al. (2000)
PDDP (similar to bisecting K-means; deterministic, but also less efficient): Savaresi and Boley (2004)