Tree Learning
Based on the ML lecture by Raymond J. Mooney and the book by Peter Flach.

Machine learning settings (from Peter Flach's book)
(Figure: overview of machine learning settings.)

Machine learning settings
•  Logical approach – decision and regression trees, rules
•  Probabilistic methods – Bayesian methods
•  Linear methods – linear discriminant, SVM, perceptron
•  Distance-based methods – lazy learning (kNN), clustering

Learning Trees
•  Supervised method – the data are classified into classes, i.e. the data contain a target attribute.
•  Decision trees: finite number of classes (≥ 2).
•  Regression trees: the class is continuous.

Learning Trees 2
•  The classifier is a tree that represents a hypothesis in disjunctive normal form.
•  Finding a minimal decision tree (nodes, leaves, or depth) is an NP-hard optimization problem.
•  A heuristic algorithm is therefore used to build the tree.
•  We want to pick a feature that creates subsets of examples that are – in the case of decision trees – relatively "pure" in a single class, so they are "closer" to being leaf nodes.

Learning Decision Trees

Data example
(Figure: a small training set of feature vectors with class labels.)

Decision Trees
•  Tree-based classifiers for instances represented as feature vectors. Nodes test features, there is one branch for each value of the feature, and leaves specify the category.
•  Can represent arbitrary conjunctions and disjunctions, and hence any classification function over discrete feature vectors.
•  Can be rewritten as a set of rules, i.e. in disjunctive normal form (DNF):
   –  red ∧ circle → pos
   –  red ∧ circle → A;  blue → B;  red ∧ square → B;  green → C;  red ∧ triangle → C
(Figures: two example trees over color and shape – one with pos/neg leaves, one with leaves A, B, C – corresponding to the rule sets above.)

Properties of Decision Tree Learning
•  Continuous (real-valued) features can be handled by allowing nodes to split a real-valued feature into two ranges based on a threshold (e.g. length < 3 and length ≥ 3).
•  Classification trees have discrete class labels at the leaves; regression trees allow real-valued outputs at the leaves.
•  Algorithms for finding consistent trees are efficient enough to process large amounts of training data in data-mining tasks.
•  Methods have been developed for handling noisy training data (both class and feature noise).
•  Methods have been developed for handling missing feature values.

Top-Down Decision Tree Induction
•  Recursively build a tree top-down by divide and conquer.
(Figures: the training examples are first split on color; the red branch is then further split on shape, yielding pos/neg leaves.)

Decision Tree Induction Pseudocode

DTree(examples, features) returns a tree:
    If all examples are in one category,
        return a leaf node with that category label.
    Else if the set of features is empty,
        return a leaf node with the category label that is the most common in examples.
    Else pick a feature F and create a node R for it.
        For each possible value v_i of F:
            Let examples_i be the subset of examples that have value v_i for F.
            Add an outgoing edge E to node R labeled with the value v_i.
            If examples_i is empty,
                attach a leaf node to edge E labeled with the category that is the most common in examples.
            Else call DTree(examples_i, features − {F}) and attach the resulting tree as the subtree under edge E.
        Return the subtree rooted at R.
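The pseudocode above maps directly onto a short recursive implementation. Below is a minimal Python sketch of the DTree procedure, assuming examples are represented as dictionaries of feature values plus a "label" key; the data representation and helper names (most_common_label, pick_feature) are my own choices for illustration, not part of the original lecture.

from collections import Counter

def most_common_label(examples):
    """Majority class among the given examples."""
    return Counter(ex["label"] for ex in examples).most_common(1)[0][0]

def pick_feature(examples, features):
    """Placeholder splitting heuristic: simply the first feature alphabetically.
    A real implementation would use information gain (see the later slides)."""
    return sorted(features)[0]

def dtree(examples, features, feature_values):
    """Top-down decision-tree induction following the DTree pseudocode.

    examples       -- list of dicts, each with feature values and a 'label' key
    features       -- set of feature names still available for splitting
    feature_values -- dict mapping each feature to its set of possible values
    Returns a class label (leaf) or (feature, {value: subtree}) for an inner node.
    """
    labels = {ex["label"] for ex in examples}
    if len(labels) == 1:                      # all examples in one category
        return labels.pop()
    if not features:                          # no features left to split on
        return most_common_label(examples)

    F = pick_feature(examples, features)
    branches = {}
    for v in feature_values[F]:
        subset = [ex for ex in examples if ex[F] == v]
        if not subset:                        # empty branch -> majority leaf
            branches[v] = most_common_label(examples)
        else:
            branches[v] = dtree(subset, features - {F}, feature_values)
    return (F, branches)

# Example call (with data in the hypothetical format assumed above):
# tree = dtree(examples, {"color", "shape", "size"},
#              {"color": {"red", "blue", "green"},
#               "shape": {"circle", "square", "triangle"},
#               "size": {"big", "small"}})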
Picking a Good Split Feature
•  The goal is to have the resulting tree be as small as possible, per Occam's razor.
•  Finding a minimal decision tree (nodes, leaves, or depth) is an NP-hard optimization problem.
•  The top-down divide-and-conquer method does a greedy search for a simple tree, but it is not guaranteed to find the smallest one.
   –  General lesson in ML: "Greed is good."
•  We want to pick a feature that creates subsets of examples that are relatively "pure" in a single class, so they are "closer" to being leaf nodes.
•  There are a variety of heuristics for picking a good test; a popular one is based on information gain, which originated with the ID3 system of Quinlan (1979).

Entropy
•  The entropy (disorder, impurity) of a set of examples S, relative to a binary classification, is
   Entropy(S) = -p_1 \log_2 p_1 - p_0 \log_2 p_0
   where p_1 is the fraction of positive examples in S and p_0 is the fraction of negatives.
•  If all examples are in one category, entropy is zero (we define 0·log(0) = 0).
•  If examples are equally mixed (p_1 = p_0 = 0.5), entropy is at its maximum of 1.
•  Entropy can be viewed as the number of bits required on average to encode the class of an example in S when data compression (e.g. Huffman coding) is used to give shorter codes to more likely cases.
•  For multi-class problems with c categories, entropy generalizes to
   Entropy(S) = -\sum_{i=1}^{c} p_i \log_2 p_i

Entropy Plot for Binary Classification
(Figure: entropy as a function of p_1, rising from 0 at p_1 = 0 to 1 at p_1 = 0.5 and falling back to 0 at p_1 = 1.)

Information Gain
•  The information gain of a feature F is the expected reduction in entropy resulting from splitting on this feature:
   Gain(S, F) = Entropy(S) - \sum_{v \in Values(F)} \frac{|S_v|}{|S|} Entropy(S_v)
   where S_v is the subset of S having value v for feature F.
•  The entropy of each resulting subset is weighted by its relative size.
•  Example with 2 positive and 2 negative examples, so E = 1 (see the code sketch after this block):
   –  Split on size (big/small): subsets 1+,1− and 1+,1−, each with E = 1; Gain = 1 − (0.5·1 + 0.5·1) = 0
   –  Split on color (red/blue): subsets 2+,1− (E = 0.918) and 0+,1− (E = 0); Gain = 1 − (0.75·0.918 + 0.25·0) = 0.311
   –  Split on shape (circle/square): subsets 2+,1− (E = 0.918) and 0+,1− (E = 0); Gain = 1 − (0.75·0.918 + 0.25·0) = 0.311

Hypothesis Space Search
•  Performs batch learning that processes all training instances at once, rather than incremental learning that updates a hypothesis after each example.
•  Performs hill-climbing (greedy search) that may only find a locally optimal solution. It is guaranteed to find a tree consistent with any conflict-free training set (i.e. identical feature vectors are always assigned the same class), but not necessarily the simplest tree.
•  Finds a single discrete hypothesis, so there is no way to provide confidences or create useful queries.

Continuous features
•  Use a binary split of the current interval, using the same impurity measure as for discrete attributes.
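Here is a minimal Python sketch of the entropy and information-gain computations above. The toy training set is one possible dataset consistent with the 2+/2− split counts on the slide (the concrete feature values are my own assumption for illustration); running it reproduces the Gain ≈ 0.311 result for the color and shape splits.

import math
from collections import Counter

def entropy(labels):
    """Entropy(S) = -sum_i p_i * log2(p_i) over the class distribution of S."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, labels, feature):
    """Gain(S, F) = Entropy(S) - sum_v |S_v|/|S| * Entropy(S_v)."""
    n = len(labels)
    gain = entropy(labels)
    for v in {ex[feature] for ex in examples}:
        subset = [lab for ex, lab in zip(examples, labels) if ex[feature] == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# A dataset consistent with the 2+/2- example on the slide (values assumed).
examples = [
    {"size": "big",   "color": "red",  "shape": "circle"},
    {"size": "small", "color": "red",  "shape": "circle"},
    {"size": "small", "color": "red",  "shape": "square"},
    {"size": "big",   "color": "blue", "shape": "circle"},
]
labels = ["pos", "pos", "neg", "neg"]

print(information_gain(examples, labels, "size"))   # 0.0
print(information_gain(examples, labels, "color"))  # ~0.311
print(information_gain(examples, labels, "shape"))  # ~0.311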
Missing feature values
•  Remove the instance.
•  Replace the missing value with the most common (mode, mean) value.
•  Replace the missing value with the most common (mode, mean) value with respect to the class.
•  Decision trees: use a weighted impurity measure (add a relative increment to each attribute value).

Bias in Decision-Tree Induction
•  Information gain gives a bias for trees with minimal depth.
•  Implements a search (preference) bias instead of a language (restriction) bias.

Learning Regression Trees

Learning Regression Trees
Goal: to construct a regression tree that will help you determine a reasonable price for your next purchase.

Regression Trees: Impurity measure
•  We need a different impurity measure.
•  Weighted average variance.

Regression tree growing
•  To minimize the squared error on the learning sample, the prediction at a leaf is the average output of the learning cases reaching that leaf.
•  The impurity of a sample is defined by the variance of the output in that sample:
   I(LS) = \mathrm{var}_{y|LS}\{y\} = E_{y|LS}\{(y - E_{y|LS}\{y\})^2\}
•  The best split is the one that reduces the variance the most:
   \Delta I(LS, A) = \mathrm{var}_{y|LS}\{y\} - \sum_{a} \frac{|LS_a|}{|LS|} \mathrm{var}_{y|LS_a}\{y\}

Learning Regression Trees
There are three features, hence three possible splits. The chosen split maximizes the weighted average of the squared subset means, which is equivalent to minimizing the weighted average variance, since the overall mean square of the sample is fixed. (A code sketch of this computation follows after this part.)
•  Model [A100, B3, E112, M102, T202] → [1051, 1770, 1900] [4513] [77] [870] [99, 270, 625]
   –  means: 1574, 4513, 77, 870 and 331; weighted average of squared means 3.21·10^6  →  Model is the winner
•  Condition [excellent, good, fair] → [1770, 4513] [270, 870, 1051, 1900] [77, 99, 625]
   –  means: 3142, 1023 and 267; weighted average of squared means 2.68·10^6
•  Leslie [yes, no] → [625, 870, 1900] [77, 99, 270, 1051, 1770, 4513]
   –  means: 1132 and 1297; weighted average of squared means 1.55·10^6
Continue in the same manner on each resulting subset to grow the rest of the tree.

Result
(Figure: the resulting regression tree for the price data.)

Regression trees (2)
•  A regression tree is a piecewise constant function of the input attributes.
(Figure: a tree with threshold tests X1 ≤ t1, X2 ≤ t2, X1 ≤ t3, X2 ≤ t4 and leaf values r1–r5, together with the corresponding axis-parallel partition of the (X1, X2) plane.)

Regression Trees: Evaluation
•  Descend the path from the root to a leaf.
•  How accurate is the prediction?
•  Measure the difference between the correct value and the value in the leaf.

Differences from classification trees
•  The prediction is computed as the average of the numerical target variable in the rectangle (in classification trees it is a majority vote).
•  Impurity is measured by the sum of squared deviations from the leaf mean.
•  Performance is measured by RMSE (root mean squared error).
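The split scoring used in the example above can be written in a few lines of Python. This is a minimal sketch: it scores each candidate split by the weighted average of the squared subset means (equivalently, it picks the split that minimizes the weighted variance). The price groupings are taken from the slide; the function names are my own.

def weighted_avg_squared_means(subsets):
    """Weighted average of squared subset means; the best split maximizes this
    (equivalently, it minimizes the weighted average variance)."""
    n = sum(len(s) for s in subsets)
    return sum(len(s) * (sum(s) / len(s)) ** 2 for s in subsets) / n

# Price groupings from the slide, one list of subsets per candidate split.
splits = {
    "Model":     [[1051, 1770, 1900], [4513], [77], [870], [99, 270, 625]],
    "Condition": [[1770, 4513], [270, 870, 1051, 1900], [77, 99, 625]],
    "Leslie":    [[625, 870, 1900], [77, 99, 270, 1051, 1770, 4513]],
}

for name, subsets in splits.items():
    print(f"{name}: {weighted_avg_squared_means(subsets):.3g}")
# Model:     ~3.21e+06  <- winner
# Condition: ~2.68e+06
# Leslie:    ~1.55e+06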
History of Decision-Tree Research
•  Hunt and colleagues used exhaustive-search decision-tree methods (CLS) to model human concept learning in the 1960s.
•  In the late 1970s, Quinlan developed ID3 with the information-gain heuristic to learn expert systems from examples.
•  Simultaneously, Breiman, Friedman and colleagues developed CART (Classification and Regression Trees), similar to ID3.
•  In the 1980s a variety of improvements were introduced to handle noise, continuous features and missing features, and to improve splitting criteria. Various expert-system development tools resulted.
•  Quinlan's updated decision-tree package (C4.5) was released in 1993.
•  Weka includes a Java version of C4.5 called J48.

Computational Complexity
•  Worst case: a complete tree is built in which every path tests every feature. Assume n examples and m features.
•  At each level i in the tree, the remaining m − i features must be examined for each instance at that level to calculate information gains:
   \sum_{i=1}^{m} n \cdot i = O(n \cdot m^2)
•  However, the learned tree is rarely complete (the number of leaves is ≤ n). In practice, complexity is linear in both the number of features (m) and the number of training examples (n).
(Figure: a complete tree with features F1 … Fm along a path; a maximum of n examples is spread across all nodes at each of the m levels.)

Overfitting
•  Learning a tree that classifies the training data perfectly may not lead to the tree with the best generalization to unseen data.
   –  There may be noise in the training data that the tree is erroneously fitting.
   –  The algorithm may be making poor decisions towards the leaves of the tree that are based on very little data and may not reflect reliable trends.
•  A hypothesis h is said to overfit the training data if there exists another hypothesis h′ such that h has less error than h′ on the training data but greater error on independent test data.
(Figure: accuracy on training data keeps rising with hypothesis complexity, while accuracy on test data peaks and then falls.)

Overfitting Example
•  Testing Ohm's Law: V = I·R (i.e. I = (1/R)·V). Experimentally measure 10 points and fit a curve to the resulting data.
•  A 9th-degree polynomial fits the training data perfectly (n points can be fitted exactly with a polynomial of degree n − 1): "Ohm was wrong, we have found a more accurate function!"
•  Better generalization is obtained with a linear function that fits the training data less accurately.
(Figures: current (I) vs. voltage (V), once with a wiggly degree-9 fit through every point and once with a straight-line fit.)

Bias-variance tradeoff
•  Another example: a linear function works well, but a cubic seems to fit better. Is there any general view of the problem, preferably with a theoretical background?
(Figure: the same data fitted with a linear and a cubic function.)
•  Assume y = f(x) + e with E[e] = 0 and \mathrm{Var}[e] = \sigma^2, and let \hat{f} be the estimate of f learned by a classifier. Then the expected squared error decomposes as
   E[(y - \hat{f}(x))^2] = \mathrm{Bias}[\hat{f}(x)]^2 + \mathrm{Var}[\hat{f}(x)] + \sigma^2
(A code sketch illustrating this tradeoff on the Ohm's-law example follows below.)
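A minimal numpy sketch of the Ohm's-law overfitting example, assuming R = 10 Ω and a Gaussian noise level of 0.05 (both values are my own choices, not from the lecture): a degree-9 polynomial drives the training error to (near) zero, while a straight line typically generalizes much better to fresh measurements.

import numpy as np

rng = np.random.default_rng(0)
R, noise = 10.0, 0.05                       # assumed resistance and noise level

def measure(v):
    """Noisy current measurements for the true law I = V / R."""
    return v / R + rng.normal(0.0, noise, size=v.shape)

v_train = np.linspace(1.0, 10.0, 10)        # 10 experimental points
i_train = measure(v_train)
v_test = np.linspace(1.0, 10.0, 100)        # fresh data for evaluation
i_test = measure(v_test)

for degree in (1, 9):
    # np.polyfit may warn that the degree-9 fit is poorly conditioned;
    # that is part of the point of the example.
    coeffs = np.polyfit(v_train, i_train, degree)
    train_err = np.mean((np.polyval(coeffs, v_train) - i_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, v_test) - i_test) ** 2)
    print(f"degree {degree}: train MSE {train_err:.2e}, test MSE {test_err:.2e}")
# The degree-9 fit has (near-)zero training error but typically a much larger
# test error than the linear fit -- overfitting in action.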
Overfitting Noise in Decision Trees
•  Category or feature noise can easily cause overfitting.
   –  Add a noisy instance labeled pos (but really neg).
(Figure: the color/shape tree from before grows an additional split on size solely to accommodate the noisy instance.)
•  Noise can also cause different instances of the same feature vector to have different classes.
   –  E.g. the same feature vector appears again labeled neg (but really pos). It is impossible to fit such data, and the leaf must be labeled with the majority class.
•  Conflicting examples can also arise if the features are incomplete and inadequate to determine the class, or if the target concept is non-deterministic.

Overfitting Prevention (Pruning) Methods
•  Two basic approaches for decision trees:
   –  Prepruning: stop growing the tree at some point during top-down construction when there is no longer sufficient data to make reliable decisions.
   –  Postpruning: grow the full tree, then remove subtrees that do not have sufficient evidence.
•  Label the leaf resulting from pruning with the majority class of the remaining data, or with a class probability distribution.
•  Methods for determining which subtrees to prune:
   –  Cross-validation: reserve some training data as a hold-out set (validation set, tuning set) to evaluate the utility of subtrees.
   –  Statistical test: use a statistical test on the training data to determine whether any observed regularity can be dismissed as likely due to random chance.
   –  Minimum description length (MDL): determine whether the additional complexity of the hypothesis is less complex than just explicitly remembering any exceptions resulting from pruning.

Reduced Error Pruning
•  A post-pruning, cross-validation approach (a code sketch follows at the end of this part):

   Partition the training data into "grow" and "validation" sets.
   Build a complete tree from the "grow" data.
   Until accuracy on the validation set decreases, do:
       For each non-leaf node n in the tree, do:
           Temporarily prune the subtree below n and replace it with a leaf
           labeled with the current majority class at that node.
           Measure and record the accuracy of the pruned tree on the validation set.
       Permanently prune the node that results in the greatest increase
       in accuracy on the validation set.

Issues with Reduced Error Pruning
•  The problem with this approach is that it potentially "wastes" training data on the validation set.
•  The severity of this problem depends on where we are on the learning curve.
(Figure: learning curve of test accuracy vs. number of training examples.)

Cross-Validating without Losing Training Data
•  If the algorithm is modified to grow trees breadth-first rather than depth-first, we can stop growing after reaching any specified tree complexity.
•  First, run several trials of reduced-error pruning using different random splits of grow and validation sets.
•  Record the complexity of the pruned tree learned in each trial. Let C be the average pruned-tree complexity.
•  Grow a final tree breadth-first from all the training data, but stop when the complexity reaches C.
•  A similar cross-validation approach can be used to set arbitrary algorithm parameters in general.
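Below is a minimal Python sketch of the reduced-error-pruning loop described above. It assumes a tree representation of my own choosing (inner nodes are dicts {"feature": F, "branches": {value: subtree}, "majority": label} with a stored majority class; leaves are plain class labels) and examples as dicts with a "label" key, as in the earlier DTree sketch; the helper names are not from the lecture.

import copy

# Inner node: {"feature": F, "branches": {value: subtree}, "majority": label}
# Leaf: a plain class label.

def predict(tree, example):
    """Follow the example down the tree; fall back to the node majority if needed."""
    while isinstance(tree, dict):
        subtree = tree["branches"].get(example[tree["feature"]])
        if subtree is None:
            return tree["majority"]
        tree = subtree
    return tree

def accuracy(tree, examples):
    return sum(predict(tree, ex) == ex["label"] for ex in examples) / len(examples)

def internal_nodes(tree, path=()):
    """Yield the paths (sequences of branch values) of all non-leaf nodes."""
    if isinstance(tree, dict):
        yield path
        for value, subtree in tree["branches"].items():
            yield from internal_nodes(subtree, path + (value,))

def prune_at(tree, path):
    """Return a copy of the tree with the node at `path` replaced by its majority leaf."""
    pruned = copy.deepcopy(tree)
    if not path:                               # pruning the root
        return pruned["majority"]
    node = pruned
    for value in path[:-1]:
        node = node["branches"][value]
    node["branches"][path[-1]] = node["branches"][path[-1]]["majority"]
    return pruned

def reduced_error_prune(tree, validation):
    """Greedily prune whichever subtree most improves validation accuracy."""
    best_acc = accuracy(tree, validation)
    while isinstance(tree, dict):
        candidates = [(accuracy(prune_at(tree, p), validation), p)
                      for p in internal_nodes(tree)]
        acc, path = max(candidates, key=lambda c: c[0])
        if acc < best_acc:                     # no pruning step helps any more
            break
        tree, best_acc = prune_at(tree, path), acc
    return tree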
Additional Decision Tree Issues
•  Better splitting criteria:
   –  Information gain (IG) prefers features with many values.
   –  Gain ratio: takes into account the number and size of the daughter nodes into which an attribute splits the dataset, disregarding any information about the class (this is the SplitInfo term, again computed with the entropy measure): GR = IG / SplitInfo. (See the sketch at the end of this section.)
   –  Gini index 2·p_pos·(1 − p_pos), sqrt(Gini), variance, …
•  Features with costs.
•  Misclassification costs.
•  Incremental learning.
•  Mining large databases that do not fit in main memory.
•  Most common systems: C4.5 (covered here) and CART (at the tutorials).

C4.5
•  Based on the ID3 algorithm (Ross Quinlan).
•  Uses gain ratio.
•  C4.5 version 8 → J48 (Java, in Weka).

Scheme of the C4.5 algorithm:
    Run several times and choose the best tree.
    Inner:
        Take L% of the learning data at random.
        Call ID3 (pre-pruning, see the -m parameter).
        Prune the tree (post-pruning, -cf).
        Take T% of unseen learning data for validation.
        If the validation criterion holds, exit;
        otherwise add L% to L and go to Inner.
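To make the gain-ratio criterion concrete, here is a minimal, self-contained Python sketch of GR = IG / SplitInfo; the guard against a zero SplitInfo is my own addition, and the function names are not from the lecture.

import math
from collections import Counter

def entropy(labels):
    """-sum_i p_i log2 p_i over the class (or value) distribution."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, labels, feature):
    """IG(S, F) = Entropy(S) - sum_v |S_v|/|S| * Entropy(S_v)."""
    n = len(labels)
    gain = entropy(labels)
    for v in {ex[feature] for ex in examples}:
        subset = [lab for ex, lab in zip(examples, labels) if ex[feature] == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

def split_info(examples, feature):
    """Entropy of the split proportions themselves, ignoring the class."""
    return entropy([ex[feature] for ex in examples])

def gain_ratio(examples, labels, feature):
    """GR = IG / SplitInfo; returns 0 for a trivial one-value split (my own guard)."""
    si = split_info(examples, feature)
    return 0.0 if si == 0.0 else information_gain(examples, labels, feature) / si

# On the 2+/2- toy data from the information-gain sketch earlier:
#   IG(color) ~= 0.311, SplitInfo(color) ~= 0.811, GR(color) ~= 0.38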