The Cross-Battery Assessment Approach: An Overview, Historical Perspective, and Current Directions

Dawn P. Flanagan
Vincent C. Alfonso
Samuel O. Ortiz

AN OVERVIEW OF THE XBA APPROACH

The Cattell-Horn-Carroll (CHC) cross-battery assessment approach (hereafter referred to as the XBA approach) was introduced by Flanagan and her colleagues well over a decade ago (Flanagan & McGrew, 1997; Flanagan, McGrew, & Ortiz, 2000; Flanagan & Ortiz, 2001; McGrew & Flanagan, 1998). The XBA approach provides practitioners with the means to make systematic, reliable, and theory-based interpretations of cognitive batteries, and to augment them with academic ability tests and neuropsychological instruments, to gain a more complete understanding of an individual's strengths and weaknesses (Flanagan, Ortiz, & Alfonso, 2007, 2012). Moving beyond the boundaries of a single cognitive, achievement, or neuropsychological battery by adopting the theoretically and psychometrically sound principles and procedures outlined in the XBA approach represents a significant improvement over single-battery assessment because it allows practitioners to focus on measurement of the cognitive constructs and neurodevelopmental functions that are most germane to referral concerns (e.g., Carroll, 1998; Decker, 2008; Kaufman, 2000; Wilson, 1992). According to Carroll (Appendix, this volume), the CHC taxonomy of human cognitive abilities "appears to prescribe that individuals should be assessed with regard to the total range of abilities the theory specifies" (p. 889; emphasis in original). However, because Carroll recognized that "any such prescription would of course create enormous problems," he indicated that "[r]esearch is needed to spell out how the assessor can select what abilities need to be tested in particular cases" (p. 889).
Flanagan and colleagues' XBA approach was developed to "spell out" how practitioners can conduct assessments that approximate the total range of cognitive and academic abilities and neuropsychological processes more adequately than what is possible with most collections of co-normed tests. In a review of the XBA approach, Carroll (1998) stated that it "can be used to develop the most appropriate information about an individual in a given testing situation" (p. xi). In Kaufman's (2000) review of XBA, he stated that the approach is based on sound assessment principles, adds theory to psychometrics, and improves the quality of the assessment and interpretation of cognitive abilities and processes. More recently, Decker (2008) stated that the CHC XBA approach "may improve school psychology assessment practice and facilitate the integration of neuropsychological methodology in school-based assessments . . . [because it] shift[s] assessment practice from IQ composites to neurodevelopmental functions" (p. 804).

Noteworthy is the fact that assessment professionals "crossed" batteries well before Woodcock (1990) recognized the need, and before Flanagan and her colleagues introduced the XBA approach in the late 1990s based, in part, on Woodcock's suggestion. Neuropsychologists have long adopted the practice of crossing various standardized tests in an attempt to measure a broader range of brain functions than that offered by any single instrument (Lezak, 1976, 1995; Lezak, Howieson, & Loring, 2004; see Wilson, 1992, for a review). Nevertheless, several problems with crossing batteries plagued assessment-related fields for years. Many of these problems have been circumvented by Flanagan and colleagues' XBA approach (see Table 19.1 for examples).
But unlike the XBA approach, the various so-called "cross-battery" techniques applied within the field of neuropsychological assessment, for example, are not typically grounded in a systematic approach that is theoretically and psychometrically sound. Thus, as Wilson (1992) cogently pointed out, the field of neuropsychological assessment was in need of an approach that would guide practitioners through the selection of measures that would result in more specific and delineated patterns of function and dysfunction—an approach that would provide more clinically useful information than one "wedded to the utilization of subscale scores and IQs" (p. 382) would. Indeed, all fields involved in the assessment of cognitive and neuropsychological functioning have some need for an approach that would aid practitioners in their attempt to "touch all of the major cognitive areas, with emphasis on those most suspect on the basis of history, observation, and ongoing test findings" (Wilson, 1992, p. 382). The XBA approach meets this need. The definition of and rationale for XBA are presented in this chapter, followed by a description of the XBA method. Figure 19.1 provides an overview of the information presented in this chapter.

DEFINITION

The XBA approach is a method of assessing cognitive and academic abilities and neuropsychological processes that is grounded mainly in CHC theory and research. It allows practitioners to measure reliably a wider range (or a more in-depth but selective range) of ability constructs than that represented by any given stand-alone assessment battery. The XBA approach is based on three foundational sources of information (Flanagan et al., 2007, 2012) that together provide the knowledge base necessary to organize theory-driven, comprehensive assessment of cognitive, academic, and neuropsychological constructs.
THE FOUNDATION OF THE XBA APPROACH

The foundation of the XBA approach is contemporary CHC theory—specifically, the broad and narrow CHC ability classifications of all subtests constituting current cognitive, achievement, and selected neuropsychological batteries.

CHC Theory

CHC theory was selected to guide assessment and interpretation because it is based on a more thorough network of validity evidence than any other contemporary multidimensional model of intelligence within the psychometric tradition (see Carroll, 1993; Horn & Blankson, Chapter 3, this volume; McGrew, 2005; Messick, 1992; Schneider & McGrew, Chapter 4, this volume; Sternberg & Kaufman, 1998). According to Daniel (1997), the strength of the multiple-cognitive-abilities (CHC) model is that it was arrived at "by synthesizing hundreds of factor analyses conducted over decades by independent researchers using many different collections of tests. Never before has a psychometric ability model been so firmly grounded in data" (pp. 1042-1043). Because CHC theory is discussed in detail by Schneider and McGrew in Chapter 4 of this volume, it is not described in detail here.

CHC Broad (Stratum II) Classifications of Major Ability Tests

Using the results of a series of cross-battery confirmatory factor analysis (CFA) studies of the major intelligence batteries (see Keith & Reynolds, 2010, for a review) and the task analyses of many cognitive test experts, Flanagan and colleagues classified the subtests of the major cognitive, neuropsychological, and achievement batteries according to the particular CHC broad abilities they measured (e.g., Flanagan, Alfonso, Ortiz, & Dynda, 2010; Flanagan, Ortiz, Alfonso, & Mascolo, 2006; Flanagan et al., 2007, 2012; McGrew, 1997; McGrew & Flanagan, 1998; Reynolds, Keith, Flanagan, & Alfonso, 2011). To date, hundreds of CHC broad-ability classifications have been based on the results of these studies.

TABLE 19.1.
Parallel Needs in Assessment-Related Fields Addressed by the Cross-Battery Assessment (XBA) Approach

Need within assessment-related fields(a) | Need addressed by the XBA approach

Need: School psychology, clinical psychology, and neuropsychology have lagged in the development of conceptual models of the assessment of individuals. There is a need for the development of contemporary models.
Addressed by XBA: The XBA approach provides a contemporary model for measurement and interpretation of cognitive and academic abilities and neuropsychological processes.

Need: It is likely that there is a need for events external to a field of endeavor to give impetus to new developments and real advances in that field.
Addressed by XBA: Carroll and Horn's fluid-crystallized theoretical models and systematic programs of research in cognitive psychology provided the impetus for the XBA approach and led to the development of better assessment instruments and interpretive procedures.

Need: There is a need for truly unidimensional assessment instruments for children and adults. Without them, valid interpretations of test scores are problematic at best. Several scale and composite measures on ability batteries are mixed, containing excess reliable variance associated with a construct irrelevant to the one intended for interpretation.
Addressed by XBA: The XBA approach ensures that assessments include composites or clusters that are relatively pure representations of Cattell-Horn-Carroll (CHC) broad and narrow abilities, allowing for valid measurement and interpretation of multiple, relatively distinct abilities.

Need: There is a need to utilize a conceptual framework to direct any approach to assessment. This would aid both in the selection of instruments and methods, and in the interpretation of test findings.
Addressed by XBA: The XBA approach to assessment is based mainly on CHC theory as well as sound measurement and interpretive procedures. Since this approach links all the major intelligence batteries, academic achievement tests, and selected neuropsychological instruments to this theory, both selection of tests and interpretation of test findings are made within the context of an overarching conceptual framework.

Need: It is necessary for the conceptual framework or model underlying assessment to incorporate various aspects of neuropsychological and cognitive functioning, which can be described in terms of constructs that are recognized in the neuropsychological and cognitive psychology literature.
Addressed by XBA: The XBA approach incorporates various aspects of neuropsychological and cognitive ability functions, which are described in terms of constructs that are recognized in the related literature.

Need: There is a need to adopt a conceptual framework that allows for the measurement of the full range of behavioral functions subserved by the brain. Unfortunately, in neuropsychological assessment there is no inclusive set of measures that is standardized on a single normative population.
Addressed by XBA: XBA allows for the measurement of a wide range of broad and narrow cognitive abilities specified in CHC theory. Although an XBA norm group does not exist, the method of crossing batteries to obtain a broad assessment of human cognitive abilities is grounded in sound psychometric principles and procedures.

Need: Because there are no truly unidimensional measures in psychological assessment, there is a need to select subtests from standardized instruments that appear to reflect the neurocognitive function of interest. In neuropsychological assessment, therefore, the aim is to select those measures that, on the basis of careful task analysis, appear mainly to tap a given construct.
Addressed by XBA: The XBA approach is defined in part by a CHC classification system. Subtests from the major intelligence batteries, academic achievement tests, and selected neuropsychological instruments were classified as measures of broad and narrow CHC constructs. Use of these classifications allows practitioners to be reasonably confident that a given test taps a given construct. The XBA approach ensures that two or more relatively pure, but qualitatively different, indicators of each broad cognitive ability are represented in an assessment of broad CHC constructs. Two or more qualitatively similar indicators are necessary to make inferences about specific or narrow CHC constructs.

Need: It is clear that an eclectic approach is needed in the selection of measures—preferably subtests rather than the omnibus IQs, in order to gain more specificity in the delineation of patterns of function and dysfunction.
Addressed by XBA: The XBA approach is eclectic in its selection of measures, but attempts to represent all broad and narrow abilities and processes of interest by using a subset of measures from one battery to augment another battery.

Need: There is a need to solve the potential problems that can arise from crossing normative groups as well as sets of measures that vary in reliability.
Addressed by XBA: In the XBA approach, one can typically achieve baseline data in cognitive functioning across seven or eight CHC broad abilities and processes through the use of two well-standardized batteries that were normed within a few years of one another; this minimizes the effects of error due to norming differences. Also, since interpretation of both broad and narrow CHC abilities is made at the cluster (rather than subtest) level, issues related to low reliability are less problematic in this approach. Also, because confidence intervals are used for all broad- and narrow-ability clusters, the effects of measurement error are reduced further. Additionally, any and all evidence of weakness, deficit, or dysfunction must have ecological validity (see Flanagan et al., 2012, for details).

(a) Information obtained, in part, from Wilson (1992).
These classifications of cognitive, neuropsychological, and achievement batteries assist practitioners in identifying measures that assess the various broad and narrow abilities represented in CHC theory. Classification of tests at the broad-ability level is necessary to improve upon the validity of cognitive assessment and interpretation. Specifically, broad-ability classifications ensure that the CHC constructs underlying assessments are minimally affected by construct-irrelevant variance (Messick, 1989, 1995). In other words, knowing what tests measure what abilities enables clinicians to organize tests into construct-relevant clusters—clusters that contain only measures that are relevant to the construct or ability of interest (McGrew & Flanagan, 1998). To clarify, construct-irrelevant variance is present when an "assessment is too broad, containing excess reliable variance associated with other distinct constructs . . . that affects responses in a manner irrelevant to the interpreted constructs" (Messick, 1995, p. 742). For example, the Wechsler Intelligence Scale for Children—Fourth Edition (WISC-IV; Wechsler, 2003) Perceptual Reasoning Index (PRI) has construct-irrelevant variance because, in addition to its two indicators of Gf (i.e., Picture Concepts, Matrix Reasoning), it has an indicator of Gv (i.e., Block Design). Therefore, the PRI is a mixed measure of two relatively distinct, broad CHC abilities (Gf and Gv); it contains reliable variance (associated with Gv) that is irrelevant to the interpreted construct of Gf. The PRI represents a grouping together of subtests on the basis of factor analysis and face validity (e.g., grouping tests together that appear to measure the same common construct), the latter of which may result in an inappropriate aggregation of subtests that can actually decrease reliability and validity (Epstein, 1983).
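The logic of screening a composite for construct-irrelevant variance can be sketched programmatically. The following Python fragment is purely illustrative and is not part of the XBA materials; the subtest classifications it contains are limited to the WISC-IV examples discussed above, and the table and function names are invented for this sketch:

```python
# Illustrative sketch: flag a composite as "mixed" when its subtests span
# more than one CHC broad ability. The classifications below are only the
# WISC-IV examples from the text, not a complete XBA classification table.

BROAD_ABILITY = {
    "Matrix Reasoning": "Gf",   # fluid reasoning
    "Picture Concepts": "Gf",   # fluid reasoning
    "Block Design": "Gv",       # visual processing
}

def composite_abilities(subtests):
    """Return the set of CHC broad abilities represented in a composite."""
    return {BROAD_ABILITY[s] for s in subtests}

def is_construct_relevant(subtests):
    """A construct-relevant cluster measures exactly one broad ability."""
    return len(composite_abilities(subtests)) == 1

# The WISC-IV PRI mixes Gf and Gv, so it carries construct-irrelevant variance:
pri = ["Matrix Reasoning", "Picture Concepts", "Block Design"]
print(is_construct_relevant(pri))                                       # False
print(is_construct_relevant(["Matrix Reasoning", "Picture Concepts"]))  # True
```

The same set-membership check generalizes to any battery once its subtests are entered into the classification table.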
Through CHC-driven CFA, Keith, Fine, Reynolds, Taub, and Kranzler (2006) showed that a five-factor model that included Gf and Gv (not PRI) fit the WISC-IV standardization data equally well as the four-factor Wechsler model. As a result of their analysis, Gf and Gv composites for the WISC-IV were provided in Flanagan and Kaufman (2004, 2009) and are recommended in the XBA approach because they contain only construct-relevant variance (Flanagan et al., 2012).

Construct-irrelevant variance can also operate at the subtest (as opposed to composite) level. For example, a Verbal Analogies test (e.g., "Sun is to day as moon is to ___") measures both Gc and Gf. That is, in theory-driven factor-analytic studies, Verbal Analogies tests have significant loadings on both the Gc and Gf factors (e.g., Woodcock, 1990). Therefore, this test is considered factorially complex—a condition that complicates interpretation. For example, is poor performance due to low vocabulary knowledge [Gc] or to poor reasoning ability [Gf], or both? In short, interpretation is less complicated when composites are derived from relatively pure measures of the underlying construct. Conversely, "any test that measures more than one common factor to a substantial degree yields scores that are psychologically ambiguous and very difficult to interpret" (Guilford, 1954, p. 356; cited in Briggs & Cheek, 1986). Therefore, cross-battery assessments are typically designed using only empirically strong or moderate (but not factorially complex or mixed) measures of CHC abilities (Flanagan et al., 2007, 2012; McGrew & Flanagan, 1998).

FIGURE 19.1. Overview of the CHC XBA approach. [Flowchart summarizing the approach. Foundation: CHC theory; broad (stratum II) test classifications; narrow (stratum I) test classifications. Rationale: test development; bridging the theory-practice gap; a blueprint for improving upon the substantive and structural validity of tests; a standard nomenclature; identification of cognitive, academic, and neuropsychological strengths and weaknesses. Guiding principles: select the battery that best addresses referral concerns; use clusters based on actual norms when possible; select tests classified through an acceptable method; when a broad ability is underrepresented, obtain tests from another battery; when crossing batteries, use tests developed and normed within a few years of one another; select tests from the smallest number of batteries to minimize error; establish ecological validity for deficits. Step-by-step process:* select the battery that best addresses referral concerns; identify adequately represented CHC abilities; select tests to measure CHC abilities not measured by the core battery; administer the core battery and supplemental tests; use the XBA DMIA to assist with interpretation; follow XBA interpretive guidelines.] XBA DMIA is the XBA Data Management and Interpretive Assistant. This program (described in Table 19.6) automates the XBA approach. *These steps are described in Essentials of Cross-Battery Assessment (Flanagan et al., 2007, 2012).
CHC Narrow (Stratum I) Classifications of Major Ability Tests

Narrow-ability classifications were originally reported in McGrew (1997), then later reported in McGrew and Flanagan (1998) and Flanagan and colleagues (2000) after minor modifications. Flanagan and her colleagues continued to gather content validity data on cognitive ability tests and expanded their analyses to include tests of academic achievement (Flanagan, Ortiz, Alfonso, & Mascolo, 2002; Flanagan et al., 2006) and, more recently, tests of neuropsychological processes (Flanagan et al., 2010, 2012). Classifications of cognitive ability tests according to content, format, and task demand at the narrow-ability (stratum I) level were necessary to improve further upon the validity of cognitive ability assessment and interpretation. Specifically, these narrow-ability classifications were necessary to ensure that the CHC constructs underlying assessments are well represented (McGrew & Flanagan, 1998). According to Messick (1995), construct underrepresentation is present when an "assessment is too narrow and fails to include important dimensions or facets of the construct" (p. 742). Interpreting the Woodcock-Johnson III Tests of Cognitive Abilities (WJ III; Woodcock, McGrew, & Mather, 2001, 2007) Concept Formation (CF) test as a measure of fluid intelligence (i.e., the broad Gf ability) is an example of construct underrepresentation. This is because CF measures one narrow aspect of Gf (viz., inductive reasoning). At least one other Gf measure (i.e., subtest) that is qualitatively different from inductive reasoning is necessary to include in an assessment to ensure adequate representation of the Gf construct (e.g., a measure of general sequential [or deductive] reasoning). Two or more qualitatively different indicators (i.e., measures of two or more narrow abilities subsumed by the broad ability) are needed for adequate construct representation (see Comrey, 1988; Messick, 1989, 1995).
The aggregate of CF (a measure of inductive reasoning at the narrow-ability level) and the WJ III Analysis-Synthesis test (a measure of deductive reasoning at the narrow-ability level), for example, would provide an adequate estimate of the broad Gf ability because these tests are strong measures of Gf and represent qualitatively different aspects of this broad ability. The Verbal Comprehension Index (VCI) of the Wechsler Adult Intelligence Scale—Fourth Edition (WAIS-IV; Wechsler, 2008) is an example of good construct representation. This is because the VCI includes Vocabulary (lexical knowledge), Similarities (language development/lexical knowledge), and Information (general information), which represent qualitatively different aspects of Gc.

Most intelligence batteries yield construct-relevant composites, although some of these composites underrepresent the broad ability intended to be measured. This is because construct underrepresentation can also occur when the composite consists of two or more measures of the same narrow (stratum I) ability. For example, the Number Recall and Word Order subtests of the Kaufman Assessment Battery for Children—Second Edition (KABC-II; Kaufman & Kaufman, 2004) were intended to be interpreted as a representation of the broad Gsm ability. However, these subtests primarily measure memory span, a narrow ability subsumed by Gsm. Thus the Gsm cluster of the KABC-II is more appropriately interpreted as memory span (a narrow ability) than as an estimate of the broad ability of short-term memory. "A scale [or broad CHC ability cluster] will yield far more information—and, hence, be a more valid measure of a construct—if it contains more differentiated items [or tests]" (Clark & Watson, 1995, p. 311). The XBA approach circumvents the misinterpretations that can result from underrepresented constructs by specifying the use of two or more qualitatively different indicators to represent each broad CHC ability.
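The two-indicator rule for construct representation can likewise be sketched in code. Again, this Python fragment is illustrative only; the narrow-ability classifications are limited to the examples just discussed (CF, Analysis-Synthesis, Number Recall, Word Order), and the table and function names are invented for this sketch:

```python
# Illustrative sketch: a broad CHC ability is adequately represented only if
# a cluster contains two or more *qualitatively different* narrow-ability
# indicators of that one broad ability. Classifications below are only the
# examples from the text.

NARROW_ABILITY = {  # subtest -> (broad ability, narrow ability)
    "WJ III Concept Formation":  ("Gf",  "inductive reasoning"),
    "WJ III Analysis-Synthesis": ("Gf",  "general sequential (deductive) reasoning"),
    "KABC-II Number Recall":     ("Gsm", "memory span"),
    "KABC-II Word Order":        ("Gsm", "memory span"),
}

def representation(subtests):
    """Classify a proposed cluster as 'broad', 'narrow', or 'mixed'."""
    broads = {NARROW_ABILITY[s][0] for s in subtests}
    narrows = {NARROW_ABILITY[s][1] for s in subtests}
    if len(broads) > 1:
        return "mixed"   # construct-irrelevant variance across broad abilities
    if len(subtests) >= 2 and len(narrows) >= 2:
        return "broad"   # adequate broad-ability representation
    return "narrow"      # interpret at the narrow-ability level only

# CF + Analysis-Synthesis: qualitatively different Gf indicators.
print(representation(["WJ III Concept Formation", "WJ III Analysis-Synthesis"]))  # broad
# Number Recall + Word Order both measure memory span, so Gsm is underrepresented.
print(representation(["KABC-II Number Recall", "KABC-II Word Order"]))            # narrow
```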
In order to ensure that qualitatively different aspects of broad abilities are represented in assessment, classification of cognitive and academic ability tests at the narrow-ability (stratum I) level was necessary (McGrew & Flanagan, 1998). The subtests of current cognitive batteries, special-purpose tests (including neuropsychological tests), and achievement tests have been classified at both the broad- and narrow-ability levels (see Flanagan et al., 2006, 2007, 2010; Flanagan, Alfonso, & Mascolo, 2011). In sum, the classifications of tests at the broad- and narrow-ability levels of CHC theory guard against two ubiquitous sources of invalidity in assessment: construct-irrelevant variance and construct underrepresentation. Taken together, CHC theory and the CHC classifications of tests that underlie the XBA approach provide the necessary foundation from which to organize assessments that are theoretically driven, comprehensive, and supported by research.

RATIONALE FOR THE XBA APPROACH

The XBA approach has significant implications for practice, research, and test development (see Figure 19.1). A brief discussion of these implications follows.

Practice

The XBA approach provides "a much needed and updated bridge between current intellectual theory and research and practice" (Flanagan & McGrew, 1997, p. 322). The need for the XBA "bridge" became evident following a review of the results of several cross-battery factor analyses conducted prior to 2000. In particular, the results demonstrated that none of the intelligence batteries in use at that time contained measures that sufficiently approximated the full range of broad abilities defining the structure of intelligence specified in contemporary psychometric theory (see Table 19.2; see also Alfonso, Flanagan, & Radwan, 2005, for a comprehensive discussion of these findings).
Indeed, the joint factor analyses conducted by Woodcock (1990) suggested that it might be necessary to "cross" batteries to measure a broader range of cognitive abilities than that provided by a single intelligence battery. As may be seen in Table 19.2, most batteries fell far short of measuring all seven of the broad cognitive abilities listed. Of the major intelligence batteries in use prior to 2000, most failed to measure three or more broad CHC abilities (viz., Ga, Glr, Gf, Gs) that were (and are) considered important in understanding and predicting school achievement (Flanagan et al., 2006; McGrew & Wendling, 2010). In fact, Gf, often considered to be the essence of intelligence, was either not measured or not measured adequately by most of the intelligence batteries included in Table 19.2 (i.e., WISC-III, WAIS-R, WPPSI-R, K-ABC, and CAS) (Alfonso et al., 2005). The finding that the abilities not measured by the intelligence batteries listed in Table 19.2 are important in understanding children's learning difficulties provided much of the impetus for developing the XBA approach (McGrew & Flanagan, 1998). In effect, the XBA approach was developed to systematically replace the dashes in Table 19.2 with tests from another battery. As such, this approach guides practitioners in the selection of tests that together provide measurement of abilities that can be considered sufficient in both breadth and depth for the purpose of addressing referral concerns.

Another contribution of the XBA approach to practice was that it facilitated communication among professionals. Most scientific disciplines have a standard nomenclature (i.e., a common set of terms and definitions) that facilitates communication and guards against misinterpretation (McGrew & Flanagan, 1998).
For example, the standard nomenclature in chemistry is reflected in the periodic table of elements; in biology, it is reflected in the classification of animals according to phyla; in psychology and psychiatry, it is reflected in the Diagnostic and Statistical Manual of Mental Disorders; and in medicine, it is reflected in the International Classification of Diseases. Underlying the XBA approach is a standard nomenclature or table of human cognitive abilities (McGrew & Flanagan, 1998) that includes classifications of hundreds of tests according to the broad and narrow CHC abilities they measure (see also Alfonso et al., 2005; Flanagan & Ortiz, 2001; Flanagan et al., 2002, 2006, 2007, 2010, 2012). The XBA classification system has had a positive impact on communication among practitioners; has improved our understanding of and guided the research on the relations between cognitive and academic abilities (Flanagan et al., 2011; McGrew & Wendling, 2010); and has resulted in improvements in the measurement of cognitive constructs, as may be seen in the design and structure of current cognitive batteries.

Finally, the XBA approach offers practitioners a psychometrically sound means of identifying population-relative (or normative) strengths and weaknesses. Because the approach focuses interpretation on cognitive ability clusters (i.e., via combinations of construct-relevant subtests) that contain either qualitatively different indicators of each CHC broad-ability construct (to represent broad-ability domains) or qualitatively similar indicators of narrow abilities (to represent narrow- or specific-ability domains), the identification of normative strengths and weaknesses via XBA is possible. Adhering closely to the guiding principles of the approach (described later) will help to ensure that the identified strengths and weaknesses may be interpreted in a theoretically and psychometrically sound manner.
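As a rough illustration of how a population-relative strength or weakness might be identified from a two-subtest cluster, the sketch below averages standard scores, bands the result with a confidence interval based on the standard error of measurement, and describes it relative to the normative distribution. The SEM formula, the 95% interval, and the 85/115 cutoffs are common psychometric conventions (standard scores with mean 100 and SD 15) assumed here for illustration; they are not quoted from the XBA materials, and the function names are invented for this sketch:

```python
import math

# Illustrative sketch: form a cluster score from two subtest standard scores,
# attach a confidence interval, and classify it against the normative
# distribution (mean 100, SD 15). Conventions assumed, not prescribed by XBA.

def cluster_with_ci(scores, reliability, sd=15.0, z=1.96):
    """Average subtest standard scores and attach a 95% confidence interval."""
    cluster = sum(scores) / len(scores)
    sem = sd * math.sqrt(1.0 - reliability)   # standard error of measurement
    return cluster, (cluster - z * sem, cluster + z * sem)

def normative_description(score):
    """Population-relative description of a standard score."""
    if score < 85:
        return "normative weakness"
    if score > 115:
        return "normative strength"
    return "within normal limits"

score, (low, high) = cluster_with_ci([78, 82], reliability=0.92)
print(round(score), normative_description(score))  # 80 normative weakness
```

Interpreting the band rather than the point estimate reflects the chapter's point that confidence intervals reduce the impact of measurement error on cluster-level interpretation.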
In sum, the XBA approach addresses the long-standing need within the entire field of assessment, from learning disabilities assessment to neuropsychological assessment, for methods that "provide a greater range of information about the ways individuals learn—the ways individuals receive, store, integrate, and express information" (Brackett & McPherson, 1996, p. 80; see also Decker, 2008).

TABLE 19.2. Representation of Broad CHC Abilities on Nine Intelligence Batteries Published Prior to 2000

WISC-III — Gf: —; Gc: Vocabulary, Information, Similarities, Comprehension; Gv: Block Design, Object Assembly, Picture Arrangement, Picture Completion, Mazes; Gsm: Digit Span; Glr: —; Ga: —; Gs: Symbol Search, Coding.
WAIS-R — Gf: —; Gc: Vocabulary, Information, Similarities, Comprehension; Gv: Block Design, Object Assembly, Picture Completion, Picture Arrangement; Gsm: Digit Span; Glr: —; Ga: —; Gs: Digit-Symbol.
WPPSI-R — Gf: —; Gc: Vocabulary, Information, Similarities, Comprehension; Gv: Block Design, Object Assembly, Picture Completion, Mazes, Geometric Design; Gsm: Sentences; Glr: —; Ga: —; Gs: Animal Pegs.
KAIT — Gf: Mystery Codes, Logical Steps; Gc: Definitions, Famous Faces, Auditory Comprehension, Double Meanings; Gv: Memory for Block Designs; Gsm: —; Glr: Rebus Learning, Rebus Delayed Recall, Auditory Delayed Recall; Ga: —; Gs: —.
K-ABC — Gf: Matrix Analogies; Gc: —; Gv: Triangles, Face Recognition, Gestalt Closure, Magic Window, Hand Movements, Spatial Memory, Photo Series; Gsm: Number Recall, Word Order; Glr: —; Ga: —; Gs: —.
CAS — Gf: —; Gc: —; Gv: Figure Memory, Verbal-Spatial Relations, Nonverbal Matrices; Gsm: Word Series, Sentence Repetition, Sentence Questions; Glr: —; Ga: —; Gs: Matching Numbers, Receptive Attention, Planned Codes, Number Detection, Planned Connections, Expressive Attention.
DAS — Gf: Matrices, Picture Similarities, Sequential and Quantitative Reasoning; Gc: Similarities, Verbal Comprehension, Word Definitions, Naming Vocabulary; Gv: Pattern Construction, Block Building, Copying, Matching Letter-Like Forms, Recall of Designs, Recognition of Pictures; Gsm: Recall of Digits; Glr: Recall of Objects; Ga: —; Gs: Speed of Information Processing.
WJ-R — Gf: Concept Formation, Analysis-Synthesis; Gc: Oral Vocabulary, Picture Vocabulary, Listening Comprehension, Verbal Analogies; Gv: Spatial Relations, Picture Recognition, Visual Closure; Gsm: Memory for Words, Memory for Sentences, Numbers Reversed; Glr: Memory for Names, Visual-Auditory Learning, Delayed Recall: Memory for Names, Delayed Recall: Visual-Auditory Learning; Ga: Incomplete Words, Sound Blending, Sound Patterns; Gs: Visual Matching, Cross Out.
SB-IV — Gf: Matrices, Equation Building, Number Series; Gc: Verbal Relations, Comprehension, Absurdities, Vocabulary; Gv: Pattern Analysis, Bead Memory, Copying, Memory for Objects, Paper Folding and Cutting; Gsm: Memory for Sentences, Memory for Digits; Glr: —; Ga: —; Gs: —.

Note. WISC-III, Wechsler Intelligence Scale for Children—Third Edition (Wechsler, 1991); WAIS-R, Wechsler Adult Intelligence Scale—Revised (Wechsler, 1981); WPPSI-R, Wechsler Preschool and Primary Scale of Intelligence—Revised (Wechsler, 1989); KAIT, Kaufman Adolescent and Adult Intelligence Test (Kaufman & Kaufman, 1993); K-ABC, Kaufman Assessment Battery for Children (Kaufman & Kaufman, 1983); CAS, Cognitive Assessment System (Naglieri & Das, 1997); DAS, Differential Ability Scales (Elliott, 1990); WJ-R, Woodcock-Johnson Psycho-Educational Battery—Revised (Woodcock & Johnson, 1989); SB-IV, Stanford-Binet Intelligence Scale: Fourth Edition (Thorndike, Hagen, & Sattler, 1986).

Test Development

Although there was substantial evidence of at least eight or nine broad cognitive CHC abilities by the late 1980s, the tests of the time did not reflect this diversity in measurement. For example, Table 19.2 shows that the WPPSI-R, K-ABC, KAIT, WAIS-R, and CAS batteries (see the footnotes to this and subsequent tables for full names of most test batteries from this point on) only measured two or three broad CHC abilities adequately. The WPPSI-R primarily measured Gv and Gc. The K-ABC primarily measured Gv and Gsm, and to a much lesser extent Gf; the KAIT primarily measured Gc and Glr, and to a much lesser extent Gf and Gv.
The CAS measured Gs, Gsm, and Gv.1 Finally, while the DAS, SB-IV, and WISC-III did not provide sufficient coverage of abilities to narrow the gap between contemporary theory and practice, their comprehensive measurement of approximately four CHC abilities was nonetheless an improvement over the previously mentioned batteries. Table 19.2 shows that only the WJ-R included measures of all broad cognitive abilities, as compared to the other batteries available at that time. Nevertheless, most of the broad abilities were not measured adequately by the WJ-R (Alfonso et al., 2005; McGrew & Flanagan, 1998). In general, Table 19.2 shows that Gf, Gsm, Glr, Ga, and Gs were not measured well by the majority of intelligence batteries published before 2000. Therefore, it was clear that most test authors did not use contemporary psychometric theories of the structure of cognitive abilities to guide the development of their intelligence batteries. As such, a substantial theory-practice gap existed; that is, theories of the structure of cognitive abilities were far in advance of the instruments used to operationalize them. In fact, prior to the mid-1980s, theory seldom played a role in intelligence test development. The numerous dashes in Table 19.2 exemplify the theory-practice gap that existed in the field of intellectual assessment at that time (Alfonso et al., 2005). In the past decade particularly, CHC theory has had a significant impact on the revision of old and the development of new intelligence batteries. For example, a wider range of broad and narrow abilities is represented in current intelligence batteries than in previous editions of these tests. Table 19.3 provides several salient examples of the impact that CHC theory and the XBA classifications have had on intelligence test revision over the past two decades.
This table lists the major intelligence tests that have been revised since 2000 in the order in which they were revised, beginning with the tests with the greatest number of years between revisions (i.e., the K-ABC). Not included in Table 19.3 are fairly dated tests that have yet to be revised (e.g., the CAS). As is obvious from a review of Table 19.3, CHC theory and XBA classifications have had a significant impact on test development (Alfonso et al., 2005). Of the seven intelligence batteries that have been revised since 2000, the authors of four clearly used CHC theory and XBA classifications as a blueprint for test development (i.e., the WJ III, SB5, KABC-II, and DAS-II). Only the authors of the Wechsler scales (i.e., the WPPSI-III, WISC-IV, and WAIS-IV) did not state explicitly that CHC theory was used as a guide for revision. Nevertheless, the authors of the Wechsler scales have acknowledged the research of Cattell, Horn, and Carroll in their most recent test manuals (Wechsler, 2002, 2003, 2008). Presently, as Table 19.3 suggests, nearly all intelligence batteries that are used with some regularity subscribe either explicitly or implicitly to CHC theory (Alfonso et al., 2005; Flanagan et al., 2006, 2012). Convergence toward the incorporation of CHC theory is also seen clearly in Table 19.4. This table is similar to Table 19.2, except that it includes all intelligence battery revisions published after 2000. A comparison of Tables 19.2 and 19.4 shows that many of the gaps in measurement of broad cognitive abilities have been filled in the revisions. Specifically, the majority of test revisions published after 2000 now measure four or five broad cognitive abilities adequately (see Table 19.4), as compared to two or three (see Table 19.2). For example, Table 19.4 shows that the WISC-IV measures Gf, Gc, Gv, Gsm, and Gs, while the KABC-II measures Gf, Gc, Gv, and Glr adequately, and to a lesser extent Gsm.
The WAIS-IV measures Gc, Gv, Gsm, and Gs adequately, and to a lesser extent Gf, while the WPPSI-III measures Gf, Gc, Gv, and Gs adequately. Finally, the SB5 measures four CHC broad abilities (i.e., Gf, Gc, Gv, Gsm) (Alfonso et al., 2005). Table 19.4 shows that the DAS-II and the WJ III include measures of all the major broad cognitive abilities.

TABLE 19.3. Impact of CHC Theory and XBA on Intelligence Test Revision

K-ABC (1983): No obvious impact. Revision, KABC-II (2004): Provides a second global score that includes fluid and crystallized abilities. Includes several new subtests measuring reasoning. Interpretation of test performance may be based on CHC theory or Luria's theory. Provides assessment of five CHC broad abilities.

SB-IV (1986): Used a three-level hierarchical model of the structure of cognitive abilities to guide the construction of the test. The top level included a general reasoning factor, or g; the middle level included three broad factors called Crystallized Abilities, Fluid-Analytic Abilities, and Short-Term Memory; the third level included more specific factors, including Verbal Reasoning, Quantitative Reasoning, and Abstract/Visual Reasoning. Revision, SB5 (2003): CHC theory has been used to guide test development. Increases the number of broad factors from four to five. Includes a Working Memory factor, based on research indicating its importance for academic success.

WPPSI-R (1989): No obvious impact. Revision, WPPSI-III (2002): Incorporates measures of Processing Speed that yield a Processing Speed Quotient, based on recent research indicating the importance of processing speed for early academic success. Enhances the measurement of fluid reasoning by adding the Matrix Reasoning and Picture Concepts subtests.

WJ-R (1989): Modern Gf-Gc theory was used as the cognitive model for test development. Included two measures of each of seven broad abilities. Revision, WJ III (2001): CHC theory has been used as a "blueprint" for test development. Includes two or three qualitatively different narrow abilities for each broad ability. The combined cognitive and achievement batteries of the WJ III include the nine broad abilities that compose CHC theory.

WISC-III (1991): No obvious impact. Revision, WISC-IV (2003): Eliminates Verbal and Performance IQs. Replaces the Freedom from Distractibility Index with the Working Memory Index. Replaces the Perceptual Organization Index with the Perceptual Reasoning Index. Enhances the measurement of fluid reasoning by adding the Matrix Reasoning and Picture Concepts subtests. Enhances the measurement of Processing Speed with the Cancellation subtest.

DAS (1990): No obvious impact. Revision, DAS-II (2007): Five CHC broad abilities are well represented in the DAS-II. Others are represented by diagnostic subtests.

WAIS-III (1997): Enhances the measurement of fluid reasoning by adding the Matrix Reasoning subtest. Includes four index scores that measure specific abilities more purely than the traditional IQs. Includes a Working Memory Index, based on research indicating its importance for academic success. Revision, WAIS-IV (2008): Eliminates Verbal and Performance IQs. Replaces the Perceptual Organization Index with the Perceptual Reasoning Index. Enhances the measurement of fluid reasoning by adding the Figure Weights and Visual Puzzles subtests. Enhances measurement of Processing Speed with the Cancellation subtest.

Note.
K-ABC, Kaufman Assessment Battery for Children (Kaufman & Kaufman, 1983); KABC-II, Kaufman Assessment Battery for Children—Second Edition (Kaufman & Kaufman, 2004); SB-IV, Stanford-Binet Intelligence Scale: Fourth Edition (Thorndike, Hagen, & Sattler, 1986); SB5, Stanford-Binet Intelligence Scales, Fifth Edition (Roid, 2003); WAIS-III, Wechsler Adult Intelligence Scale—Third Edition (Wechsler, 1997); WAIS-IV, Wechsler Adult Intelligence Scale—Fourth Edition (Wechsler, 2008); WPPSI-R, Wechsler Preschool and Primary Scale of Intelligence—Revised (Wechsler, 1989); WPPSI-III, Wechsler Preschool and Primary Scale of Intelligence—Third Edition (Wechsler, 2002); WJ-R, Woodcock-Johnson Psycho-Educational Battery—Revised (Woodcock & Johnson, 1989); WJ III, Woodcock-Johnson III Tests of Cognitive Abilities (Woodcock, McGrew, & Mather, 2001); WISC-III, Wechsler Intelligence Scale for Children—Third Edition (Wechsler, 1991); WISC-IV, Wechsler Intelligence Scale for Children—Fourth Edition (Wechsler, 2003); DAS, Differential Ability Scales (Elliott, 1990); DAS-II, Differential Ability Scales—Second Edition (Elliott, 2007). TABLE 19.4.
Representation of Broad and Narrow CHC Abilities on Seven Intelligence Batteries Revised after 2000

WISC-IV. Gf: Matrix Reasoning (I, RG), Picture Concepts (I), Arithmetic (RQ, Gsm-MW); Gc: Vocabulary (VL), Information (KO), Similarities (VL, LD, Gf-I), Comprehension (KO, LD), Word Reasoning (VL, Gf-I); Gv: Block Design (SR, Vz), Picture Completion (CF, Gc-KO); Gsm: Digit Span (MS, MW), Letter-Number Sequencing (MW); Glr: —; Ga: —; Gs: Symbol Search (P, R9), Coding (R9), Cancellation (P, R9).
WAIS-IV. Gf: Matrix Reasoning (I, RG), Arithmetic (RQ, Gsm-MW), Figure Weights (RQ); Gc: Vocabulary (VL), Information (KO), Similarities (VL, LD, Gf-I), Comprehension (KO, LD); Gv: Block Design (SR, Vz), Picture Completion (CF, Gc-KO), Visual Puzzles (SR, Vz); Gsm: Digit Span (MS, MW), Letter-Number Sequencing (MW); Glr: —; Ga: —; Gs: Symbol Search (P, R9), Coding (R9), Cancellation (P, R9).
WPPSI-III. Gf: Matrix Reasoning (I, RG), Picture Concepts (Gc-KO, I); Gc: Vocabulary (VL), Information (KO), Similarities (VL, LD, Gf-I), Comprehension (KO, LD), Receptive Vocabulary (VL, LD), Picture Naming (VL, KO), Word Reasoning (VL, Gf-I); Gv: Block Design (SR, Vz), Object Assembly (CS, SR), Picture Completion (CF, Gc-KO); Gsm: —; Glr: —; Ga: —; Gs: Coding (R9), Symbol Search (P, R9).
KABC-II. Gf: Pattern Reasoning (I, Gv-Vz), Story Completion (I, RG, Gv-Vz, Gc-KO); Gc: Expressive Vocabulary (VL), Verbal Knowledge (VL, KO), Riddles (VL, LD, Gf-RG); Gv: Face Recognition (MV), Triangles (SR, Vz), Gestalt Closure (CS), Rover (SS, Gf-RG, Gq-A3), Block Counting (Vz, Gq-A3), Conceptual Thinking (Vz, Gf-I); Gsm: Number Recall (MS), Word Order (MS, MV), Hand Movements (MS, Gv-MV); Glr: Atlantis (MA), Rebus (MA), Atlantis—Delayed (MA), Rebus—Delayed (MA); Ga: —; Gs: —.
WJ III. Gf: Concept Formation (I), Analysis-Synthesis (RG); Gc: Verbal Comprehension (VL, LD), General Information (KO); Gv: Spatial Relations (SR, Vz), Picture Recognition (MV), Planning (SS, Gf-RG); Gsm: Memory for Words (MS), Numbers Reversed (MW), Auditory Working Memory (MW); Glr: Visual-Auditory Learning (MA), Retrieval Fluency (FI, NA), Visual-Auditory Learning—Delayed (MA), Rapid Picture Naming (NA, Gs-P); Ga: Sound Blending (PC:S), Auditory Attention (US/U3), Incomplete Words (PC:A, PC:S); Gs: Visual Matching (P, R9), Decision Speed (RE), Pair Cancellation (P).
SB5. Gf: Nonverbal Fluid Reasoning (I, RG, Gv), Verbal Fluid Reasoning (I, RG, Gc-CM), Nonverbal Quantitative Reasoning (RQ, Gq-KM, Gc-VL), Verbal Quantitative Reasoning (RQ, Gq-A3); Gc: Nonverbal Knowledge (KO, LS, Gf-RG), Verbal Knowledge (VL); Gv: Nonverbal Visual-Spatial Processing (SR, CS), Verbal Visual-Spatial Processing (Vz, Gc-VL, KO); Gsm: Nonverbal Working Memory (MS, MW, Gv-MV), Verbal Working Memory (MS, MW, Gc-LD); Glr: —; Ga: —; Gs: —.
DAS-II. Gf: Matrices (I), Picture Similarities (I), Sequential and Quantitative Reasoning (RQ); Gc: Early Number Concepts (VL, Gq-KM), Naming Vocabulary (VL), Word Definitions (VL, LD), Verbal Comprehension (LS), Verbal Similarities (LD, Gf-I); Gv: Pattern Construction (SR), Recall of Designs (MV), Recognition of Pictures (MV), Copying (Vz), Matching Letter-Like Forms (Vz); Gsm: Recall of Digits—Forward (MS), Recall of Digits—Backward (MW), Recall of Sequential Order (MW); Glr: Recall of Objects—Immediate (M6), Recall of Objects—Delayed (M6), Rapid Naming (NA, Gs-P); Ga: Phonological Processing (PC:S, PC:A); Gs: Speed of Information Processing (N, R9).

Note. CHC classifications are based on primary sources, such as Carroll (1993), Flanagan and Ortiz (2001), Flanagan, Ortiz, Alfonso, and Mascolo (2006), Horn (1991), Keith, Fine, Reynolds, Taub, and Kranzler (2006), McGrew (1997), and McGrew and Flanagan (1998). A dash (—) indicates that the battery includes no subtest classified as a measure of that broad ability. WISC-IV, Wechsler Intelligence Scale for Children—Fourth Edition (Wechsler, 2003); WAIS-IV, Wechsler Adult Intelligence Scale—Fourth Edition (Wechsler, 2008); WPPSI-III, Wechsler Preschool and Primary Scale of Intelligence—Third Edition (Wechsler, 2002); KABC-II, Kaufman Assessment Battery for Children—Second Edition (Kaufman & Kaufman, 2004); WJ III, Woodcock-Johnson III Tests of Cognitive Abilities (Woodcock, McGrew, & Mather, 2001); SB5, Stanford-Binet Intelligence Scales, Fifth Edition (Roid, 2003); DAS-II, Differential Ability Scales—Second Edition (Elliott, 2007).
Gf, fluid intelligence; Gc, crystallized intelligence; Gv, visual processing; Gsm, short-term memory; Glr, long-term storage and retrieval; Ga, auditory processing; Gs, processing speed; Gq, quantitative knowledge; RQ, quantitative reasoning; I, induction; RG, general sequential reasoning; RE, speed of reasoning; VL, lexical knowledge; KO, general (verbal) knowledge; LD, language development; LS, listening ability; MV, visual memory; SR, spatial relations; Vz, visualization; SS, spatial scanning; CF, flexibility of closure; CS, closure speed; MW, working memory; MS, memory span; MA, associative memory; L1, learning abilities; FI, ideational fluency; NA, naming facility; M6, free-recall memory; PC:S, phonetic coding: synthesis; PC:A, phonetic coding: analysis; US, speech sound discrimination; U3, general sound discrimination; P, perceptual speed; R9, rate of test taking; N, number facility; KM, math knowledge; A3, math achievement.

The WJ III measures all broad abilities adequately, whereas the DAS-II measures five adequately, leaving two (Ga, Gs) underrepresented. Also, a comparison of Tables 19.2 and 19.4 indicates that two broad abilities not measured by many intelligence batteries prior to 2000 are now measured by the majority of revised intelligence batteries available today—that is, Gf and Gsm. These broad abilities may be better represented on revised (and new) intelligence batteries because of the accumulating research evidence regarding their importance in overall academic success (see Flanagan & Alfonso, 2011). Finally, Table 19.4 reveals that these intelligence batteries continue to fall short in their measurement of three CHC broad abilities—specifically, Glr, Ga, and Gs. In addition, these batteries do not provide adequate measurement of most specific or narrow CHC abilities, many of which are important in predicting academic achievement.
Thus, although there is greater coverage of CHC broad abilities now than there was just a few years ago, the need for the XBA approach to assessment remains, particularly to ensure better measurement and interpretation of narrow abilities (Alfonso et al., 2005; Flanagan et al., 2007, 2012).

APPLICATION OF THE XBA APPROACH

Guiding Principles

In order to ensure that XBA procedures are theoretically and psychometrically sound, it is recommended that practitioners adhere to several guiding principles (McGrew & Flanagan, 1998). These principles are listed in Figure 19.1 and are defined briefly below. First, a practitioner should select a comprehensive intelligence battery as the core battery in assessment. It is expected that the battery of choice will be the one deemed most responsive to referral concerns. These batteries may include (but are certainly not limited to) the Wechsler scales, WJ III, SB5, DAS-II, and KABC-II. It is important to note that the use of co-normed tests (e.g., the WJ III Tests of Cognitive Abilities and Tests of Achievement, or the KABC-II and Kaufman Test of Educational Achievement—Second Edition [KTEA-II]) may allow for the widest coverage of broad and narrow CHC abilities and processes. Second, practitioners should use subtests and clusters/composites from a single battery whenever possible to represent broad CHC abilities. In other words, best practices involve using actual norms whenever they are available, in lieu of arithmetic averages of scaled scores from different batteries. In the past, it was necessary to convert subtest scaled scores from different batteries to a common metric and then average them (after determining that there was a nonsignificant difference between the scores) in order to build construct-relevant CHC broad-ability clusters. Because the development of current intelligence batteries has benefited greatly from CHC theory and research, this practice is seldom necessary at the broad-ability level.
It continues to be necessary at the narrow-ability level and for testing hypotheses about aberrant performance within broad-ability domains (see Flanagan et al., 2007, 2012, for details). Third, when constructing CHC broad- and narrow-ability clusters, practitioners should select tests that have been classified through an acceptable method, such as CHC theory-driven factor analyses or expert-consensus content validity studies. All test classifications included in the works of Flanagan and colleagues have been made through these acceptable methods (Flanagan et al., 2007, 2012). For example, when practitioners are constructing broad-ability (stratum II) composites or clusters, relatively pure CHC indicators should be included (i.e., tests that had either strong or moderate [but not mixed] loadings on their respective factors in theory-driven within- or cross-battery factor analyses). Furthermore, to ensure appropriate construct representation when practitioners are constructing broad-ability (stratum II) composites, two or more qualitatively different narrow-ability (stratum I) indicators should be included to represent each domain. Without empirical classifications of tests, constructs may not be adequately represented; therefore, inferences about an individual's broad (stratum II) ability cannot be made confidently. Of course, the more broadly an ability is represented (i.e., through the derivation of composites based on multiple qualitatively different narrow-ability indicators), the more confidence practitioners can have in drawing inferences about the broad ability underlying a composite. A minimum of two qualitatively different indicators per CHC broad ability is recommended in the XBA approach for practical reasons (viz., time-efficient assessment).
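The score-averaging practice described under the second principle reduces to a linear rescaling of each score to a common metric before combining. The following sketch (in Python, which is not part of the original text) uses illustrative values; the 1-SD screening threshold is a stand-in for the "nonsignificant difference" check, not the actual XBA decision rule. It converts a subtest scaled score (M = 10, SD = 3) and a T score (M = 50, SD = 10) to the standard-score metric (M = 100, SD = 15):

```python
def to_standard_score(score, mean, sd):
    """Rescale a score from its native metric to the standard-score metric (M=100, SD=15)."""
    z = (score - mean) / sd          # distance from the norm mean in SD units
    return 100 + 15 * z

# Illustrative values: a scaled score of 12 and a T score of 55 from two batteries.
a = to_standard_score(12, 10, 3)     # approx. 110
b = to_standard_score(55, 50, 10)    # 107.5

# Average into a two-subtest narrow-ability cluster only if the scores do not
# diverge markedly (the 1-SD screen here is arbitrary and purely illustrative).
cluster = (a + b) / 2 if abs(a - b) < 15 else None
```

As the text notes, actual XBA practice uses normed composites whenever they exist; averaging rescaled scores is a fallback mainly for narrow-ability clusters.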
Noteworthy is the fact that most intelligence tests also include only two qualitatively different indicators (subtests) to represent broad abilities, which is why constructing broad-ability clusters in the XBA approach is seldom necessary. Fourth, when at least two qualitatively different indicators of a broad ability of interest are not available on the core battery, a practitioner should supplement the core battery with at least two qualitatively different indicators of that broad ability from another battery. In other words, if an evaluator is interested in measuring auditory processing (Ga), and the core battery includes only one or no Ga subtests, then the evaluator should select a Ga cluster from another battery to supplement the core battery. This practice ensures that actual norms are used for interpreting broad-ability performance whenever they are available. Fifth, when crossing batteries (e.g., augmenting a core battery with relevant CHC clusters from another battery) or when constructing CHC narrow-ability clusters using tests from different batteries (e.g., averaging scores when the narrow-ability cluster of interest is not available), practitioners should select tests that were developed and normed within a few years of one another, to minimize the effect of spurious differences between test scores that may be attributable to the "Flynn effect" (Flynn, 1984, 2010). The tests recommended by Flanagan and her colleagues in their most recent XBA book include only those that were normed within 10 years of one another (Flanagan et al., 2012). Sixth, practitioners should select tests from the smallest possible number of batteries, to minimize the effect of spurious differences between test scores that may be attributable to differences in the characteristics of independent norm samples (McGrew, 1994).
In most cases, using selected tests from a single battery to augment the constructs measured by any other major cognitive battery is sufficient to represent the breadth of broad cognitive abilities adequately, as well as to allow for at least two or three qualitatively different narrow-ability indicators of most broad abilities (Flanagan et al., 2007). Seventh, practitioners should establish ecological validity for any and all test performances that are suggestive of normative weaknesses or deficits. The finding of a cognitive weakness or deficit is largely meaningless without evidence of how the weakness manifests in activities of daily living, including academic achievement (Flanagan et al., 2011). The validity of test findings is bolstered when clear connections are made between the cognitive dysfunction (as measured by standardized tests) and the educational impact of that dysfunction, for example, as observed in classroom performance and as may be gleaned from a student's work samples. To demonstrate, Table 19.5 includes information about (1) the major cognitive domains of functioning comprising CHC theory, (2) how deficits in these domains manifest in general as well as in specific academic areas, and (3) interventions and recommendations that can be tailored to the unique learning needs of the individual when such weaknesses are found. Noteworthy is the fact that when the XBA guiding principles are implemented systematically and the recommendations for development, use, and interpretation of clusters are adhered to, the potential error introduced through the crossing of norm groups is negligible (Flanagan et al., 2007). Additionally, the authors of Essentials of Cross-Battery Assessment included software with their book to facilitate the implementation of the XBA method and aid in the interpretation of cross-battery data (see Flanagan et al., 2007).
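The Flynn-effect rationale behind the fifth guiding principle can be made concrete with simple arithmetic. Assuming the commonly cited estimate of roughly 0.3 IQ points of norm inflation per year (an approximation associated with Flynn's research, not a figure stated in this chapter), the expected spurious gap between two batteries grows with the age difference of their norms:

```python
FLYNN_RATE_PER_YEAR = 0.3  # assumed norm inflation in IQ points per year (approximation)

def expected_norm_gap(norm_year_a, norm_year_b, rate=FLYNN_RATE_PER_YEAR):
    """Spurious score difference expected solely because one set of norms is older."""
    return abs(norm_year_a - norm_year_b) * rate

# Two batteries normed 15 years apart could differ by about 4.5 points for the
# same examinee before any true ability difference enters the picture.
gap = expected_norm_gap(1989, 2004)
```

Under this assumed rate, the 10-year norming window recommended by Flanagan and colleagues caps the expected Flynn-related discrepancy at about 3 points.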
(The XBA approach may be carried out following a straightforward set of steps, which are detailed in Flanagan et al., 2007, 2012.)

HISTORICAL PERSPECTIVE AND CURRENT DIRECTIONS OF THE XBA APPROACH

The preceding discussion was designed to provide readers with an overview of the foundation, rationale, and guiding principles that underlie the XBA approach. Because a definitive discussion of all aspects of the approach was beyond the scope of this chapter, the reader is referred to other sources for comprehensive step-by-step XBA procedures (i.e., Flanagan et al., 2007, 2012). But beyond what the XBA approach is, and the manner in which it is implemented, an understanding of its evolution provides the perspective necessary to appreciate its impact and influence on applied domains of psychology, in particular cognitive evaluation and interpretation. To that end, we offer Table 19.6. This table provides an annotated chronology of some of the more significant past and present contributions of the XBA approach and shows how it has evolved in response to practice-based changes in the field. For example, Flanagan and McGrew (1997) originally developed the XBA approach based on the need within the field to narrow the theory-practice gap. That is, because the major intelligence tests of the time did not measure the breadth of abilities inherent in contemporary theory on the structure of cognitive

TABLE 19.5.
CHC Broad Ability Definitions, Manifestations of CHC Weaknesses, and Suggested Recommendations and Interventions General manifestations of the Specific manifestations of the CHC CHC hroad-abiiity CHC broad-ability definition CHC broad-ability weakness broad-ability weakness in academic areas Recommendations/interventions Fluid Reasoning (Gf) Crystallized Intelligence (Gc) Auditory Processing (Ga) Novel reasoning and problem solving Processes are minimally dependent on learning and acculturation Involves manipulating rules, abstracting, generalizing, and identifying logical relationships Breadth and depth and knowledge of a culture Developed through formal education and general learning experiences Stores of information and declarative and procedural knowledge Ability to verbally communicate and reason with previously learned procedures • Ability to analyze and synthesize auditory information Difficulties with: • Higher-level thinking • Transferring or generalizing • learning • Deriving solutions for novel problems • Extending knowledge through critical thinking • Perceiving and applying underlying rules or process (es) to solve problems Difficulties with: • Vocabulary acquisition • Knowledge acquisition • Comprehending language • Fact-based/informational questions • Using prior knowledge to support learning Difficulties with: * Hearing information presented orally, initially processing oral information Redding Difficulties: • Inferential reading comprehension • Abstracting main idea(s) Math Difficulties; • Math reasoning (word problems) • Internalizing procedures and processes used to solve problems • Apprehending relationships between numbers Writing Difficulties; • Essay writing and generalizing concepts • Developing a theme • Comparing and contrasting ideas Reading Difficulties: Decoding and comprehension Math Difficulties: • Understanding math concepts and the "vocabulary of math" Writing Difficulties: • Grammar (syntax) • Bland writing with limited descriptors 
• Verbose writing with repetitions • Inappropriate word usage Language Difficulties; • Understanding class lessons • Expressive language—"poverty of thought" Reading Difficulties: • Acquiring phonics skills • Decoding and comprehension • . Using phonetic strategies Math Difficulties; • Develop student's skill in categorizing objects and drawing conclusions • Use demonstrations to externalize the reasoning process • Gradually offer guided practice (e.g., guided questions list) to promote internalization of procedures or process (es) • Targeted feedback • Cooperative learning • Reciprocal teaching • Graphic organizers to arrange information in visual format ® Metacognitive strategies • Comparison of new concepts to previously learned (same. vs. different) • Using analogies, similes, and metaphors when presenting tasks • Provide an environment rich in language and experiences • Frequent practice with and exposure to words a Read aloud to children • Vary reading purpose (leisure, information) • Work on vocabulary building • Teach morphology • Use text talks • Phonemic awareness activities • Emphasis.on sight-word reading • Teach comprehension monitoring (e.g., does the word I heard/read make sense m contcxi.') • Paying attention, especially in the presence of background noise • Discerning the direction from which auditory information is coming • Foreign language acquisition • Acquiring receptive vocabulary • Word problems Writing Difficulties: • Spelling • Note-taking • Poor quality of writing Long-Term Retrieval (Glr) Ability to store information (e.g., concepts, words, facts) and fluently retrieve it later through association Difficulties with: • Learning new concepts • Retrieving or recalling information by using association • Performing consistently across different task formats (e.g., recognition vs. 
recall formats) • Speed with which information is retrieved and/or learned • Paired learning (visual-auditory) • Recalling specific information (words, facts) Reading Difficulties: • Accessing background knowledge to support new learning while reading (Associative Memory deficit) • Slow to access phonological representations during decoding (RAN deficit) Math Difficulties: • Recalling procedures to use for math problems • Memorizing and recalling math facts Writing Difficulties: • Accessing words to use during essay writing • Specific writing tasks (compare and contrast; persuasive writing; conceptual) • Note-taking Language Difficulties: • Expressive—circumlocutions, speech fillers, "interrupted" thought, pauses • Receptive—making connections throughout oral presentations (e.g., class lectures) • Annunciating sounds in words in an emphatic manner when teaching new words for reading or spelling • Use work preview/text preview to clarify unknown words • Provide guided notes during note-taking activities • Build in time for clarification questions related to "missed" or "misheard" items during lecture • Supplement oral instructions with written instructions • Shortening instructions • Pteferential seating • Localizing sound source for student • Minimizing background noise • Repeated practice with and review of newly presented information • Teach memory strategies (verbal rehearsal to support encoding, use of mnemonic devices) • Use multiple modalities when teaching new concepts (pair written with verbal information) • Limit the amount of new material to be learned; introduce new concepts gradually and with a lot of context • Be mindful of when new concepts are presented • Make associations between newly learned and prior explicit information • Use lists to facilitate recall (prompts) • Expand vocabulary to minimize impact of word-retrieval deficits • Build in wait-time for student when fluency of retrieval is an issue • Use text previews to "prime" knowledge • 
• Provide background knowledge first before asking a question to "prime" the student for retrieval

TABLE 19.5. (cont.)

Processing Speed (Gs)
Definition: Speed of processing, particularly when required to pay focused attention; usually measured by tasks that require rapid processing but are relatively easy.
General manifestations of weakness (difficulties with):
• Efficient processing of information
• Quickly perceiving relationships (similarities and differences between stimuli or information)
• Working within time parameters
• Completing simple, rote tasks quickly
Specific manifestations in academic areas:
Reading difficulties:
• Slow reading speed
• Impaired comprehension
• Need to reread for understanding
Math difficulties:
• Automatic computations
• Computational speed is slow despite accuracy
• Slow speed can result in reduced accuracy due to memory decay
Writing difficulties:
• Limited output due to time factors
• Labored process results in reduced motivation to produce
Language difficulties:
• Cannot retrieve information quickly; slow, disrupted speech because thoughts cannot be expressed quickly enough
• Slow to process incoming information, which places demands on memory stores and can result in information overload and loss of meaning
Recommendations/interventions:
• Repeated practice
• Speed drills
• Computer activities that require quick, simple decisions
• Extended time
• Reducing the quantity of work required
• Increasing wait times both after questions are asked and after responses are given

Visual Processing (Gv)
Definition: Ability to generate, perceive, analyze, synthesize, manipulate, transform, and think with visual patterns and stimuli.
General manifestations of weakness (difficulties with):
• Recognizing patterns
• Reading maps, graphs, and charts
• Attending to fine visual detail
• Recalling visual information
• Appreciation of spatial characteristics of objects (e.g., size, length)
• Recognition of spatial orientation of objects
Specific manifestations in academic areas:
Reading difficulties:
• Orthographic coding (using visual features of letters to decode)
• Sight-word acquisition
• Using charts and graphs within a text in conjunction with reading
• Comprehension of text involving spatial concepts (e.g., a social studies text describing physical boundaries or movement of troops along a specified route)
Math difficulties:
• Number alignment during computations
• Reading and interpreting graphs, tables, and charts
Writing difficulties:
• Spelling sight words
• Spatial planning during writing tasks (e.g., no attention to margins, words that overhang a line)
• Inconsistent size, spacing, position, and slant of letters
Recommendations/interventions:
• Capitalize on the student's phonemic skills for decoding tasks
• Teach orthographic strategies for decoding (e.g., word length, shape of word)
• Provide oral explanations for visual concepts
• Review spatial concepts and support comprehension through use of hands-on activities and manipulatives (e.g., using models to demonstrate the moon's orbital path)
• Highlight margins during writing tasks
• Provide direct handwriting practice
• Use graph paper to assist with number alignment

Short-Term Memory (Gsm)
Definition: Ability to hold information in immediate awareness and use or transform it within a few seconds.
General manifestations of weakness (difficulties with):
• Following oral and written instructions
• Remembering information long enough to apply it
• Remembering the sequence of information
• Rote memorization
Specific manifestations in academic areas:
Reading difficulties:
• Reading comprehension
• Decoding multisyllabic words
• Orally retelling or paraphrasing what one has read
Math difficulties:
• Rote memorization of facts
• Remembering mathematical procedures
• Multistep problems and regrouping
• Extracting information to be used in word problems
Writing difficulties:
• Spelling multisyllabic words
• Redundancy in writing (word and conceptual levels)
• Note-taking
Recommendations/interventions:
• Provide opportunities for repeated practice and review
• Provide supports (e.g., lecture notes, study guides, written directions) to supplement oral instruction
• Break down instructional steps for the student
• Provide visual supports (e.g., a times table) to support acquisition of basic math facts
• Outline math procedures for the student and provide procedural guides or flash cards for the student to use when approaching problems
• Highlight important information within a word problem
• Have the student write all steps and show all work for math computations

TABLE 19.6. Past and Present Contributions of the XBA Approach to Psychological Evaluation

Flanagan, Genshaft, and Harrison (1997)
• First attempt at merging the Cattell-Horn Gf-Gc theory and Carroll's three-stratum theory (McGrew, 1997), which represented the foundation of cross-battery assessment (XBA).
• First expert consensus study regarding the narrow abilities measured by intelligence tests (McGrew, 1997), an important component of XBA.
• Introduced the need for XBA and the assumptions, foundations, and operationalized set of principles that comprise it (Flanagan & McGrew, 1997).

McGrew and Flanagan (1998)
• Introduced a step-by-step approach to XBA in an attempt to improve upon the measurement of cognitive constructs.
• Demonstrated how the XBA approach guarded against two ubiquitous sources of invalidity in assessment: construct-irrelevant variance and construct underrepresentation.
• Provided worksheets for organizing assessments according to contemporary Gf-Gc theory and for conducting XBA.
• Provided a review of the research on the relations between broad and narrow Gf-Gc abilities and academic (reading and math) and occupational outcomes.
• Provided a desk reference of all the major intelligence tests, which included important information for each subtest as a means of informing interpretation of XBA data (e.g., reliability, validity, standardization sample characteristics, test floors and ceilings, item gradients, variables influencing subtest performance, g loadings, broad and narrow abilities measured by subtest).
• Provided the first comprehensive set of theory-based classifications of tests in an attempt to further establish a Gf-Gc nomenclature for the field.
• Highlighted the importance of joint or cross-battery confirmatory factor-analytic studies for understanding the Gf-Gc broad abilities underlying intelligence tests.
• Provided the first set of systematic classifications of ability tests according to degree of cultural loading and degree of linguistic demand.
• Introduced the "Integrated Cattell-Horn and Carroll Gf-Gc Model" as the foundation for cross-battery assessment, based on analyses conducted by McGrew (e.g., McGrew, 1997). This integrated model was renamed "Cattell-Horn-Carroll (CHC) theory" shortly thereafter (see McGrew, 2005, for details).

Flanagan, McGrew, and Ortiz (2000)
• Applied Gf-Gc theory to interpretation of the Wechsler scales.
• Demonstrated that the Wechsler scales included redundancy in the assessment of certain constructs (e.g., Gc and Gv) and omitted measurement of other important constructs (e.g., Gf, Ga, and Glr).
• Offered step-by-step XBA guidelines for augmenting a Wechsler scale so that a broader range of cognitive abilities could be measured as deemed relevant and necessary vis-à-vis referral concerns.
• Provided a set of worksheets for conducting XBA with the Wechsler scales.

Flanagan and Ortiz (2001)
• Used CHC theory as the foundation for XBA.
• Expanded test classifications to include a variety of special-purpose tests in addition to the major intelligence tests.
• Included more comprehensive coverage of test interpretation.
• Provided updated and improved XBA worksheets.
• Expert consensus studies provided the basis for narrow-ability classifications of cognitive tests.
• Refined classifications of ability tests according to degree of cultural loading and degree of linguistic demand.

Flanagan, Ortiz, Alfonso, and Mascolo (2002)
• Extended the XBA approach to achievement tests.
• Included the largest expert consensus study of the narrow abilities underlying ability tests.
• Provided an updated review of the literature on the relations between cognitive abilities and reading and math achievement; the review was expanded to include the area of written language.
• Demonstrated how to use the XBA approach within the context of a CHC-based operational definition of SLD.
• Provided a desk reference of achievement tests, which included important information for each subtest (e.g., reliability, validity, standardization sample characteristics, test floors and ceilings, broad and narrow abilities measured by each subtest).
• Included tables of the qualitative characteristics of individual achievement subtests from 48 batteries—information that informs test selection for XBA as well as interpretation.

TABLE 19.6. (cont.)
Flanagan and Kaufman (2004, 2009)
• Provided a CHC interpretive framework for the WISC-IV, thereby facilitating the use of this instrument in the XBA approach.
• Included actual norms for seven CHC-based clinical clusters, including narrow-ability clusters that were incorporated into the XBA approach.
• Automated the CHC interpretation method for the WISC-IV (program included on the CD that accompanies the book).

Flanagan and Harrison (2005)
• Detailed the origins of the XBA approach and the theoretical and research foundation upon which it was based (McGrew, 1997, 2005).
• Detailed the manner in which CHC theory and the XBA approach influenced test development (Alfonso, Flanagan, & Radwan, 2005).
• Highlighted the XBA approach as an example of the current "wave" of intelligence test interpretation: application of theory (Kamphaus, Winsor, Rowe, & Kim, 2005).

Flanagan, Ortiz, Alfonso, and Mascolo (2006)
• Included variation in task characteristics of the subtests of over 50 achievement batteries—information that informs test selection for XBA as well as interpretation.
• Updated CHC-based classifications of achievement tests.
• Provided a desk reference of achievement tests, which included important information for each subtest (e.g., reliability, validity, standardization sample characteristics, test floors and ceilings, broad and narrow abilities measured by each subtest).
• Revised and refined the operational definition of SLD and demonstrated how to use the XBA approach within the context of this definition.
• Introduced Academic Clinical Clusters according to the eight areas of specific learning disability listed in IDEA 2004.

Flanagan, Ortiz, and Alfonso (2007)
• Introduced automated XBA worksheets in a program called the XBA Data Management and Interpretive Assistant (DMIA).
• Introduced an automated Culture-Language Interpretive Matrix (C-LIM) program to evaluate whether test performance systematically declines as a function of increased culture and language demands for English language learners.
• Introduced an automated program called the SLD Assistant, intended to assist in determining whether an individual was of at least average overall intellectual ability despite cognitive deficits in one or more specific areas.
• Uses core tests (and supplemental tests as may be necessary) from a single battery, rather than selected components of a battery, as part of the assessment because (1) current intelligence tests have better representation of the broad CHC abilities and use only two or three subtests to represent them; and (2) the broad abilities measured by current intelligence batteries are typically represented by qualitatively different indicators that are relevant only to the broad ability intended to be measured.
• Placed greater emphasis on use of actual norms, rather than averages; averages are obtained only under a select few circumstances (e.g., at the narrow-ability level).
• Expanded coverage of CHC theory to include abilities typically measured on achievement tests (e.g., Grw, Gq, Ga), providing additional information useful in the identification of specific learning disabilities.
• Addressed the "disorder in a basic psychological process" language of IDEA (2004).
• Demonstrated how the XBA approach might be used to operationalize the "pattern of strengths and weaknesses" language of the federal regulations (2006).

Flanagan, Alfonso, Ortiz, and Dynda (2010)
• Extended CHC classifications to neuropsychological instruments, thus expanding the range of instruments that might be used in the XBA approach.
• Applied neuropsychological domain classifications to cognitive tests, which was intended to expand the interpretive options for XBA data.
• Applied XBA principles to neuropsychological evaluation.
This volume
• Expanded CHC theory to include 16 broad abilities and over 80 narrow abilities (Schneider & McGrew, Chapter 4).
• Emphasized the relevance of the XBA approach for augmenting stand-alone batteries (e.g., McCallum & Bracken, Chapter 14).

Flanagan, Ortiz, and Alfonso (2012)
• Expands coverage of CHC theory to include abilities not measured by most major intelligence and cognitive batteries (e.g., Gh-tactile abilities, Gk-kinesthetic abilities).
• Incorporates and integrates all current intelligence batteries (i.e., WJ III, WPPSI-III, WISC-IV, SB5, KABC-II, DAS-II, and WAIS-IV), tests of academic achievement, and selected neuropsychological instruments.
• Provides a stronger emphasis on using actual norms when available.
• Includes more stringent guidelines for averaging subtest scores from the same or different batteries under specific circumstances.
• Summarizes current research on the relations between cognitive abilities and processes and academic skills, and places even greater emphasis on forming narrow CHC ability clusters given their importance in understanding academic outcomes.
• The DMIA was revised so that it incorporates and integrates all features of the XBA approach and includes interpretive statements. It also includes tabs for all current intelligence batteries, major achievement tests (e.g., WJ III Tests of Achievement), and co-normed (e.g., KABC-II and KTEA-II) or linked (WISC-IV and WIAT-III) batteries. Additionally, the DMIA now uses a variety of criteria to determine whether within-battery clusters are cohesive.
• Revised the SLD Assistant and renamed it the Ability, Aptitude, and Response to Intervention Estimator (AARTIE). This program allows practitioners to estimate the likelihood that a student with a specific pattern of strengths and weaknesses, for example, will respond to high-quality instruction and intervention in a manner that approximates the rate and level of learning typical of average same-grade peers.
• Revised and updated the C-LIM to include current cognitive tests, special-purpose tests, and selected neuropsychological instruments. The C-LIM now provides additional features for evaluating individuals based on varying levels of language proficiency, acculturative knowledge, and/or giftedness. The C-LIM also allows for an examination of cognitive performance by the influences of language or culture independently.
• Classifies current cognitive batteries according to neuropsychological domains of functioning (e.g., sensorimotor, visual-spatial, speed and efficiency, executive).
• Includes examples of how the XBA approach is used within the context of various state and district criteria for SLD identification.
• Includes guidelines for linking findings of cognitive weaknesses or deficits to intervention.

Because no single battery measured the full range of cognitive abilities, there was a need to systematically supplement these batteries with tests from other batteries to broaden an assessment of cognitive functioning and thereby address referral concerns more comprehensively and directly. Also, because research on the relations between cognitive abilities and processes demonstrated the importance of narrow (rather than broad) abilities in explaining academic skill acquisition and development, there was a need in the field to measure narrow abilities reliably and validly. Flanagan and colleagues (2006, 2007) presented the research on the narrow abilities that are most important in understanding reading, math, and writing achievement and provided a means of measuring these narrow abilities as part of the XBA approach.
The information in Table 19.6 also reflects how the XBA approach has served to engender changes in practice. For example, in the past, the lack of theoretical clarity of widely used intelligence tests (e.g., the Wechsler scales) confounded interpretation and adversely affected the examiner's ability to draw clear and useful conclusions from the data. The principles and procedures of XBA put forth by Flanagan, McGrew, and their colleagues (e.g., Flanagan, McGrew, & Ortiz, 2000) aided test authors and publishers in clarifying the theoretical underpinnings of their instruments. The XBA approach has also influenced test construction (Alfonso et al., 2005). In particular, the XBA approach was designed to reduce major sources of invalidity in assessment known as construct-irrelevant variance and construct underrepresentation. Test authors and publishers have addressed these problems in the current editions of their intelligence tests and cognitive batteries. To illustrate, two composites (containing construct-irrelevant variance) on the Wechsler scales that were interpreted routinely over a period of several decades—the Verbal IQ and Performance IQ—were dropped from the current editions of the WISC and the WAIS. As another example, the WJ-R Gc cluster was underrepresented because it contained two tests that measured only lexical knowledge. The current WJ III Gc cluster provides an adequate representation of this broad ability because it contains two qualitatively different measures of Gc (i.e., the cluster was expanded to measure general information in addition to lexical knowledge). The XBA approach continues to shape the field of applied psychology and influence cognitive evaluation.
Greater integration of neuropsychological constructs, more psychometric rigor behind assessing and interpreting cognitive constructs, expanded application to the evaluation of SLD as well as evaluation of other populations (e.g., preschool), and more emphasis on the relations between the narrow CHC abilities and specific academic skills are just some examples of how the XBA approach continues to evolve. We believe that such developments will enhance both the reliability and validity of evaluations in future practice while at the same time providing information that is directly relevant to the learning and instructional needs of examinees.

SUMMARY

In this chapter we presented the XBA approach as a method that allows practitioners to augment or supplement any major ability test (e.g., cognitive, neuropsychological, academic, speech-language) to ensure measurement of a wider range of broad and narrow cognitive abilities in a manner that is consistent with contemporary theory and research and that is predicated upon sound psychometric principles. The foundational sources of information upon which the XBA approach was formulated (e.g., CHC theory and the classifications of ability tests according to this theory), coupled with straightforward step-by-step procedures, provide a way to systematically construct a theoretically driven, comprehensive, and valid assessment of a wide range of cognitive abilities and processes. When the XBA approach is applied to the Wechsler intelligence scales, for example, it is possible to measure important abilities that would otherwise not be assessed (e.g., Ga, Glr)—abilities that are important in understanding school learning and certain vocational and occupational outcomes (e.g., Flanagan et al., 2006; Flanagan & Kaufman, 2009).
The XBA approach allows for the measurement of the major cognitive areas specified in CHC theory, with emphasis on those considered most critical on the basis of history, observation, response to intervention, and other available sources of data. The CHC classifications of a multitude of ability tests bring stronger content and construct validity evidence to the evaluation and interpretation process. As test development continues to evolve and becomes increasingly more sophisticated (psychometrically and theoretically), batteries of the future will undoubtedly possess stronger content and construct validity. (The above comparison of Tables 19.2 and 19.4 illustrated this point.) Improvements in test construction notwithstanding, it is unrealistic from an economic and practical standpoint to develop a battery that operationalizes contemporary CHC theory fully (Carroll, 1998; Flanagan et al., 2007). Therefore, it is likely that the XBA approach will remain important as the empirical support for CHC theory mounts and the need to evaluate comprehensively a greater range of abilities continues (Reynolds et al., 2011).

NOTE

1. Das and Naglieri developed the CAS from PASS theory; therefore, their test is based on an information-processing theory, rather than any specific theory within the psychometric tradition (see Naglieri & Otero, Chapter 15, this volume).

REFERENCES

Alfonso, V. C., Flanagan, D. P., & Radwan, S. (2005). The impact of the Cattell-Horn-Carroll theory on test development and interpretation of cognitive and academic abilities. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (2nd ed., pp. 185-202). New York: Guilford Press.
Borgas, K. (1999). Intelligence theories and psychological assessment: Which theory of intelligence guides your interpretation of intelligence test profiles? The School Psychologist, 53, 24-25.
Brackett, J., & McPherson, A. (1996). Learning disabilities diagnosis in postsecondary students: A comparison of discrepancy-based diagnostic models. In N. Gregg, C. Hoy, & A. F. Gay (Eds.), Adults with learning disabilities: Theoretical and practical perspectives (pp. 68-84). New York: Guilford Press.
Briggs, S. R., & Cheek, J. M. (1986). The role of factor analysis in the development and evaluation of personality scales. Journal of Personality, 54(1), 106-148.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge, UK: Cambridge University Press.
Carroll, J. B. (1998). Foreword. In K. S. McGrew & D. P. Flanagan, The intelligence test desk reference: Gf-Gc cross-battery assessment (pp. xi-xii). Boston: Allyn & Bacon.
Clark, L. A., & Watson, D. (1995). Constructing validity: Basic issues in objective scale development. Psychological Assessment, 7, 309-319.
Comrey, A. L. (1988). Factor-analytic methods of scale development in personality and clinical psychology. Journal of Consulting and Clinical Psychology, 56, 754-761.
Daniel, M. H. (1997). Intelligence testing: Status and trends. American Psychologist, 52, 1038-1045.
Decker, S. L. (2008). School neuropsychology consultation in neurodevelopmental disorders. Psychology in the Schools, 45, 799-811.
Elliott, C. D. (1990). Differential Ability Scales. San Antonio, TX: Psychological Corporation.
Elliott, C. D. (2007). Differential Ability Scales—Second Edition. San Antonio, TX: Harcourt Assessment.
Epstein, S. (1983). Aggression and beyond: Some basic issues on the prediction of behavior. Journal of Personality, 51, 360-392.
Esters, I. G., Ittenbach, R. F., & Han, K. (1997). Today's IQ tests: Are they really better than their historical predecessors? School Psychology Review, 26, 211-223.
Flanagan, D. P., & Alfonso, V. C. (Eds.). (2011). Essentials of specific learning disability identification. Hoboken, NJ: Wiley.
Flanagan, D. P., Alfonso, V. C., & Mascolo, J. T. (2011). A CHC-based operational definition of SLD: Integrating multiple data sources and multiple data-gathering methods. In D. P. Flanagan & V. C. Alfonso (Eds.), Essentials of specific learning disability identification (pp. 233-298). Hoboken, NJ: Wiley.
Flanagan, D. P., Alfonso, V. C., Ortiz, S. O., & Dynda, A. M. (2010). Integrating cognitive assessment in school neuropsychological evaluations. In D. C. Miller (Ed.), Best practices in school neuropsychology: Guidelines for effective practice, assessment, and evidence-based intervention (pp. 101-140). Hoboken, NJ: Wiley.
Flanagan, D. P., Genshaft, J. L., & Harrison, P. L. (Eds.). (1997). Contemporary intellectual assessment: Theories, tests, and issues. New York: Guilford Press.
Flanagan, D. P., & Harrison, P. L. (Eds.). (2005). Contemporary intellectual assessment: Theories, tests, and issues (2nd ed.). New York: Guilford Press.
Flanagan, D. P., & Kaufman, A. S. (2004). Essentials of WISC-IV assessment. Hoboken, NJ: Wiley.
Flanagan, D. P., & Kaufman, A. S. (2009). Essentials of WISC-IV assessment (2nd ed.). Hoboken, NJ: Wiley.
Flanagan, D. P., & McGrew, K. S. (1997). A cross-battery approach to assessing and interpreting cognitive abilities: Narrowing the gap between practice and cognitive science. In D. P. Flanagan, J. L. Genshaft, & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 314-325). New York: Guilford Press.
Flanagan, D. P., McGrew, K. S., & Ortiz, S. O. (2000). The Wechsler intelligence scales and Gf-Gc theory: A contemporary approach to interpretation. Needham Heights, MA: Allyn & Bacon.
Flanagan, D. P., & Ortiz, S. O. (2001). Essentials of cross-battery assessment. New York: Wiley.
Flanagan, D. P., Ortiz, S. O., & Alfonso, V. C. (2007). Essentials of cross-battery assessment (2nd ed.). New York: Wiley.
Flanagan, D. P., Ortiz, S. O., & Alfonso, V. C. (2012). Essentials of cross-battery assessment (3rd ed.). Hoboken, NJ: Wiley.
Flanagan, D. P., Ortiz, S. O., Alfonso, V. C., & Mascolo, J. T. (2002). The achievement test desk reference (ATDR): Comprehensive assessment and learning disabilities. Boston: Allyn & Bacon.
Flanagan, D. P., Ortiz, S. O., Alfonso, V. C., & Mascolo, J. T. (2006). The achievement test desk reference (ATDR): A guide to learning disability identification. Boston: Allyn & Bacon.
Flynn, J. R. (1984). The mean IQ of Americans: Massive gains 1932 to 1978. Psychological Bulletin, 95, 29-51.
Flynn, J. R. (2010). Problems with IQ gains: The huge vocabulary gap. Journal of Psychoeducational Assessment, 28, 412-433.
Genshaft, J. L., & Gerner, M. (1998). CHC cross-battery assessment: Implications for school psychologists. Communique, 26(8), 24-27.
Guilford, J. P. (1954). Psychometric methods (2nd ed.). New York: McGraw-Hill.
Horn, J. L. (1991). Measurement of intellectual capabilities: A review of theory. In K. S. McGrew, J. K. Werder, & R. W. Woodcock, Woodcock-Johnson technical manual (pp. 197-232). Chicago: Riverside.
Kamphaus, R. W., Winsor, A. P., Rowe, E. W., & Kim, S. (2005). A history of intelligence test interpretation. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment (2nd ed., pp. 23-38). New York: Guilford Press.
Kaufman, A. S. (2000). Foreword. In D. P. Flanagan, K. S. McGrew, & S. O. Ortiz, The Wechsler intelligence scales and Gf-Gc theory: A contemporary approach to interpretation. Needham Heights, MA: Allyn & Bacon.
Kaufman, A. S., & Kaufman, N. L. (1983). Kaufman Assessment Battery for Children. Circle Pines, MN: American Guidance Service.
Kaufman, A. S., & Kaufman, N. L. (1993). Kaufman Adolescent and Adult Intelligence Test. Circle Pines, MN: American Guidance Service.
Kaufman, A. S., & Kaufman, N. L. (2004). Kaufman Assessment Battery for Children—Second Edition. Circle Pines, MN: American Guidance Service.
Keith, T. Z., Fine, J. G., Reynolds, M. R., Taub, G. E., & Kranzler, J. H. (2006). Hierarchical, multi-sample, confirmatory factor analysis of the Wechsler Intelligence Scale for Children—Fourth Edition: What does it measure? School Psychology Review, 35, 108-127.
Keith, T. Z., & Reynolds, M. R. (2010). CHC and cognitive abilities: What we've learned from 20 years of research. Psychology in the Schools, 47, 635-650.
Lezak, M. D. (1976). Neuropsychological assessment. New York: Oxford University Press.
Lezak, M. D. (1995). Neuropsychological assessment (3rd ed.). New York: Oxford University Press.
Lezak, M. D., Howieson, D. B., & Loring, D. W. (2004). Neuropsychological assessment (4th ed.). New York: Oxford University Press.
McGrew, K. S. (1994). Clinical interpretation of the Woodcock-Johnson Tests of Cognitive Ability—Revised. Boston: Allyn & Bacon.
McGrew, K. S. (1997). Analysis of the major intelligence batteries according to a proposed comprehensive CHC framework. In D. P. Flanagan, J. L. Genshaft, & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (pp. 151-180). New York: Guilford Press.
McGrew, K. S. (2005). The Cattell-Horn-Carroll theory of cognitive abilities: Past, present, and future. In D. P. Flanagan & P. L. Harrison (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (2nd ed., pp. 136-182). New York: Guilford Press.
McGrew, K. S., & Flanagan, D. P. (1998). The intelligence test desk reference (ITDR): Gf-Gc cross-battery assessment. Boston: Allyn & Bacon.
McGrew, K. S., & Wendling, B. J. (2010). Cattell-Horn-Carroll cognitive-achievement relations: What we have learned from the past 20 years of research. Psychology in the Schools, 47, 651-675.
Messick, S. (1989). Validity. In R. Linn (Ed.), Educational measurement (3rd ed., pp. 104-131). Washington, DC: American Council on Education.
Messick, S. (1992). Multiple intelligences or multilevel intelligence?: Selective emphasis on distinctive properties of hierarchy: On Gardner's Frames of Mind and Sternberg's Beyond IQ in the context of theory and research on the structure of human abilities. Psychological Inquiry, 3, 365-384.
Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons' responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741-749.
Naglieri, J. A., & Das, J. P. (1997). Cognitive Assessment System. Itasca, IL: Riverside.
Reynolds, M. R., Keith, T. Z., Flanagan, D. P., & Alfonso, V. C. (2011). CHC taxonomy: Invariance of selection of variables and population. Manuscript in preparation.
Roid, G. H. (2003). Stanford-Binet Intelligence Scales, Fifth Edition. Itasca, IL: Riverside.
Sternberg, R. J., & Kaufman, J. C. (1998). Human abilities. Annual Review of Psychology, 49, 479-502.
Thorndike, R. L., Hagen, E. P., & Sattler, J. M. (1986). Stanford-Binet Intelligence Scale: Fourth Edition. Chicago: Riverside.
Wechsler, D. (1981). Wechsler Adult Intelligence Scale—Revised. New York: Psychological Corporation.
Wechsler, D. (1989). Wechsler Preschool and Primary Scale of Intelligence—Revised. San Antonio, TX: Psychological Corporation.
Wechsler, D. (1991). Wechsler Intelligence Scale for Children—Third Edition. San Antonio, TX: Psychological Corporation.
Wechsler, D. (1997). Wechsler Adult Intelligence Scale—Third Edition. San Antonio, TX: Psychological Corporation.
Wechsler, D. (2002). Wechsler Preschool and Primary Scale of Intelligence—Third Edition. San Antonio, TX: Psychological Corporation.
Wechsler, D. (2003). Wechsler Intelligence Scale for Children—Fourth Edition. San Antonio, TX: Psychological Corporation.
Wechsler, D. (2008). Wechsler Adult Intelligence Scale—Fourth Edition. San Antonio, TX: Pearson.
Wilson, B. C. (1992). The neuropsychological assessment of the preschool child: A branching model. In I. Rapin & S. J. Segalowitz (Vol. Eds.), Handbook of neuropsychology: Vol. 6. Child neuropsychology (pp. 377-394). Amsterdam: Elsevier.
Woodcock, R. W. (1990). Theoretical foundations of the WJ-R measures of cognitive ability. Journal of Psychoeducational Assessment, 8, 231-258.
Woodcock, R. W., & Johnson, M. B. (1989). Woodcock-Johnson Psycho-Educational Battery—Revised. Chicago: Riverside.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001, 2007). Woodcock-Johnson III Tests of Cognitive Abilities. Itasca, IL: Riverside.