doi:10.1093/brain/awl250                                         Brain (2006), 129, 2571–2584

Making non-fluent aphasics speak: sing along!

Amélie Racette,1 Céline Bard2 and Isabelle Peretz1

1 Department of Psychology, University of Montreal and 2 Department of Radiology, Centre Hospitalier Universitaire de Montréal, Montreal, Quebec, Canada

Correspondence to: Isabelle Peretz, PhD, Département de psychologie, Université de Montréal, Pavillon Marie-Victorin, local D-418, 90, avenue Vincent-d'Indy, Montreal, Quebec, H2V 2S9, Canada
E-mail: isabelle.peretz@umontreal.ca

A classic observation in neurology is that aphasics can sing words they cannot pronounce otherwise. To further assess this claim, we investigated the production of sung and spoken utterances in eight brain-damaged patients suffering from a variety of speech disorders as a consequence of a left-hemisphere lesion. In Experiment 1, the patients were tested in the repetition and recall of words and notes of familiar material. Lyrics of familiar songs, as well as words of proverbs and prayers, were not better pronounced in singing than in speaking. Notes were better produced than words. In Experiment 2, the aphasic patients repeated and recalled lyrics from novel songs. Again, they did not produce more words in singing than in speaking. In Experiment 3, when allowed to sing or speak along with an auditory model while learning novel songs, aphasics repeated and recalled more words when singing than when speaking. Reduced speed or shadowing cannot account for this advantage of singing along over speaking in unison. The results suggest that singing in synchrony with an auditory model (choral singing) is more effective than choral speech, at least in French, in improving word intelligibility, because choral singing may entrain more than one auditory–vocal interface. Thus, choral singing appears to be an effective means of speech therapy.

Keywords: aphasia; singing; speech; melodic intonation therapy; music

Abbreviations: MIT = Melodic Intonation Therapy

Received June 26, 2006. Revised August 11, 2006. Accepted August 14, 2006. Advance Access publication September 7, 2006.

Introduction
Singing is considered to be an effective means for non-fluent aphasics to produce words that they are not able to pronounce otherwise (Keith and Aronson, 1975; Assal et al., 1977; Yamadori et al., 1977; Jacome, 1984). Singing may facilitate speech at different stages of processing: at the motor stage, by reducing the speech rate in dysarthric patients (Yorkston and Beukelman, 1981; Yorkston et al., 1990; Hustad et al., 2003); at the level of word retrieval, by providing structural constraints such as the number of syllables per beat (Wallace, 1994; Rubin, 1995; Poulin-Charronnat et al., 2005); or at a motivational level, by engaging recreational skills. Identification of the factors that make singing an effective treatment for speech disorders is important to determine who can be helped by musical interventions and why. The use of music as a therapy for speech has given birth to Melodic Intonation Therapy (MIT). MIT is a 'rehabilitation program using high probability phrases and sentences which are intoned and tapped out in a syllable-by-syllable manner', mainly used with patients with severe expressive deficits (Sparks et al., 1974; Naeser and Helm-Estabrooks, 1985).
An early interpretation of successful recovery from aphasia with the MIT technique was that it facilitated the use of homologous language areas of the right hemisphere after damage to the language areas in the left hemisphere (Albert et al., 1973). However, a study by Belin et al. (1996) challenged this notion by showing that repetition of words trained with MIT elicited a deactivation of right-hemisphere structures while language-related left-hemisphere structures were active in seven non-fluent aphasic patients who had been successfully treated with MIT. One possible explanation for this discrepancy is that the American use of MIT emphasizes melodic cues whereas the French intervention stresses rhythm. The rhythmic support seems to be key (Boucher et al., 2001): MIT treatments emphasizing rhythm (with stress points marked by the spoken syllable /tae/ and by hand-tapping) lead to better syllable repetition than treatments emphasizing intonation (tone contour and melodic intonation). Another advantage related to rhythm is that singing slows down the rate of word production and thereby may improve intelligibility (Laughlin et al., 1979; Pilon et al., 1998). Intelligibility is tightly related to speech rate, as observed in three dysarthric patients with three different pacing methods (Pilon et al., 1998). The slower the pace imposed on speech production, the better the intelligibility, but only when the speech reduction was severe. Because singing slows down word production by 50% as compared with speaking (Kilgour et al., 2000), singing should be effective in improving intelligibility.

An additional reason why singing could improve speech is that words learned through music, as in familiar songs, are fixed or non-generative. Automatic or non-propositional speech, which includes not only song lyrics but also prayers and swearing, is usually preserved in non-fluent aphasia (Ryding et al., 1987; Speedie et al., 1993; Blank et al., 2002; Van Lancker-Sidtis, 2004). Moreover, in songs, words and melodies are learned together. Melody may facilitate access to words because they are tightly associated in memory (Serafine et al., 1984, 1986; Crowder et al., 1990; Peretz et al., 2004b).

In sum, there are many reasons why singing can improve language production. Nevertheless, supporting evidence is scant. Cohen and Ford (1995) examined the production of songs by 12 patients who became aphasic after a unilateral left-hemisphere vascular accident, under three experimental conditions: naturally spoken, spoken with a steady drumbeat accompaniment and sung with the melody played on a keyboard. They found that speech content and error types did not differ across conditions, but that word intelligibility was higher when utterances were spoken without any support. However, it may be the case that the patients could not benefit from the musical aid because of associated musical disorders. Indeed, aphasia and amusia often occur together (Marin and Perry, 1999). Yet, even when music processing is preserved, we found that singing did not help speech recovery in two single-case studies (Hébert et al., 2003; Peretz et al., 2004a).
The results suggest that verbal production, be it sung or spoken, is mediated by the same (impaired) language output system and that this speech route is distinct from the (spared) melodic route. Thus, the classic reports that non-fluent aphasics are able to sing may simply reflect the dissociation between automatic speech (in singing) and propositional speech (in spontaneous speech). However, our two aphasic patients (Hébert et al., 2003; Peretz et al., 2004a) presented atypical forms of aphasia (i.e. crossed aphasia and primary progressive aphasia, respectively). The goal of the present study was to examine this issue in common speech disorders. To this aim, we conducted a multiple-case study with aphasic patients whose performance was compared when singing and reciting both familiar and novel utterances. A variety of speech deficits were considered to allow the exploration of a wide range of error types in production.

In Experiment 1, we investigated patients' production of familiar material, such as traditional songs, prayers, proverbs and rhymes. If automatic access to words in memory is critical, then the aphasic patients should be fluent with all verbal material, be it spoken or sung. The patients not only had to recite the lyrics of the songs, but they also had to sing the words of prayers and proverbs on a familiar melody. If singing helps word production, then sung words should be more accurate than spoken words in both lyrics and well-known expressions. In order to assess the effect of singing on word production when access to the words is not automatic, we tested the patients with novel songs in Experiments 2 and 3. Furthermore, patients learned the novel songs either alone (Experiment 2) or in unison (Experiment 3). Unison production was performed at two different speeds in order to explore the role of speech rate in word intelligibility. Unison production should improve performance when compared with speaking or singing alone, simply because it provides online cues for the words to vocalize. 'Choral speech' is known to improve speech fluency in people who stutter (Saltuklaroglu et al., 2004). The aid should be particularly effective at slow speed.

General method

Participants
Eight non-fluent aphasics were recruited through the Quebec Association of Persons with Aphasia (AQPA). A summary of the patients' characteristics is given in Table 1. All participants were right-handed, French-speaking and had suffered a left cerebral vascular accident at least 2 years before the study. No patient had a pre-morbid history of neurological or psychiatric problems. CT scans of four patients and MRI scans of three patients were obtained at the time of testing (summer 2002, except for JH's scan, which was taken after his stroke in 1997). Informed consent was obtained from all patients and the study was approved by the Ethical Committee of the Montreal University Geriatric Institute. The patients were assessed and tested in French, their native language. All patients had undergone speech therapy. At the time of testing, the patients were re-evaluated by two independent speech therapists, using the MT-86 (Nespoulous et al., 1992), the short version of the Token Test (De Renzi and Vignolo, 1962) and subtests of the French version of the Boston Diagnostic Aphasia Examination (Mazaux and Orgogozo, 1981).
On the basis of these tests, the patients were diagnosed as suffering from Broca's aphasia (JS, LB, RD), mixed aphasia with predominance of expressive deficits (RH, PP, CA, JH) and anomia (LD; see Table 1). In addition to the language problem, most cases (except JH and LD) suffered from dysarthria, an articulatory problem due to impaired coordination of speech muscles, and buccofacial apraxia, an inability to coordinate and carry out facial and lip movements. The main diagnostic scores are presented in Table 2. As can be seen, the speech disorders were mostly expressive, affecting both oral and written language. Simple comprehension was well preserved. Limited verbal expression, preserved comprehension and an intact right hemisphere made all the patients eligible for MIT, even if some were better candidates (e.g. LB, JS, RD) than others (e.g. LD) because of their severe speech reduction. Individual scores are ordered in the Tables from the most to the least severe cases of speech reduction.

Table 1 Characteristics of the patients' diagnosis, aetiology and lesion location

Patient  Sex  Age  Education  Years since last infarct  Number of infarcts  Diagnosis                                                            Aetiology and lesion location
LB       F    56   10         5                         1                   Severe Broca's aphasia; severe dysarthria                            Left sylvian CVA
PP       M    45   11         6                         2                   Severe mixed aphasia                                                 Left sylvian CVA
JS       M    55   18         20                        1                   Severe Broca's aphasia                                               Left frontoparietal aneurysm; thalamic lacuna cerebri
JH       M    62   20         5                         1                   Moderate to severe mixed aphasia; predominantly expressive deficit   Left sylvian CVA
CA       F    36   14         13                        1                   Moderate to severe mixed aphasia; predominantly expressive deficit   Aneurysm of the anterior and mid-left cerebral artery
RD       F    54   9          19                        1                   Moderate Broca's aphasia                                             Left sylvian CVA
RH       M    67   7          9                         2                   Atypical Broca's aphasia; severe dysarthria                          Left sylvian CVA
LD       F    38   15         19                        1                   Mild to moderate anomia                                              Left sylvian CVA
The right side of the brain is on the left; CVA = cerebral vascular accident.

Table 2 Language assessment

                               LB      PP       JS       JH     CA       RD       RH     LD
MT-86 aphasia battery
Expression
  Naming                       0/6     11/31    26/31    17/31  30/31    31/31    28/31  28/31
  Verbal fluency               3       0        13       15     35       38       32     36
  No agrammatism
Automatized speech
  Digits 1 to 10               n       n        n        n      n        n        n      Slow
  Days of the week             n–      d        n        n      n        n        n      Slow
  Months of the year           –       d        n        n      n        n        n      Slow
  Words of familiar songs      d       d        d        n      n–       n        n      n–
  Melody of familiar songs     n       n        n        n      n–       n        n      n–
Repetition
  Syllables                    5/20    19/20    3/20     20/20  16/20    18/20    11/20  20/20
  Words and non-words          10/30   22/30    17/30    30/30  28/30    27/30    21/30  30/30
  Sentences                    0/3     0/3      0/3      2/3    1/3      1/3      0/3    1/3
Auditory comprehension
  Words                        9/9     9/9      9/9      9/9    8/9      9/9      –      9/9
  Simple sentences             6/6     6/6      5/6      6/6    6/6      6/6      6/6    6/6
  Complex sentences            24/32   25/32    23/32    28/32  24/32    24/32    32/32  28/32
  Body-part identification     8/8     7/8      8/8      8/8    8/8      8/8      8/8    9/9
  Object manipulation          5/8     5/8      5/8      6/8    5/8      7/8      8/8    7.5/8
Reading comprehension          5/9     14/20    16/23    15/23  17/23    18/23    23/23  13/13
Dictation                      –       3/7      4/9      0      2/4      1/6      23/35  n
Token test                     16/36   10.5/36  13.5/27  22/36  16.5/28  28.5/36  29/36  26.5/36
n = normal performance; n– = with help; d = deficit; – = not assessed.

In order to assess prosody, the patients and three neurologically intact participants were required to produce three neutral sentences with four different intonations: statement, question, joy and sadness. The renditions were
randomly mixed and presented to five judges who had to guess and rate the intended intonation. The aphasic patients scored lower than controls for at least one intonation, especially joy. The fact that aphasics had pronunciation difficulties might have played a role.

Each participant was tested with a short neuropsychological battery of tests, including the digit span, the standard non-coloured Raven's matrices (Raven, 1996) and the Tower of London (Shallice, 1982; see Table 3). Digit spans were limited, probably because of the alteration of verbal rehearsal (Burgio and Basso, 1997). Raven's matrices revealed good reasoning abilities in half of the participants; the other half were impaired. This is not too surprising, since left-hemisphere lesions, especially in the frontal region, have been associated with difficulties in reasoning (Langdon and Warrington, 2000). Disorders of executive functions, as revealed by planning difficulties in the Tower of London, were present in JS, and to a lesser degree in PP.

Regarding musical abilities, it is worth mentioning that five aphasics (LB, PP, JS, CA, RH) participated in the choir activity organized by their association for 2 h a week; this activity consists of singing along with familiar tunes. None of the participants had formal musical training. Their musical perception abilities were assessed with the Montreal Battery for Evaluation of Amusia (MBEA; Peretz et al., 2003). The scores are presented in Table 4. Five participants had normal scores, whereas two participants (PP and RD) were considered amusic because their composite scores were 2 SD below the mean of normal controls. PP's performance reflects a disorder in the temporal organization of music. RD's amusia is more severe; however, her residual pitch and temporal abilities, as revealed in the repetition of familiar melodies (see Experiment 1), seem sufficient to support singing.

General procedure and data analysis
The patients participated in about four sessions of 2 h each. Sessions were adapted to the patients' capacities with flexible durations, pauses and at-home testing. The testing sessions were recorded on a DAT Sony recorder via a Shure 565SD microphone. All productions were saved in digital sound files. The experimenter (AR) and a university student in speech–language pathology transcribed all the produced words for scoring purposes. Words were considered correct or incorrect, irrespective of their pitch and duration. Words were chosen over syllables because the number of syllables sometimes differs across conditions; mute vowels are often sung but not pronounced. Contractions, such as articles or short words that were not produced on a distinct note, were considered as part of the word to which they were attached. A point was given for every word correctly produced in the right order. Recognizable words (with mild phonemic errors) were given half a point. Omissions and substitutions were given no point. The number of correct words corresponded to the number of words for which both judges gave a point; the words on which they disagreed were discarded. The score, for each song line, corresponded to the number of words for which both judges gave a point or half a point, divided by the total number of words both judges agreed on, and multiplied by 100.
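One reading of this scoring rule can be restated compactly as a formula; the symbols below are introduced here for illustration only and do not appear in the original scoring description.

\[
\text{word score (\%)} = \frac{\sum_{i \in \text{agreed words}} c_i}{N_{\text{agreed}}} \times 100
\]

where \(c_i \in \{0, 0.5, 1\}\) is the credit both judges assigned to word \(i\) of the line, and \(N_{\text{agreed}}\) is the number of words in the line on which the two judges agreed (words with disagreements being discarded). The musical note score described in the next paragraph is computed analogously, with pitches in place of words.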
A total of 4957, 1835 and 4960 words were analysed in Experiments 1, 2 and 3, with inter-rater agreements of 92, 96 and 88%, respectively.

The experimenter (AR) and a music faculty student transcribed by ear and scored the musical productions. Pitch intervals and directions were analysed, but not rhythm, because of the numerous pauses or hesitations. The number of correct notes corresponded to the number of notes for which both judges gave a point; when there was a disagreement, the note was discarded. The musical score corresponded to the total of correct pitches divided by the total number of possible notes minus the notes both raters disagreed upon for these productions, multiplied by 100. A total of 7717, 1410 and 3398 musical notes were analysed in Experiments 1, 2 and 3, with inter-rater agreements of 95, 85 and 79%, respectively.

Because of the limited number of patients and the variability between subjects, non-parametric tests were used, with an alpha level of 0.05.

Table 3 Neuropsychological assessment

Patient  Digit span  Raven matrices (percentile)  Tower of London: mean number of movements (mean total time in s)*
LB       3           48/60 (41 p.)                8.3 (26.3)
PP       2           52/60 (50 p.)                9.75 (30.75)
JS       3           27/60 (<5 p.)                10.7 (100.7)
JH       4           35/60 (10 p.)                5.2 (25.7)
CA       3           42/50 (11 p.)                6.4 (33.8)
RD       4           41/60 (12 p.)                7 (38.5)
RH       4           39/60 (25 p.)                5.4 (21.7)
LD       3           52/60 (50 p.)                5.1 (18)
*The standard norms for the Tower of London are M = 5.75, SE = 1.49 for the mean number of movements and M = 24.67, SE = 24.5 for the mean total time.

Table 4 Patients' scores on the MBEA

Patient         Scale  Contour  Interval  Rhythm  Metre  Memory  Composite score
LB              29     25       26        26      26     25      26
PP              27     25       24        21*     18*    16*     22*
JS              26     24       24        26      27     20*     25
JH              24     29       21        30      22     25      25
CA              24     26       22        28      26     25      25
RD              19*    21*      18*       22*     20     15*     19*
RH              30     28       29        27      20     24      26
LD              27     24       25        30      23     26      26
Cut-off score   22     22       21        23      20     22      23
The maximal score on each test is 30 (chance is 15). *Below cut-off.

Experiment 1: Production of familiar material

Method
Patients were presented with two types of material. The first consisted of 14 familiar songs selected from children's and traditional songs (Peretz et al., 1995). Song excerpts were, on average, 5 words (range: 3–7) and 9 notes (range: 6–11) long. The second material was composed of two well-known prayers ('Our Father' and 'Holy Mary'), six proverbs (e.g. 'All roads lead to Rome') and one nursery rhyme (e.g. 'Eenie, meenie, minie, mo'). We will refer to these selected non-musical over-learned expressions as 'verbatim speech'. The sentences were, on average, 7 words (range: 6–10) and 8.5 notes (range: 7–12) long.

Patients had to perform two tasks on these materials: a repetition task and a recall-from-memory task. In the repetition task, the experimenter gave the first line and the participant had to repeat it in the same mode (for examples, see www.brams.umontreal.ca/peretz/). For sung repetition, the verbatim speech lines were sung to a familiar melody; eight familiar songs that matched the number of syllables in the line were used. The second task involved recall from memory: the titles were presented to participants, who had to produce as much as they knew of the song, prayer or rhyme. For the proverbs, the beginning of the expression was given (e.g. 'An apple a day...') and the patient completed it ('...keeps the doctor away'). The recall of verbatim speech was always performed before the repetition task.
Otherwise, the order of presentation was counter-balanced across participants. At the end, spontaneous production and repetition of the songs' melodies on the syllable /la/ were assessed in a single block, in a counter-balanced order. Throughout the experiment, live presentation was used, hence enabling the use of visual as well as auditory cues. Patients could also ask the experimenter to repeat the title or the line in order to achieve their best performance (usually two to four times).

Results
Individual scores, corresponding to the percentage of correct words or notes produced for the different songs and for the different verbatim expressions, were averaged for each participant and for each task (repetition and recall; see Table 5) and compared with Wilcoxon tests. The results will first be presented for songs and then for verbatim speech. For these two types of automatic productions, two conditions (isolated, combined) and two tasks (repetition, recall) will be compared for words and for musical notes. The isolated condition refers to the production of words only or musical notes only, while the combined condition refers to the sung production of both words and notes. Error types in word production were also assessed and compared between the sung and spoken conditions.

Songs
As can be seen in Table 5, the percentage of words correctly produced was not higher in singing than in speaking, in either repetition, Z = 0.34, n.s., or recall, Z = 0.56, n.s. This held true for each patient when considering song excerpts as the random variable (all P > 0.05, by Wilcoxon tests). The only exception was one patient (LB), who produced more words from memory in singing than in speaking, Z = 1.96, P = 0.05. The correlation between participants' word scores in singing and speaking was significant in repetition, r(8) = 0.99, P < 0.001, and in recall, r(8) = 0.89, P < 0.01.

Table 5 Percentages of correct word and note production for familiar songs

                  Repetition             Recall
Patient           Combined  Isolated     Combined  Isolated
LB      Words     54        56           50        34
        Notes     90        98           90        85
PP      Words     34        38           24        16
        Notes     83        81           43        41
JS      Words     13        10           6         10
        Notes     98        91           80        24
JH      Words     93        93           73        71
        Notes     100       98           94        95
CA      Words     91        83           66        69
        Notes     93        85           70        72
RD      Words     89        90           65        72
        Notes     88        82           88        67
RH      Words     85        89           66        72
        Notes     96        92           83        64
LD      Words     84        78           49        77
        Notes     74        63           31        18
Mean    Words     67.9      67.1         49.9      52.6
        Notes     90.3      86.3         72.4      58.3

The most frequent errors committed by the patients were non-words that share phonemes with the target word (phonemic paraphasias: 39%). The other errors were omissions (31%), neologisms (12%) and real words semantically related to the target word (semantic paraphasias: 11%). There were very few real words with no semantic relation to the target word (lexical paraphasias: 6%). In repetition, the error types were similar in singing and speaking (all P > 0.05), except for the phonemic errors, which were more frequent in singing (45%) than in speaking (33%), Z = 1.96, P < 0.05. This was true for all but one participant (RH), who suffers from a severe dysarthric problem.

For all patients but LD, production of musical notes was much easier than production of words. This was confirmed statistically for sung songs in both repetition, Z = 1.82, P = 0.07, and recall, Z = 2.10, P < 0.05 (see Words versus Notes Combined in Table 5).
A similar trend was apparent when participants were producing musical notes or words alone, although it did not reach significance. None of the correlations computed between word and note scores reached significance (all P > 0.05). LD's discrepancy between repetition and recall scores for notes seems to reflect a production deficiency rather than a perceptual or memory difficulty.

In general, word performance was higher in repetition than in recall, both in singing, Z = 2.52, P < 0.05, and in reciting, Z = 2.38, P < 0.05, although the three patients with the most severe form of aphasia (LB, PP, JS) had very low scores in both tasks. Similarly, the percentage of musical notes correctly produced was higher in repetition than in recall, both when produced alone, Z = 2.52, P < 0.05, and with words, Z = 2.20, P < 0.05. While sung words were not better performed than recited words, sung notes were more accurate when sung with words than on /la/ in recall, Z = 2.03, P < 0.05. The same tendency was present in repetition but failed to reach significance, Z = 1.61, n.s.

Verbatim speech
Comparing the sung and spoken production of popular expressions is interesting because improving the production of functional sentences such as 'Thank you' is usually the goal of speech therapy. However, the present results suggest that using songs is not very effective (see Table 6). No effect of condition was obtained in repetition of verbatim speech, Z = 0.98, n.s. The only patient to show a positive effect of singing (LB, Z = 2.06, P < 0.05) is the same patient who benefited from singing songs; here, LB was unable to recite a single word. However, two other patients (JS, RD) exhibited the reverse pattern, with spoken repetitions being better than sung repetitions, Z = 2.03 and 2.20, P < 0.05. JS was unable to repeat the expressions in singing. For these patients, executive impairments might explain the difficulty in singing conventionally spoken expressions to familiar melodies. The other participants had similar performance in the two modalities (all P > 0.05).

Here, omission was the most frequent error (53%). Phonemic paraphasias (21%) were again more frequent than semantic paraphasias (3%). Comparisons of the proportions of each error type in the sung and the spoken conditions revealed only one significant difference, for neologisms (e.g. tantan for claire). These errors were more frequent in singing (14.3%) than in speaking (10.6%; Z = 1.99, P < 0.05). For both materials, the errors respected the number of syllables of the words in 92% of the phonemic paraphasias, thus preserving the rhythmic structure of the line, both in singing and in reciting. Vowels were correctly produced in 83% (range: 38–100%) of the phonemic paraphasias, while the correct consonants were preserved in only 31% (range: 0–75%) of the cases.

Finally, we compared performance obtained with songs and with verbatim speech. Song lyrics were always better produced than verbatim speech, in both sung and spoken repetition, Z = 2.52 and 2.03, P < 0.05, respectively, and there was a trend in that direction in spoken recall, Z = 1.68, P = 0.09. The musical notes were also more accurate when sung with the original lyrics than when paired with verbatim speech, Z = 2.52, P < 0.05.

Discussion
Singing did not improve word production compared with speaking. Even in songs, where lyrics are usually sung, there was no advantage of singing the words over speaking them.
This was also true for verbatim speech, such as proverbs and prayers, which were not better repeated when sung on a familiar melody than when spoken.

Table 6 Percentages of correct word and note production for verbatim speech

                  Repetition             Recall
Patient           Combined  Isolated     Isolated
LB      Words     28        0            7
        Notes     77
PP      Words     18        9            0
        Notes     75
JS      Words     0         11           28
        Notes     22
JH      Words     59        78           48
        Notes     77
CA      Words     50        72           83
        Notes     38
RD      Words     50        84           71
        Notes     27
RH      Words     84        82           48
        Notes     84
LD      Words     76        84           51
        Notes     55
Mean    Words     45.6      52.5         39.8
        Notes     56.9

Musical production was generally more preserved than speech. Interestingly, the melody was best performed when sung with the lyrics, except for one patient (LD). This result supports the view that the melody and text of songs are tightly associated in memory, so that the words can facilitate melodic performance. The fact that this association is not as effective in the other direction, from melody to words, in aphasics suggests that the association is asymmetrical.

Finally, words in songs were more easily produced than words in prayers or proverbs. Different factors may account for this advantage. Songs may be more familiar, by being more often practised or heard, than prayers or proverbs. This difference in frequency of occurrence would make songs more accessible in memory than verbatim speech. Songs are also stored in memory in a dual code, that is, in a speech code and a musical code (Samson and Zatorre, 1991). The melody might act as an additional cue that facilitates word retrieval compared with prayers or proverbs. Thus, the automatic status of the material does not seem to account entirely for the advantage of song production over spontaneous speech in aphasic patients.

Experiment 2: Novel song learning
In novel songs, the sung version bears no advantage over the spoken version because both are heard together for the first time. If singing does indeed facilitate word production in non-fluent aphasics, sung words should be better produced than spoken words when learning novel songs. The goal of the present experiment was to test this prediction with repetition and recall of novel words and notes. Furthermore, the putative influence of music on the encoding of words was also examined separately from its influence on output. Presentation of the lines was either sung or spoken (with the melody in the background, referred to as the 'divided' presentation). Repetition was sung, spoken or sung on /la/ (see Table 8).

Material
Unfamiliar songs were chosen from the repertoire of Claude Gauthier, a popular French–Canadian folk-singer, author and composer. Four songs with few repetitions of words or melodic lines were selected (see Table 7 for an example). The 115 words used in the songs had a mean frequency of 2650 per million, including function words, based on a French lexical database (New et al., 2001): 76% were highly frequent, with a frequency of usage >50 per million, and only 10% of the words had a low frequency, corresponding to <15 per million. The musical notes (nine per line on average) outnumbered the words (six per line on average), t(31) = 11.21, SE = 0.22, P < 0.001. The song excerpts were considered to be 'good' songs, as assessed by seven pilot participants who were unfamiliar with the singer.
The judges were presented with the song excerpts in their original version, randomly mixed with excerpts of hit songs from the same folk-singer. Each excerpt was presented twice, in a random order. For each song excerpt, the judges rated its musicality, its simplicity and its potential to be a hit, on three six-point scales where 1 meant poor and 6 excellent. Very similar ratings were obtained for the hits and the experimental songs on each dimension (3.7 and 3.8 for musicality, 3.4 and 3.5 for simplicity and 3.2 and 3.4 for hit potential, with first and second ratings pooled together), supporting the idea that the selected material corresponded to well-formed songs.

In all songs, there was a one-to-one mapping between syllables and tones, with each syllable coupled with a single note. Lines respected the grouping preference rules proposed by Lerdahl and Jackendoff (1983). The alignment between the prosody of the text and the rhythm of the melody conformed to the rules of French songs (Dell, 1989). Regarding musical structure, the songs had a stable and standard metre. All the songs were in the major mode. Even if the melodies were chosen for their diversity, the melodic material was highly coherent within a song.

Eight four-line excerpts (two per song) were included for the learning task (see Table 7). On average, an excerpt contained 25 words (range: 21–27) and 34 notes (range: 28–38). An additional four-line excerpt from an unfamiliar choir song (mean of five words and nine notes per line) by Johann Steuerlein served as a training song.

Table 7 Illustration of the adaptive learning procedure, here with two repetition trials per line

Lyrics presented                      Lyrics repeated (two trials)          Lyrics to be recalled
1 Dans cette petite boîte vide        1 Dans cette petite boîte vide        1 Dans cette petite boîte vide
                                      1 Dans cette petite boîte vide
2 Avec un ruban de velours            2 Avec un ruban de velours            1 Dans cette petite boîte vide
                                      2 Avec un ruban de velours            2 Avec un ruban de velours
3 Il y a tout mon cœur et mes rides   3 Il y a tout mon cœur et mes rides   1 Dans cette petite boîte vide
                                      3 Il y a tout mon cœur et mes rides   2 Avec un ruban de velours
                                                                            3 Il y a tout mon cœur et mes rides
4 Mon sourire et tout mon amour       4 Mon sourire et tout mon amour       1 Dans cette petite boîte vide
                                      4 Mon sourire et tout mon amour       2 Avec un ruban de velours
                                                                            3 Il y a tout mon cœur et mes rides
                                                                            4 Mon sourire et tout mon amour

Table 8 Modes of presentation and production in each condition of Experiment 2

Presentation             Repetition/recall
Sung                     Sung
Sung                     Spoken
Spoken–sung on /la/      Spoken
Spoken–sung on /la/      Sung on /la/

The four songs and the training song were produced a cappella (without instrumental accompaniment) by a female singer, who learned the songs beforehand. The same singer also sang each song on /la/ and pronounced the lyrics with a natural intonation. The three versions of the same song served to create two types of stimuli, the sung and the 'divided' songs. The latter were created by coupling each spoken line with its corresponding melody sung on /la/, in order to give the spoken presentation of words the same musical context as the sung presentation of words. In these divided songs, the intensity of the melody was decreased by 32%, on average, in order to make the spoken version intelligible.
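To make the construction of such 'divided' stimuli concrete, the sketch below mixes a spoken line with its melody sung on /la/, attenuating the melody and centring the shorter spoken line within it (the centring is described in the next paragraph). It is an illustration only: the function name, the use of NumPy arrays, the synthetic example signals and the treatment of the 32% intensity reduction as a simple amplitude gain are assumptions, not the authors' actual editing procedure.

```python
import numpy as np

def make_divided_line(spoken: np.ndarray, melody_la: np.ndarray,
                      melody_gain: float = 0.68) -> np.ndarray:
    """Mix a spoken line with its melody sung on /la/ (hypothetical sketch).

    Both inputs are mono waveforms at the same sampling rate. The melody is
    attenuated (a gain factor standing in for the reported 32% intensity
    reduction) and the shorter spoken line is centred so that equivalent
    stretches of melody precede and follow it.
    """
    if len(spoken) > len(melody_la):
        raise ValueError("expected the spoken line to be shorter than the melody")

    # Pad the spoken line with silence so both tracks are equally long,
    # placing the speech in the middle of the melody.
    lead = (len(melody_la) - len(spoken)) // 2
    trail = len(melody_la) - len(spoken) - lead
    spoken_centred = np.concatenate([np.zeros(lead), spoken, np.zeros(trail)])

    # Sum the attenuated melody and the centred speech, then renormalize
    # to avoid clipping if the mix is written back to an audio file.
    mix = melody_gain * melody_la + spoken_centred
    return mix / max(1.0, np.abs(mix).max())

# Example with synthetic signals: 1 s of "speech" inside 2 s of "melody" at 44.1 kHz.
sr = 44100
melody = 0.5 * np.sin(2 * np.pi * 440 * np.arange(2 * sr) / sr)
speech = 0.5 * np.sin(2 * np.pi * 220 * np.arange(1 * sr) / sr)
divided = make_divided_line(speech, melody)
```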
The intelligibility of the songs' lyrics was equivalent in the sung and the 'divided' songs. The length of the original spoken version was about half the length of the sung version [M = 2.48 and 4.95 s per line, respectively, t(31) = 13.44, P < 0.001]. Because the divided condition combined the spoken and the 'sung on /la/' versions (M = 4.92 s per line), divided and sung presentations had equivalent lengths. In the divided condition, the shorter spoken line was placed in the middle of the sung melody, so that it was preceded and followed by equivalent durations of the melody.

Procedure
The practice song was learned before each condition. It served to determine the number of attempts that each participant needed to achieve his or her best repetition of a line. Each patient always had the same number of attempts to repeat a line throughout the experiment: two for JH, RD, RH and LD; three for LB, JS and CA; and four for PP. The patient then heard the whole excerpt to be learned and had to repeat it (as many times as determined previously) one line at a time, following the procedure shown in Table 7. Each time a line was added, the patient had to recall the lines from the beginning, until all four lines had been presented, repeated and recalled.

In the sung–sung condition, the patient listened to the sung version of the lyrics and sang them back. In the sung–spoken condition, the patient listened to the sung version of the lines and repeated only the lyrics, pronouncing them in a natural way. In the divided–spoken condition, the participant listened to the divided version of the lines and, again, repeated only the lyrics. In the last condition, divided–sung on /la/, the participant listened to the divided version and repeated only the melody, on the syllable /la/ (see Table 8). The patients learned one excerpt in each condition, for a total of four different songs. The order of presentation of the conditions that involved repetition of lyrics (sung–sung, sung–spoken, divided–spoken) was counter-balanced across participants. The divided–sung on /la/ condition was done last. Examples of stimuli and patients' productions are available at www.brams.umontreal.ca/peretz/.

Data scoring was performed as in Experiment 1, the best repetition performance and the recall of the whole song (lines 1–4) being analysed for words and notes. The best repetition for the sung production corresponded to the best word repetition performance. The scoring system thus favoured word production over musical note production in the combined production.

Results
As can be seen in Tables 9 and 10, there was large variability in the scores obtained in each condition. Despite this important variability, none of the patients obtained a higher score in the sung–sung condition than in the spoken conditions. Singing did not help word repetition.

Table 9 Percentage of words correctly reproduced in Experiment 2

Patient             Sung–sung   Sung–spoken   Divided–spoken
LB    Repetition    9           31            14
      Recall        8           0             0
PP    Repetition    5           0             0
      Recall        0           0             0
JS    Repetition    9           10            11
      Recall        4           8             10
JH    Repetition    54          51            50
      Recall        13          9             14
CA    Repetition    26          49            56
      Recall        0           9             4
RD    Repetition    69          50            70
      Recall        25          4             44
RH    Repetition    78          83            88
      Recall        21          26            24
LD    Repetition    85          70            83
      Recall        36          29            63
Mean  Repetition    41.9        43.0          46.5
      Recall        13.4        10.6          19.9
The comparison of the three conditions involving words (sung–sung, sung–spoken, divided–spoken; Table 9) did not reveal any difference, χ2(2) = 1.23, n.s. (by Friedman test). Similar effects were obtained with the recall scores [χ2(2) = 2.74, n.s.; see Table 9].

The rate of syllable production was calculated in order to examine whether singing slowed down the aphasics' speech relative to speaking. The rate corresponded to the total duration of word articulation (independently of accuracy) divided by the number of syllables produced. Sung syllables (M = 572 ms) were slower than spoken syllables (M = 494 ms), Z = 2.20, P < 0.05.

The types of word errors were similar across the three conditions in repetition and there was no effect of condition (all P > 0.05). Omissions were the most common type of error (40%). As for familiar songs, phonemic paraphasias were slightly more frequent in the sung condition (34%, versus 17 and 20% for the sung–spoken and the divided–spoken conditions, respectively). Compared with university students, who only made omissions and substitutions (Racette and Peretz, 2006), the aphasic patients made more omissions and slightly fewer substitutions (semantic and lexical paraphasias). Thus, phonemic paraphasias are specific to aphasic patients; they preserved the syllabic structure of the word in 85% of the cases, the vowels in 80% and the consonants in only 37%.

As can be seen in Table 10, the patients did not reproduce more musical notes when singing with words (sung–sung condition) than when singing on /la/ (divided–sung on /la/ condition), Z = 1.61, n.s., and Z = 0, n.s., in repetition and recall, respectively. The scores for note repetition were actually higher without than with words for six patients. The proportions of correctly reproduced words and musical notes were also compared and did not yield any significant difference in the sung–sung condition, Z = 0 and 0.67, both n.s., in repetition and recall, respectively. The same was true when notes and words were produced in isolation after the divided presentation, Z = 0.14 and 1.19, both n.s., in repetition and recall, respectively. There was no correlation between word and note scores (all P > 0.05). As expected, recall scores were always inferior to repetition scores, in each condition and for each component (all P < 0.05).

Discussion
Music did not improve intelligibility or verbal memory when aphasics learned a novel song. Moreover, the aphasic patients made the same types of errors when singing and when speaking, suggesting that the speech output, whether sung or spoken, was controlled by the same mechanisms. Music also had no effect at presentation: hearing a sung text (in the sung–spoken condition) did not lead to better word production than hearing the spoken text (in the divided–spoken condition). Thus, the results obtained here with unfamiliar songs parallel those found with familiar songs in Experiment 1.

Unlike Experiment 1, the aphasics did not reproduce more notes than words. However, here the scoring system was biased towards words in the combined productions of words and notes, to enable comparison across conditions. In our prior study with students, the students performed much better on the verbal than on the musical component (Racette and Peretz, 2006). Here, in aphasics, the superiority of words over notes in production was reduced.
Table 10 Percentage of notes correctly reproduced in Experiment 2

Patient             Sung–sung   Divided–sung on /la/
LB    Repetition    28          89
      Recall        6           58
PP    Repetition    37          33
      Recall        0           4
JS    Repetition    43          69
      Recall        0           0
JH    Repetition    60          67
      Recall        56          11
CA    Repetition    30          34
      Recall        0           0
RD    Repetition    17          50
      Recall        11          39
RH    Repetition    70          48
      Recall        21          16
LD    Repetition    15          23
      Recall        28          0
Mean  Repetition    37.5        51.6
      Recall        15.3        16.0

Experiment 3: Production in unison
In this last experiment, we examined another strategy that is often used in speech therapy, particularly in MIT, which consists of singing in unison with the therapist before trying to sing alone. We examined here how the patients benefit from singing and speaking along with a model. Repetition and recall of words and notes were done in unison. We also used two speeds of presentation (the original speed and a much slower one) to further assess the role of rate of articulation.

Method
Only the sung–sung and the divided–spoken conditions were used here, at the same speed as in Experiment 2. These versions were also slowed down by 50%, using the Cool Edit program (Syntrillium Software Corporation, 2000). This reduction of speed was chosen because it preserved the naturalness of the voice. During repetition and recall, the patients produced the words while listening to the target song lines. Otherwise, the same learning procedure as in Experiment 2 was followed. The sung–sung and divided–spoken conditions were administered in the same order as in Experiment 2 for a given patient. Two excerpts (of four lines each) were learned in each condition: one at the original speed first and then at the slow speed, and vice versa for the other. The speed was counter-balanced across conditions. A total of four excerpts, different from the ones learned in Experiment 2, were learned: two while singing and two while speaking.

The patients listened to the songs via a headset that was equipped with a microphone (see Fig. 1). The presented lines fed one channel of a DAT Sony recorder and were time-locked to the patient's production, which was recorded on the other channel of the DAT recorder. Patients could hear themselves, but the headphones might have altered the auditory–vocal feedback to some extent. As in the previous experiments, the best repetition was analysed.

Results and comments
Two conditions (sung–sung, divided–spoken) and two speeds (original, slow) could be compared in repetition and recall. This time, singing was found to improve production. More words were correctly articulated when singing along (M = 63.2%) than when speaking along (M = 50.5%), Z = 2.10, P < 0.05, for repetition scores averaged over the two speeds (see Table 11). Six of the eight patients (LB, PP, JS, CA, RD, LD) showed this effect in repetition and/or recall.

As for recall, before shadowing all the lines from the beginning, the patients were first invited to recall the words alone, without the model. Patients' performance could thus be compared when recalling alone and when recalling along with the model (see Table 11). In general, recall from the beginning of the song was better when shadowing than when producing alone without a model, both when singing, Z = 2.52, P < 0.05, and when speaking, Z = 2.24, P < 0.05. Shadowing was also beneficial to pitch accuracy: repetition of musical notes was better in unison than alone, Z = 2.38, P < 0.05 (see Table 12).
Fig. 1 Setting used in Experiment 3.

Table 11 Percentages of words correctly reproduced in Experiment 3 (shadowing-like task)

                      Original speed                Slow speed
Patient               Sung–sung   Divided–spoken    Sung–sung   Divided–spoken
LB    Repetition      62          26                57          44
      Recall          41          22                60          30
      Alone           13          10                10          8
PP    Repetition      41          16                55          25
      Recall          38          7                 38          18
      Alone           0           0                 0           0
JS    Repetition      7           0                 29          1
      Recall          13          1                 22          0
      Alone           3           3                 3           2
JH    Repetition      83          76                81          84
      Recall          64          55                70          74
      Alone           26          38                12          38
CA    Repetition      75          52                75          59
      Recall          52          36                59          35
      Alone           10          14                4           21
RD    Repetition      68          30                65          58
      Recall          34          27                41          44
      Alone           18          27                2           36
RH    Repetition      59          79                76          84
      Recall          22          37                39          35
      Alone           25          32                23          40
LD    Repetition      85          88                93          85
      Recall          67          54                78          70
      Alone           0           61                0           21
Mean  Repetition      60.0        46.3              66.4        55.0
      Recall          41.4        31.4              50.9        38.3

Yet, shadowing had a differential impact on singing and speaking. Singing tended to be superior to speaking when shadowing the four lines (46.2 versus 34.9% for sung and spoken recall along with the model, respectively; Z = 1.82, P = 0.07), as described above for each line. In contrast, speaking tended to be more advantageous than singing when recalling the four lines without shadowing (22 versus 8.1% for spoken and sung recall alone, respectively; Z = 1.86, P = 0.06). Thus, singing alone does not facilitate word recall; only singing along does.

The rate of articulation was significantly slower in singing (696 ms/syllable) than in speaking (426 ms/syllable), Z = 2.52, P < 0.05. In order to explore the possible contribution of speed of articulation to the benefit of singing, the individual rates of articulation obtained in each condition were plotted as a function of the word score. This plot is presented in Fig. 2. As can be seen, four of the six patients (LB, PP, CA, RD) exhibited a significant correlation between syllable duration and word accuracy. However, most patients had small differences in word accuracy between the slow and the original speeds in singing (66.4 versus 60%, on average; Z = 1.35, n.s.), even though the rates of articulation were significantly different between the two presentation speeds (Z = 2.52, P < 0.05, with M = 796 and 596 ms/syllable for the slow and original speed, respectively). In speaking, the effect of speed reached significance (55 versus 45.9%; Z = 2.24, P < 0.05) and paralleled the difference in rate of articulation (Z = 2.39, P < 0.05, with M = 459 and 393 ms/syllable for the slow and original speed, respectively). Thus, rate of articulation does contribute to the intelligibility of aphasics' speech but cannot fully account for the superiority of singing over speaking. Moreover, in Experiment 2, patients were already producing sung syllables at a slower speed than spoken syllables, without exhibiting an advantage in word accuracy. Thus, the advantage of slowing down the rate of articulation on speech production is neither constant nor general.

General discussion
The main finding from the present study is that singing per se does not help aphasics to improve their speech. When patients were producing words alone, without shadowing, singing did not facilitate word articulation, whether the songs were familiar (Experiment 1) or unfamiliar (Experiment 2).
The similarity in the proportion of intelligible words and in the types of errors committed while singing and speaking suggests that there is a unique code for words, whether sung or spoken.

Fig. 2 Word accuracy according to syllable duration for each patient in the spoken–original speed, spoken–slow speed, sung–original speed and sung–slow speed conditions in Experiment 3. [Scatter plot of accuracy score (%) against syllable duration (ms) for patients LB, PP, JS, JH, CA, RD, RH and LD; data points not reproduced here.]

Table 12 Percentage of notes correctly reproduced in Experiment 3 (shadowing-like task)

Patient             Original speed   Slow speed
LB    Repetition    81               76
      Recall        80               87
PP    Repetition    86               100
      Recall        89               100
JS    Repetition    62               69
      Recall        56               65
JH    Repetition    56               66
      Recall        55               52
CA    Repetition    76               80
      Recall        64               66
RD    Repetition    49               40
      Recall        34               45
RH    Repetition    83               89
      Recall        37               55
LD    Repetition    79               87
      Recall        61               77
Mean  Repetition    71.5             75.9
      Recall        59.5             68.4

These results are consistent with two prior single-case studies of patients suffering from atypical aphasia (Hébert et al., 2003; Peretz et al., 2004a), in which sung repetition was likewise no better than spoken repetition. The present study extends this conclusion to a variety of speech disorders caused by a left-hemisphere lesion, because each patient showed the same profile, at different levels of performance.

The novelty of the present study resides in the observation, in Experiment 3, that singing with someone else is better than speaking in unison (see Table 13). Thus, singing has more potential than speaking to improve intelligibility when shadowing.

Table 13 Summary of the main results for word production in the three experiments

               Experiment 1              Experiment 2         Experiment 3
Material(s)    Familiar songs;           Unfamiliar songs     Unfamiliar songs
               verbatim speech
Tasks          Repetition; recall        Repetition; recall   Unison repetition; recall alone; unison recall
Conditions     Sung; spoken              Sung–sung;           Sung–slow speed; sung–original speed;
                                         sung–spoken;         spoken–slow speed; spoken–original speed
                                         divided–spoken
Results        Sung = spoken;            Sung = spoken;       Sung > spoken (in unison);
               songs > verbatim speech   sung = divided       spoken > sung (in recall alone);
                                                              slow speed > original speed

Different factors may account for this 'singing along' advantage. As discussed earlier, the slower rate of articulation imposed by singing is not sufficient to account for this effect. In Experiment 2, the rate of articulation was slower in singing than in speaking, and yet there was no advantage of singing over speaking. In Experiment 3, a decrease in the rate of articulation was experimentally induced by slowing down the speed of the model, but it had little impact on performance. The 'singing along' advantage probably arises from the opportunity to synchronize one's performance with a stable model. Vocal imitation alone is not sufficient to improve speech intelligibility, because vocal imitation after a model (via repetition) was involved in all three experiments. Even in the presence of the visual cues provided by face-to-face vocal repetition, as was the case in Experiment 1 with familiar songs, patients did not improve their speech while singing. What seems to be critical is being able to imitate in synchrony with an auditory model.
The physical presence of the model is not necessary, because the conditions of auditory–vocal synchrony, or 'choral singing', that we used in Experiment 3 were limited to the auditory channel. Moreover, the reduced memory load that the shadowing task affords applied to both singing and speaking in unison. Yet, shadowing only improved word production when singing along. Thus, on-line singing imitation is the critical condition for improving aphasic speech.

In singing along, the perception of the song is synchronous with the production of that same song. This principle can be related to the operation of mirror neurons (Rizzolatti and Arbib, 1998; Iacoboni et al., 1999) or of an auditory–motor interface (Hickok et al., 2003; Warren et al., 2005; Callan et al., 2006). Both models posit a direct link between perception and action for planning and performance. According to these auditory–motor integration models, there is a transformation of the auditory signal into a form that constrains its articulatory output via template-matching processes (Warren et al., 2005). This mechanism would operate in language (Rizzolatti and Arbib, 1998; Hickok et al., 2003) and in music (Callan et al., 2006). Thus, in theory, this auditory–vocal interface (or mirror-neuron system) could contribute to the learning of novel songs as tested here in aphasic patients, especially when singing at the same time as an auditory model.

In principle, the auditory–vocal interface would intervene in speaking, not only in singing. Indeed, choral speech has been shown to improve speech intelligibility in people who stutter (Kalinowski and Saltuklaroglu, 2003). Yet, we observed a benefit of synchronous vocal imitation in singing only. Different factors may account for this advantage of choral singing. First, all of the aphasic participants suffered from a left sylvian stroke that affected the brain areas involved in the auditory–vocal matching system in humans. That is, the lesions are located in the vicinity of the left temporal region that is normally active in both perception and covert production of sung and spoken lyrics (Callan et al., 2006) and in both perception and covert production of novel sentences and melodies, as revealed by neuroimaging studies of normal subjects (Hickok et al., 2003; Warren et al., 2005). Therefore, the lesion may have damaged the auditory–vocal interfaces that are involved in both speech and music (Hickok et al., 2003). However, these systems are not totally overlapping. For instance, the greater activation of the right planum temporale for sung over spoken perception and production of songs may reflect the contribution of a distinct auditory–vocal interface recruiting the right side of the brain (Callan et al., 2006). Right-lateralized brain regions are generally more activated by singing than by speaking (Riecker et al., 2000; Jeffries et al., 2003; Brown et al., 2006). Thus, the left-sided lesions that produced the observed speech disorders in our aphasic patients might have spared, to some extent, the auditory–vocal interface involved in singing. Indeed, all of our aphasic patients sang (without words) with normal proficiency. Thus, it is reasonable to assume that the auditory–vocal matching system involved in singing was relatively spared in the aphasic patients. Hence, this preserved auditory–vocal interface may account for the advantage of choral singing over choral speaking. This musical auditory–vocal interface seems to operate best when singing is synchronous with a sung model.
Both word and pitch accuracy were higher in singing along than in singing alone. Temporal accuracy could not be assessed here, but timing is, in all likelihood, a major contributing factor, because temporal regularity promotes synchrony (Large and Palmer, 2002) and because sung lyrics might be more regular than spoken lyrics, although this is currently an open issue (Patel et al., 2006). In this respect, it is worth noting that the patients were tested in French. French is not a stress-based language, while English is. Future studies should test whether the advantage of choral singing over choral speech is maintained in English-speaking aphasic patients. Finally, choral singing is more natural than choral speech: participating in a choir or singing at church are familiar and highly enjoyable activities.

The present findings point to choral singing as an effective therapy for various speech disorders. Choral singing may even account for the efficacy of MIT in its initial stages, when patients sing in unison with the therapist. What might happen in subsequent stages was not assessed in the present study. Recent evidence suggests that singing is beneficial to speech production in the long term (Wilson et al., 2006). Future longitudinal studies should compare training with choral speech and choral singing to explore whether the advantage of choral singing has more long-lasting effects on speech recovery.

More generally, we propose the use of musical therapies in the treatment of aphasic patients because there are additional benefits that may only be provided by music. Singing is a natural and pleasurable way to entrain and engage someone. Producing musical notes is also highly motivating because it provides a natural way to express oneself vocally. As mentioned previously, most patients had a profile of aphasia without amusia (Brust, 2001; Warren et al., 2003). The patients who had a concomitant musical impairment in perception, memory or production did not show a different vocal pattern from the patients without amusia. Hence, singing can be used to motivate aphasic patients to engage in lengthy and laborious sessions by keeping their spirits high.

Acknowledgements
We wish to thank all participants for their generous participation and the AQPA for its help in finding participants. We also wish to thank Yannick Gérard, Julie Savaria and Annie Lavoie for their work in word and note scoring, and Brigitte Damien and Luce Gosselin for the language assessments. We thank Claude Gauthier for giving us access to his song material, and Sylvie Hébert and Bernard Bouchard for their assistance in creating the stimuli and in melody transcription, respectively. Thanks to Pierre Ahad for the setting in Experiment 3 and to Renée Béland for helpful comments along the way. This research was supported by a CIHR doctoral fellowship to A.R. and by grants from the International Human Frontier Science Program and the Natural Sciences and Engineering Research Council of Canada to I.P.

References
Albert ML, Sparks RW, Helm NA. Melodic intonation therapy for aphasia. Arch Neurol 1973; 29: 130–1.
Assal G, Buttet J, Javet RC. Aptitudes musicales chez les aphasiques. Rev Med Suisse Romande 1977; 97: 5–12.
Belin P, Van Eeckhout P, Zilbovicius M, Remy P, Francois C, Guillaume S, et al. Recovery from non-fluent aphasia after melodic intonation therapy: a PET study. Neurology 1996; 47: 1504–11.
Blank SC, Scott SK, Murphy K, Warburton E, Wise RJS. Speech production: Wernicke, Broca and beyond. Brain 2002; 125: 1829–38.
Boucher V, Garcia LJ, Fleurant J, Paradis J. Variable efficacy of rhythm and tone in melody-based interventions: implications for the assumption of a right-hemisphere facilitation in non-fluent aphasia. Aphasiology 2001; 15: 131–49.
Brown S, Martinez M, Parsons L. Music and language side by side in the brain: a PET study of the generation of melodies and sentences. Eur J Neurosci 2006; 23: 2791–803.
Brust JCM. Music and the neurologist: a historical perspective. In: Zatorre RJ, Peretz I, editors. The biological foundations of music. New York: The New York Academy of Sciences; 2001: 143–52.
Burgio F, Basso A. Memory and aphasia. Neuropsychologia 1997; 35: 759–66.
Callan DE, Tsytsarev V, Hanakawa T, Callan AM, Katsuhara M, Fukuyama H, et al. Song and speech: brain regions involved with perception and covert production. Neuroimage 2006; 31: 1327–42.
Cohen NS, Ford J. The effects of musical cues on the nonpurposive speech of persons with aphasia. J Music Ther 1995; 32: 46–57.
Crowder RG, Serafine ML, Repp B. Physical interaction and association by contiguity in memory for the words and melodies of songs. Mem Cognit 1990; 18: 469–76.
Dell F. Concordances rythmiques entre la musique et les paroles dans le chant. L’accent de l’e muet dans la chanson française. In: Dominicy M, editor. Le souci des apparences. Bruxelles: Université de Bruxelles; 1989: 121–36.
De Renzi E, Vignolo L. The token test: a sensitive test to detect receptive disturbances in aphasics. Brain 1962; 85: 665–78.
Hébert S, Racette A, Gagnon L, Peretz I. Revisiting the dissociation between singing and speaking in expressive aphasia. Brain 2003; 126: 1838–50.
Hickok G, Buchsbaum B, Humphries C, Muftuler Y. Auditory-motor interaction revealed by fMRI: speech, music, and working memory in area Spt. J Cogn Neurosci 2003; 15: 673–82.
Hustad KC, Jones T, Dailey S. Implementing speech supplementation strategies: effects on intelligibility and speech rate of individuals with chronic severe dysarthria. J Speech Lang Hear Res 2003; 46: 462–74.
Iacoboni M, Woods RP, Brass M, Bekkering H, Mazziotta JC, Rizzolatti G. Cortical mechanisms of human imitation. Science 1999; 286: 2526–8.
Jacome DE. Aphasia with elation, hypermusia, musicophilia and compulsive whistling. J Neurol Neurosurg Psychiatry 1984; 47: 308–10.
Jeffries KJ, Fritz JB, Braun AR. Words in melody: an H(2)15O PET study of brain activation during singing and speaking. Neuroreport 2003; 14: 749–54.
Kalinowski J, Saltuklaroglu T. Choral speech: the amelioration of stuttering via imitation and the mirror neuronal system. Neurosci Biobehav Rev 2003; 27: 339–47.
Keith R, Aronson A. Singing as therapy for apraxia of speech and aphasia: report of a case. Brain Lang 1975; 2: 482–8.
Kilgour AR, Jakobson LS, Cuddy LL. Music training and rate of presentation as mediators of text and song recall. Mem Cognit 2000; 28: 700–10.
Langdon D, Warrington EK. The role of the left hemisphere in verbal and spatial reasoning tasks. Cortex 2000; 36: 691–702.
Large EW, Palmer C. Perceiving temporal regularity in music. Cogn Sci 2002; 26: 1–37.
Laughlin SA, Naeser MA, Gordon WP. Effects of three syllable durations using the melodic intonation therapy technique. J Speech Hear Res 1979; 22: 311–20.
Lerdahl F, Jackendoff R. A generative theory of tonal music. Cambridge, MA: MIT Press; 1983.
Marin OSM, Perry DW. Neurological aspects of music perception and performance. In: Deutsch D, editor. The psychology of music. New York: Academic Press; 1999: 653–724.
Mazaux JM, Orgogozo JM. HDAE-F. Boston diagnostic aphasia examination. Issy les Moulineaux: Éditions Scientifiques et Psychologiques; 1981.
Naeser MA, Helm-Estabrooks N. CT scan localization and response to melodic intonation therapy with non-fluent aphasia cases. Cortex 1985; 21: 203–23.
Nespoulous JL, Lecours AR, Lafond D. MT-86. Protocole Montréal-Toulouse d’examen linguistique de l’aphasie. Isberges: Ortho-Edition; 1992.
New B, Pallier C, Ferrand L, Matos R. Une base de données lexicales du français contemporain sur internet: LEXIQUE. Année Psychol 2001; 101: 447–62.
Patel AD, Iversen JR, Rosenberg JC. Comparing the rhythm and melody of speech and music: the case of British English and French. J Acoust Soc Am 2006; 119: 3034–47.
Peretz I, Babai M, Lussier I, Hébert S, Gagnon L. Corpus d’extraits musicaux: indices relatifs à la familiarité, à l’âge d’acquisition et aux évocations verbales. Can J Exp Psychol 1995; 49: 211–39.
Peretz I, Champod AS, Hyde K. Varieties of musical disorders. The Montreal Battery of Evaluation of Amusia. Ann NY Acad Sci 2003; 999: 58–75.
Peretz I, Gagnon L, Macoir J, Hébert S. Singing in the brain: insights from cognitive neuropsychology. Music Percept 2004a; 21: 373–90.
Peretz I, Radeau M, Arguin M. Two-way interactions between music and language: evidence from priming recognition of tune and lyrics in familiar songs. Mem Cognit 2004b; 32: 142–52.
Pilon MA, McIntosh KW, Thaut MH. Auditory vs visual speech timing cues as external rate control to enhance verbal intelligibility in mixed spastic-ataxic dysarthric speakers: a pilot study. Brain Inj 1998; 12: 793–803.
Poulin-Charronnat B, Bigand E, Madurell F, Peereman R. Musical structure modulates semantic priming in vocal music. Cognition 2005; 94: B67–B78.
Racette A, Peretz I. Learning lyrics: to sing or not to sing? Mem Cognit 2006. In press.
Raven JC. Standard progressive matrices: sets A, B, C, D & E. Oxford, UK: Oxford Psychologists Press; 1996.
Riecker A, Ackermann H, Wildgruber D, Dogil G, Grodd W. Opposite hemispheric lateralization effects during speaking and singing at motor cortex, insula and cerebellum. Neuroreport 2000; 11: 1997–2000.
Rizzolatti G, Arbib MA. Language within our grasp. Trends Neurosci 1998; 21: 188–94.
Rubin DC. Memory in oral traditions. New York: Oxford University Press; 1995.
Ryding E, Bradvik B, Ingvar DH. Changes of regional cerebral blood flow measured simultaneously in the right and left hemisphere during automatic speech and humming. Brain 1987; 110: 1345–58.
Saltuklaroglu T, Kalinowski J, Guntupalli VK. Towards a common neural substrate in the immediate and effective inhibition of stuttering. Int J Neurosci 2004; 114: 435–50.
Samson S, Zatorre R. Recognition memory for text and melody of songs after unilateral temporal lobe lesion: evidence for dual encoding. J Exp Psychol Learn Mem Cogn 1991; 17: 793–804.
Serafine ML, Crowder RG, Repp BH. Integration of melody and text in memory for songs. Cognition 1984; 16: 285–303.
Serafine ML, Davidson RJ, Crowder RG, Repp BH. On the nature of melody–text integration in memory for songs. J Mem Lang 1986; 25: 123–35.
Shallice T. Specific impairments of planning. Philos Trans R Soc Lond 1982; 298: 199–209.
Sparks R, Helm N, Albert M. Aphasia rehabilitation resulting from melodic intonation therapy. Cortex 1974; 10: 303–16.
Speedie LJ, Wertman E, Ta’ir J, Heilman KM. Disruption of automatic speech following a right basal ganglia lesion. Neurology 1993; 43: 1768–74.
Van Lancker-Sidtis D. When novel sentences spoken or heard for the first time in the history of the universe are not enough: toward a dual-process model of language. Int J Lang Commun Disord 2004; 39: 1–44.
Wallace WT. Memory for music: effect of melody on recall of text. J Exp Psychol Learn Mem Cogn 1994; 20: 1471–85.
Warren JD, Warren JE, Fox NC, Warrington EK. Nothing to say, something to sing: primary progressive dynamic aphasia. Neurocase 2003; 9: 140–55.
Warren JE, Wise RJS, Warren JD. Sounds do-able: auditory-motor transformations and the posterior temporal plane. Trends Neurosci 2005; 28: 636–43.
Wilson S, Parsons K, Reutens DC. Preserved singing in aphasia: a case study of the efficacy of melodic intonation therapy. Music Percept 2006. In press.
Yamadori A, Osumi Y, Masuhara S, Okubo M. Preservation of singing in Broca’s aphasia. J Neurol Neurosurg Psychiatry 1977; 40: 221–4.
Yorkston KM, Beukelman DR. Communication efficiency of dysarthric speakers as measured by sentence intelligibility and speaking rate. J Speech Hear Disord 1981; 46: 296–301.
Yorkston KM, Hammen VL, Beukelman DR, Traynor CD. The effect of rate control on the intelligibility and naturalness of dysarthric speech. J Speech Hear Disord 1990; 55: 550–60.