Psychological Science, 2014, Vol. 25(1), 236–242. © The Author(s) 2013. DOI: 10.1177/0956797613498260

Research Report

Look Here, Eye Movements Play a Functional Role in Memory Retrieval

Roger Johansson (Department of Cognitive Science, Lund University) and Mikael Johansson (Department of Psychology, Lund University)

Abstract
Research on episodic memory has established that spontaneous eye movements occur to spaces associated with retrieved information even if those spaces are blank at the time of retrieval. Although it has been claimed that such looks to "nothing" can function as facilitatory retrieval cues, there is currently no conclusive evidence for such an effect. In the present study, we addressed this fundamental issue using four direct eye-movement manipulations in the retrieval phase of an episodic memory task: (a) free viewing on a blank screen, (b) maintaining central fixation, (c) looking inside a square congruent with the location of the to-be-recalled objects, and (d) looking inside a square incongruent with the location of the to-be-recalled objects. Our results provide novel evidence of an active and facilitatory role of gaze position during memory retrieval and demonstrate that memory for the spatial relationship between objects is more readily affected than memory for intrinsic object features.

Keywords: eye movements, episodic memory, memory, visual memory, visual attention

Received 3/13/13; Revision accepted 6/25/13

Corresponding Author: Roger Johansson, Department of Cognitive Science, Lund University, Helgonabacken 12, 223 62 Lund, Sweden. E-mail: roger.johansson@lucs.lu.se

Spontaneous eye movements occur when people recall a scene from memory, and recent research suggests that such eye movements closely reflect the content and spatial relations of the original scene (Johansson, Holsanova, Dewhurst, & Holmqvist, 2012). However, the role of such eye movements remains elusive. Do they have an active and functional role facilitating the retrieval of visuospatial information, or are they merely an epiphenomenon associated with the operation of mnemonic mechanisms? In the present study, we addressed this fundamental issue using a direct manipulation of eye movement constraints during an episodic memory task and found new evidence of a facilitatory role of eye movements.

Episodic memory enables people to travel back in time and re-experience previous events in great detail (Tulving, 1983). Cognitive-neuroscience models of memory suggest that such re-experiencing during retrieval is based on the reinstatement of cortical processes that were active at the time of the previous experience (e.g., Marr, 1971; Norman & O'Reilly, 2003). Accumulating evidence supports this notion by demonstrating that common neural systems are activated during perception and retrieval (e.g., Nyberg, Habib, McIntosh, & Tulving, 2000; Wheeler, Petersen, & Buckner, 2000; for reviews, see Danker & Anderson, 2010; Kent & Lamberts, 2008; Rugg, Johnson, Park, & Uncapher, 2008).

Episodic remembering is thought to depend on the interaction between retrieval cues and stored memory traces (Tulving, 1983). Two principles have been put forward to explain the effectiveness of retrieval cues: encoding specificity (Tulving & Thomson, 1973) and transfer-appropriate processing (Morris, Bransford, & Franks, 1977). According to both of these principles, the greater the overlap between the processing engaged during encoding and during retrieval, the greater the likelihood of successful retrieval. The importance of the compatibility between encoding and retrieval conditions has been underscored by a vast body of memory research (for a review, see Roediger & Guynn, 1996). Thus, remembering involves the reinstatement of the processes that were active during encoding, and the chance of remembering is best when the processes engaged by a retrieval cue overlap with those engaged at encoding. To what extent do these principles generalize to the interplay between gaze behavior and memory retrieval?

Recent research suggests that recognition of scenes and faces may improve when participants look at the same features of the stimuli during study and during test (Foulsham & Kingstone, 2013; Holm & Mäntylä, 2007; Mäntylä & Holm, 2006). Remarkably, it has also been shown that the oculomotor system reactivates spontaneously during memory retrieval when there is only a blank screen to look at (e.g., Brandt & Stark, 1997; Johansson, Holsanova, & Holmqvist, 2006; Laeng & Teodorescu, 2002; Richardson & Spivey, 2000; Spivey & Geng, 2001). Although it has been claimed that these eye movements to "nothing" can act as facilitatory cues during memory retrieval (cf. Ferreira, Apel, & Henderson, 2008; Richardson, Altmann, Spivey, & Hoover, 2009), there is, to date, no conclusive evidence for such a functional role.

In two previous studies, eye movements on a blank screen were manipulated during episodic memory retrieval by restricting gaze to a fixation cross at the center of the screen. Both of these studies showed impaired memory performance when gaze was restricted compared with free viewing (Johansson et al., 2012; Laeng & Teodorescu, 2002). However, it is possible to attribute the lower performance in those cases to a higher cognitive load due to the additional task of maintaining gaze on the fixation cross (see Johansson et al., 2012; Mast & Kosslyn, 2002). Moreover, a recent study failed to reveal any consequence of eye position during memory retrieval (Martarelli & Mast, 2013), and previous studies without eye movement manipulations have failed to find an influence of gaze position on retrieval accuracy (Richardson & Spivey, 2000; Spivey & Geng, 2001). Thus, the overall picture remains unclear.

The present study departs from previous research in several ways. First, our paradigm imposed eye movement restriction during visuospatial-memory retrieval of an arrangement of multiple objects in both free-viewing and central-fixation conditions. Previous studies have typically focused on memory for visual properties of single objects (Laeng & Teodorescu, 2002; Martarelli & Mast, 2013; Spivey & Geng, 2001) or on verbal memory for spoken information (Richardson & Spivey, 2000). Second, we considered the role of participants looking at a specific location that did or did not correspond to (i.e., was congruent or incongruent with) the location associated with the sought-after memories. It has been argued that eye movements function as "spatial indexes" and that those indexes are a part of the internal memory representation for an object or an event.
When some part of this episodic trace is accessed during subsequent memory retrieval, an eye movement is thought to be spontaneously triggered toward the indexed location (Altmann, 2004; Richardson & Spivey, 2000). We thus tested the idea that positioning the eyes on a location congruent with their location during encoding increases the likelihood of successful retrieval.

Another way in which our study departed from previous research is that we investigated the extent to which interactions between eye movements and visuospatial-memory retrieval depend on the nature of the queried memory representation. Much evidence suggests that the ventral ("what") and dorsal ("how and where") streams of visual processing (Milner & Goodale, 1995; Ungerleider & Mishkin, 1982) establish the bases for object and location memory, respectively (e.g., Farah, Hammond, Levine, & Calvanio, 1988; Pollatsek, Rayner, & Henderson, 1990). It is conceivable that the influence of eye movements on visuospatial remembering may differ for intrinsic object features and for the spatial relationship between two or more objects. This issue has not been examined in previous work, and we therefore included a comparison of memory for intrinsic object features with memory for the spatial arrangement between objects (intra- vs. interobject memories). Finally, in contrast to previous work, our analyses of memory performance included response times (RTs), which provide a complementary and potentially more sensitive measure of the availability of the sought-after memory trace than do binary measures of accuracy (cf. Sternberg, 1969). Given that gaze behavior has a functional role in memory retrieval, we expected memory performance to be better (a) in the free-viewing condition than in the central-fixation condition and (b) when fixation locations were spatially congruent with the sought-after memory than when they were spatially incongruent.

Method

Participants

Twenty-four native Swedish-speaking students at Lund University (15 female, 9 male) participated in the study (mean age = 24.5 years, SD = 7.1). All reported normal or corrected-to-normal vision.

Apparatus and stimuli

Stimuli were presented using Experiment Center (Version 3.1; SensoMotoric Instruments, Teltow, Germany) on a 480- × 300-mm monitor (resolution = 1,680 × 1,050 pixels). Eye movements were measured using an iView RED500 eye tracker (SensoMotoric Instruments) that recorded binocularly at 500 Hz. Data were recorded with the iView X 2.5 software following five-point calibration plus validation (average measured accuracy = 0.49°, SD = 0.10°). Fixations were detected with a saccadic-velocity-based algorithm (minimum velocity threshold = 40°/s).

Ninety-six pictures of objects (280 × 262 pixels) were selected from an online database (www.clipart.com). Auditory stimuli consisted of 576 statements (2,500–4,500 ms in length). The statements served as test probes (questions with a yes/no answer) and were spoken by a female voice.

Design and procedure

Data were obtained in four runs, each of which comprised an encoding phase and a recall phase (see Fig. 1).

Encoding phase. In the encoding phase, participants studied 24 objects distributed across the four quadrants of the computer screen. Each quadrant contained six objects from one of four categories: humanoids, animals, things, and vehicles. Half of the objects within each quadrant were facing right, and the other half were facing left.
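To make the structure of the encoding display concrete, the following Python sketch builds a layout with the properties just described: 24 objects, one thematic category per screen quadrant, six objects per category, half facing left and half facing right. The object names, quadrant labels, and choice of Python are illustrative assumptions; this is not the authors' stimulus-generation code.

```python
# Hypothetical sketch of the encoding display: four categories, one per
# quadrant, six objects each, half facing left and half facing right.
import random

CATEGORIES = {
    "humanoids": ["wizard", "knight", "robot", "clown", "pirate", "astronaut"],
    "animals":   ["cat", "dog", "horse", "lion", "rabbit", "owl"],
    "things":    ["lamp", "chair", "phone", "clock", "kettle", "guitar"],
    "vehicles":  ["car", "train", "bus", "plane", "boat", "bicycle"],
}
QUADRANTS = ["upper_left", "upper_right", "lower_left", "lower_right"]


def build_encoding_display(seed=0):
    """Assign each category to one quadrant and give half of its objects
    a left orientation and half a right orientation."""
    rng = random.Random(seed)
    quadrants = QUADRANTS[:]
    rng.shuffle(quadrants)
    display = []
    for quadrant, (category, objects) in zip(quadrants, CATEGORIES.items()):
        orientations = ["left"] * 3 + ["right"] * 3
        rng.shuffle(orientations)
        for name, orientation in zip(objects, orientations):
            display.append({"object": name, "category": category,
                            "quadrant": quadrant, "facing": orientation})
    return display


if __name__ == "__main__":
    for item in build_encoding_display():
        print(item)
```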
The encoding procedure was performed in the following sequence. First, a list naming the six thematic objects of a quadrant was presented. The objects were then presented visually and simultaneously in one quadrant of the screen (30 s). Participants orally named each object and its orientation. They were then free to inspect the objects and to try to remember as much as possible about their orientation and spatial arrangement in the time allotted. The same procedure was followed for the remaining quadrants and themes, after which participants inspected all 24 objects simultaneously while rehearsing the objects' orientation and spatial arrangement (60 s).

Recall phase. In each condition of the recall phase, participants listened to 48 statements of two types: intraobject statements concerning the orientation of an object (e.g., "the car was facing left") and interobject statements concerning the spatial arrangement between two objects of the same category (e.g., "the train was located to the right of the car"). Participants indicated whether the statements were true or false by saying "yes" or "no." They were encouraged to answer as quickly and as accurately as possible without guessing.

Participants responded to statements in four eye movement conditions: (a) free viewing on a blank screen, (b) central fixation, (c) looking inside a square congruent with the location of the to-be-recalled objects, and (d) looking inside a square incongruent with the location of the to-be-recalled objects. Twelve statements (6 intraobject, 6 interobject) were spoken in each condition. The free-viewing and central-fixation conditions were presented in blocked fashion, whereas the congruent and incongruent trials were intermingled across two blocks. Participants were not informed that the quadrant would be either congruent or incongruent with the location of the target object. Over the entire study, each participant responded to 192 statements (96 intraobject and 96 interobject); there were equal numbers of true and false statements. Participants were given 8 s to respond following statement offset. The order of intraobject and interobject statements and of true and false statements was randomized. The order of the four eye movement conditions was counterbalanced in a Latin square design within subjects over the four runs.

Fig. 1. Diagram showing the two phases of the study. In the encoding phase, participants saw the names of 24 objects from four categories and then saw the objects in each category displayed as a group; one group appeared in each quadrant of the computer screen. Half of the objects within each quadrant faced right, and the other half faced left. In each condition of the recall phase, participants listened to 96 statements of two types: intraobject statements concerning the direction in which the object was oriented and interobject statements concerning the location of the object in relation to another object of the same category. Participants had to say whether each statement was true or false by answering "yes" or "no." Participants responded to these statements in four eye movement conditions: during free viewing, while fixating on a central cross, while looking inside a square congruent with the location of the to-be-recalled object, and while looking inside a square incongruent with the location of the to-be-recalled object.
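The counterbalancing and trial structure described above can be illustrated with a short sketch. The following Python code orders the four eye movement conditions within each run according to a cyclic Latin square and generates 12 statements (6 intraobject, 6 interobject, half true and half false) per condition per run, yielding 48 statements per run and 192 in total. It is a simplified illustration under stated assumptions (for instance, it ignores the blocking of free-viewing and central-fixation trials and the intermingling of congruent and incongruent trials across two blocks); it is not the authors' experiment script.

```python
# Hypothetical sketch of the within-subjects Latin square counterbalancing
# and recall-phase statement lists. Simplified relative to the actual design.
import random

CONDITIONS = ["free_viewing", "central_fixation", "congruent", "incongruent"]


def latin_square(items):
    """Cyclic Latin square: row r is the item list rotated by r positions."""
    n = len(items)
    return [[items[(r + c) % n] for c in range(n)] for r in range(n)]


def recall_statements(condition, rng):
    """12 statements for one condition in one run: 6 intraobject and
    6 interobject, half true and half false within each type, shuffled."""
    trials = [
        {"condition": condition, "type": stype, "truth": truth}
        for stype in ("intraobject", "interobject")
        for truth in (True, False)
        for _ in range(3)
    ]
    rng.shuffle(trials)
    return trials


def build_session(seed=0):
    """Four runs; each condition occupies each serial position exactly once
    across the four runs (one Latin square row per run)."""
    rng = random.Random(seed)
    square = latin_square(CONDITIONS)
    return [
        {"run": run,
         "trials": [t for cond in square[run] for t in recall_statements(cond, rng)]}
        for run in range(4)
    ]
```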
The size of the square in the congruent and incongruent conditions was the same as the size of the individual stimulus pictures. The location of the square in the incongruent condition was always displaced 840 pixels in the horizontal dimension and 262 pixels in the vertical dimension from the object indicated in the statement (the maximum distance that could be implemented in a consistent way for all 24 locations).

Data analyses

Repeated measures analyses of variance were conducted with eye movement condition and statement type (intraobject vs. interobject) as independent variables and response accuracy and RTs as dependent variables. Accuracy was quantified by subtracting the percentage of false alarms from the percentage of hits (Snodgrass & Corwin, 1988). RTs were quantified as the time between the offset of a spoken statement and the onset of the response and were collapsed over all hits into a median RT for each condition and participant. Trials in which participants executed saccades more than 3° away from the fixation cross or outside the square (3° away from the center of the square) were excluded from analysis.

Results

Spontaneous eye movements to "nothing"

Eye movement data from the free-viewing condition were analyzed to assess where participants spontaneously looked during memory retrieval (see Fig. 2). Results revealed a main effect of quadrant, F(3, 69) = 27.186, p < .001, η² = .54, 95% confidence interval (CI) = [.37, .65]. Follow-up tests using Bonferroni correction showed that the proportion of fixations was significantly higher to the quadrant relevant to the memory task than to each of the other three quadrants (p < .001). There was no effect involving statement type. These results replicate previous findings (Richardson & Spivey, 2000; Spivey & Geng, 2001) and demonstrate that eye movements are reliably executed toward empty locations where information was previously encoded. Moreover, a paired-samples t test revealed that the overall gaze distance was significantly longer during interobject than during intraobject trials, t(23) = 2.348, p < .05, d = 0.48, 95% CI = [0.05, 0.90] (see the Supplemental Material available online for further details).

Fig. 2. Mean percentage of fixations in the four quadrants of the screen during the recall phase of the free-viewing condition. The quadrant that corresponded with the original location of the retrieved objects was coded as the critical quadrant, and the other three quadrants were coded as the first through third in a clockwise direction. Error bars represent standard errors.

Constraining eye movements to a central fixation cross

The hypothesis that memory performance is impaired when one is not allowed to execute spontaneous eye movements to "nothing" was tested by contrasting the free-viewing and central-fixation conditions (Fig. 3a). An analysis of response accuracy revealed a significant main effect of statement type, F(1, 23) = 15.484, p < .01, η² = .40, 95% CI = [.11, .62], which was due to better performance for interobject than for intraobject statements; however, there was no reliable effect of eye movement condition. A significant interaction between eye movement condition and statement type was observed for RTs, F(1, 23) = 10.296, p < .01, η² = .31, 95% CI = [.04, .55]. Follow-up analyses revealed a detrimental effect of constraining eye movements to the central fixation cross, observed as prolonged RTs for interobject statements, t(23) = 4.08, p < .001, d = 0.83, 95% CI = [0.36, 1.29]. No reliable difference in RTs between the free-viewing and central-fixation conditions was found for intraobject statements.
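As an illustration of the dependent measures defined under Data Analyses, and of a paired contrast of the kind reported above for interobject RTs, the following Python sketch computes accuracy (proportion of hits minus proportion of false alarms), median RT over hits, and a free-viewing versus central-fixation t test from a hypothetical trial-level table. The column names, the gaze-exclusion rule expressed as a per-trial maximum deviation, and the use of pandas and SciPy are assumptions made for illustration; this is not the authors' analysis code.

```python
# Minimal sketch of the dependent measures and one paired contrast,
# assuming a hypothetical trial-level DataFrame with the columns below.
import pandas as pd
from scipy import stats


def condition_scores(trials: pd.DataFrame) -> pd.DataFrame:
    """Assumed columns: participant, condition, statement_type, is_true,
    said_yes, rt_ms, max_gaze_deviation_deg."""
    ok = trials[trials["max_gaze_deviation_deg"] <= 3.0]  # gaze-constraint exclusion

    def score(g):
        hits = g.loc[g["is_true"], "said_yes"].mean()       # proportion of hits
        fas = g.loc[~g["is_true"], "said_yes"].mean()       # proportion of false alarms
        hit_rts = g.loc[g["is_true"] & g["said_yes"], "rt_ms"]
        return pd.Series({"accuracy": hits - fas,           # Snodgrass & Corwin (1988)
                          "median_rt": hit_rts.median()})   # median RT over hits

    grouped = ok.groupby(["participant", "condition", "statement_type"])
    return grouped.apply(score).reset_index()


def interobject_rt_contrast(scores: pd.DataFrame):
    """Paired t test on median RTs for interobject statements:
    central fixation vs. free viewing."""
    inter = scores[scores["statement_type"] == "interobject"]
    wide = inter.pivot(index="participant", columns="condition", values="median_rt")
    return stats.ttest_rel(wide["central_fixation"], wide["free_viewing"])
```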
Constraining eye movements to a congruent versus an incongruent location

The final and crucial set of analyses concerned the impact of constraining eye movements to a location that either did or did not correspond with the encoding location of the to-be-remembered information (Fig. 3b). An analysis of accuracy revealed that memory performance was better for interobject than for intraobject statements, F(1, 23) = 17.523, p < .001, η² = .43, 95% CI = [.13, .64]. More important, however, participants demonstrated a reliable benefit of looking at a congruent location, both in terms of accuracy, F(1, 23) = 13.443, p < .01, η² = .37, 95% CI = [.08, .60], and in terms of RTs, F(1, 23) = 14.809, p < .001, η² = .39, 95% CI = [.10, .62]. This pattern of results lends new support to the notion of gaze position playing a functional role in memory retrieval. Furthermore, given that the task was identical in the congruent and the incongruent conditions (constraining eye movements to a small space), these results cannot be explained as a mere artifact of increased cognitive load induced by a secondary task.

Fig. 3. Mean response accuracy (proportion of hits – proportion of false alarms; left column) and mean response time (RT) for correct responses (right column) as a function of statement type and condition. Results are shown in (a) for the eye movement conditions in which participants heard statements during free viewing on a blank screen and while maintaining central fixation. Results are shown in (b) for the eye movement conditions in which participants heard statements while looking inside a square congruent with the location of the to-be-recalled objects and while looking inside a square incongruent with the location of the to-be-recalled objects. Error bars represent standard errors.

Discussion

In the present study, we employed multiple eye movement conditions to examine the role of gaze behavior in episodic memory retrieval. Taken together, our results provide new evidence of a facilitatory influence of gaze position during remembering.

First, it was demonstrated that hindering eye movements can influence visuospatial remembering. A central-fixation constraint perturbed retrieval performance (as indicated by increased RTs) for interobject representations. This finding adds weight to previous results (Johansson et al., 2012; Laeng & Teodorescu, 2002) and further suggests that the impact of eye movements on visuospatial memory may differ depending on the nature of the memory representation one is searching for. The results indicate that memory for the spatial relationship between objects is more readily affected than memory for intrinsic object features.
Second, our results confirm that memory retrieval is indeed facilitated when eye movements are directed toward a blank area that corresponds with the original location of the to-be-recalled object. This effect was robust with respect to both memory accuracy and RTs, and it was evident irrespective of statement type. Looking at a congruent location thus facilitated retrieval of both intraobject and interobject memory representations. It is important to note that this facilitatory effect cannot be attributed to a difference in the cognitive resources taxed by the compared conditions (previous research, in which only free viewing and central fixation were compared, could not rule this out), because the congruent and incongruent conditions in our study imposed identical eye movement constraints (looking inside a square).

Experience in everyday life constantly reminds people that their memories are often subject to distortion. They may misremember properties of past events or completely fail to retrieve a desired fact or previous episode. Distorted memories and inaccurate retrieval of this kind often stem from insufficient retrieval cues. The present study demonstrates that how and where the eyes are directed during retrieval provides important cues for visuospatial remembering. Thus, we showed not only that remembering is accompanied by eye movements that mirror those associated with the retrieved content but also that gaze positions compatible across encoding and retrieval increase the likelihood of successful episodic remembering (cf. Tulving, 1983). This is a novel finding that extends the previous literature and informs current theoretical models of episodic memory.

Author Contributions

Data were collected and analyzed by R. Johansson. R. Johansson and M. Johansson designed the research and wrote the manuscript. Both authors approved the final version of the manuscript for submission.

Acknowledgments

We extend special thanks to Richard Dewhurst for valuable input on the experimental design.

Declaration of Conflicting Interests

The authors declared that they had no conflicts of interest with respect to their authorship or the publication of this article.

Funding

This study was supported by the Linnaeus Center for Thinking in Time: Cognition, Communication, and Learning (CCL) at Lund University, which is funded by the Swedish Research Council (Grant No. 349-2007-8695).

Supplemental Material

Additional supporting information may be found at http://pss.sagepub.com/content/by/supplemental-data

References

Altmann, G. T. M. (2004). Language-mediated eye movements in the absence of a visual world: The 'blank screen paradigm.' Cognition, 93, 79–87.
Brandt, S. A., & Stark, L. W. (1997). Spontaneous eye movements during visual imagery reflect the content of the visual scene. Journal of Cognitive Neuroscience, 9, 27–38.
Danker, J. F., & Anderson, J. R. (2010). The ghosts of brain states past: Remembering reactivates the brain regions engaged during encoding. Psychological Bulletin, 136, 87–102.
Farah, M. J., Hammond, K. M., Levine, D. N., & Calvanio, R. (1988). Visual and spatial mental imagery: Dissociable systems of representation. Cognitive Psychology, 20, 439–462.
Ferreira, F., Apel, A., & Henderson, J. M. (2008). Taking a new look at looking at nothing. Trends in Cognitive Sciences, 12, 405–410.
Foulsham, T., & Kingstone, A. (2013). Fixation-dependent memory for natural scenes: An experimental test of scanpath theory. Journal of Experimental Psychology: General, 142, 41–56.
Holm, L., & Mäntylä, T. (2007). Memory for scenes: Refixations reflect retrieval. Memory & Cognition, 35, 1664–1674.
Johansson, R., Holsanova, J., Dewhurst, R., & Holmqvist, K. (2012). Eye movements during scene recollection have a functional role, but they are not reinstatements of those produced during encoding. Journal of Experimental Psychology: Human Perception and Performance, 38, 1289–1314.
Johansson, R., Holsanova, J., & Holmqvist, K. (2006). Pictures and spoken descriptions elicit similar eye movements during mental imagery, both in light and in complete darkness. Cognitive Science, 30, 1053–1079.
Kent, C., & Lamberts, K. (2008). The encoding-retrieval relationship: Retrieval as mental simulation. Trends in Cognitive Sciences, 12, 92–98.
Laeng, B., & Teodorescu, D.-S. (2002). Eye scanpaths during visual imagery reenact those of perception of the same visual scene. Cognitive Science, 26, 207–231.
Mäntylä, T., & Holm, L. (2006). Gaze control and recollective experience in face recognition. Visual Cognition, 14, 365–386.
Marr, D. (1971). Simple memory: A theory for archicortex. Philosophical Transactions of the Royal Society B: Biological Sciences, 262, 23–81.
Martarelli, C. S., & Mast, F. W. (2013). Eye movements during long-term pictorial recall. Psychological Research, 77, 303–309. doi:10.1007/s00426-012-0439-7
Mast, F. W., & Kosslyn, S. M. (2002). Eye movements during visual mental imagery. Trends in Cognitive Sciences, 6, 271–272.
Milner, A. D., & Goodale, M. A. (1995). The visual brain in action. Oxford, England: Oxford University Press.
Morris, C. D., Bransford, J. D., & Franks, J. J. (1977). Levels of processing versus transfer appropriate processing. Journal of Verbal Learning and Verbal Behavior, 16, 519–533.
Norman, K. A., & O'Reilly, R. C. (2003). Modeling hippocampal and neocortical contributions to recognition memory: A complementary-learning-systems approach. Psychological Review, 110, 611–646.
Nyberg, L., Habib, R., McIntosh, A. R., & Tulving, E. (2000). Reactivation of encoding-related brain activity during memory retrieval. Proceedings of the National Academy of Sciences, USA, 97, 11120–11124.
Pollatsek, A., Rayner, K., & Henderson, J. M. (1990). Role of spatial location in integration of pictorial information across saccades. Journal of Experimental Psychology: Human Perception and Performance, 16, 199–210.
Richardson, D. C., Altmann, G. T. M., Spivey, M. J., & Hoover, M. A. (2009). Much ado about eye movements to nothing: A response to Ferreira et al.: Taking a new look at looking at nothing. Trends in Cognitive Sciences, 13, 235–236.
Richardson, D. C., & Spivey, M. J. (2000). Representation, space and Hollywood Squares: Looking at things that aren't there anymore. Cognition, 76, 269–295.
Roediger, H. L., III, & Guynn, M. J. (1996). Retrieval processes. In E. L. Bjork & R. A. Bjork (Eds.), Memory (pp. 197–236). San Diego, CA: Academic Press.
Rugg, M. D., Johnson, J. D., Park, H., & Uncapher, M. R. (2008). Encoding-retrieval overlap in human episodic memory: A functional neuroimaging perspective. Progress in Brain Research, 169, 339–352.
Snodgrass, J. G., & Corwin, J. (1988). Pragmatics of measuring recognition memory: Applications to dementia and amnesia. Journal of Experimental Psychology: General, 117, 34–50.
Spivey, M., & Geng, J. (2001). Oculomotor mechanisms activated by imagery and memory: Eye movements to absent objects. Psychological Research, 65, 235–241.
Sternberg, S. (1969). Memory-scanning: Mental processes revealed by reaction-time experiments. American Scientist, 57, 421–457.
Tulving, E. (1983). Elements of episodic memory. Oxford, England: Clarendon Press.
Tulving, E., & Thomson, D. M. (1973). Encoding specificity and retrieval processes in episodic memory. Psychological Review, 80, 352–373.
Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In D. J. Ingle, M. A. Goodale, & R. J. W. Mansfield (Eds.), Analysis of visual behavior (pp. 549–586). Cambridge, MA: MIT Press.
Wheeler, M. E., Petersen, S. E., & Buckner, R. L. (2000). Memory's echo: Vivid remembering reactivates sensory-specific cortex. Proceedings of the National Academy of Sciences, USA, 97, 11125–11129.