RESEARCH | ANIMAL COGNITION

Bumble bees display cross-modal object recognition between visual and tactile senses

Cwyn Solvi1,2*, Selene Gutierrez Al-Khudhairy1†, Lars Chittka1

1School of Biological and Chemical Sciences, Queen Mary University of London, London E1 4NS, UK. 2Department of Biological Sciences, Macquarie University, North Ryde, NSW 2109, Australia.
*Corresponding author. Email: cwyn.solvi@mq.edu.au
†Present address: Department of Biology, University of York, York YO10 5DD, UK.

Many animals can associate object shapes with incentives. However, such behavior is possible without storing images of shapes in memory that are accessible to more than one sensory modality. One way to explore whether there are modality-independent internal representations of object shapes is to investigate cross-modal recognition: experiencing an object in one sensory modality and later recognizing it in another. We show that bumble bees trained to discriminate two differently shaped objects (cubes and spheres) using only touch (in darkness) or vision (in light, but barred from touching the objects) could subsequently discriminate those same objects using only the other sensory modality. Our experiments demonstrate that bumble bees possess the ability to integrate sensory information in a way that requires modality-independent internal representations.

Humans can easily recognize, through touch alone, objects they have previously only seen (1). This is demonstrated when we search for and find objects on a high shelf or inside a cluttered bag. The ability to recognize objects across different senses increases the flexibility of any object-recognition system, because the amount of noise and available information within different senses can vary dramatically across situations.

Cross-modal recognition requires the combination of information from multiple sensory modalities, for example, vision and touch. In humans, visual and tactile abilities are closely linked from birth, but their combination seems incomplete and limited at birth and develops to maturity over many years (2). Cross-modal object recognition has been shown across vision and touch in humans (3), apes (4), monkeys (5), and rats (6); across vision and hearing in dolphins (7); and across vision and the electric sense in fish (8).

The ability to recognize objects across modalities entails some type of internal representation of an object's shape or its characteristic features (7, 9, 10). In humans, cross-modal recognition seems to require mental imagery (11), an internal representation that occurs in the absence of sensory stimulation in a given sensory modality and that functions as a weak form of perception (12-14). Theoretical analyses and empirical evidence support the idea that mental imagery involves an internal representation that is not only available to awareness but is a basic building block and integral part of consciousness (14-17).

The ability to recognize objects across modalities is beneficial and adaptive, allowing for enhanced perceptual monitoring of an animal's environment, flexible recognition of objects between senses, and richer representations of objects across multiple senses. Cross-modal recognition indicates that an animal possesses coherent "mental images" of objects. Whether the small brains of invertebrates perceive the world by storing internal representations of objects is unknown.
Because bumble bees naturally forage in the light but will also forage in the dark under laboratory conditions (18), they constitute an ideal system in which to examine whether an invertebrate is capable of cross-modal recognition across vision and touch. To do this, we trained 44 bumble bees in a dark room (a windowless room with lights turned off and vents, door edges, and all light diodes covered completely) (supplementary materials) to find 50% sucrose solution in one of two differently shaped objects (cube or sphere; 33 training trials) (Fig. 1A). To ensure that bumble bees could learn to discriminate objects in the dark without any visual information in our paradigm, we examined the behavior of 21 of the trained bumble bees in a nonreinforced test in the same dark setup as that used in training. During the test, bumble bees spent more time in contact with the previously rewarding object [generalized linear mixed-effect model (GLMM): 95% confidence interval (CI) = 2.60 (1.95 to 3.25), n = 21 bees, P = 4.51 × 10⁻¹⁵] (Fig. 1C). To assess bumble bees' ability for tactile-to-visual cross-modal recognition, we tested the other 23 trained bees in a lighted arena where they were unable to touch the objects (Fig. 1B). In the cross-modal test situation, bumble bees spent more time in contact with the previously rewarding object [GLMM: 95% CI = 1.41 (1.06 to 1.75), n = 23 bees, P = 9.81 × 10⁻¹⁶] (Fig. 1D).

We then evaluated whether bumble bees were capable of visual-to-tactile cross-modal recognition. To do this, we first trained 43 bumble bees to discriminate cubes and spheres in a lighted arena where they could not touch the objects (Fig. 1B). Subsequently, to ensure that our setup was conducive to visual discrimination learning, the behavior of 22 of the trained bumble bees was tested, unreinforced, in the same setup as in training. Bumble bees spent more time in contact with the previously rewarding objects [GLMM: 95% CI = 2.00 (1.61 to 2.40), n = 22 bees, P = 1.36 × 10⁻²³] (Fig. 1E). To test bumble bees' ability for visual-to-tactile cross-modal recognition, the behavior of the other 21 trained bees was examined in a dark arena (Fig. 1A). In this test, bumble bees again spent more time in contact with the previously rewarding object [GLMM: 95% CI = 1.85 (0.54 to 3.15), n = 21 bees, P = 5.63 × 10⁻³] (Fig. 1F).

These results suggest that after learning to discriminate objects by using only one sensory modality, bumble bees can discriminate those same objects by using a different sensory modality. This interpretation rests on the assumption that bumble bees could not touch the objects in the lighted situation and could not see the objects in the dark situation. In the lighted situation, bees could access only the holes of the objects (which were all of the same dimensions) with their proboscis but could not touch the outsides of the objects. In the dark situation, the measured light levels within the arena were less than 0.01 lux, and the infrared lights were the only source of illumination that was not covered (supplementary materials). Bumble bees were unable to find their way back to the tunnel that led to the hive without following the walls. Furthermore, the proportion of bees' first entries onto Petri dishes during the tests in the dark did not differ between the positively reinforced and negatively reinforced objects [GLMM: trained in dark, 95% CI = 0.69 (-0.21 to 1.60), n = 21 bees, P = 0.13; trained in light, 95% CI = 0.69 (-0.21 to 1.60), n = 21 bees, P = 0.13].
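The preference effects above were estimated with generalized linear mixed-effect models over per-bee contact durations; the exact model family and link function are given in the supplementary materials and are not reproduced here. As a rough, hedged illustration of this style of analysis, the sketch below fits a linear mixed model with a random intercept per bee to simulated contact durations. The data values, the Gaussian-model-on-log-durations choice, and all variable names are assumptions for illustration only, not the authors' actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: per-bee contact durations (seconds) with each object
# during an unreinforced test. Values are simulated, not the paper's data
# (the real data are in the paper's supplementary materials).
rng = np.random.default_rng(42)
n_bees = 21
dur_rewarded = rng.gamma(shape=4.0, scale=20.0, size=n_bees)  # previously rewarding object
dur_other = rng.gamma(shape=2.0, scale=12.0, size=n_bees)     # previously unrewarding object

df = pd.DataFrame({
    "bee": np.repeat(np.arange(n_bees), 2),                   # two rows per bee
    "object": np.tile(["rewarded", "other"], n_bees),
    "duration": np.column_stack([dur_rewarded, dur_other]).ravel(),
})

# Mixed model on log durations with a random intercept per bee; analogous
# in spirit (not in exact family/link) to the GLMMs reported in the paper.
model = smf.mixedlm("np.log(duration) ~ object", df, groups=df["bee"])
fit = model.fit()
print(fit.summary())                                  # coefficient table
print(fit.conf_int().loc["object[T.rewarded]"])       # 95% CI for object effect
```

The fixed-effect coefficient for object plays the role of the reported effect size, with its 95% CI and P value read from the coefficient table; the random intercept absorbs between-bee differences in overall activity.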
Fig. 1. Cross-modal recognition in bumble bees. (A and B) Setups for training and testing. Bumble bees were trained to find 50% sucrose solution in one of two differently shaped objects (sphere or cube) in one setup and then tested in the other. (A) In the dark setup, bees entered a dark arena and found two Petri dishes containing four spheres each and two Petri dishes containing four cubes each. (B) In the lighted setup, bees found the same objects, but placed under the Petri dishes so that the bees could see but not touch the objects. Bees accessed the reinforcement solution (rewarding sucrose solution or aversive quinine solution) through small holes in the top of each shape. (C and D) After being trained in the dark, bumble bees that were tested in the dark [(C) uni-modal] or in the light [(D) cross-modal] spent more time in contact with the previously rewarding object. (E and F) Similarly, after being trained in the light, bumble bees that were tested in the light [(E) uni-modal] or in the dark [(F) cross-modal] spent more time in contact with the previously rewarding object. Bars indicate mean; vertical lines indicate SEM; open circles indicate individual bees' data points (random x-axis displacement for individual discernment).

As an additional measure, we tested bees' ability to discriminate the two objects in the dark without being able to touch the objects. One group of bees (n = 20 bees) was trained to discriminate the two objects visually in a lighted arena (Fig. 1B), and another group of bees (n = 21 bees) was trained to discriminate the two objects tactilely in the dark (Fig. 1A). Subsequently, during an unreinforced test in the dark and without being able to touch the objects (Fig. 2A), both groups showed no difference in the duration of contact with the two different objects [GLMM: trained in light, 95% CI = -1.81 (-8.98 to 5.35), n = 20 bees, P = 0.62; trained in dark, 95% CI = -0.85 (-1.90 to 0.21), n = 21 bees, P = 0.12] (Fig. 2, B and C).
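The logic of this control can be illustrated with a simpler paired comparison: if bees can neither see nor touch the objects, per-bee contact durations for the two shapes should be statistically indistinguishable. The sketch below uses a Wilcoxon signed-rank test on simulated data as a stand-in for the paper's GLMM; all numbers and names are hypothetical.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-bee contact durations (seconds) in the dark, no-touch
# control test. Both objects are drawn from the same distribution,
# mimicking the expected absence of any preference.
rng = np.random.default_rng(7)
n_bees = 20
dur_trained_shape = rng.gamma(shape=3.0, scale=10.0, size=n_bees)
dur_other_shape = rng.gamma(shape=3.0, scale=10.0, size=n_bees)

# Paired nonparametric comparison across bees; a large p-value is
# consistent with "no detectable preference", matching the control result.
# Note: failing to reject is not proof of equivalence; the paper reports
# confidence intervals spanning zero rather than claiming equivalence.
stat, p = wilcoxon(dur_trained_shape, dur_other_shape)
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.3f}")
```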
Perception of objects results from the integration of information from various sensory modalities (12, 19). Two high-order centers in the insect protocerebrum, the mushroom bodies and the central complex, have been shown to receive both visual and mechanosensory information (20-22). We thus surmise that these areas are possible candidates for the multisensory integration necessary for the cross-modal object recognition behavior observed here.

Our results indicate more than just direct associations formed between two stimuli across senses, as is the case in simpler forms of cross-modal information transfer (23-26). Cross-modal object recognition requires that the two different senses provide information about the same object property (such as shape); that the information provided is encoded in such a way that it can be identified as related, even though it is temporally and physically distinct; and that this information is stored in a neuronal representation that is accessible by both senses (8). Whether bumble bees solve the task by storing internal representations of entire object shapes (cube or sphere) or of local object features (curved or flat edge) remains unknown. In either case, our experiments show that bumble bees are capable of recognizing objects across modalities, even though the received sensory inputs are temporally and physically distinct.

Bumble bees show a kind of information integration that requires a modality-independent internal representation (7, 9, 10). This suggests that, similar to humans and other large-brained animals, insects integrate information from multiple senses into a complete, globally accessible, gestalt perception of the world around them (12, 26, 27).

Fig. 2. Bumble bees were unable to see in the dark experimental conditions. (A) Setup for testing in control experiments. Bumble bees had no tactile information regarding the objects during these tests in the dark. (B and C) After being trained in the light (Fig. 1B) or in the dark (Fig. 1A), bumble bees that were tested in the dark, while not being able to touch the objects, showed no difference in the amount of time they were in contact with the two different objects. Bars indicate mean; vertical lines indicate SEM; open circles indicate individual bees' data points (random x-axis displacement for individual discernment).

REFERENCES AND NOTES
1. M. O. Ernst, M. S. Banks, Nature 415, 429-433 (2002).
2. S. Toprak, N. Navarro-Guerrero, S. Wermter, Cognit. Comput. 10, 408-425 (2018).
3. H. F. Gaydos, Am. J. Psychol. 69, 107-110 (1956).
4. R. K. Davenport, C. M. Rogers, Science 168, 279-280 (1970).
5. A. Cowey, L. Weiskrantz, Neuropsychologia 13, 117-120 (1975).
6. B. D. Winters, J. M. Reid, J. Neurosci. 30, 6253-6261 (2010).
7. L. M. Herman, A. A. Pack, M. Hoffmann-Kuhnt, J. Comp. Psychol. 112, 292-305 (1998).
8. S. Schumacher, T. Burt de Perera, J. Thenert, G. von der Emde, Proc. Natl. Acad. Sci. U.S.A. 113, 7638-7643 (2016).
9. B. E. Stein, M. A. Meredith, Ann. N.Y. Acad. Sci. 608, 51-70 (1990).
10. S. M. Kosslyn, in Visual Cognition: An Invitation to Cognitive Science (MIT Press, ed. 2, 1995), vol. 2, pp. 267-296.
11. M. Stoltz-Loike, M. H. Bornstein, Psychol. Res. 49, 63-68 (1987).
12. C. Spence, in Stevens' Handbook of Experimental Psychology and Cognitive Neuroscience (Wiley, 2018), pp. 1-56.
13. B. Nanay, Cortex 105, 125-134 (2018).
14. J. Pearson, T. Naselaris, E. A. Holmes, S. M. Kosslyn, Trends Cogn. Sci. 19, 590-602 (2015).
15. D. F. Marks, Br. J. Psychol. 90, 567-585 (1999).
16. D. F. Marks, Brain Sci. 9, 107 (2019).
17. J. S. B. T. Evans, Thinking and Reasoning (Psychology Revivals): Psychological Approaches (Psychology Press, 2013).
18. L. Chittka, N. M. Williams, H. Rasmussen, J. D. Thomson, Proc. Biol. Sci. 266, 45-50 (1999).
19. B. de Gelder, P. Bertelson, Trends Cogn. Sci. 7, 460-467 (2003).
20. P. G. Mobbs, Philos. Trans. R. Soc. London B Biol. Sci. 298, 309-354 (1982).
21. N. J. Strausfeld, J. Comp. Neurol. 450, 4-33 (2002).
22. K. Pfeiffer, U. Homberg, Annu. Rev. Entomol. 59, 165-184 (2014).
23. A. L. Yehle, J. P. Ward, Psychon. Sci. 16, 269-270 (1969).
24. J. Guo, A. Guo, Science 309, 307-310 (2005).
25. L. Proops, K. McComb, D. Reby, Proc. Natl. Acad. Sci. U.S.A. 106, 947-951 (2009).
26. L. Mudrik, N. Faivre, C. Koch, Trends Cogn. Sci. 18, 488-496 (2014).
27. O. Deroy et al., Multisens. Res. 29, 585-606 (2016).

ACKNOWLEDGMENTS

Funding: This work was supported by European Research Council (ERC) grant SpaceRadarPollinator (grant 339347) and Engineering and Physical Sciences Research Council (EPSRC) grant Brains-on-Board (grant EP/P006094/1), awarded to L.C. Author contributions: C.S. conceived and designed the study, with input from L.C.; S.G.A.-K. and C.S. performed experiments and behavioral data analyses; C.S. performed statistical analyses; and C.S. and L.C. wrote the paper. Competing interests: None declared. Data and materials availability: Data are available in the supplementary materials.

SUPPLEMENTARY MATERIALS

science.sciencemag.org/content/367/6480/910/suppl/DC1
Materials and Methods
Supplementary Text
Database S1
Reference (28)

View/request a protocol for this paper from Bio-protocol.

19 July 2019; accepted 15 January 2020
10.1126/science.aay8064