Research report
Selective role of lingual/parahippocampal gyrus and retrosplenial complex in spatial memory across viewpoint changes relative to the environmental reference frame
Highlights
► Lingual/parahippocampal gyrus and retrosplenial complex prefer to encode space within stable frames.
► Retrosplenial complex updates locations across viewpoint changes within stable frames.
► Retrosplenial cortex represents a crucial area in coding one's own position and heading in familiar environments.
► Parieto-frontal regions update locations independently of frame stability.
Introduction
Human beings have an outstanding capacity to seamlessly recognize scenes and to remember spatial locations despite intervening changes in point of view, such as those occurring when we walk or are passively transported through a familiar environment. This ability, which is fundamental to spatial orientation, can be experimentally measured by asking observers to study a scene from a given viewpoint, and then, after a true, imagined, or virtual self-displacement, to detect whether a given object has been moved, or to point to a memorized location [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17]. In principle, people could simply match the studied and the test view through some process of mental rotation. However, memory across viewpoint changes involves a more specific process: people perform better when they (physically or virtually) move in the environment around a stationary object array than when they are still and an object array rotates around them [5], [6], [7], [18], [19], [20], [21], although the two situations appear identical from a geometrical standpoint. This and other lines of evidence [22], [23], [24] show that comparing different views of a scene involves some form of mental self-rotation, which is distinct from the process of mentally rotating objects.
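The geometric equivalence of the two situations can be made concrete with a short sketch (a minimal 2-D illustration of our own; the coordinates, distances, and the facing-the-centre assumption are not from the study): rotating the viewer by θ about the array centre yields exactly the same egocentric coordinates as rotating the array by −θ while the viewer stays put, so the behavioral advantage for self-motion cannot follow from the geometry of the views alone.

```python
import math

def rot(p, a):
    """Rotate 2-D point p by angle a (radians) about the origin."""
    c, s = math.cos(a), math.sin(a)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def egocentric(p_world, viewer_pos, heading):
    """Express a world-frame point in the viewer's egocentric frame
    (viewer at the origin, heading along +x)."""
    d = (p_world[0] - viewer_pos[0], p_world[1] - viewer_pos[1])
    return rot(d, -heading)

r = 2.0                   # viewer's distance from the array centre (origin)
theta = math.radians(45)  # amount of viewpoint change
obj = (0.5, 0.3)          # one object of the array, in world coordinates

# Case A: the viewer walks 45 deg around the stationary array,
# always facing its centre.
view_a = egocentric(obj, rot((r, 0.0), theta), theta + math.pi)

# Case B: the viewer stands still (facing the centre) while the
# array rotates -45 deg about its own centre.
view_b = egocentric(rot(obj, -theta), (r, 0.0), math.pi)

# The two resulting egocentric views coincide.
assert all(abs(a - b) < 1e-9 for a, b in zip(view_a, view_b))
```

Because the retinal input is identical in the two cases, the performance difference must arise from the cognitive operation applied (mental self-rotation vs. object rotation), not from the stimulus.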
In terms of spatial reference frames, mental rotation of objects requires transforming an object-based frame relative to a fixed egocentric reference, while compensating for occurred viewpoint changes requires transforming an egocentric frame relative to an external (i.e., allocentric) reference (a “perspective transformation”: [21]). In principle, any set of objects can be used to establish an allocentric reference frame relative to which to evaluate a viewpoint change, and the operations involved in encoding spatial locations and then in updating memorized locations after a perceived viewpoint change may not depend on the particular set of objects chosen. However, human neuroimaging studies have shown not only that perceptual coding in egocentric and allocentric reference frames has distinguishable neural signatures (reviewed in [25]), but also that two different forms of allocentric representations can be distinguished: object-based reference frames, encoding spatial locations relative to arbitrary objects, and environmental reference frames, encoding spatial locations relative to some fixed features of the environment [21], [25]. A particular set of cortical regions is selectively activated when a spatial judgment is referenced to enduring environmental features but not to unstable objects [26]. These regions include portions of the medial temporal cortex and adjacent antero-medial occipital lobe (fusiform, lingual and posterior parahippocampal gyrus), the retrosplenial cortex and the precuneus. Their selectivity for environmental referencing depends on the stability of the spatial location of environmental features over time and not on their perceptual features or orienting value [25]. Intracerebral recordings within the posterior medial temporal lobe recently showed that this kind of allocentric selectivity constitutes a separate, later processing stage relative to early visual scene processing [27].
On the basis of the distinction between environmental and object-based allocentric reference frames, we suggest that perspective transformations may be preferentially associated with the former. Indirect evidence for this idea comes from neuroimaging studies which have shown that the cortical regions selective for environmental reference frames are involved in recognizing a scene across different views (see [28] for a recent review) and in perspective vs. object-based transformations [29]. However, previous studies requiring participants to memorize and recall object locations across viewpoint changes have used either stable environmental features [16], [29] or unstable configurations of objects [18], [20]. In the present event-related functional magnetic resonance imaging (fMRI) study, we adapted a viewpoint-change paradigm often used in behavioral research [7], [13] and compared perspective transformations relative to object-based vs. environmental reference frames, under the hypothesis that the neural circuit described above is actively exploited either during memory encoding or when updating memorized locations after a viewpoint change, but only if the established reference frame includes stable environmental features.
We asked observers to encode the spatial location of a target object in a virtual room with respect either to stable, familiar features of the scene (room frame) or to an arbitrary, unstable object set (objects frame). Observers then experienced a certain amount of viewpoint change, with “views” defined either relative to the room or to the object set, and were then asked to judge whether the target was in the same spatial location as before, with “same” locations again defined relative to either the room or the object set. Importantly, the manipulation of the amount of viewpoint change allowed us to test a further prediction: regions actively involved in perspective transformations relative to the environmental reference frame should be selectively modulated by the amount of experienced viewpoint change, only when this is defined in room-based, but not in object-relative, coordinates.
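The resulting factorial design can be enumerated in a few lines (a sketch only: the factor levels are taken from the Results section below, while trial counts and randomization are not specified in this excerpt):

```python
from itertools import product

tasks = ("position", "color")          # task factor
frames = ("room", "objects", "viewer")  # reference-frame factor
rotations = (0, 45, 135)                # viewpoint change, degrees

# Full crossing of the three factors: 2 x 3 x 3 = 18 design cells.
design = [dict(task=t, frame=f, rotation=r)
          for t, f, r in product(tasks, frames, rotations)]

print(len(design))  # 18
```

Crossing reference frame with the amount of viewpoint change is what lets the study separate regions modulated by room-defined rotation from regions modulated by object-defined rotation.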
Subjects
Fifteen neurologically normal volunteers (all males, mean age 25.4 yrs, s.d. 3.9) participated in the fMRI study. All subjects were right-handed, as assessed by the Edinburgh Handedness Inventory ([30]: mean index = 0.65, s.d. 0.19), and had normal or corrected-to-normal vision. The protocol was approved by the ethical committee of Fondazione Santa Lucia, Rome, and written informed consent was obtained from each participant before starting the study.
Stimuli
The virtual environment was designed using
Behavior: dissociations in spatial memory across environmental and object-based reference frames
Error rates (Fig. 2A) and response times (Fig. 2B) were analyzed as a function of task (position, color), reference frame (room, objects, viewer), and viewpoint change (0, 45, 135 deg) in a repeated-measures analysis of variance (ANOVA). The analysis on error rates revealed main effects of task (F1,14 = 7.50; p < 0.05), reference frame (F2,28 = 11.67; p < 0.001), and viewpoint change (F2,28 = 13.76, p < 0.0001). The task × reference frame (F2,28 = 22.12; p < 0.0001) and task × viewpoint change (F2,28 =
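The F values above come from a full 2 × 3 × 3 repeated-measures ANOVA. As a minimal sketch of the underlying computation (our illustration, not the authors' analysis pipeline), here is the F statistic for a single within-subject factor, which removes between-subjects variability from the error term:

```python
def rm_anova_oneway(data):
    """One-way repeated-measures ANOVA F statistic.
    data[s][c] = score of subject s in condition c."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    # Between-subjects variability, removed from the error term.
    ss_subj = k * sum((sum(row) / k - grand) ** 2 for row in data)
    # Variability due to the within-subject factor (condition).
    ss_cond = n * sum((sum(row[c] for row in data) / n - grand) ** 2
                      for c in range(k))
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_total - ss_subj - ss_cond
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    return ss_cond / df_cond / (ss_err / df_err), df_cond, df_err

# With two conditions, F equals the squared paired t statistic.
F, df1, df2 = rm_anova_oneway([[1, 2], [2, 4], [3, 6]])
print(F, df1, df2)  # F = 12.0 with df (1, 2)
```

A full factorial analysis like the one reported additionally partitions the within-subject variance into interaction terms, each tested against its own subject-by-factor error term.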
Discussion
Remembering object locations across different views is a fundamental competence for keeping oriented in large-scale space. Compensating for changes in point of view appears to be a specific cognitive ability, relying on a process of mental self-rotation, or perspective transformation, which is distinct from the process of mentally rotating objects. Unlike previous neuroimaging studies, which have compared perspective transformations to mental rotation of objects (e.g., [29]), here we
Conclusions
During the complex process of encoding and updating a spatial location across viewpoint changes, there are regions (RSC and LPHG/PPA) selectively involved in environment-based computations and regions (fronto-parietal areas) whose activity does not discriminate between environmental and object-based reference frames. While both RSC and LPHG/PPA are particularly involved in encoding spatial locations relative to fixed landmarks, only the RSC shows a preference for retrieving locations from a new
Acknowledgments
This experiment was supported by grants from the Italian Ministry of Health – Fondazione Santa Lucia (RC2008-2009) to GG. We are grateful to Mohamed Zaoui for technical support in creating the virtual environment.
References
- et al. A temporoparietal and prefrontal network for retrieving the spatial context of lifelike events. Neuroimage (2001)
- et al. Orientational manoeuvres in the dark: dissociating allocentric and egocentric influences on spatial memory. Cognition (2004)
- Spatial memory: How egocentric and allocentric combine. Trends in Cognitive Sciences (2006)
- et al. Interactions between ego- and allocentric neuronal representations of space. Neuroimage (2006)
- et al. Visuospatial working memory and changes of the point of view in 3D space. Neuroimage (2007)
- et al. Modulation of neural activity by angle of rotation during imagined spatial transformations. Neuroimage (2006)
- et al. Imagined rotations of self versus objects: an fMRI study. Neuropsychologia (2005)
- et al. A dissociation between mental rotation and perspective-taking spatial abilities. Intelligence (2004)
- Parahippocampal and retrosplenial contributions to human spatial navigation. Trends in Cognitive Sciences (2008)
- The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia (1971)
- A selective representation of the meaning of actions in the auditory mirror system. Neuroimage
- A probabilistic atlas of the human brain: theory and rationale for its development. The International Consortium for Brain Mapping (ICBM). Neuroimage
- Separating processes within a trial in event-related functional MRI. Neuroimage
- A critique of functional localisers. Neuroimage
- An area within human ventral cortex sensitive to “building” stimuli: evidence and implications. Neuron
- Switching attention and resolving interference: fMRI measures of executive functions. Neuropsychologia
- The well-worn route and the path less traveled: distinct neural bases of route following and wayfinding in humans. Neuron
- The mind's eye: precuneus activation in memory-related imagery. Neuroimage
- Directional disorientation following left retrosplenial hemorrhage: a case report with fMRI studies. Cortex
- Different roles of the parahippocampal place area (PPA) and retrosplenial cortex (RSC) in panoramic scene perception. Neuroimage
- Pure topographical disorientation: a definition and anatomical basis. Cortex
- Population-average, landmark- and surface-based (PALS) atlas of human cerebral cortex. Neuroimage
- Access to knowledge of spatial structure at novel points of observation. Journal of Experimental Psychology: Learning, Memory, and Cognition
- Object-array structure, frames of reference, and retrieval of spatial knowledge. Journal of Experimental Psychology: Learning, Memory, and Cognition
- Viewpoint dependence in scene recognition. Psychological Science
- Knowing where things are: parahippocampal involvement in encoding object locations in virtual large-scale space. Journal of Cognitive Neuroscience
- Perceiving real-world viewpoint changes. Psychological Science
- Active and passive scene recognition across views. Cognition
- Intrinsic frames of reference in spatial memory. Journal of Experimental Psychology: Learning, Memory, and Cognition
- The human hippocampus and viewpoint dependence in spatial memory. Hippocampus
- Egocentric and geocentric frames of reference in memory of large-scale space. Psychonomic Bulletin & Review
- What is my avatar seeing? The coordination of “out-of-body” and “embodied” perspectives for scene recognition across views. Visual Cognition
- Distinct visual perspective-taking strategies involve the left and right medial temporal lobe structures differently. Brain
- Updating displays after imagined object and viewer rotations. Journal of Experimental Psychology: Learning, Memory, and Cognition
- Transformations of visuospatial images. Behavioral and Cognitive Neuroscience Reviews