
Behavioural Brain Research

Volume 242, 1 April 2013, Pages 62-75

Research report
Selective role of lingual/parahippocampal gyrus and retrosplenial complex in spatial memory across viewpoint changes relative to the environmental reference frame

https://doi.org/10.1016/j.bbr.2012.12.031

Abstract

Remembering object locations across different views is a fundamental competence for staying oriented in large-scale space. Here we investigated this ability by comparing encoding and retrieval of locations across viewpoint changes relative to different spatial frames of reference. We acquired functional magnetic resonance images while subjects detected target displacements across consecutive views of a familiar virtual room, reporting changes in the target's absolute position in the room (stable environmental frame), changes in its position relative to a set of movable objects (unstable object-based frame), and changes relative to their point of view (control viewer-centered frame). Behavioral costs were higher for the stable environmental frame, and a cortical network including the lingual/parahippocampal gyrus (LPHG) and the retrosplenial complex (RSC) selectively encoded spatial locations relative to this frame. Several regions, including the dorsal fronto-parietal cortex and the LPHG, were modulated by the amount of experienced viewpoint change, but only the RSC was selectively modulated by the amount of viewpoint change relative to the environmental frame, thus showing a special role in coding one's own position and heading in familiar environments.

Highlights

► Lingual/parahippocampal gyrus and retrosplenial complex preferentially encode space within stable frames.
► Retrosplenial complex updates locations across viewpoint changes within stable frames.
► Retrosplenial cortex is a crucial area for coding one's own position and heading in familiar environments.
► Parieto-frontal regions update locations independently of frame stability.

Introduction

Human beings have an outstanding capacity to seamlessly recognize scenes and to remember spatial locations despite intervening changes in point of view, such as those occurring when we walk or are passively transported through a familiar environment. This ability, which is fundamental to spatial orientation, can be experimentally measured by asking observers to study a scene from a given viewpoint and then, after a true, imagined, or virtual self-displacement, to detect whether a given object has been moved, or to point to a memorized location [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17]. In principle, people could simply match the studied and the test view through some process of mental rotation. However, memory across viewpoint changes involves a more specific process: people perform better when they (physically or virtually) move in the environment around a stationary object array than when they are still and an object array rotates around them [5], [6], [7], [18], [19], [20], [21], although the two situations appear identical from a geometrical standpoint. This and other lines of evidence [22], [23], [24] show that comparing different views of a scene involves some form of mental self-rotation, which is distinct from the process of mentally rotating objects.

In terms of spatial reference frames, mental rotation of objects requires transforming an object-based frame relative to a fixed egocentric reference, while compensating for occurred viewpoint changes requires transforming an egocentric frame relative to an external (i.e., allocentric) reference (a “perspective transformation”: [21]). In principle, any set of objects can be used to establish an allocentric reference frame against which to evaluate a viewpoint change, and the operations involved in encoding spatial locations and then in updating memorized locations after a perceived viewpoint change may not depend on the particular set of objects chosen. However, human neuroimaging studies have shown not only that perceptual coding in egocentric and allocentric reference frames has distinguishable neural signatures (reviewed in [25]), but also that two different forms of allocentric representations can be distinguished: object-based reference frames, encoding spatial locations relative to arbitrary objects, and environmental reference frames, encoding spatial locations relative to some fixed features of the environment [21], [25]. A particular set of cortical regions is selectively activated when a spatial judgment is referenced to enduring environmental features but not to unstable objects [26]. These regions include portions of the medial temporal cortex and adjacent antero-medial occipital lobe (fusiform, lingual and posterior parahippocampal gyri), the retrosplenial cortex and the precuneus. Their selectivity for environmental referencing depends on the stability of the spatial location of environmental features over time, not on their perceptual features or orienting value [25]. Intracerebral recordings within the posterior medial temporal lobe recently showed that this kind of allocentric selectivity constitutes a separate, later processing stage relative to early visual scene processing [27].

On the basis of the distinction between environmental and object-based allocentric reference frames, we suggest that perspective transformations may be preferentially associated with the former. Indirect evidence for this idea comes from neuroimaging studies which have shown that the cortical regions selective for environmental reference frames are involved in recognizing a scene across different views (see [28] for a recent review) and in perspective vs. object-based transformations [29]. However, previous studies requiring participants to memorize and recall object locations across viewpoint changes have used either stable environmental features [16], [29] or unstable configurations of objects [18], [20]. In the present event-related functional magnetic resonance imaging (fMRI) study, we adapted a viewpoint-change paradigm often used in behavioral research [7], [13] and compared perspective transformations relative to object-based vs. environmental reference frames, under the hypothesis that the neural circuit described above is actively exploited either during memory encoding or when updating memorized locations after a viewpoint change, but only if the established reference frame includes stable environmental features.

We asked observers to encode the spatial location of a target object in a virtual room with respect either to stable, familiar features of the scene (room frame) or to an arbitrary, unstable object set (objects frame). Observers then experienced a certain amount of viewpoint change, with “views” defined either relative to the room or to the object set, and were then asked to judge whether the target was in the same spatial location as before, with “same” locations again defined relative to either the room or the object set. Importantly, the manipulation of the amount of viewpoint change allowed us to test a further prediction: regions actively involved in perspective transformations relative to the environmental reference frame should be selectively modulated by the amount of experienced viewpoint change, but only when this is defined in room-based, not object-relative, coordinates.
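The dissociation between room-frame and viewer-frame “sameness” can be illustrated with a toy geometric sketch. This is not the study's stimulus code; the rotation-about-the-room-center geometry is a simplifying assumption for illustration only:

```python
import math

def viewer_coords(target_xy, viewpoint_deg):
    """Position of a room-fixed target in viewer-centered coordinates,
    assuming the viewer rotates around the room center by viewpoint_deg
    (illustrative simplification, not the study's actual geometry)."""
    th = math.radians(viewpoint_deg)
    x, y = target_xy
    # Rotating the viewer by +th rotates the scene by -th in viewer coordinates.
    return (x * math.cos(th) + y * math.sin(th),
            -x * math.sin(th) + y * math.cos(th))

target = (1.0, 0.0)                 # fixed location in room coordinates
before = viewer_coords(target, 0)   # egocentric position at study view
after = viewer_coords(target, 45)   # egocentric position after a 45 deg change

# Room frame: the target has not moved (its room coordinates are unchanged).
# Viewer frame: the very same target now occupies a different egocentric
# position, so the two frames demand different "same/different" judgments.
```

A stationary target is thus “same” in the room frame but “different” in the viewer frame after any non-zero viewpoint change, which is the dissociation the paradigm exploits.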

Section snippets

Subjects

Fifteen neurologically normal volunteers (all males, mean age 25.4 yrs, s.d. 3.9) participated in the fMRI study. All subjects were right-handed, as assessed by the Edinburgh Handedness Inventory ([30]: mean index = 0.65, s.d. 0.19), and had normal or corrected-to-normal vision. The protocol was approved by the ethical committee of Fondazione Santa Lucia, Rome, and written informed consent was obtained from each participant before starting the study.

Stimuli

The virtual environment was designed using

Behavior: dissociations in spatial memory across environmental and object-based reference frames

Error rates (Fig. 2A) and response times (Fig. 2B) were analyzed as a function of task (position, color), reference frame (room, objects, viewer), and viewpoint change (0, 45, 135 deg) in a repeated-measures analysis of variance (ANOVA). The analysis of error rates revealed main effects of task (F(1,14) = 7.50; p < 0.05), reference frame (F(2,28) = 11.67; p < 0.001), and viewpoint change (F(2,28) = 13.76; p < 0.0001). Also the task by reference frame (F(2,28) = 22.12; p < 0.0001), task by viewpoint change (F(2,28) =
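For readers unfamiliar with the reported degrees of freedom, a minimal repeated-measures ANOVA sketch shows how F(2,28) arises from 15 subjects and a 3-level within-subject factor. The toy data are invented for illustration, and only one factor is modeled, whereas the study analyzed the full task × frame × viewpoint design:

```python
def rm_anova_1way(data):
    """One-way repeated-measures ANOVA.
    data[subject][level] -> (F, df_effect, df_error).
    With n = 15 subjects and k = 3 levels, df = (2, 28), matching
    the F(2,28) statistics reported for the three-level factors."""
    n = len(data)       # subjects
    k = len(data[0])    # within-subject factor levels
    grand = sum(sum(row) for row in data) / (n * k)
    level_means = [sum(row[j] for row in data) / n for j in range(k)]
    subj_means = [sum(row) / k for row in data]
    # Partition total variability: effect + subjects + residual error.
    ss_effect = n * sum((m - grand) ** 2 for m in level_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_error = ss_total - ss_effect - ss_subj
    df_effect = k - 1
    df_error = (k - 1) * (n - 1)
    F = (ss_effect / df_effect) / (ss_error / df_error)
    return F, df_effect, df_error
```

In practice one would use a statistics package rather than hand-rolled sums, but the sketch makes the degrees of freedom explicit: 3 − 1 = 2 for the effect and (3 − 1)(15 − 1) = 28 for the error term.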

Discussion

Remembering object locations across different views is a fundamental competence for staying oriented in large-scale space. Compensating for changes in point of view appears to be a specific cognitive ability, relying on a process of mental self-rotation, or perspective transformation, which is distinct from the process of mentally rotating objects. Unlike previous neuroimaging studies, which compared perspective transformations to mental rotation of objects (e.g., [29]), here we

Conclusions

During the complex process of encoding and updating a spatial location across viewpoint changes, there are regions (RSC and LPHG/PPA) selectively involved in environment-based computations and regions (fronto-parietal areas) whose activity does not discriminate between environmental and object-based reference frames. While both RSC and LPHG/PPA are particularly involved in encoding spatial locations relative to fixed landmarks, only the RSC shows a preference for retrieving locations from a new

Acknowledgments

This experiment was supported by grants from the Italian Ministry of Health – Fondazione Santa Lucia (RC2008-2009) to GG. We are grateful to Mohamed Zaoui for technical support in creating the virtual environment.

References (73)

  • G. Galati et al., A selective representation of the meaning of actions in the auditory mirror system, NeuroImage (2008)
  • J.C. Mazziotta et al., A probabilistic atlas of the human brain: theory and rationale for its development. The International Consortium for Brain Mapping (ICBM), NeuroImage (1995)
  • J.M. Ollinger et al., Separating processes within a trial in event-related functional MRI, NeuroImage (2001)
  • K.J. Friston et al., A critique of functional localisers, NeuroImage (2006)
  • G.K. Aguirre et al., An area within human ventral cortex sensitive to “building” stimuli: evidence and implications, Neuron (1998)
  • C. Sylvester et al., Switching attention and resolving interference: fMRI measures of executive functions, Neuropsychologia (2003)
  • T. Hartley et al., The well-worn route and the path less traveled: distinct neural bases of route following and wayfinding in humans, Neuron (2003)
  • P.C. Fletcher et al., The mind's eye: precuneus activation in memory-related imagery, NeuroImage (1995)
  • T. Ino et al., Directional disorientation following left retrosplenial hemorrhage: a case report with fMRI studies, Cortex (2007)
  • S. Park et al., Different roles of the parahippocampal place area (PPA) and retrosplenial cortex (RSC) in panoramic scene perception, NeuroImage (2009)
  • M. Habib et al., Pure topographical disorientation: a definition and anatomical basis, Cortex (1987)
  • D.C. Van Essen, Population-average, landmark- and surface-based (PALS) atlas of human cerebral cortex, NeuroImage (2005)
  • J.J. Rieser, Access to knowledge of spatial structure at novel points of observation, Journal of Experimental Psychology: Learning, Memory, and Cognition (1989)
  • R.D. Easton et al., Object-array structure, frames of reference, and retrieval of spatial knowledge, Journal of Experimental Psychology: Learning, Memory, and Cognition (1995)
  • V.A. Diwadkar et al., Viewpoint dependence in scene recognition, Psychological Science (1997)
  • E.A. Maguire et al., Knowing where things are: parahippocampal involvement in encoding object locations in virtual large-scale space, Journal of Cognitive Neuroscience (1998)
  • D.J. Simons et al., Perceiving real-world viewpoint changes, Psychological Science (1998)
  • D.J. Simons et al., Active and passive scene recognition across views, Cognition (1999)
  • R.F. Wang et al., Active and passive scene recognition across views, Cognition (1999)
  • W. Mou et al., Intrinsic frames of reference in spatial memory, Journal of Experimental Psychology: Learning, Memory, and Cognition (2002)
  • J.A. King et al., The human hippocampus and viewpoint dependence in spatial memory, Hippocampus (2002)
  • T.P. McNamara et al., Egocentric and geocentric frames of reference in memory of large-scale space, Psychonomic Bulletin & Review (2003)
  • M.-A. Amorim, What is my avatar seeing? The coordination of “out-of-body” and “embodied” perspectives for scene recognition across views, Visual Cognition (2003)
  • S. Lambrey et al., Distinct visual perspective-taking strategies involve the left and right medial temporal lobe structures differently, Brain (2008)
  • M. Wraga et al., Updating displays after imagined object and viewer rotations, Journal of Experimental Psychology: Learning, Memory, and Cognition (2000)
  • J.M. Zacks et al., Transformations of visuospatial images, Behavioral and Cognitive Neuroscience Reviews (2005)