Elsevier

Acta Psychologica

Volume 116, Issue 2, June 2004, Pages 185-203

Action priming by briefly presented objects

https://doi.org/10.1016/j.actpsy.2004.01.004

Abstract

Three experiments investigated how visual objects prime the actions they afford. The principal concern was whether such visuomotor priming depends upon a concurrent visual input––as would be expected if it is mediated by on-line dorsal system processes. Experiment 1 showed there to be essentially identical advantages for making afforded over non-afforded responses whether these were made to objects still in view or following brief (30 or 50 ms) object exposures that were backward masked. Experiment 2 showed that affordance effects were also unaffected by stimulus degradation. Finally, Experiment 3 showed there to be statistically equivalent effects from images of objects and their names. The results suggest that an active object representation is sufficient to generate affordance compatibility effects based on associated actions, whether or not the object is concurrently visible.

Introduction

How do objects come to activate the patterns of motor activity associated with their affordances? In this paper we aim to clarify the results of earlier experiments showing the activation of affordances by visual objects. In particular, we examine whether an on-line visual object must be present in order to activate the motor patterns associated with its affordances.

There is a growing body of evidence that the observation of an object merely to categorise it, or comprehend it, is sufficient to partially activate motoric representations (Chao & Martin, 2000; Gerlach et al., 2002; Grafton et al., 1997; Grèzes & Decety, 2002; Grèzes, Tucker, Armony, Ellis, & Passingham, 2003; Martin, Wiggs, Ungerleider, & Haxby, 1996). Assuming that this activation, in motor and motor related areas of the brain, relates to the actions associated with the object, one would expect to be able to observe behavioural effects on actions. (This assumes, in addition, that this motor and visuomotor activity involves the same, or similar, neural systems to those involved in planning and executing a real action––a plausible assumption given, for instance, the evidence for the similarity between the brain systems underlying both vision and visual imagery (Kosslyn, 1994) and action and motor imagery (Jeannerod, 1994).) There is increasing evidence for just such effects: The execution of an action whilst viewing a manipulable object is affected by the congruency of the action to the object (Craighero, Fadiga, Rizzolatti, & Umilta, 1999; Ellis & Tucker, 2000; Klatzky, Fikes, & Pellegrino, 1995; Riddoch, Edwards, Humphreys, West, & Heafield, 1998; Tucker and Ellis, 1998, Tucker and Ellis, 2001). If one also includes the relations between object location and effector position amongst this evidence, it is considerably broadened by the extensive literature on the Simon effect (for a review see Hommel, 1995; Kornblum, Hasbroucq, & Osman, 1990).

The state of the motor system also affects the visual system. Preparing to grasp an object results in enhanced processing of similarly shaped and oriented objects (Craighero et al., 1999). The classic mental rotation task (Shepard & Metzler, 1971) is also executed faster and more accurately when people concurrently perform a manual rotation in the same direction as the required `mental' rotation (Wexler, Kosslyn, & Berthoz, 1998).

The premotor cortex and related areas become activated when we look at manipulable objects, and this activity itself is part of a reciprocal system that in turn influences the way we attend to, and parse, objects in a scene (see e.g. Handy, Grafton, Shroff, Ketay, & Gazzaniga, 2003). Under exactly what conditions, and by what routes, this kind of motor area activity (which might contribute to the representation of object affordances) arises is undetermined at present. We have previously suggested that the automatic and dedicated nature of the visuomotor networks within the dorsal stream (Milner & Goodale, 1993) would make this a good candidate for generating affordance-based representations within the motor and visuomotor areas (Tucker and Ellis, 1998, Tucker and Ellis, 2001). Thus even when an object is not, or is not yet, a target for action, attending to it could partially activate motor representations appropriate to reaching, grasping and manipulating it. It is partly the automatic nature of these transformations that makes this system suitable for such a role. For example, spatial perturbations during reaching automatically result in on-line trajectory changes to the new target position even when participants are required to abort a reach when such perturbations occur (Pisella et al., 2000). In contrast, the same authors found that target perturbations defined by a colour change did not invoke this automatic tendency to rapidly adjust and continue a reaching movement.

The dorsal system has been framed as a network dedicated to transforming visual information into motor output with minimal influence from other (e.g. ventral) systems (Goodale & Milner, 1992; Milner & Goodale, 1993). Goodale and Humphrey (1998) suggest that one of the roles of the ventral system is to direct the dorsal system to a suitable target. Once so directed, the target object's parameters will be transformed into motor output automatically, and with minimal influence from the ventral system. Their theory does not deny ventral influence but assigns it essentially a `steering' role. Functional knowledge about the appropriate part of a tool to grasp, for instance, could be used merely to direct the dorsal system to that part of the object. Indeed, where knowledge is required to direct the dorsal system to an appropriate object part then concurrent tasks which tax the semantic system also disrupt the accurate steering of the dorsal system (see Creem & Proffitt, 2001). This naturally leads to the question of how much long-term object–action knowledge contributes to the generation of affordances, and whether the motor patterns underlying this knowledge can be activated without the need for any `on-line' dorsal processing of a currently visible object. The particular specialisation of the dorsal system is the on-line control of an unfolding action (see Glover (in press) for a recent review based on the control–planning distinction and its relation to the dorsal–ventral systems). The monitoring of object properties to direct an action needs to be computed quickly and very precisely––not broad `categories of action' but detailed and constantly updated spatio-temporal instructions to, for example, guide the fingers and thumb to appropriate locations on an object's surface. Necessarily, viewpoint dependent object properties such as location and orientation must be computed on-line as this information is subject to continuous variation as we (or the target) move. 
This is not to say, however, that we do not retain any spatial information off-line. We can reach for and navigate around objects based on memory (albeit relatively poorly and cautiously). In fact, spatial information that is temporarily bound to an object has been shown to exert correspondence effects on manual responses. Hommel (2002) showed that reactivating a single target from a multi-object display––by cueing its colour up to several seconds after the display had been masked––yielded reliable spatial compatibility effects (see also Tlauka & McKenna, 1998).

More intrinsic properties, such as actual object size, are more complex. Whilst the proximal stimulus varies with distance, the actual size of the object does not––allowing it to be meaningfully associated with the object together with higher level functional knowledge. Thus we know that a grape is small and requires a particular kind of grasp (including preserved visuomotor knowledge about the force requirements; Flanagan et al., 2001). Whilst the precise guidance of the thumb and index finger during prehension will rely more and more on the specialised control circuits within the dorsal stream (Glover, in press), our knowledge of the object allows information about the type of grasp to be available before precise parameterisation takes place.

For a property such as object size there are thus at least two routes to the activation of its affordance for a particular grasp: an on-line route––relying on immediate visual input and based on the physical stimulus size––with little influence of higher level knowledge (although the two will naturally covary), and a `knowledge route' based on the semantics of the object and the history of past interactions. Within the latter category further subdivision is possible. For example Humphreys and colleagues (Phillips, Humphreys, Noppeney, & Price, 2002; Riddoch, Humphreys, & Price, 1989; Rumiati & Humphreys, 1998) have developed a theory of routes to action that incorporates both direct vision-to-action and mediated vision-to-semantics-to-action routes. For instance Rumiati and Humphreys (1998) showed that under time-pressured conditions subjects were more likely to make miming errors based on the visual attributes of an object (i.e. via the direct vision-to-action route) when cued by pictures, but to make errors based on the semantic associates of the object when miming to words. Neither the direct nor the indirect route in this model is based on the kind of fast and automatic processing within the dorsal system. The direct route here involves a direct linkage between stored associations of visual attributes and particular actions (including high level actions dependent upon knowledge of object function). Patients with ideomotor apraxia also reveal specific deficits in the long-term component of object action knowledge. They are, for example, better at producing appropriate hand shapes to novel objects (as these rely only upon dorsal or `on-line' information about shape) than to familiar objects, where functional use has been able to modify the style of manipulation away from that which would be derived from their physical properties alone (Buxbaum, Sirigu, Schwartz, & Klatzky, 2003; see also Sirigu et al., 1995).

In a previous study of affordances Tucker and Ellis (2001) employed a Go–NoGo paradigm to examine the time-course of grasp compatibility effects to pictures of objects that would normally be grasped either with precision or whole-hand grips. We found that affordance-based compatibility effects tended to become larger the longer the visual object was present and to rapidly diminish once it disappeared. The rapid disappearance, following object offset, is consistent with the `on-line' or dorsal route for affordance extraction, as this would rely on on-line computations that are transient and rapidly updated. Terminating the visual object also terminates the visuomotor transformations responsible for activating the compatible or incompatible motor response. We therefore concluded that it was most likely the on-line dorsal system that was responsible for the compatibility effects obtained. A recent fMRI study of the same paradigm showed a correlation between the size of the compatibility effect across subjects and the amount of activation in a left parietal network that included the dorsal premotor cortex (Grèzes et al., 2003). Such activation is consistent with, but does not by itself establish, an origin in the on-line dorsal route.

The current experiments attempted to re-examine grasp affordance effects under time-course conditions that would enable or disable the availability of an on-line visual object during response selection and execution. A critical feature of the previous experiments was the fact that response selection was not cued by the visual objects (whose category determined whether the trial was a Go or NoGo trial) but by a high or low pitched tone. When the tone onset triggered the offset of the visual object any grasp affordance effects were dramatically reduced, and they were completely abolished when the tone occurred after the visual object had already been offset. Whilst we interpreted this rapid decay as broadly consistent with the motor patterns being activated via the dorsal route, these results could not rule out alternative explanations.

Section snippets

Experiment 1

Our previous observations of the grasp compatibility effect were obtained with a minimum stimulus exposure time of 300 ms. In the current experiment precision and whole-hand grasp responses were elicited to objects exposed within the 20–300 ms range with and without backward masking by a low spatial frequency noise stimulus. A previous pilot experiment found evidence for grasp compatibility effects after brief (20–80 ms) masked stimulus presentations. Under such conditions the target object

Experiment 2

The data from Experiment 1 provide no reason to suppose that the on-line presence of a visual object alters the pattern or magnitude of grasp based compatibility. In this experiment we included two further manipulations within the visible (i.e. long-term presentation) condition. The procedure was similar to the previous experiments. There were four presentation conditions––a brief exposure condition where the stimuli were presented for 50 ms but without masking (50), a long-term presentation

Experiment 3

Whether or not a graspable object is visible during response selection and execution has relatively little impact on the affordance effect elicited––as long as the object itself can be reliably identified. Similarly, degrading the viewing conditions, or adding a further visual stimulus that occludes the object and represents an obstruction to the affordance, makes little difference to the effect produced. This all points to there being a sufficient locus of the effect in the object

General discussion

Experiment 1 showed that there was little evidence to support the notion that the on-line presence of a visual object was necessary to produce compatibility effects from an object's affordance for action. Once the combined conditions of masking and stimulus duration allowed reliable object identification the compatibility effect remained constant. There was almost no evidence that increased viewing times increased the size of the effect.

Experiment 2 examined the effect under two additional

Acknowledgements

This work was supported by ESRC grant R222709 held by R. Ellis. We would like to thank Bernhard Hommel, Wilfried Kunde and an anonymous reviewer for helpful comments concerning the article.

References (40)

  • M Tlauka et al.

    Mental imagery yields spatial stimulus–response compatibility

    Acta Psychologica

    (1998)
  • M Wexler et al.

    Motor processes in mental rotation

    Cognition

    (1998)
  • H Chainay et al.

    Privileged access to action for objects relative to words

    Psychonomic Bulletin & Review

    (2002)
  • L Craighero et al.

    Action for perception: a motor-visual attentional effect

    Journal of Experimental Psychology: Human Perception and Performance

    (1999)
  • S.H Creem et al.

    Grasping objects by their handles: a necessary interaction between cognition and action

Journal of Experimental Psychology: Human Perception and Performance

    (2001)
  • M Eimer et al.

    Effects of masked stimuli on motor activation: behavioral and electrophysiological evidence

    Journal of Experimental Psychology: Human Perception and Performance

    (1998)
  • R Ellis et al.

    Micro-affordance: the potentiation of components of action by seen objects

    British Journal of Psychology

    (2000)
  • Glover, S. (in press). Separate visual representations in the planning and control of actions. Behavioral and Brain...
  • J Grèzes et al.

    Objects automatically potentiate action: an fMRI study of implicit processing

    European Journal of Neuroscience

    (2003)
  • T.C Handy et al.

    Graspable objects grab attention when the potential for action is recognized

    Nature Neuroscience

    (2003)