
Neural Correlates of Own Name and Own Face Detection in Autism Spectrum Disorder

  • Hanna B. Cygan,

    Affiliation Nencki Institute of Experimental Biology, Department of Neurophysiology, Laboratory of Psychophysiology, Warsaw, Poland

  • Pawel Tacikowski,

    Affiliations Nencki Institute of Experimental Biology, Department of Neurophysiology, Laboratory of Psychophysiology, Warsaw, Poland, Karolinska Institute, Department of Neuroscience, Brain, Body and Self Laboratory, Stockholm, Sweden

  • Pawel Ostaszewski,

    Affiliation University of Social Sciences and Humanities, Department of Psychology, Warsaw, Poland

  • Izabela Chojnicka,

    Affiliation Medical University of Warsaw, Department of Medical Genetics, Warsaw, Poland

  • Anna Nowicka

    a.nowicka@nencki.gov.pl

    Affiliation Nencki Institute of Experimental Biology, Department of Neurophysiology, Laboratory of Psychophysiology, Warsaw, Poland

Abstract

Autism spectrum disorder (ASD) is a heterogeneous neurodevelopmental condition clinically characterized by social interaction and communication difficulties. To date, the majority of research efforts have focused on brain mechanisms underlying the deficits in interpersonal social cognition associated with ASD. Recent empirical and theoretical work has begun to reveal evidence for a reduced or even absent self-preference effect in patients with ASD. One may hypothesize that this is related to the impaired attentional processing of self-referential stimuli. The aim of our study was to test this hypothesis. We investigated the neural correlates of face and name detection in ASD. Four categories of face/name stimuli were used: own, close-other, famous, and unknown. Event-related potentials were recorded from 62 electrodes in 23 subjects with ASD and 23 matched control subjects. P100, N170, and P300 components were analyzed. The control group clearly showed a significant self-preference effect: higher P300 amplitude to the presentation of own face and own name than to the close-other, famous, and unknown categories, indicating preferential attentional engagement in processing of self-related information. In contrast, detection of both own and close-other's face and name in the ASD group was associated with enhanced P300, suggesting similar attention allocation for self and close-other related information. These findings suggest that attention allocation in the ASD group is modulated by the personal significance factor, and that the self-preference effect is absent if self is compared to close-other. These effects are similar for physical and non-physical aspects of the autistic self. In addition, lateralization of face and name processing is attenuated in ASD, suggesting atypical brain organization.

Introduction

Autism spectrum disorder (ASD) is a heterogeneous neurodevelopmental disorder which, according to various sources, affects from 1 in 160 children (WHO: www.who.int) to 1 in 88 (CDC: www.cdc.gov). The main clinical hallmarks of ASD include impairments in social functioning and communication, the presence of stereotyped repetitive behaviors, and a highly restricted scope of interests. The precise neuropathophysiology of ASD is still unclear [1], [2].

While previous research has largely focused on deficits in interpersonal (social) interaction in ASD [3]–[6], the current approach emphasizes the need to also understand alterations in intrapersonal (self-referential) cognition [7], [8]. This need seems fully justified, as the term ‘autism’ (derived from the Greek word ‘autos’, meaning ‘self’) was first applied by Kanner to describe young patients from his clinic who were extremely self-focused [9]. Recently, Lombardo and Baron-Cohen [10] proposed that individuals with ASD can be both egocentric and impaired in self-referential cognition. The authors also pointed to the atypical neural circuitry underlying the processing of self-relevant information in ASD. In addition, Rogers and Pennington [11] suggested that a disturbed process of forming and coordinating representations of the self and the other may be linked to specific deficits observed in ASD.

Despite distinct methodological approaches and operationalizations of self-concept, many studies on the autistic self consistently point to a lack of differences between representations of self and other [8], [12]–[14]. For example, in the event-related potential (ERP) study by Gunji et al. [12], children with pervasive developmental disorder (PDD, a category that includes ASD) passively viewed their own, familiar, and unfamiliar faces. Their ERP response to the own face did not differ from ERP responses to familiar and unfamiliar faces, whereas in typically developing participants ERPs were enhanced in the self-face condition in comparison to the familiar-face condition. Parallel effects were found by Lombardo et al. [8] in a study using functional magnetic resonance imaging (fMRI). They asked individuals with ASD and control participants to make reflective mentalizing or physical judgments about themselves and an “other” (the British Queen). In ASD participants, the self and other conditions resulted in similar ventromedial prefrontal cortex activations, and the middle cingulate cortex responded even more strongly to other-mentalizing than to self-mentalizing. In contrast, neurotypical individuals preferentially recruited those regions in response to self-referential compared to other-referential processing. Reduced or even absent self-preference effects in ASD participants were also reported in studies on the self-reference effect in memory (i.e., enhanced memory for stimuli encoded in reference to oneself). In the study by Henderson et al. [13], participants read a list of words and decided whether each word described something about themselves, something about Harry Potter, or contained a certain number of letters. In the following session, subjects were asked to recognize the previously presented words within a longer list. Consistent with previous studies [7], [14], ASD subjects showed a reduced or absent self-reference effect.

The aforementioned studies revealed significant alterations in self-related information processing in ASD individuals and in the associated neuronal circuitry. These alterations may be viewed in the light of the absent-self hypothesis [15]–[21] and the impaired I-concept hypothesis [22]. The absent-self hypothesis proposes that a specific kind of higher-order self-awareness, possibly involved in top-down control, may be missing in autism. The second hypothesis posits that the development of the I-concept in patients with autism is disturbed or even absent. Specifically, Glezerman [22] pointed to the impairment of the ‘symbolic’ self, which is developed in the neurotypical population through lifetime experience and enables the perception of self as unique and separate from others.

One may hypothesize, however, that explanations referring to attentional processes are also plausible. The impaired self-preference observed in many ASD studies may be related to weaker engagement of attentional resources in the processing of self-related information. It is well documented that in the typically developing population, stimuli referring to one's own person attract attention automatically [23] and are selectively detected among other stimuli in the environment. A good example is the so-called ‘cocktail party’ effect: even when engaged in another cognitive task, a person can still detect their own name in the unattended ear or visual field [24]–[27]. Moreover, one's own name is particularly resistant to the attentional blink [28], and is preferentially processed even without reaching conscious awareness [29]. Studies on the neural processing of self-related cues (one's own name or face) generally support this notion of preferential attention allocation [30]–[36].

Thus, the question arises whether attention allocation for self-related stimuli is also disturbed in ASD patients. In order to answer this question, the present study investigated detection of one's own face and one's own name in ASD participants and matched control subjects. We decided to use a simple detection task because such tasks do not require any intentional discrimination between presented stimuli, engage attention automatically, and require the same motor reaction (i.e., pressing the same button) for all stimuli. As a result, any differences between stimuli, conditions, or groups can plausibly be attributed to differential engagement of attentional processes. It is important to note that detection is an obligatory initial stage in the processing of any incoming stimulus. One may suppose that any impairment present at this stage of information processing determines alterations at later stages.

Up to now, no studies have compared the processing of the own face and the own name in the same group of ASD participants using the same experimental paradigm. Such a comparison would enable us to address Uddin's hypothesis [37], which states that ‘physical’ aspects of the autistic self are less disturbed than ‘psychological’ (i.e., non-physical) aspects. Whereas the self-face directly refers to the ‘physical self’, the self-name refers rather to the ‘non-physical self’. If there is an atypical pattern of attention allocation for all self-related stimuli in ASD, similar results for names and faces should be observed; otherwise, alterations might be observed for one type of stimulus only.

It is noteworthy that little is known about the neural processing of one's own name in ASD. This is quite surprising given that names are highly relevant stimuli in the context of communication, i.e., the domain which is clearly impaired in this clinical group. To the best of our knowledge, there is only one published study on the neural basis of own-name processing in ASD. Carmody et al. [38] compared neural correlates of the processing of one's own name, numbers, and the word ‘Hello’ in one ASD patient. The own name was associated with activations in the right frontal medial and middle gyri. Interestingly, self-name processing typically results in increased medial prefrontal cortex activation in normal populations [39]–[42]. However, because it was a single-case study and because the subject was sedated during scanning, the results need to be treated with caution. Thus, the investigation of own-name processing in ASD is of interest per se and may be viewed as a novel contribution to the field of ASD research.

In our study, control conditions consisted of names and faces belonging to three categories: unfamiliar, famous, and related to a significant other. The latter category was introduced because the ‘me’ vs. ‘not-me’ distinction may be modulated by the level of familiarity of the person used in the self-other comparison; it might be stronger for a distant (not personally known and significant) other and weaker for a close other. Such modulation was reported in our previous studies of neurotypical populations [42], [43]. The name and face of the close-other share many features with the own name and face: their emotional load is very high, they are very familiar, and they are encountered extremely often in everyday life. Thus, attention allocation for self- and close-other-related information may be similar, and processing of information related to the close-other may resemble processing of self-related information.

The goal of this ERP study is to investigate the neural correlates of name and face detection in ASD. Specifically, we aim to verify our hypothesis stating that attention allocation for self-related stimuli is disturbed in ASD patients. This hypothesis would be confirmed if the self-preference effect expected in the control group is absent in the ASD group. Moreover, we are interested in whether the attentional involvement in detection of stimuli related to the ‘physical’ self (i.e., the own face) differs from attentional engagement in detection of stimuli related to the ‘non-physical’ self (i.e., the own name). Showing that the processing of own face is less disturbed than processing of own name would be supportive of Uddin's hypothesis [37].

The ERP method was chosen because it provides insights into the neural mechanisms that underlie covert cognitive processing that may not be evident in overt behaviors. Therefore, this method is particularly helpful when there might be no difference in a measured behavior between groups despite the supposition that the underlying neural substrate of that behavior may be different [44]. Amplitudes and latencies of the following ERP components were analyzed: P100 (a positive component peaking approximately 100 ms after the stimulus onset), N170 (a negative deflection reaching its maximum 170 ms after the stimulus onset), and P300 (an ERP component starting around 300 ms after the stimulus onset).

P100 and N170 components reflect exogenous processes modulated by the physical attributes of stimuli but not by cognitive processes [45]. Herrmann and Knight [46] proposed that these components are related to attention processes operating at an early stage and influencing stimulus processing at later stages. Many studies have shown that P100 reflects a facilitation of early sensory processing of attended stimuli [47], [48], and it may serve as a marker of early stimulus-driven attention allocation [49], [50]. In addition, it has also been proposed that the P100 component may serve as a sign of processing effort [51]. In other words, the higher the P100 amplitude (and/or the longer its latency), the stronger the need for engagement of brain resources.

In typical adults, the N170 component is related to early-stage encoding of faces [52], [53]. The N170 is maximal over posterior areas and is faster and larger in response to face stimuli compared to non-face stimuli [53]. The N170 has also been shown to be sensitive to face inversion [52]. Some studies revealed that it is affected by face familiarity, whereas others did not find such an effect [52]–[55]. It is now generally acknowledged that the N170 represents the analysis of structural information of faces [53], [56]–[59]. Importantly, the N170 is also elicited by other stimuli whose processing requires expertise, and in the case of names it has been associated with word-form analysis [43], [60].

The P300 component, in turn, has mainly been associated with the processes of attention and is often treated as an index of the ability to sustain attention on targets [61]. Attention allocation reflected in P300 is independent of stimulus modality and is influenced by the familiarity factor [34], [36], [43]. The P300 also seems to vary with the emotional value of the stimulus: emotionally charged stimuli (regardless of their valence) produce larger P300 than neutral ones [62], [63]. There is still much debate on the underlying generator(s), but the prevailing opinion is that multiple neural sources contribute to the P300 [64].

These ERP components (i.e., P100, N170, P300) were observed and analyzed in previous studies on the processing of faces and names in the typically developing population [31], [32], [36], [65]–[74]. However, in the case of adult individuals with ASD, studies on face processing (there is no ERP study on name processing in ASD) reported findings mainly related to the P100 and N170 components. It has been demonstrated that adults with autism had delayed P100 and N170 latencies and lower N170 amplitudes for faces [75]. Other studies confirmed longer N170 latency in response to face stimuli in individuals with ASD, but no significant effects for P100 and N170 amplitude or P100 latency were identified [76], [77]. In the recent Webb et al. study [78], no group differences in early ERP correlates of attention (P100) and structural face processing (N170) were found, suggesting that the P100 and N170 responses to upright faces in adults with ASD can resemble those seen in controls.

The majority of ERP studies on own-name and own-face processing report the self-preference effect in amplitudes of P300 in the typically developing population [31], [32], [36], [72]–[74] (but see [79]). While P100 and P300 are the main candidates to differentiate attentional processes involved in the detection of names and faces in the ASD group and the control group, only the latter component is associated with both the self-preference effect and attention allocation. Therefore, we expect that amplitudes of P300 in the ASD group will reflect a plausible impairment of attentional processes involved in the processing of self-related stimuli. Such impairment should be manifested as a lack of differences between P300 amplitudes in the self vs. other conditions and may be influenced by the personal relevance of ‘the other’.

Methods

Ethics statement

The experimental protocol was approved by the Bioethics Committee of Warsaw Medical University (Warsaw, Poland). Informed written consent was obtained prior to the study from all participants and their legal caregivers.

Participants

Twenty-three adolescents and young adults with ASD and 23 control subjects participated in this study (age range 17–27 years). ASD subjects were recruited from the SYNAPSIS Foundation, which provides diagnosis and therapy for people with ASD. The subjects' IQs were evaluated using the Polish adaptation of the Wechsler Adult Intelligence Scale – Revised (WAIS-R) [80]. The control group was matched in terms of age, handedness, and IQ score (see Table 1). ASD subjects were clinically diagnosed by psychiatrists prior to the experiment, and the clinical diagnosis was confirmed using standardized tests: the Autism Diagnostic Observation Schedule (ADOS) and the Autism Diagnostic Interview-Revised (ADI-R) (see Table 1). Handedness was confirmed with the Edinburgh Inventory [81]. Subjects had normal or corrected-to-normal vision. All subjects were financially compensated for their participation in the experiment.

Table 1. Participants' characteristics in the ASD group (age, handedness, IQ, ADOS, and ADI-R scores) and in the control group (age, handedness, and IQ scores).

https://doi.org/10.1371/journal.pone.0086020.t001

Stimuli

Faces and names (first and last names) were presented visually on a computer screen in two separate sessions. The order of the two sessions was randomized between subjects: half started with the name-detection task and the other half with the face-detection task.

In the face-detection session, grey-scaled images of faces were presented against a black background. All photos were extracted from the original background using Adobe Photoshop CS5® software (Adobe Systems Incorporated), so that only the face, ears, and hair were visible. Faces belonged to four categories: (1) the subject's own, (2) the close-other's, (3) a famous person's (e.g., an actor), and (4) an unknown face. A face from each category was presented 32 times. The photos of famous and unknown people were downloaded from the internet. The luminance of the pictures was matched to the intensity statistics of one reference image, eliminating possible low-level differences between stimuli. The size of the face stimuli ranged from 6°×6° to 6°×5° and did not differ between categories or groups.
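The luminance-matching step can be illustrated with a short script. Below is a minimal sketch, not the authors' Photoshop workflow: it assumes grayscale images loaded with Pillow and rescaled so that their mean and standard deviation match those of one reference image; all file names are placeholders.

```python
import numpy as np
from PIL import Image

def match_luminance(img_path, ref_mean, ref_std, out_path):
    """Rescale a grayscale image so its mean/std match the reference statistics."""
    img = np.asarray(Image.open(img_path).convert("L"), dtype=float)
    z = (img - img.mean()) / img.std()                    # standardize pixel intensities
    matched = np.clip(z * ref_std + ref_mean, 0, 255)     # impose reference mean/contrast
    Image.fromarray(matched.astype(np.uint8)).save(out_path)

# Statistics taken from a single reference image, as described above
ref = np.asarray(Image.open("reference_face.png").convert("L"), dtype=float)
for name in ["own.png", "close_other.png", "famous.png", "unknown.png"]:
    match_luminance(name, ref.mean(), ref.std(), "matched_" + name)
```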

Names were written in white capital letters and presented against a black background. The categories of names were analogous to the categories of faces: (1) the subject's own, (2) the close-other's, (3) a famous person's (e.g., an actor), and (4) an unknown name. A name from each category was presented 32 times. The size of the name stimuli ranged from 3°×6° to 3°×9° and did not differ between categories or groups. Stimuli in both series were presented in pseudo-random order, so that no more than three stimuli of the same category or type were presented consecutively.
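The pseudo-random ordering constraint (no more than three consecutive stimuli of the same category) can be produced by simple rejection sampling. The sketch below is purely illustrative and assumes 32 repetitions of each of the four categories.

```python
import random

def constrained_order(categories=("own", "close_other", "famous", "unknown"),
                      n_per_category=32, max_run=3, seed=None):
    """Shuffle trials until no category occurs more than max_run times in a row."""
    rng = random.Random(seed)
    trials = [c for c in categories for _ in range(n_per_category)]
    while True:
        rng.shuffle(trials)
        # every window of max_run + 1 consecutive trials must contain > 1 category
        if all(len(set(trials[i:i + max_run + 1])) > 1
               for i in range(len(trials) - max_run)):
            return trials

face_order = constrained_order(seed=1)   # one order per session/participant
```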

The set of all stimuli was individually tailored. Different famous and unknown faces/names were chosen for each subject to match the gender of the faces and the length of the own and close-other's names. Names and faces of analogous categories referred to the same person. Before the experiment, each participant was asked to confirm that they knew the famous person and did not know the unknown name. No restriction was placed on the subjects' choice of the close-other, because we wanted to avoid a situation in which a predefined ‘close-other’ is not really close to the subject. In the ASD group, 16 participants chose their parent, three their sibling, three their grandmother, and one their best friend. In the control group, seven participants chose their parent, four their sibling, three their best friend, and nine their girlfriend.

Experimental procedure

Stimuli were displayed in central vision on a 19-inch NEC MultiSync LCD 1990Fx monitor. Presentation® software (Neurobehavioral Systems, Albany, CA, USA) was used for stimulus presentation and measurement of the subjects' responses. The participants were seated in an acoustically and electrically shielded dark room at a distance of 60 cm from the computer monitor. The subjects performed a simple detection task: they were to respond to each stimulus as quickly as possible by pressing the same button with their index finger on a Cedrus response pad (RB-830, San Pedro, USA).

After reading the instructions displayed on the computer screen, each session began with the participant completing a training session in which feedback was displayed (i.e., “correct”, “response too slow”). During this session, stimuli from each category were presented twice. After successful completion, subjects began the actual study.

The sequence of events in each trial was as follows: presentation of a fixation point (a white “+” against a black background) for 100 ms, a blank screen for 300 to 1200 ms, and a target item displayed for 500 ms. The onset of the next trial was driven by the subject's response and occurred 2000 ms after the response button was pressed. Following the first session, the second one (preceded by its training session) was initiated by the subject by pressing a response button. Each session lasted about 7 minutes.
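For illustration only, this trial timeline (100 ms fixation, 300–1200 ms blank, 500 ms target, next trial about 2 s after the response) can be approximated in PsychoPy. The original experiment was programmed in Presentation®, so the sketch below is a re-implementation under assumptions: a keyboard key stands in for the Cedrus button, the inter-trial interval is simplified to a fixed 2 s gap, and the image file is a placeholder.

```python
import random
from psychopy import visual, core, event

win = visual.Window(fullscr=True, color="black", units="deg", monitor="testMonitor")
fixation = visual.TextStim(win, text="+", color="white")
clock = core.Clock()

def run_trial(stim):
    """One detection trial; returns the response time in seconds (or None)."""
    fixation.draw(); win.flip(); core.wait(0.100)            # fixation point, 100 ms
    win.flip(); core.wait(random.uniform(0.300, 1.200))      # blank screen, 300-1200 ms
    stim.draw(); win.flip()                                   # target onset
    clock.reset(); event.clearEvents()
    rt = None
    while clock.getTime() < 1.0:                              # collect responses up to 1 s
        if clock.getTime() >= 0.500:
            win.flip()                                        # remove target after 500 ms
        keys = event.getKeys(keyList=["space"], timeStamped=clock)
        if keys and rt is None:
            rt = keys[0][1]                                   # latency of the first press
    core.wait(2.0)   # simplified: fixed gap instead of 2000 ms after the button press
    return rt

face = visual.ImageStim(win, image="matched_own.png", size=(6, 6))  # ~6 deg of visual angle
rt = run_trial(face)
```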

EEG recordings

EEG was continuously recorded from 62 scalp sites using a 136-channel amplifier (QuickAmp, Brain Products, Enschede, the Netherlands) and BrainVisionRecorder® software (Brain Products, Munich, Germany). Ag-AgCl electrodes were mounted on an elastic cap (ActiCAP, Munich, Germany) and positioned according to the extended 10–20 system. Electrode impedance was kept below 5 kΩ. The EEG signal was recorded against an average of all channels calculated by the amplifier hardware. The sampling rate was 500 Hz.

Behavioral data analysis

Responses were scored as correct if the button was pressed within 150–1000 ms after the stimulus onset. Response times (RTs) were analyzed using mixed-model ANOVA, with the following factors: group (ASD, control), type (face, name), and category (own, close-other, famous, and unknown). RTs were averaged across correct trials only.
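As a sketch of this scoring step (the file and column names are placeholders, not the authors' actual pipeline), the 150–1000 ms window and the per-condition averaging could be implemented as follows before the condition means enter the mixed-model ANOVA.

```python
import pandas as pd

# One row per trial; assumed columns: subject, group, type ('face'/'name'),
# category ('own'/'close-other'/'famous'/'unknown'), rt in seconds (NaN = no response)
trials = pd.read_csv("detection_trials.csv")

# A response counts as correct only if it falls 150-1000 ms after stimulus onset
trials["correct"] = trials["rt"].between(0.150, 1.000)

# Mean RT over correct trials only, per subject and condition (input to the group ANOVA)
mean_rt = (trials[trials["correct"]]
           .groupby(["subject", "group", "type", "category"], as_index=False)["rt"]
           .mean())
```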

ERP analysis

Off-line analysis of the EEG signal was performed using BrainVisionAnalyzer® software (Brain Products, Gilching, Germany). The first step was the application of Butterworth zero-phase filters: high-pass – 0.1 Hz, 12 dB/oct; low-pass – 30 Hz, 12 dB/oct; and a 50 Hz notch filter. Next, we corrected ocular artifacts using Independent Component Analysis (ICA) [82]. After the decomposition of each data set into maximally statistically independent components, components representing eye blinks were identified based on visual inspection of the component maps [83] and rejected. Ocular-artifact-free EEG data were obtained by back-projecting the remaining ICA components using the reduced component-mixing matrix. Then, the EEG signal was segmented into epochs extending from 100 ms before to 1000 ms after the stimulus onset (baseline correction from −100 to 0 ms). In the automatic artifact rejection, the maximum permitted voltage step per sampling point was 50 µV, the maximum permitted absolute difference between two values in a segment was 200 µV, the minimum and maximum permitted amplitudes were −200 µV and 200 µV, respectively, and the lowest permitted activity in a 100 ms interval was 0.5 µV. Finally, the data were re-referenced to the mean of both earlobes and averaged for each stimulus category.
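The preprocessing chain described above was carried out in BrainVision Analyzer; purely for illustration, an approximate equivalent in MNE-Python might look like the sketch below. The file name, the use of a frontal channel as an EOG proxy, the earlobe reference labels, and the artifact thresholds that MNE can express directly (peak-to-peak and flatness, but not the 50 µV voltage-step criterion) are all assumptions.

```python
import mne
from mne.preprocessing import ICA

raw = mne.io.read_raw_brainvision("subject01.vhdr", preload=True)

# Zero-phase Butterworth band-pass (0.1-30 Hz) and 50 Hz notch, as in the Methods
raw.filter(l_freq=0.1, h_freq=30.0, method="iir",
           iir_params=dict(order=2, ftype="butter"))
raw.notch_filter(50.0)

# ICA-based ocular correction: fit, mark blink components, back-project the rest
ica = ICA(n_components=20, random_state=0)
ica.fit(raw)
eog_inds, _ = ica.find_bads_eog(raw, ch_name="Fp1")   # frontal channel as EOG proxy
ica.exclude = eog_inds                                # in the study: visual inspection
ica.apply(raw)

# Epoch from -100 to 1000 ms, baseline-correct, and reject artifactual segments
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.1, tmax=1.0,
                    baseline=(-0.1, 0.0),
                    reject=dict(eeg=200e-6),          # max peak-to-peak 200 microvolts
                    flat=dict(eeg=0.5e-6),            # minimum activity 0.5 microvolts
                    preload=True)

# Re-reference to the mean of the earlobes and average per stimulus category
epochs.set_eeg_reference(["TP9", "TP10"])             # earlobe-equivalent labels assumed
evokeds = {cond: epochs[cond].average() for cond in event_id}
```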

The ERPs for own, close-other's, famous, and unknown faces/names were computed for correct trials only. The mean number of segments used to compute ERPs in the ASD group was 29 for names and 30 for faces; in the control group, it was 31 for both names and faces. We did not find significant differences in the number of epochs used to compute ERPs between types and categories of stimuli or between groups.

In the statistical analysis, peak latencies and amplitudes were used. We analyzed amplitudes and latencies of P100, N170, and P300, previously reported in studies with visual presentation of faces and names, at previously reported locations [31], [32], [36], [65], [71]–[74], [79]. Based on visual inspection of the grand-average ERPs and on the existing literature, peak detection was performed in the following time windows: P100 (50–150 ms after the stimulus onset), N170 (150–220 ms), and P300 (250–450 ms). We focused on scalp regions in which those ERP components had their maximum amplitudes. P100 and N170 were analyzed in the left and right occipital regions (PO7 and PO8). P300 was analyzed in the centro-parietal region (CPz, CP3, and CP4). Including two lateral electrodes (i.e., CP3, CP4) in the sub-set of centro-parietal electrodes enabled us to address possible differences in left- and right-hemisphere involvement in the processing of names and faces in the ASD group. Our choice of electrodes was confirmed by the topography of brain activity in the time windows corresponding to P100, N170, and P300.

Taking into account that P100 and N170 were analyzed at the same electrode sites, the amplitude of the N170 was measured peak-to-peak relative to the P100. Visual analysis of the P300 revealed double maxima within the chosen time window; thus, peak detection was performed in two time windows: early P300 (250–350 ms) and late P300 (350–450 ms). Epochs were visually inspected to ensure that for each participant the ERP components reached their maximum/minimum within the selected time window.
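A sketch of this peak-scoring step is shown below, assuming the evoked responses from the preprocessing sketch above (data in volts), the electrode labels given in the text, and a placeholder condition label.

```python
import numpy as np

def peak_in_window(evoked, ch, tmin, tmax, polarity=+1):
    """Peak amplitude (microvolts) and latency (s) within a window at one channel.
    polarity=+1 finds the most positive point, polarity=-1 the most negative."""
    idx = evoked.ch_names.index(ch)
    mask = (evoked.times >= tmin) & (evoked.times <= tmax)
    data = evoked.data[idx][mask] * 1e6                 # volts -> microvolts
    i = np.argmax(polarity * data)
    return data[i], evoked.times[mask][i]

evoked = evokeds["own_face"]                            # placeholder condition label

# P100 and N170 at PO8; N170 is scored peak-to-peak against P100, as in the text
p100_amp, p100_lat = peak_in_window(evoked, "PO8", 0.050, 0.150, polarity=+1)
n170_amp, n170_lat = peak_in_window(evoked, "PO8", 0.150, 0.220, polarity=-1)
n170_peak_to_peak = n170_amp - p100_amp

# Early and late P300 at CPz
p300_early = peak_in_window(evoked, "CPz", 0.250, 0.350, polarity=+1)
p300_late  = peak_in_window(evoked, "CPz", 0.350, 0.450, polarity=+1)
```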

We performed mixed-model ANOVA on amplitudes and latencies of each component with the following factors: group (a between-subject factor at two levels: ASD, control), type (a within-subject factor at two levels: face, name), category (a within-subject factor at four levels: own, close-other, famous and unknown), and location. This within-subject factor was at two levels (left, right) in the case of P100 and N170 and at three levels (left, central, right) in the case of P300. All effects with more than one degree of freedom in the numerator were adjusted for violations of sphericity according to the Greenhouse and Geisser formula [84]. T-tests with Bonferroni correction for multiple comparisons were applied to post-hoc analyses. Only interactions involving the between-subjects factor of group that were necessary to address the main aims of the present study were further analyzed.
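As a simplified illustration of this analysis (the full group × type × category × location design, the Greenhouse-Geisser correction, and the exact post-hoc structure are not reproduced here), a 2 × 4 mixed ANOVA with Bonferroni-corrected pairwise tests could be run with pingouin on a long-format table of peak amplitudes; the file and column names are placeholders, and pairwise_tests is named pairwise_ttests in older pingouin releases.

```python
import pandas as pd
import pingouin as pg

# One row per subject x category for a single component/type/electrode,
# e.g. late-P300 amplitude at CPz: columns subject, group, category, amplitude
df = pd.read_csv("p300_late_cpz.csv")

# 2 (group, between) x 4 (category, within) mixed-model ANOVA
aov = pg.mixed_anova(data=df, dv="amplitude", within="category",
                     subject="subject", between="group")

# Post-hoc pairwise t-tests with Bonferroni correction
posthoc = pg.pairwise_tests(data=df, dv="amplitude", within="category",
                            between="group", subject="subject", padjust="bonf")

print(aov.round(3))
print(posthoc.round(3))
```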

Results

Behavioral data

No significant main effects or interactions were found.

Electrophysiological data

P100.

Statistical analysis of P100 amplitudes revealed a significant main effect of the type of stimulus (F1,44 = 20.968; p<.0001; η2 = .323) and two interactions: type × location (F3,90 = 4.971; p = .031; η2 = .102) and type × group (F1,44 = 3.916; p = .05; η2 = .820). All other main effects and interactions were non-significant. Faces, in general, were associated with higher P100 amplitudes than names. Post hoc tests showed that only face amplitudes were significantly higher in the right than in the left hemisphere (p = .019). Between-group comparisons for the type of stimulus showed that P100 amplitudes to faces were higher in the ASD group than in the control group (p = .036) (see Figure 1). Analysis of the P100 latencies revealed a main effect of the type factor (F1,44 = 67.581; p<.0001; η2 = .606). Latencies for faces were significantly longer than for names.

Figure 1. Grand average event-related potentials (ERP) in the P100 time window in the ASD group vs. the control group, pooled for PO7 and PO8 electrodes.

The left panel presents responses to face stimuli and the right panel responses to name stimuli, with all categories taken together.

https://doi.org/10.1371/journal.pone.0086020.g001

N170.

Statistical analysis of N170 amplitudes revealed a main effect of category (F1,44 = 2.837; p = .04; η2 = .061) and a significant 3-way interaction: group × type × location (F1,44 = 5.138; p = .028; η2 = .105). Post hoc analysis showed that in the control group N170 amplitudes for names were higher in the left hemisphere than in the right (p = .014), and N170 amplitudes for faces were marginally higher in the right hemisphere than in the left (p = .085). Moreover, N170 amplitudes in the left hemisphere were higher in the control than in the ASD group (p = .004). Analysis of N170 latencies revealed a main effect of the type of stimuli (F1,44 = 12.954; p = .001; η2 = .227). N170 latency for faces was significantly longer than for names (see Figure 2).

Figure 2. Grand average event-related potentials (ERP) in the N170 time window in the ASD group vs. the control group, separately for each analyzed scalp position (PO7, PO8).

Upper panels present responses to face stimuli and bottom panels responses to name stimuli, with all categories taken together. For faces, N170 amplitudes (peak-to-peak vs. P100) were higher at PO8 than at PO7, and for names amplitudes were higher at PO7 than at PO8.

https://doi.org/10.1371/journal.pone.0086020.g002

P300.

ANOVA for early P300 (see Figure 3) amplitudes revealed a main effect of the type of stimuli (F1,44 = 9.558; p = .003; η2 = 0.178), category of stimuli (F3,42 = 9.741; p<.001; η2 = .181), electrode location (F2,43 = 12.099; p<.001; η2 = .216), and interactions: type × category (F3,42 = 4.621; p = .004; η2 = .095), group × type × category (F3,42 = 3.997; p = .013; η2 = .083), group × type × location (F2,43 = 5.013; p = .009; η2 = .102), and group × category × location (F6,39 = 3.451; p = .005; η2 = .073). All other effects were insignificant. Post hoc tests of ‘group × type × category’ interaction showed that in both groups there were no differences in early P300 between categories of names. However, significant differences appeared in response to faces. In the ASD group, response to own face did not differ from the response to close-other's face and unknown face. However, amplitudes to own and close-other's face were higher than for famous face (p = .005 and p = .001, respectively). In the control group, response to own face was significantly higher than to the close-other's (p = .001), famous (p<.001), and unknown face (p = .001).

Figure 3. Grand average event-related potentials (ERP) in the ASD group vs. the control group, presented separately for each category of stimuli (own, close-other, famous, unknown), each analyzed centro-parietal scalp position (CP3, CPz, CP4), and each type of stimuli (face, name).

https://doi.org/10.1371/journal.pone.0086020.g003

Post hoc tests of ‘group × type × location’ interaction showed that in the ASD group amplitudes for faces were higher than for names at each investigated location: left (p = .008), right (p = .007), and central (p = .002). In the control group, this effect appeared only on the right side of the scalp (p = .041). Post hoc analyses of the ‘group × category × location’ interaction revealed that in typically developing participants, amplitudes of early P300 to all categories of faces and names were higher at the right (CP4) and central (CPz) electrode sites in comparison to the left (CP3). In contrast, this effect (CPz, CP4>CP3) was observed in ASD subjects only for the close-other category.

Analysis of early P300 latencies revealed a main effect of group (F1,44 = 4.776; p = .034; η2 = .970). Early P300 latencies were longer in the ASD group than in the control group.

ANOVA on late P300 peak amplitudes (see Figure 3) revealed main effects of the type of stimuli (F1,44 = 4.904; p = .032; η2 = .100), category of stimuli (F3,42 = 19.522; p<.001; η2 = .307), and location (F2,43 = 8.978; p<.001; η2 = .169), as well as significant interactions: group × category (F3,42 = 3.243; p = .024; η2 = .068), category × location (F6,39 = 3.022; p = .013; η2 = .064), type × location (F2,43 = 3.168; p = .047; η2 = .067), and group × category × location (F6,39 = 3.699; p = .004; η2 = .078). All other main effects and interactions were non-significant. Post hoc tests of the ‘group × category’ interaction indicated that in the ASD group, regardless of the type of stimuli, the late P300 response to the own face/name was higher than to the famous (p = .002) and unknown face/name (p<.001). Peak amplitude for the close-other category was higher than for the famous (p = .023) and unknown (p = .008) categories. No differences between the self and close-other categories were found. In the control group, the response to the own face/name was higher than to the close-other's (p = .001), famous (p<.001), and unknown face/name (p = .001). In this group, peak amplitude for the close-other's face/name was higher than for the famous face/name (p = .014). Post hoc tests for the ‘group × category × location’ interaction showed that the described effects in each group were significant at each investigated electrode location; however, they were strongest at the central site. No significant differences regarding P300 peak latencies were observed in the late time window.

Discussion

The goal of this ERP study was to investigate the neural correlates of name and face detection in ASD. Names and faces differed in respect to their personal significance (own, close-other's, famous, unknown). Specifically, we were interested whether preferential attention allocation for self-related stimuli was impaired in ASD participants, and whether the same effects could be observed for the ‘physical self’ (one's own face) and the ‘non-physical self’ (one's own name).

On the behavioral level, we did not find differences in RTs between groups and experimental conditions. It should be stressed, however, that the task we used was very simple and did not require discriminating between stimuli. No in-depth processing of incoming information was required to successfully accomplish the task, and only high-functioning ASD participants were tested.

On the neural level, several significant effects were observed. In general, faces were associated with higher P100 amplitudes than names in both groups. It has been well documented that the amplitude of P100 is sensitive to perceptual features of visual stimuli, such as brightness, contrast, visual acuity, and size [85], [86]. Interestingly, some studies report face-sensitive effects at the level of P100 [73], [87]. It has been argued, however, that this may simply result from perceptual differences between faces and the visual stimuli used for comparison [88]. Thus, the increased P100 to faces in both our groups is possibly a consequence of the size and complexity of these stimuli. We observed no self-preference effect in this early ERP component in either the ASD or the control group.

Besides the effects common to the two groups, some between-group differences appeared about 100 ms after the stimulus onset: P100 amplitudes in response to all categories of faces were higher in the ASD group than in the control group. This effect may reflect the enhanced visual processing often reported in ASD [89]. Alternatively, it could also be attributed to early stimulus-driven attention allocation [49], [50], i.e., enhanced P100 to faces in the ASD group may reflect increased orienting/attention to these stimuli. However, increased attention operating at the early stage of face processing did not exert influence on the later stages related to face recognition (see the discussion of the P300 findings below). In fact, one may speculate that this enhanced P100 indicates a higher processing effort present at the earliest stage of face perception in this clinical group [51]. However, none of the previously published studies on face processing reported P100 amplitudes to faces that were higher in individuals with ASD than in control individuals [75]–[78]. This discrepancy between our P100 results and the findings of previously published studies may result from crucial differences in experimental paradigms, i.e., different attentional requirements, different stimuli, and different tasks. Specifically, in the Webb et al. [78] and McPartland et al. [77] studies, the presentation of faces was task-irrelevant, and thus faces were outside the focus of attention: subjects were supposed to detect (i.e., press a button to) houses [78] or butterflies [77]. In two other studies, emotional faces were presented [75], [76] and subjects were asked either to indicate whether the face was neutral or sad [76] or to verbalize the word which described how the person in the photograph was feeling [75].

Subsequently, amplitudes of N170 differentiated the two groups: the N170 recorded in the left hemisphere in response to names was higher in the control group than in the ASD group. In addition, lateral effects related to the type of presented information were present only in the control group. Specifically, the N170 amplitude to names was higher in the left hemisphere than in the right, and to faces it was higher in the right hemisphere than in the left; however, the latter effect was only marginally significant. Although a general enhancement of N170 amplitudes to faces is typically observed when faces are compared to other visual objects [90], some studies revealed higher amplitudes of the N170 component in the right hemisphere only [43], [52], [91]. Increased N170 to names in the left hemisphere, in turn, was also previously reported in healthy subjects [73], [86], [92] and is in line with the typical dominance of the left hemisphere of the human brain in language processing.

While in the control group left-hemisphere dominance appeared at the early stage of information processing for visually presented names and right-hemisphere dominance for faces, such effects were absent in the ASD group. In the case of names, the lack of lateral effects is generally in line with the previously reported dysfunction of the left hemisphere in ASD [93], [94] (but see [95]) and with atypical patterns of lateralization of language processing in this clinical group [96], [97]. In the case of faces, the presence of right-hemisphere dominance in ASD groups is still unclear. For example, a lack of lateral differences in the N170 component in ASD was previously reported by [98], while other studies report such lateralization for both control and ASD subjects [75], [76], [78].

With regard to the main aim of our study, the most important findings were the between-group differences found in the 250–350 ms and 350–450 ms time windows. In both groups, P300 amplitudes in the early time window were significantly modulated by categories of faces, not names. In the control group, the own face was associated with higher P300 amplitudes than all other faces, whereas in the ASD group the own and close-other's face did not differ. Both resulted in enhanced P300 in comparison to the famous but not the unknown face. The latter is in line with the findings of a study reporting that children with autism fail to show a differential late positive ERP component to their mother's face versus an unfamiliar face [99]. These higher amplitudes of early P300 to the own and close-other's face, but not to the famous face, seem to reflect the personal relevance of those stimuli. The lack of significant differences between personally relevant faces (i.e., own and close-other faces) and unknown faces possibly results from equivalent attention allocation for those stimuli. A similar effect in ASD children was found by Gunji et al. [12]: in that study, no significant differences in P300 components were observed among the own, familiar, and unfamiliar face conditions. We argue that this elevated level of attention may be due to an elementary adaptive mechanism that guarantees that events/information with a potentially high survival value will not be missed [100]. It might be the case that novel objects (unknown faces) attracted ASD participants' attention to the same extent as personally relevant but not famous faces. The latency of the early P300 also differentiated the two groups. It was significantly longer in the ASD group than in the control group for all names and faces, indicating some delay in the processing of these socially relevant stimuli.

Importantly, in the late time window we observed common patterns of P300 amplitudes for both types of stimuli, indicating the self-preference effect in the control group and the personal relevance effect in the ASD group. Specifically, in the control group own name/face processing resulted in the highest amplitudes of P300 in comparison to all other names/faces (i.e., close-other's, famous, unknown name/face). In the ASD group, P300 for own name/face did not differ from P300 for close-other name/face. However, P300 to own and close-other's name/face was significantly higher than P300 to other (famous and unknown) name/face.

Our P300 results showing preferential processing of self-related stimuli in the control group are in line with previous ERP studies in healthy subjects [32], [34], [36], [68], [72], [73]. The novel finding in the control group is that amplitudes of P300 to the own name and face were also higher than P300 to the close-other's name and face. To our knowledge, none of the previous ERP studies used such stimuli together with self, famous, and unknown names and faces. Enhanced P300 in the own-face condition in comparison to a friend's-face condition was reported in only one study [12].

In contrast to the control group, the late P300 findings in the ASD group revealed that the self-preference effect was present only when the own name/face was compared to distant others' names/faces and absent when the close-other's name/face was used as a reference for the self-related stimuli. The latter suggests equivalent attention allocation for own and close-other's faces and names in the ASD group. Thus, at the level of detection, the self-related stimuli are not differentiated from the close-other related stimuli, but they are differentiated from stimuli related to the more distant other (i.e., a famous person and an unknown person). It seems that in the case of ASD individuals, preferential processing was not restricted to self-related stimuli but extended to all personally relevant stimuli. Enhanced P300 to both the own and close-other's name and face in the ASD group may reflect not only similar attentional characteristics [61] of these stimuli but also similar emotional ones [63]. Attention and emotion may complement each other, as the model of motivated attention [101] states that emotional cues prompt motivational regulation and draw attentional resources. This is supported by findings of behavioral [100] and electrophysiological studies [102]. Although a lower impact of emotion in guiding attention to socially relevant stimuli might be expected in ASD [103], it is plausible that the higher P300 amplitudes to the self- and close-other-related stimuli in the ASD group reflect a similar emotionally motivated attentional load of these stimuli. One may speculate that in the case of ASD participants, motivated attention allocation to those stimuli might be associated with a kind of behavioral learning. This supposition is supported by the fact that most participants from our ASD group (22 out of 23) chose a family member as the ‘close other’. Taking into account that they spend most of their time at home, extensive contact with that person and his/her significance in fulfilling daily needs may result in intensive stimulus-reward learning [104].

Alternative interpretations of the P300 findings, not referring to attentional processes, are also plausible. For example, our P300 findings may be interpreted in the context of the person recognition model [105]–[108] and its ERP adaptations [73], [92], [109], [110]. P300 is considered to reflect activation of semantic knowledge about a person [110]. Thus, the P300 findings in the ASD group may suggest similar levels of person-specific semantic knowledge referring to the self and the significant other. In contrast, the control group mainly displayed activation of self-knowledge. This result may support the theoretical view of a poorly developed or even absent ‘I-concept’ [22], which is related to a distorted perception of oneself as unique and distinct from others. As a result, we may observe insufficient elaboration of the self-concept and impaired differentiation of the self from the significant other in autistic individuals [10], [22].

The P300 findings in the ASD group may also be viewed in the light of Uddin's [37] hypothesis, stating that psychological but not physical aspects of the self are altered in ASD. In other words, it might be expected that processing of ‘symbolic’ self-related stimuli (e.g., the own name) is more impaired in ASD patients than processing of ‘physical’ self-related stimuli (e.g., the own face). However, investigating these two types of stimuli at the same time, using the same experimental procedure, the same modality of stimuli, and the same participants, we observed an analogous pattern of late P300 amplitudes for the own name and the own face. Thus, our P300 results indicate that the ‘physical self’ and ‘non-physical self’ are processed in a similar way not only in the control group but also in the ASD group. Although our findings seem to contradict Uddin's hypothesis, one may speculate that some in-depth processing, absent in our detection task, would be required to reveal disturbances in the ‘psychological self’. The only difference between detection of names and faces (including one's own name and one's own face) observed in both groups was related to the temporal delay of the former in comparison to the latter: P300 amplitude differentiated categories of faces in the early time window and categories of names in the late time window. This may be linked to the time-consuming semantic processing of name stimuli.

ERP findings of this study also reveal attenuated lateralization of face and name processing in ASD. In the control group only, name detection in general was associated with higher activity in the left hemisphere whereas face detection was associated with enhanced activity in the right hemisphere, as revealed by N170 and P300 amplitudes, respectively. In contrast, lateral differences were absent in the ASD group. All of these effects support the notion of atypical functional brain organization [111], [112] in ASD participants during social stimuli processing.

Finally, one may hypothesize that aurally presented names would yield more ecologically valid findings. The auditory version of a name is more adequate in the context of communication and social interactions. However, we used the visual version of names in order to investigate self-related stimuli that differed only in respect to their domain (‘physical’, i.e., face vs. ‘non-physical’, i.e., name), but not their modality. Results from our own neuroimaging study [42] of healthy participants suggest that the involvement of the medial prefrontal cortex is largely independent of the modality in which one's own name is presented. However, in some other brain regions (e.g., the inferior frontal gyri) the preference in processing of one's own name vs. the close-other's name was present only for the auditory modality [42]. Therefore, it cannot be ruled out that using auditory presentation of the names would reveal a different pattern of results.

In conclusion, the present study provides evidence indicating equivalent engagement of attentional resources in the detection of visually presented stimuli related to the self and to the close-other in adolescents and young adults with ASD. Similar effects were observed for names and for faces. In contrast, preferential attention allocation for the self-face and self-name was observed in typically developing individuals. Further research with different tasks and stimuli is needed to fully explain the impaired ‘me’ vs. ‘not-me’ distinction in autism.

Acknowledgments

We would like to thank all participants and their families, as well as Michał Wroniszewski, Joanna Grochowska, and Urszula Wójcik from the SYNAPSIS Foundation for their help in selecting the groups of participants.

Author Contributions

Conceived and designed the experiments: HBC PT PO IC AN. Performed the experiments: HBC PT IC PO AN. Analyzed the data: HBC AN. Wrote the paper: HBC PT PO IC AN.

References

  1. 1. Amaral DG, Schumann CM, Nordahl CW (2008) Neuroanatomy of autism. Trends in Neurosciences 31: 137–145
  2. 2. Williams DL, Minshew NJ (2007) Understanding autism and related disorders: what has imaging taught us. Neuroimaging Clinics of North America 17: 495–509
  3. 3. Hadjikhani N, Joseph RM, Snyder J, Tager-Flusberg H (2006) Anatomical differences in the mirror neuron system and social cognition network in autism. Cerebral Cortex 16: 1276–1282
  4. 4. Baron-Cohen S, Baldwin DA, Crowson M (1997) Do children with autism use the speaker's direction of gaze strategy to crack the code of language? Child Development 681: 48–57
  5. 5. DePape AMR, Chen A, Hall GBC, Trainor LJ (2012) Use of prosody and information structure in high functioning adults with autism in relation to language ability. Frontiers in Psychology 3: 72
  6. 6. Wang AT, Lee SS, Sigman M, Dapretto M (2006) Neural basis of irony comprehension in children with autism: the role of prosody and context. Brain 129: 932–943
  7. 7. Lombardo MV, Barnes JL, Wheelwright SJ, Baron-Cohen S (2007) Self-referential cognition and empathy in autism. PLoS ONE 2: e883
  8. 8. Lombardo MV, Chakrabarti B, Bullmore ET, Sadek SA, Pasco G, et al. (2010) Atypical neural self-representation in autism. Brain 133: 611–624
  9. 9. Kanner L (1943) Autistic disturbances of affective contact. Nervous Child 2: 217–250.
  10. 10. Lombardo MV, Baron-Cohen S (2010) Unraveling the paradox of the autistic self. Wiley Interdisciplinary Reviews: Cognitive Science 1: 393–403
  11. 11. Rogers SJ, Pennington BF (1991) A theoretical approach to the deficits in infantile autism. Development and Psychopathology 3: 137–162
  12. 12. Gunji A, Inagaki M, Inoue Y, Takeshima Y, Kaga M (2009) Event-related potentials of self-face recognition in children with pervasive developmental disorders. Brain and Development 31: 139–147
  13. 13. Henderson HA, Zahka NE, Kojkowski NM, Inge AP, Schwartz CB, et al. (2009) Self-Referenced Memory, Social Cognition, and Symptom Presentation in Autism. Journal of Child Psychology and Psychiatry 50: 853–861
  14. 14. Toichi M, Kamio Y, Okada T, Sakihama M, Youngstrom EA, et al. (2002) A lack of self-consciousness in autism. American Journal of Psychiatry 159: 1422–1424
  15. 15. Hurlburt RT, Happe F, Frith U (1994) Sampling the form of inner experience in three adults with Asperger syndrome. Psychological Medicine 24: 385–395
  16. 16. Frith U, Happe F (1999) Theory of mind and self-consciousness: what it is like to be autistic. Mind and Language 14: 1–22
  17. 17. Frith U (2003) Autism: explaining the enigma. 2nd. Malden, MA: Blackwell.
  18. 18. Happe F (2003) Theory of mind and the self. Annals of the New York Academy of Sciences 1001: 134–144
  19. 19. Baron-Cohen S (2005) Autism – ‘autos’: Literally, a total focus on the self? In: Feinberg TE, Keenan JP, editors. The lost self: pathologies of the brain and identity. Oxford: Oxford University Press.
  20. 20. Frith U, de Vignemont F (2005) Egocentrism, allocentrism, and Asperger syndrome. Consciousness and Cognition 14: 719–738
  21. 21. Hobson PR, Chidambi G, Lee A, Meyer J (2006) Foundations for self-awareness: an exploration through autism. Monographs of the Society for Research in Child Development 71: vii–166
  22. 22. Glezerman T (2013) Autistic person's sense of self. Autism and the brain. New York: Springer. p. 194.
  23. 23. Alexopoulos T, Muller D, Ric F, Marendaz C (2012) I, me, mine: Automatic attentional capture by self-related stimuli. European Journal of Social Psychology 42: 770–779
  24. 24. Cherry EC (1953) Some experiments on the recognition of speech, with one and with two ears. Journal of the Acoustical Society of America 25: 975–979
  25. 25. Moray N (1959) Attention in dichotic listening: Affective cues and the influence of instructions. Quarterly Journal of Experimental Psychology 11: 56–60
  26. 26. Wolford G, Morrison F (1980) Processing of unattended visual information. Memory and Cognition 8: 521–527
  27. 27. Wood N, Cowan N (1995) The cocktail party phenomenon revisited: how frequent are attention shifts to one's name in an irrelevant auditory channel? Journal of Experimental Psychology: Learning, Memory and Cognition 21: 255–260
  28. 28. Shapiro KL, Caldwell J, Sorensen RE (1997) Personal names and the attentional blink: A visual “cocktail party” effect. Journal of Experimental Psychology-Human Perception and Performance 23: 504–514
  29. 29. Pfister R, Pohl C, Kiesel A, Kunde W (2012) Your unconscious knows your name. PLoS One 7: e32402
  30. 30. Berlad I, Pratt H (1995) P300 in response to the subject's own name. Electroencephalography and Clinical Neurophysiology 96: , 472–474. doi: 10.1016/0168-5597(95)00116-A.
  31. 31. Müller HM, Kutas M (1996) What's in a name? Electrophysiological differences between spoken nouns, proper names name. NeuroReport 8: 221–225
  32. 32. Folmer RL, Yingling CD (1997) Auditory P3 responses to name stimuli. Brain and Language 56: 306–311
  33. 33. Gray HM, Ambady N, Lowenthal WT, Deldin P (2004) P300 as an index of attention to self-relevant stimuli. Journal of Experimental and Social Psychology 40: 216–224
  34. 34. Sui J, Zhu Y, Han S (2006) Self-face recognition in attended and unattended conditions: an event-related brain potential study. NeuroReport 17: 423–427
  35. 35. Scott LS, Luciana M, Wewerka S, Nelson CA (2005) Electrophysiological correlates of facial self-recognition in adults and children. Cognitie, Creier, Comportament (Romanian Journal-Translation: Cognition, Brain, Behavior) 9: 211–238.
  36. 36. Tacikowski P, Nowicka A (2010) Allocation of attention to self-name and self-face: An ERP study. Biological Psychology 84: 318–324
  37. 37. Uddin LQ (2011) The self in autism: An emerging view from neuroimaging. Neurocase 17: 201–208
  38. 38. Carmody DP, Moreno R, Mars AE, Seshadri K, Lambert GH, et al. (2007) Brief report: brain activation to social words in a sedated child with autism. Journal of Autism and Developmental Disorders 37: 1381–1385
  39. 39. Carmody DP, Lewis M (2006) Brain activation when hearing one's own and others' names. Brain Research 1116: 153–158
  40. 40. Holeckova I, Fischer C, Morlet D, Delpuech C, Costes N, et al. (2008) Subject's own name as a novel in a MMN design: a combined ERP and PET study. Brain Research 1189: 152–165
  41. 41. Kampe KK, Frith CD, Frith U (2003) Hey John: Signals conveying communicative intention toward the self active brain regions associated with mentalizing, regardless of modality. Journal of Neuroscience 23: 5258–5263.
  42. 42. Tacikowski P, Brechmann A, Nowicka A (2013) Cross-modal pattern of brain activations associated with the processing of self- and significant other's name. Human Brain Mapping 34: 2069–2077
  43. 43. Tacikowski P, Brechmann A, Marchewka A, Jednoróg K, Dobrowolny M, et al. (2011) Is it about the self or the significance? An fMRI study of self-name recognition. Social Neuroscience 6: 98–107
  44. 44. Jeste SS, Nelson CA 3rd (2009) Event related potentials in the understanding of Autism Spectrum Disorders: An analytical review. Journal of Autism and Developmental Disorders 39: 495–510
  45. 45. Coles MGH, Rugg MD (1995) Event-related brain potentials: An introduction. In: Rugg MD, Coles MGH, editors. Electrophysiology of mind. Event-related brain potentials and cognition. Oxford: Oxford University Press, pp. 40–85.
  46. 46. Herrmann CS, Knight RT (2001) Mechanisms of human attention: Event related potentials and oscillations. Neuroscience and Biobehavioral Reviews 25: 465–476
47. Hillyard SA, Anllo-Vento L (1998) Event-related brain potentials in the study of visual selective attention. Proceedings of the National Academy of Sciences of the United States of America 95: 781–787.
48. Luck SJ, Heinze HJ, Mangun GR, Hillyard SA (1990) Visual event-related potentials index focused attention within bilateral stimulus arrays. II. Functional dissociation of P1 and N1 components. Electroencephalography and Clinical Neurophysiology 75: 528–542.
49. Luck SJ, Woodman GF, Vogel EK (2000) Event-related potential studies of attention. Trends in Cognitive Sciences 4: 432–440.
50. Mangun GR (1995) Neural mechanisms of visual selective attention. Psychophysiology 32: 4–18.
51. Hileman CM, Henderson H, Mundy P, Newell L, Jaime M (2011) Developmental and individual differences on the P1 and N170 ERP components in children with and without autism. Developmental Neuropsychology 36: 214–236.
52. Bentin S, Allison T, Puce A, Perez E, McCarthy G (1996) Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience 8: 551–565.
53. Eimer M (2000) Event-related brain potentials distinguish processing stages involved in face perception and recognition. Clinical Neurophysiology 111: 694–705.
54. Rossion B, Gauthier I, Tarr MJ, Despland P, Bruyer R, et al. (2000) The N170 occipito-temporal component is delayed and enhanced to inverted faces but not to inverted objects: an electrophysiological account of face-specific processes in the human brain. NeuroReport 11: 69–74.
55. Caharel S, Poiroux S, Bernard C, Thibaut F, Lalonde R, et al. (2002) ERPs associated with familiarity and degree of familiarity during face recognition. International Journal of Neuroscience 112: 1499–1512. doi: 10.1080/00207450290158368.
56. Carbon CC, Schweinberger SR, Kaufmann JM, Leder H (2005) The Thatcher illusion seen by the brain: an event-related brain potentials study. Cognitive Brain Research 24: 544–555.
57. Herzmann G, Schweinberger SR, Sommer W, Jentzsch I (2004) What's special about personally familiar faces? A multimodal approach. Psychophysiology 41: 688–701.
58. Schweinberger SR, Pickering EC, Jentzsch I, Burton AM, Kaufmann JM (2002) Event-related brain potential evidence for a response of inferior temporal cortex to familiar face repetitions. Cognitive Brain Research 14: 398–409.
59. Bentin S, Deouell LY (2000) Structural encoding and identification in face processing: ERP evidence for separate mechanisms. Cognitive Neuropsychology 17: 35–54.
60. Bentin S, Mouchetant-Rostaing Y, Giard MH, Echallier JF, Pernier J (1999) ERP manifestations of processing printed words at different psycholinguistic levels: time course and scalp distribution. Journal of Cognitive Neuroscience 11: 235–260.
61. Polich J (2007) Updating P300: An integrative theory of P3a and P3b. Clinical Neurophysiology 118: 2128–2148.
62. Johnston VS, Miller DR, Burleson MH (1986) Multiple P3s to emotional stimuli and their theoretical significance. Psychophysiology 23: 684–694.
63. Dietrich DE, Waller C, Johannes S, Wieringa BM, Emrich HM, et al. (2001) Differential effects of emotional content on event-related potentials in word recognition memory. Neuropsychobiology 43: 96–101.
64. Polich J, Kok A (1995) Cognitive and biological determinants of P300: An integrative review. Biological Psychology 41: 103–146.
65. Dering B, Martin CD, Moro S, Pegna AJ, Thierry G (2011) Face-sensitive processes one hundred milliseconds after picture onset. Frontiers in Human Neuroscience 5: 93.
66. Rossion B, Caharel S (2011) ERP evidence for the speed of face categorization in the human brain: Disentangling the contribution of low-level visual cues from face perception. Vision Research 51: 1297–1311.
67. Mitsudo T, Kamio Y, Goto Y, Nakashima T, Tobimatsu S (2011) Neural responses in the occipital cortex to unrecognizable faces. Clinical Neurophysiology 122: 708–718.
68. Dawson G, Webb SJ, McPartland J (2005) Understanding the nature of face processing impairment in autism: Insights from behavioral and electrophysiological studies. Developmental Neuropsychology 27: 403–424.
69. Herrmann MJ, Ehlis AC, Ellgring H, Fallgatter AJ (2005) Early stages (P100) of face perception in humans as measured with event-related potentials (ERPs). Journal of Neural Transmission 112: 1073–1081.
70. Caharel S, Courtay N, Bernard C, Lalonde R, Rebai M (2005) Familiarity and emotional expression influence an early stage of face processing: An electrophysiological study. Brain and Cognition 59: 96–100.
71. Caharel S, Poiroux S, Bernard C, Thibaut F, Lalonde R, et al. (2002) ERPs associated with familiarity and degree of familiarity during face recognition. International Journal of Neuroscience 112: 1499–1512.
72. Perrin F, Maquet P, Peigneux P, Ruby P, Degueldre C, et al. (2005) Neural mechanisms involved in the detection of our first name: a combined ERPs and PET study. Neuropsychologia 43: 12–19.
73. Tacikowski P, Jednoróg K, Marchewka A, Nowicka A (2011) How multiple repetitions influence the processing of self-, famous and unknown names and faces: an ERP study. International Journal of Psychophysiology 79: 219–230.
74. Zhao K, Yuan J, Zhong Y, Peng Y, Chen J, et al. (2009) Event-related potential correlates of the collective self-relevant effect. Neuroscience Letters 464: 57–61.
75. O'Connor K, Hamm JP, Kirk IJ (2005) The neurophysiological correlates of face processing in adults and children with Asperger's syndrome. Brain and Cognition 59: 82–95.
76. O'Connor K, Hamm JP, Kirk IJ (2007) Neurophysiological responses to face, facial regions and objects in adults with Asperger's syndrome: An ERP investigation. International Journal of Psychophysiology 63: 283–293.
77. McPartland J, Dawson G, Webb SJ, Panagiotides H, Carver LJ (2004) Event-related brain potentials reveal anomalies in temporal processing of faces in autism spectrum disorder. Journal of Child Psychology and Psychiatry 45: 1235–1245.
78. Webb SJ, Jones E, Merkle K, Murias M, Greenson J, et al. (2010) Response to familiar faces, newly familiar faces, and novel faces as indexed by ERPs is intact in adults with autism spectrum disorders. International Journal of Psychophysiology 77: 106–117.
79. Höller Y, Kronbichler M, Bergmann J, Crone JS, Ladurner G, et al. (2011) EEG frequency analysis of responses to the own-name stimulus. Clinical Neurophysiology 122: 99–106.
80. Brzeziński J, Gaul M, Hornowska E, Jaworowska A, Machowski A, et al. (2004) WAIS-R(PL) – Skala inteligencji Wechslera dla dorosłych – wersja zrewidowana [Wechsler Adult Intelligence Scale – Revised]. Warsaw: Pracownia Testów Psychologicznych Polskiego Towarzystwa Psychologicznego.
81. Oldfield RC (1971) The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9: 97–113.
82. Bell AJ, Sejnowski TJ (1995) An information-maximization approach to blind separation and blind deconvolution. Neural Computation 7: 1129–1159.
83. Jung TP, Makeig S, Westerfield M, Townsend J, Courchesne E, et al. (2001) Analysis and visualization of single-trial event-related potentials. Human Brain Mapping 14: 166–185.
84. Greenhouse SW, Geisser S (1959) On methods in the analysis of profile data. Psychometrika 24: 95–112.
85. Allison T, Puce A, Spencer DD, McCarthy G (1999) Electrophysiological studies of human face perception. I: Potentials generated in occipitotemporal cortex by face and non-face stimuli. Cerebral Cortex 9: 415–430.
86. Pfütze EM, Sommer W, Schweinberger SR (2002) Age-related slowing in face and name recognition: Evidence from event-related brain potentials. Psychology and Aging 17: 140–160.
87. Itier RJ, Taylor MJ (2004) Effects of repetition learning on upright, inverted and contrast-reversed face processing using ERPs. NeuroImage 21: 1518–1532.
88. Rossion B, Jacques C (2008) Does physical interstimulus variance account for early electrophysiological face sensitive responses in the human brain? Ten lessons on the N170. NeuroImage 39: 1959–1979.
89. Samson F, Mottron L, Soulières I, Zeffiro TA (2012) Enhanced visual functioning in autism: an ALE meta-analysis. Human Brain Mapping 33: 1553–1581.
90. Eimer M (2011) The face-sensitivity of the N170 component. Frontiers in Human Neuroscience 5: 119.
91. Sadeh B, Zhdanov A, Podlipsky I, Hendler T, Yovel G (2008) The validity of the face-selective ERP N170 component during simultaneous recording with functional MRI. NeuroImage 42: 778–786.
92. Schweinberger SR, Ramsay LA, Kaufmann JM (2006) Hemispheric asymmetries in font-specific and abstractive priming of written personal names: Evidence from event-related brain potentials. Brain Research 1117: 195–205.
93. Chiron C, Leboyer M, Leon F, Jambaqué I, Nuttin C, et al. (1995) SPECT of the brain in childhood autism: evidence for a lack of normal hemispheric asymmetry. Developmental Medicine and Child Neurology 37: 849–860.
94. Eyler LT, Pierce K, Courchesne E (2012) A failure of left temporal cortex to specialize for language is an early emerging and fundamental property of autism. Brain 135: 949–960.
95. Floris DL, Chura LR, Holt RJ, Suckling J, Bullmore ET, et al. (2013) Psychological correlates of handedness and corpus callosum asymmetry in autism: The left hemisphere dysfunction theory revisited. Journal of Autism and Developmental Disorders 43: 1758–1772.
96. Kleinhans NM, Muller RA, Cohen DN, Courchesne E (2008) Atypical functional lateralization of language in autism spectrum disorders. Brain Research 1221: 115–125.
97. Knaus TA, Silver AM, Kennedy M, Lindgren KA, Dominick KC, et al. (2010) Language laterality in autism spectrum disorder and typical controls: a functional, volumetric, and diffusion tensor MRI study. Brain and Language 112: 113–120.
98. McPartland JC, Wu J, Bailey CA, Mayes LC, Schultz RT, et al. (2011) Atypical neural specialization for social percepts in autism spectrum disorder. Social Neuroscience 6: 436–451.
99. Dawson G, Carver L, Meltzoff AN, Panagiotides H, McPartland J, et al. (2002) Neural correlates of face and object recognition in young children with autism spectrum disorder, developmental delay, and typical development. Child Development 73: 700–717.
100. Öhman A, Flykt A, Esteves F (2001) Emotion drives attention: detecting the snake in the grass. Journal of Experimental Psychology: General 130: 466–478.
101. Lang PJ, Bradley MM, Cuthbert BN (1997) Motivated attention: Affect, activation and action. In: Lang PJ, Simons RF, Balaban MT, editors. Attention and Orienting: Sensory and Motivational Processes. Mahwah, New Jersey: Lawrence Erlbaum Associates Publishers, pp. 97–135.
102. Briggs KE, Martin FH (2009) Affective picture processing and motivational relevance: arousal and valence effects on ERPs in an oddball task. International Journal of Psychophysiology 72: 299–306.
103. Nuske HJ, Vivanti G, Dissanayake C (2013) Are emotion impairments unique to, universal, or specific in autism spectrum disorder? A comprehensive review. Cognition & Emotion 27: 1042–1061.
104. Pierce K, Haist F, Sedaghat F, Courchesne E (2004) The brain response to personally familiar faces in autism: findings of fusiform activity and beyond. Brain 127: 2703–2716.
105. Bruce V, Young A (1986) Understanding face recognition. British Journal of Psychology 77: 305–327.
106. Morton J (1969) Interaction of information in word recognition. Psychological Review 76: 165–178.
107. Morton J (1979) Facilitation in word recognition: experiments causing change in the logogen model. In: Kolers PA, Wrolstad ME, Bouma H, editors. Processing of Visible Language. New York: Plenum Press, pp. 259–268.
108. Valentine T, Moore V, Brédart S (1995) Priming production of people's names. The Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology 48: 513–535.
109. Herzmann G, Sommer W (2007) Memory-related ERP components for experimentally learned faces and names: characteristics and parallel-test reliabilities. Psychophysiology 44: 262–276.
110. Paller KA, Gonsalves B, Grabowecky M, Bozic VS, Yamada S (2000) Electrophysiological correlates of recollecting faces of known and unknown individuals. NeuroImage 11: 98–110.
111. Escalante-Mead PR, Minshew NJ, Sweeney JA (2003) Abnormal brain lateralization in high-functioning autism. Journal of Autism and Developmental Disorders 33: 539–543.
112. D'Cruz AM, Mosconi MW, Steele S, Rubin LH, Luna B, et al. (2009) Lateralized response timing deficits in autism. Biological Psychiatry 66: 393–397.