Introduction
Children with profound sensorineural hearing loss (SNHL) experience delays in learning to understand the speech of others and to produce intelligible speech. Their delays are rooted in a lack of refined access to the spectral and temporal cues of the acoustic–phonologic components of speech. That is, to learn to understand the speech of others and to speak themselves, a young child must hear the sounds of speech. When armed with such access through hearing technologies, and through the influences of a highly dynamic system, children with SNHL can begin to take command of the basic structures of their native, spoken language (Smith and Thelen
2003). Without such access, these children face challenges in their cognitive and psychosocial development and academic performance. Together, such cascading consequences carry downstream implications for employment and quality of life (Summerfield and Marshall
1999; Cheng et al.
2000).
When traditional amplification devices (hearing aids) are unable to restore access to the full range of phonemic components of speech, a cochlear implant (CI) is a widely used treatment option for children with SNHL (Bradham and Jones
2008). CI stimulation of the auditory pathway is made feasible by robust reserves of auditory neurons that persist in deafness. On average, about half of the peripheral neuronal complement of the cochlea survives even when deafness is profound and of early onset (Nadol
1997). Furthermore, surviving neurons retain responsivity to electrical stimulation. Electrical contacts of a CI device implanted into the cochlea can generate currents that stimulate subpopulations of auditory neurons. When configured across channels (to convey pitch information), variations in the power and tempo of electrical currents can encode sound via spike trains carried by auditory neurons. Acoustic inputs are thus conveyed to CNS auditory stations for encoding. Advances in sound processors and related software have enhanced the fidelity with which complex sounds are processed into physiologically meaningful codes.
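The channel scheme sketched above can be illustrated with a toy encoder: each electrode channel is assigned a probe frequency, and the acoustic energy near that frequency in each short frame sets a per-channel stimulation amplitude. This is an illustrative sketch only; the function name, channel count, and frequency range are invented for the example, and real processors additionally use filterbanks, amplitude compression, and interleaved pulse timing.

```python
import math

def toy_ci_encode(samples, sample_rate, n_channels=8, frame_ms=8):
    """Toy CI-style encoder: per frame, estimate energy near one
    log-spaced probe frequency per channel (low frequencies map to
    apical electrodes, high to basal) and use it as pulse amplitude."""
    frame_len = int(sample_rate * frame_ms / 1000)
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples), frame_len)]
    pulses = []
    for frame in frames:
        amps = []
        for ch in range(n_channels):
            # probe frequencies log-spaced from 250 Hz to 6 kHz
            f = 250.0 * (6000.0 / 250.0) ** (ch / (n_channels - 1))
            re = sum(s * math.cos(2 * math.pi * f * n / sample_rate)
                     for n, s in enumerate(frame))
            im = sum(s * math.sin(2 * math.pi * f * n / sample_rate)
                     for n, s in enumerate(frame))
            amps.append(math.hypot(re, im) / max(len(frame), 1))
        pulses.append(amps)
    return pulses
```

Feeding the encoder a pure 1 kHz tone concentrates energy in the channel whose probe frequency lies nearest 1 kHz, mirroring how channel place conveys pitch while per-frame amplitude conveys the power and tempo of the signal.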
Evidence of basic perceptual gains following cochlear implantation is found in consistent improvements in hearing thresholds (Pulsifer et al.
2003). However, improved thresholds for sound awareness represent only a preliminary measure of the intervening effect of a cochlear implant. A vast range of levels of hearing and communication ability are observed in children who receive cochlear implants, and the true impact is measured by more consequential outcomes than awareness of sound.
Because early-onset deafness is typically recessive in its genetic transmission, or is acquired as a result of infection, most deaf children grow up in hearing households. Manual strategies of signing can overcome many barriers to communication; however, the prevalence of signing in society is limited, and the depth of engagement with sign language in hearing families does not systematically expose deaf children to abstract semantic content (Mitchell and Quittner
1996). The resulting communication mismatch between a deaf infant and a hearing family associates with higher levels of parenting stress, less developmental scaffolding, and reduced sensitivity in parent–child interactions, resulting in negative consequences for later linguistic and psychosocial development (Polat
2003; Nikolopoulos et al.
2004b).
The notion of a “successful implant” often relates to the parents’ perception of how their child will best relate to the outside world. Hearing parents who seek alleviation of their child’s deafness will commonly express a goal of providing their child with options for real engagement with the mainstream, specifically in play and school at an early age, and in vocational options and life chances in adulthood. The pervasive nature of communication, within the family and in society, suggests a standard metric of outcome. Because the goal of restored hearing in a deaf child is to enable useful hearing, a key measure of outcome should reflect how a deaf child’s experience with a CI develops into the effective use of spoken language. Parental surveys indicate that the outcome of their greatest concern after surgical intervention in children with SNHL is the level of spoken language achieved (Nikolopoulos et al.
2004a).
Despite its importance, the study of language development in children with cochlear implants presents methodological challenges. Useful experimental approaches must address the high variability of language performance observed in pediatric populations in general and attempt to control for confounds in smaller implanted populations. This variability, however, is also a source of key research insights. Clinically, properly assessed variability offers an opportunity to understand modifiers of outcome of this treatment for deaf children and to predict and promote success in language acquisition. More general research questions related to neurodevelopment arise as well; understanding the factors that contribute to variation in CI outcomes in this population offers insight into the interaction of influences that contribute to language learning in general. Children identified as candidates for CIs represent a population that has experienced significant auditory deprivation during a period when communication growth normally advances at an accelerated rate. With intervention and an ultimate restoration of auditory inputs, studies of developmental effects can offer key neurobiological perspectives (Pisoni et al.
2008; Smith et al.
1998). The enormous variation observed in measures of early speech and language development in children with a CI thus calls for multivariable assessment of intervening and modifying variables in which baseline variables are captured with accuracy and their longitudinal associations are measured in a way that averts floor and ceiling effects.
Prior clinical studies have supported the use of an early CI, though their case series designs have generally been limited by lack of generalizability, insufficient sample sizes to support conclusions, inability to account for confounds, lack of assessment of parental influences, and absence of parallel observations from a control group (Fink et al.
2007). Our current study attempts to address prior limitations. The Childhood Development after Cochlear Implantation (CDaCI) study is a prospective, multidimensional, multisite trial that examines several dimensions of language learning in children receiving a CI under the age of 5 years and follows normal-hearing, age-matched controls in parallel. A longitudinal design enables the development of growth curves examining modifiers of language learning. The design of the CDaCI study also enables adjustment for factors known to contribute to language learning and examination of novel predictors of implant outcomes, such as the quality of parent–child interactions.
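The growth-curve logic of such a design can be sketched in miniature: estimate each child's rate of language growth over follow-up, then compare mean rates across age-at-implantation groups. This is a hypothetical toy (ordinary least squares per child, invented group labels and data), standing in for the mixed-effects growth-curve models a longitudinal trial would actually fit.

```python
def ls_slope(times, scores):
    """Ordinary least-squares slope of language score on time (months)."""
    n = len(times)
    mt, ms = sum(times) / n, sum(scores) / n
    num = sum((t - mt) * (s - ms) for t, s in zip(times, scores))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

def group_growth(children):
    """Mean per-child growth rate for each age-at-implantation group.
    `children` maps a group label to a list of (times, scores) series."""
    return {group: sum(ls_slope(t, s) for t, s in series) / len(series)
            for group, series in children.items()}
```

Comparing the resulting per-group rates is the toy analogue of testing whether earlier implantation is associated with a steeper language trajectory, after which modifying covariates would be added to the model.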
Here, we summarize findings that demonstrate the prognostic value, for language learning, of baseline language development, parent–child interactions, and age and hearing level at the time of CI surgery. Against a background of the dynamics of language learning and epigenetics, we consider a multivariable model of predictive and modifying factors. That model demonstrates significantly improved developmental trajectories for verbal language in children implanted before 18 months of age, interpreted in the context of epigenetic control of language learning, and shows variable impact of modifying factors across age-at-implantation groupings of CDaCI participants.
Language learning
Language can be defined as “an internalized, abstract knowledge that is the basis for communication” and functions as “a window on thought” (Jackendoff
1994). Language provides the tools to reveal ourselves to others in establishing and maintaining relationships and drives perceptual learning that contributes to cognition.
Language arises through successive, organizational adaptations. Thus, the study of how children learn a language brings together myriad influences and activities that enable a child to become linguistically engaged (Mellon
2009). Children learn spoken language by developing knowledge and skills based in the phonology of the sound system, semantics (meaning), the rules of grammar, and the pragmatics of interaction (Rescorla and Mirak
1997). A child’s eventual mastery of language entails a timely convergence of these skill systems. Mastery in each system contributes to full communicative competence (Rice
1989) as language acquisition flows from the effects of relative success in one sphere (e.g., phonology) on others (e.g., vocabulary, morphosyntax, and pragmatics).
Early stages of language learning require a child to extract acoustic representations from speech streams. Through such experiences, a child discovers regularities that enable meaning and insight into the grammatical rules of spoken language. A typical infant’s first-year experience combines behavior and innate perceptual abilities to provide a framework for the later acquisition of language. Because speech production is not yet manifest, infants and young toddlers with SNHL are likely to go undiagnosed during this period and hence may remain isolated from early linguistic experience (Marschark
1997). Unfortunately, the delay in exposure to appropriate language models is often reflected in poor language outcomes (Yoshinaga-Itano et al.
1998). Age of acquisition affects language outcomes regardless of modality. Both signed and spoken languages appear subject to timing constraints for optimal learning (Mayberry and Fischer
1989; Newport
1988,
1990; Padden and Ramsey
1998). If language is introduced after this period, deaf children typically must be painstakingly taught language instead of acquiring it through the experience-based process that characterizes typical development (Bench
1992). As this process is less efficient, most hearing-impaired children will be unable to fully overcome the linguistic, social, and cognitive challenges associated with delayed exposure to language (Vernon and Wallrabenstein
1984; Vaccari and Marschark
1997).
Cochlear implants can improve access to ambient language but are usually provided at ages after early developmental stages in the domains of language have already begun. For example, a typical toddler of 3 years understands three fourths of the vocabulary that will ultimately support his or her daily conversation (White
1979). By age 4, most children have achieved sufficient mastery of the phonological, grammatical, and pragmatic systems to be considered native speakers or signers (Crystal
1997).
Epigenetic background
The human cortex has substantial potential for epigenetic modification of function (Panksepp
2000; Sur and Leamey
2001). An example comes from studies of the rodent visual system demonstrating that vision can be rescued following removal of the visual cortex in utero, possibly because epigenetic modifications reorganize the surrounding parietal cortex, which then takes on a visual function (Horng and Sur
2006). Similarly, human studies have demonstrated the ability to recover fundamental aspects of vision in congenitally blind adults who have their vision restored (Ostrovsky et al.
2009; Mandavilli
2006). The same phenomenon has been observed in the auditory system, where cortical reorganization compensates for alterations induced by cochlear dysfunction (Rajan et al.
1993; McDermott et al.
1998). We can extract from this the possibility that cortical functions, including the encoding of sound that subserves spoken language acquisition, are shaped by epigenetic modification (Sur and Rubenstein
2005). Auditory information is initially collected by the external ear and transmitted through the middle ear to the inner ear where the information is converted from vibrations in the cochlea to electrical signals in the auditory nerve. We are born with a developed cochlea, and brain processing of information from the cochlea develops substantially in postnatal periods (Gordon et al.
2008). This period of CNS modification entails adjustments that are likely guided by a combination of environmental exposure and epigenetic expression.
During postnatal development of the central nervous system, synaptogenesis plays a key role in learning and synaptic connectivity is the primary neuronal correlate of the representation of knowledge within the brain (Elman et al.
1996; Kral and Eggermont
2007). During this period, cognitive development is determined largely by experience as gene expression specifies the function and ultimate fate of neurons and their synaptic connectivity. With changes in synaptic number and patterns of connectivity, inputs to cortical regions and thalamic nuclei and modulatory controls are established. Such connections transmit neurochemicals implicated in states of arousal and reward. While such models do not provide a lockstep mapping between milestones of language learning and correlates of neuronal connectivity, we can infer that certain stages of neurodevelopment establish time-sensitive readiness for learning, based on perception and on amenability to experience-driven change (wherein learning itself contributes to the complexity of brain structure).
Examination of the transcription factors involved in synaptic modification has demonstrated the important epigenetic role of chromatin in neuronal function, as well as the function of transcriptional programs that ultimately direct synaptic maturation, the definitive regulator of sensitive periods (Hong et al.
2005). Genome-wide analyses suggest, for example, that the activity-dependent, ubiquitously expressed transcription factor MEF2 regulates a transcriptional program in neurons that controls synapse development (Flavell et al.
2008). The role of interneurons and their associated proteins has also been discussed in the regulation of these periods (Morishita and Hensch
2008). The postnatal environment appears to have a large impact on the length of sensitive periods for development. For example, when rodents are separated from their mothers but subsequently put in an enriched environment, the effects of separation (e.g., stress responses and poor cognitive performance) are normalized (Mohammed et al.
1993; Nithianantharajah and Hannan
2006). This reversal suggests the ability of appropriate environmental cues to promote epigenetic changes that rescue normal cognitive function (Francis et al.
2002; Hannigan et al.
2007). The environmental impact on cognitive function and learning ability is also evident in rodent studies. Socially enriched environments increase exploration of novel environments as well as the rate of conditioning, whether the rats spent time with biological or foster mothers (Kiyono et al.
1985; Dell and Rose
1987).
Caregivers, especially mothers, have extended prenatal and postnatal interactions with their children, with direct implications for behavioral phenotype. During gestation and after birth, maternal health status and care of offspring have substantial effects on exploration of novel situations and generalized social behavior (Weinstock
2005; Martin-Gronert and Ozanne
2006; Meaney
2001; Chapman and Scott
2001; Parker
1989; Pederson et al.
1998; van IJzendoorn
1995). Interestingly, higher levels of parenting stress have also been documented in hearing parents of deaf toddlers and preschoolers (Quittner et al.
2010).
Studies of licking and grooming behaviors in rodents offer evidence for nongenomic transmission. This behavioral repertoire is acquired through the maternal care of offspring (Weaver et al.
2004; Champagne et al.
2003a; Fleming et al.
2002). Cross-fostering studies of rodents demonstrate plasticity in these generational trends and indicate that the phenotype arises from environmental exposure rather than genetic predetermination (Maestripieri et al.
2005). Even though the genome itself is not altered, persistent behavioral changes continue into adulthood and are associated with neurobiological modifications such as oxytocin receptor density, a marker that correlates with rodent licking and grooming (Champagne
2008; Champagne et al.
2003b).
Multiple mechanisms by which epigenetics can influence development of cortical regions have been identified and more are likely to be found. For example, DNA methylation is a heritable modification of genomic DNA. Patterns of DNA methylation may play a large role in controlling development, imprinting, transcriptional regulation, chromatin structure, and overall genomic stability (Okano et al.
1999; Strathdee and Brown
2002). Methylation can prevent access of transcription factors and RNA polymerase to DNA as well as attract protein complexes which act to silence genes (Strathdee and Brown
2002). Quantitative assessment of DNA methylation levels suggests that DNA methylation signatures distinguish brain regions and may help account for region-specific functional specialization (Ladd-Acosta et al.
2007). This model offers one mechanism by which phenotypic plasticity is manifest: the ability of cells to change their behavior in response to internal or external environmental cues (Feinberg
2007).
Epigenetic models of learning
Epigenetic models offer paradigms for understanding the acquisition of a skill set that is shaped by ongoing learning, wherein learning itself affects the subsequent ability to learn something new. One application of such a model describes the “cognitive development” of robots that learn through a developmental algorithm. Emergent self-programming allows a robot to continuously expand its functional capacity based on experience and previously acquired skills (Pfeifer et al.
2007). Here, robotic operation is guided by a software program (or ‘genome’) that is inherently modifiable by developmental experiences (creating an ‘epigenome’). The overall result is a model of learning in which learning itself affects the later capacity of the brain to acquire new information.
Three components are considered necessary for a robot to accomplish ongoing, emergent abilities: (1) abstractions—to focus attention on relevant inputs, (2) anticipation—to predict environmental change, and (3) self-motivation—to push beyond extant capacity toward more complex understanding (Blank et al.
2005). Such models of robotic learning have been used to model infant–caregiver interactions (Breazeal and Scassellati
2000), as well as language development (Cangelosi and Riga
2011). Robotic models reveal how the principles of language development and epigenetics can be successfully merged. Environmental and innate factors interact dynamically to promote language learning through biological motivation, multidimensional experience, and bidirectional interactions.
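The three components listed above can be caricatured in code. The sketch below is a hypothetical toy, not an implementation from the cited robotics work: a learner abstracts a raw sensor reading into a coarse state, anticipates each action's outcome with a learned forward model, and is self-motivated to prefer actions whose predictions are still improving.

```python
class DevelopmentalLearner:
    """Toy developmental learner: abstraction, anticipation, and
    self-motivation via an intrinsic learning-progress signal."""

    def __init__(self, actions):
        self.actions = actions
        self.model = {}  # anticipation: (state, action) -> predicted next state
        self.progress = {a: 1.0 for a in actions}  # self-motivation signal

    def abstract(self, raw_reading):
        # abstraction: collapse a continuous reading into a coarse state
        return round(raw_reading, 0)

    def choose(self):
        # self-motivation: act where expected learning progress is largest
        return max(self.actions, key=lambda a: self.progress[a])

    def learn(self, state, action, next_state):
        # anticipation: compare prediction with outcome, track progress
        error = 0.0 if self.model.get((state, action)) == next_state else 1.0
        self.model[(state, action)] = next_state
        # progress fades as this action's consequences become predictable,
        # pushing the learner toward less-mastered actions
        self.progress[action] = 0.7 * self.progress[action] + 0.3 * error
```

Run against a deterministic toy world, the learner's intrinsic progress signals decay as each action's consequences are mastered, so attention shifts to whatever remains unpredictable, loosely echoing the "push beyond extant capacity" the model requires.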
Discussion
In the CDaCI study, we have observed the effects of an apparent sensitive period, such that greater benefit for spoken language acquisition after a CI is significantly associated with earlier implantation. Based on this prospective dataset, significantly steeper trajectories of spoken language learning occur in children implanted in infant and early toddler stages relative to implantation in later toddler stages. Outcomes, however, are significantly modified by a range of factors based in a child’s pre- and post-implant experience. Our observations are consistent with a growing body of evidence that epigenetic modification of the CNS subserves periods for learning complex tasks, such as those related to the subsystems of spoken language, and ultimately is important in, if not definitive of, effective language comprehension and expression.
Sensitive periods in the development of auditory cortex terminate with reductions in overall synaptic activity and are associated with an inability to completely restore hearing function (Kral
2007). Changes in synaptic plasticity are likely due to genetic timing of brain sensitivity to language combined with epigenetic features that are guided by the availability of adequate sensory input (Kral
2007; Panksepp
2008). Though closure of a sensitive period without development of auditory circuits is evident in cat models of congenital deafness, exposure to auditory stimuli by means of cochlear implantation appears capable of producing evoked potentials in more cortical areas, at higher amplitudes, and with longer-latency responses that resemble those of normal-hearing cats (Klinke et al.
1999). This suggests the ability of cochlear implantation to restore, or potentially preserve, normal auditory input to cortical areas. EEG studies of auditory-evoked potentials have also demonstrated normal latencies of cortical responses in children with a CI, but only if they received the implant before 3.5 years of age, suggesting a watershed age of implantation that affects the capacity for cortical processing (Sharma et al.
2007).
Two key observations are of interest to the development of an epigenetic model of spoken language development when hearing restoration with a CI is pursued: (1) elements of the language system (e.g., phonetics, vocabulary, grammar, and pragmatics) appear to be differentially affected by delayed exposure to spoken language and by modifying factors, and (2) delayed exposure can cause disruptions in the social/affective process of parentally guided language learning.
The significance of sensitive periods within this model comes from the possibility that a CI can restore normal auditory learning capacity in the context of cortical plasticity. We observed trends of dissociation between the domains of vocabulary and receptive syntax vs. expressive syntax, with implantation prior to 18 months having a larger positive impact on the development of receptive and expressive syntax than on vocabulary acquisition. Importantly, children who received a CI prior to 18 months of age also demonstrate relatively strong development of expressive syntax and pragmatic use of spoken language; implantation at later stages of toddler development was associated with vulnerabilities in pragmatics and expressive syntax. Though grammar and vocabulary are generally highly associated in their developmental patterns (Bates and Dick
2002), impaired acquisition of grammar relative to vocabulary has previously been noted in deaf children, suggesting the potential for differential development across the subdomains of spoken language (Tomblin et al.
2007).
Children with hearing loss possess specific deficits in grammar development similar to those of children with specific language impairment, demonstrating that such grammar-specific deficits can also be observed in children whose cortical neurosystems developed in the presence of normal auditory inputs (Norbury et al.
2001; Briscoe et al.
2001; Watkins and Rice
1994). These results suggest that, in children with hearing loss, normally linked dimensions of language can become dissociated from one another. The aspects of learning specifically associated with grammar must be analyzed to understand the basis for this dissociation.
As events in the real world generally result in the stimulation of multiple sensory modalities (e.g., auditory and visual), it is important to consider that developmental outcomes may reflect interactions between the auditory system and other sensory modalities (Kral et al.
2000). Multisensory integration can be thought of in terms of salience—the ability of a stimulus to capture attention. Multisensory inputs may enhance the salience of a particular stimulus that would have otherwise evaded detection and subsequent response. These interactions are therefore most relevant when a stimulus has low salience (Calvert et al.
2001). Detection and subsequent learning of the rules of grammar rely on attention to the more subtle “little words” and (morphosyntactic) endings of words and phrases (Bates and Dick
2002). Thus, reduced access to acoustic–phonetic cues may inhibit the natural attentional enhancement of grammatical cues (Dick et al.
2001; Singer Harris et al.
1997).
Having considered the importance of multisensory integration of auditory and visual cues, we can consider its relationship to sensitive periods. Though auditory perception is restored with cochlear implantation, multisensory processing in deaf children demonstrates a bias toward visual rather than auditory stimuli (Bergeson et al.
2005). The persistence of a visual bias suggests that multisensory integration may not develop normally when a single sense dominates in early development. Auditory stimulation in an early sensitive period may, therefore, be necessary to ensure adequate influence on central circuits that enable multisensory integration. Cochlear implantation within the first year of life may rescue these circuits and enable matching of auditory and visual cues (Bergeson et al.
2010). Evidence for this comes from an examination of implanted congenitally deaf children, who were more likely to fuse auditory and visual information if they received their cochlear implant before 2.5 years of age (Schorr et al.
2005). This observation reinforces the idea that early, effective auditory stimulation is necessary to establish multisensory connections and preserve the attentional resources necessary for learning in the subdomains of spoken language.
Our observations suggest that the auditory system communicates with the visual system in circuits that are established early on and affect learning within language subdomains. Detection and learning of grammar require multisensory interactions because of the low perceptual salience of grammatical cues. We suggest that, unlike vocabulary, grammar substantially improved for the group of children who received a CI prior to 18 months because early activation of auditory cortex was able to rescue the development of multisensory integration circuits that ultimately amplified the salience of grammatical cues.
Observations gained from the CDaCI study can also be viewed from an epigenetic perspective by considering the multiple ways a child interacts with her environment, specifically the impact of limited verbal language on parent–child interactions. The development of language necessitates and derives from encounters with the world throughout childhood (Panksepp
2008). The affective components of these experiences have a measurable impact on the trajectory that language development follows. Joy from play, nurturance from care, and panic from separation distress are just a few of the many emotional aspects of the relationship between the mother and child that shape language development (Schore
2003; Trevarthen
2001). Such experiences associate with a child’s desire to engage with the world in an exploratory fashion, which is inevitably accompanied by exposure to a diverse range of sounds, including utterance material (Panksepp
2008). “Motherese,” the high-pitched, melodic, and repetitive form of speech with exaggerated intonation, appears well-suited for the acquisition of language (Fernald
1989; Trevarthen and Aitken
2001). While this form of speech has been known to engage infants, it further appears that it typifies the affective bond shared between mother and child and plausibly promotes profound neurobiological changes to support the development of language.
Aspects of motivation are critical to an understanding of a model by which epigenetic changes are associated with parental nurturing to promote the development of spoken language (MacLean
1990). Self-motivation is highly associated with activity of anterior cingulate regions that appear to enact social–emotional responses. Activity within these regions associates both with the experience of separation distress and with the formation of social bonds (Panksepp
2003). Interestingly, bilateral damage to the same regions results in akinetic mutism, a deficit of language despite adequate motor function (Devinsky et al.
1995). This suggests the potential of these regions to “gate” the link between affective interactions during childhood and the development of lifelong language skills. Though neocortical regions ultimately process linguistic information, it is important to note that non-linguistic areas can provide attention and motivation in promoting, or inhibiting, language-associated activities (Panksepp
2008).
Recent discoveries in molecular genetics have begun to elucidate the patterns of genetic expression that underlie the emergent CNS circuitry supporting language learning. For example, one gene that has been implicated in language (FOXP2) is concentrated in the basal ganglia. Evidence from songbirds suggests that this gene’s product may be necessary for trial-and-error vocal learning (Scharff and Haesler
2005; Ölveczky et al.
2005). Motivation to pursue trial and error in such exploration is essential for acquiring language and is likely dependent on encouragement derived from supportive, affective social interactions. Preliminary evidence suggests that FOXP2 may impact neuronal plasticity in an epigenetic fashion: the mRNAs it regulates support neurite outgrowth and synapse formation in circuits that are involved in motor learning in rodents and song learning in birds (Fisher and Scharff
2009; Vernes et al.
2011). Furthermore, FOXP2 expression is associated with auditory inputs. Mutations in FOXP2 in rodents appear to specifically affect either synchrony of synaptic transmission from the cochlea to the auditory brainstem or the activation of auditory nerve fibers that carry auditory information to the brainstem (Kurt et al.
2009).
Epigenetic modifications in these same subcortical regions demonstrate a possible mechanism that controls specific cortical functions. Selective lesions to the cholinergic system in the basal forebrain of rats result in a shift from long-term potentiation to long-term depression, a transition that is accompanied by a loss of synaptic plasticity in the visual cortex (Kuczweski et al.
2005). Such observations suggest mechanisms by which epigenetic modifications may influence the duration of sensitive periods (Hanganu-Opatz
2010; van Ooyen
2011).
We can hypothesize a basic mechanism by which experience acts through epigenetic means to promote cortical differentiation and regulate sensitive periods. The results of the CDaCI study fit well into the proposed model. The maximal effect of implantation is seen in the group implanted earliest, suggesting a sensitive period that begins to close for the other groups that experienced constrained access to the key acoustic–phonetic perceptions that normally initiate spoken language learning early in life. Selective effects on the domain of grammar highlight the role that attention likely plays in the acquisition of grammar skills. The necessity for the environment to provide sensory information and for this information to be recognized by the nervous system appears to be absolute, though a critical time frame exists during which intervention allows at least partial recovery of function. Ongoing and emerging factors contribute to early development of behaviors of interest in the CDaCI study, with the primary outcome variable being the development of spoken language. There are important contributions to language development from multiple sources (family, social interactions) as well as synergistic effects of one developing system on another (e.g., low language level affecting behavioral organization).
Our observations of the key role played by parent–child interactions in shaping outcomes after a CI provide the most powerful example of how epigenetic changes could be regulated by the environment. A bidirectional relationship can be hypothesized between language development and parent–child interactions. In a child with SNHL, although innate language systems may be intact, with a sole deficit located in the perception of sound, the child may have either an inadequate store of utterance material or inadequate experience with meaning interpretation to fully engage with language tasks. A child’s cognitive skills, parent–child interactions, social adjustment, behavioral skills, parental well-being, and social skills interact within the home milieu early on and, over time, with information in the outside environment; all are also nested within a framework of environmental experience shaped by socioeconomics and societal influences.
The appropriation and command of spoken language directly help children regulate their attention and communicate in ways that affect emotion and behavior, and they facilitate caregiver and, later, peer communications that enable further refinement and nuanced use of spoken language. When a child's command of language is lacking, the result is inevitably impaired communication with parents and a heightened risk of greater parental stress. Parents' perception of their child's language skills therefore predictably changes the way they interact. How parents interpret their child's abilities and how this interpretation, in turn, affects the development of further verbal (and written) interactions are key questions that can be answered with longitudinal follow-up.
In the same way that a rodent pup raised without licking and grooming undergoes epigenetic changes that ultimately affect behavior, one can hypothesize that a young child developing without sufficient affective and social interaction may experience epigenetic modifications that close optimal periods by inhibiting synaptic plasticity. Consider, for example, a mother who is frustrated by a perceived lack of language development in her child with a recent CI. Her interpretation may prevent her from using "motherese" and communicating energetically with her child through speech as she otherwise would have done. Data from the CDaCI study, as well as those from field studies of hearing children, indicate that the lack of such affective stimulation can stifle the child's motivation to speak and to explore novel applications of spoken language. Furthermore, as attention directed at language decreases, neurobiological observations suggest an associated diminution in synaptic plasticity that will ultimately inhibit future progress in language acquisition. In this model, the result can be a vicious circle in which poor language skills cause parental stress and disappointment, with resultant negative, multidimensional influences on the development of spoken language skills in a child with a CI.
This model demonstrates the clinical importance of promoting parental support and of intervening when a communicatively inactive home environment and parental stress are detected. Parents of children with hearing loss love their children and, though they may seek to nurture them in different ways, it is essential that they be encouraged to provide the same language-based affection given to children with normal hearing. Additionally, this model provides a concrete example of a hypothesized environmental impact on cortical function and plasticity. We envision that multiple epigenetic changes, such as one regulating attention to language based on affective social interactions, combine to shape the development of the higher-order cognitive functions of spoken language after surgical intervention in deafness.
Summary
A convergence of the biological, cognitive, and communication sciences potentially unifies our approach to the complexities of developmental learning. Within a multidimensional, epigenetic framework, this report addresses childhood acquisition of spoken language after cochlear implantation—a process that represents an interplay between general learning mechanisms, auditory perception, and ongoing environmental experience with the statistical regularities of auditory input. From such an interplay, a child gains operational insight into the meaning and communicative intent conveyed by the sounds of speech of others.
The CDaCI study captures the variance in naturally occurring circumstances that affect language learning, reflected in inhomogeneities in the baseline biological factors and environments of its participants. In such variability, however, lie opportunities to identify dependent variables of clinical importance in addressing how children challenged by SNHL can learn to receive and produce more adequate speech and language. CDaCI data indicate that a range of factors associate with the pattern of acquisition of spoken language skills after cochlear implantation. Earlier exposure to sound via the CI was associated with a faster rate of spoken language growth, and phonological, semantic, grammatical, and pragmatic development differed with age of implantation. Such results support models of language learning that predict that with earlier access to acoustic–phonemic inputs, growth rates in spoken language can approach those of normal-hearing children, whereas delayed access is associated with slower growth rates, particularly within the language domains of syntax and pragmatics. Multivariable analyses suggest that language learning involves complex interactions in which the impact of biological and experiential modifying factors varies with the age at which perceptual capabilities are introduced via cochlear implantation. A wealth of data indicates that neurodevelopmental phenomena related to language learning are driven by time-sensitive, bidirectional events. If environmental cues and interaction are not provided in a timely manner, developmental potential narrows. Conversely, Bates et al. (2003) have observed that brain maturation affects experience, and experience returns the "favor" by altering brain structure. In the periods of exponential bursts that characterize early language learning, compelling data underscore the role of mutually beneficial, bidirectional interactions between brain and behavior.
Key advances will come from a fuller understanding of the specific neural events that drive language acquisition and of the genetic control that promotes learning from experience. For example, if we can make deductions about epigenetic controls of brain development through an understanding of how synaptogenesis and regression, synaptic refinement, and cortical connectivity are influenced by the transmission, reception, and production of speech, we can inform approaches to rehabilitation of the child with early-onset SNHL that promote the remarkable achievement represented by spoken language development in the typical child.