
The auditory system has the task of selecting the information carried by variations in sound pressure and converting it into signals relevant for the brain. Each animal species has evolved to hear best in the frequency range most useful for survival in its own environment. Mammals have a wide hearing range; in humans it extends from about 20 Hz to 20 kHz. Communication between animals, as well as the discrimination of a sound's nature and provenance, require exquisite temporal analysis and a very sophisticated apparatus.

Detection, processing and analysis of sound by the auditory system occur in several stages: first, a mechanical stage in the outer and middle ear; second, a transduction stage in the inner ear, where mechanical movement is converted into a neural code; and finally, analysis stages in the brainstem, thalamus and cortex (Fig. 1.3). In the inner ear, the sound signal is decomposed into frequency components, and for each frequency the temporal changes in intensity as well as the absolute intensity are encoded.
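
To make this decomposition concrete, the following sketch (Python with NumPy and SciPy; the band edges, filter order and test signal are arbitrary illustrative choices, not values from the text) splits a signal into a few band-pass channels and extracts each channel's intensity envelope. It is only a loose analogy for the parallel frequency channels, not a model of cochlear mechanics.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000                                   # sampling rate (Hz), arbitrary
t = np.arange(0, 0.5, 1 / fs)
# toy "sound": two tones with different time-varying intensities
x = (np.sin(2 * np.pi * 500 * t) * np.exp(-t / 0.2)
     + 0.5 * np.sin(2 * np.pi * 2000 * t))

# a small bank of band-pass channels (crude stand-in for basilar-membrane tuning)
bands = [(300, 700), (1500, 2500), (4000, 6000)]
for lo, hi in bands:
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    channel = sosfiltfilt(sos, x)            # band-limited signal in this channel
    envelope = np.abs(hilbert(channel))      # temporal changes in intensity
    print(f"{lo}-{hi} Hz: mean intensity {envelope.mean():.3f}")
```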


Figure 1.3 Organization of the auditory system
Source: (Gelfand, 2004).

The outer and middle ear transform air vibrations into mechanical vibrations.

Air vibrations are first transformed into vibrations of the tympanic membrane, then of the three middle-ear bones, and finally transmitted to the fluid and to the basilar and tectorial membranes of the cochlea, situated in the inner ear (Fig. 1.4). In the cochlea, vibrations are spectrally decomposed by the basilar membrane, which has different resonant frequencies along its length. At each frequency, vibrations are non-linearly amplified by outer hair cells (Fettiplace and Hackney, 2006; Ashmore, 2008; Hudspeth, 2008; Peng et al., 2011). Thus, the auditory system performs a Fourier transform of the incoming sound and subsequently analyzes it in parallel frequency channels. The coordinated vibration of the basilar and tectorial membranes deflects the stereocilia of the inner hair cells (IHCs) and produces the opening and closing of mechanotransduction channels. This elicits an ionic current that changes the IHC membrane potential. The IHC voltage thus follows the pressure oscillations (Russell and Sellick, 1978; Dallos, 1985; Kössl and Russell, 1992) up to frequencies set by the inverse of the IHC membrane time constant (τ = RC, where R and C are the resistance and capacitance of the IHC, respectively), which can be as low as 0.1 ms (Johnson et al., 2011). IHC voltage responses exhibit both an AC and a DC component (Kössl and Russell, 1992). In humans, around 3500 IHCs are responsible for encoding the whole hearing frequency range in their respective membrane potentials.
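
To illustrate what the membrane time constant implies, the toy simulation below (Python/NumPy; all parameter values and the half-wave rectified input are illustrative assumptions, not measured IHC properties) treats the IHC voltage as a first-order RC low-pass filter, dV/dt = (-V + R·I(t))/τ, driven by an asymmetric transduction current. With τ = 0.1 ms the corner frequency 1/(2πτ) is about 1.6 kHz: at low stimulus frequencies the simulated voltage follows each cycle (large AC component), while at high frequencies the AC component is attenuated and mostly the DC component remains.

```python
import numpy as np

def ihc_voltage(f_stim, tau=1e-4, r=1.0, dt=1e-6, t_end=0.02):
    """Toy IHC: first-order RC filter driven by a half-wave rectified
    sinusoidal transduction current (illustrative parameters only)."""
    t = np.arange(0.0, t_end, dt)
    current = np.maximum(np.sin(2 * np.pi * f_stim * t), 0.0)  # asymmetric input
    v = np.zeros_like(t)
    for i in range(1, len(t)):
        # forward-Euler integration of dV/dt = (-V + R*I)/tau
        v[i] = v[i - 1] + dt * (-v[i - 1] + r * current[i]) / tau
    steady = v[len(v) // 2:]                 # discard the onset transient
    dc = steady.mean()
    ac = (steady.max() - steady.min()) / 2.0
    return ac, dc

for f in (200, 1000, 5000):
    ac, dc = ihc_voltage(f)
    print(f"{f:5d} Hz: AC {ac:.3f}, DC {dc:.3f}")
```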

Figure 1.4 Cochlea and organ of Corti in the inner ear

Top left and middle: the cochlea inside the inner ear is a coiled structure containing the basilar membrane, which spectrally decomposes sound.

Top right and bottom right: a cross-section of the cochlea reveals one row of inner hair cells (IHC) and three rows of outer hair cells (OHC). The OHCs work as mechanical amplifiers by varying their length with changes in their membrane potential (electromotility) and through active hair-bundle mechanics. Each IHC encodes the pressure time course at a certain sound frequency in its membrane potential. Each IHC synapses with 5-20 auditory nerve fibers (ANFs), which send sound information encoded in spike trains to the brain.

Source: (Purves et al., 2001).

Voltage-gated Ca2+ channels present at the ribbon synapses of IHCs open and close stochastically in response to changes in the IHC membrane potential. The resulting influx of Ca2+ into the synaptic terminal triggers the exocytosis of glutamate-filled vesicles.
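
One minimal way to picture this stochastic gating is a two-state (closed/open) Markov scheme whose open probability increases with depolarization. The sketch below (Python/NumPy) uses a generic Boltzmann-type activation curve and made-up rate constants purely for illustration; it is not a description of the actual Ca2+ channel kinetics at IHC ribbon synapses.

```python
import numpy as np

rng = np.random.default_rng(0)

def open_probability(v_mV):
    # Boltzmann-type activation curve; half-activation and slope are made up
    return 1.0 / (1.0 + np.exp(-(v_mV + 30.0) / 7.0))

def simulate_channel(v_mV, n_steps=200000, dt=1e-5, rate=2000.0):
    """Two-state (closed <-> open) Markov channel at a fixed membrane voltage.
    'rate' sets how quickly the channel equilibrates (illustrative value)."""
    p_open = open_probability(v_mV)
    k_open, k_close = rate * p_open, rate * (1.0 - p_open)
    state = 0                                 # 0 = closed, 1 = open
    open_steps = 0
    for _ in range(n_steps):
        if state == 0 and rng.random() < k_open * dt:
            state = 1
        elif state == 1 and rng.random() < k_close * dt:
            state = 0
        open_steps += state
    return open_steps / n_steps               # fraction of time spent open

for v in (-60, -40, -20):
    print(f"V = {v} mV: open fraction {simulate_channel(v):.2f}")
```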

Glutamate binds to AMPA receptors on the auditory nerve fibers (ANFs, around 5-20 per IHC) and initiates ANF spiking. At this stage of auditory processing, there is a radical change in the representation of sound: the continuous analog signal of the IHCs (their membrane potential) is transcoded into discrete spike times in the ANFs. In humans, about 30,000 ANFs encode the entire sound information into trains of action potentials; they form the bulk of the VIIIth cranial nerve. Because of the tonotopic arrangement of IHCs, and because each ANF connects to only one IHC, each ANF has a sound frequency at which it responds best. Thus, the brain can deduce the sound frequency from the set of ANFs that are activated (place code). In addition, since the IHC voltage follows the pressure oscillations, ANF firing rates also oscillate: ANF spiking is said to be phase locked to the stimulus. This locking can be read out by the brain (temporal code) and also carries sound frequency information (Evans, 1978).
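
As a rough illustration of phase locking, the sketch below (Python/NumPy; the firing rate, modulation depth and the use of vector strength as a metric are illustrative choices, not taken from the text) draws spikes from a Poisson process whose rate is modulated at the stimulus frequency and quantifies the locking with the vector strength, which is close to 1 when spikes cluster at one stimulus phase and close to 0 when spiking is unrelated to phase.

```python
import numpy as np

rng = np.random.default_rng(1)

def phase_locked_spikes(f_stim, r_mean=100.0, mod=0.8, t_end=5.0, dt=1e-5):
    """Inhomogeneous Poisson spikes whose rate follows the stimulus cycle
    (toy parameters; a crude stand-in for a phase-locked ANF)."""
    t = np.arange(0.0, t_end, dt)
    rate = r_mean * (1.0 + mod * np.sin(2.0 * np.pi * f_stim * t))
    return t[rng.random(t.size) < rate * dt]

def vector_strength(spike_times, f_stim):
    # degree of clustering of spikes at one stimulus phase (0..1)
    phases = 2.0 * np.pi * f_stim * spike_times
    return np.abs(np.mean(np.exp(1j * phases)))

spikes = phase_locked_spikes(f_stim=500.0)
print(f"{spikes.size} spikes, vector strength {vector_strength(spikes, 500.0):.2f}")
```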

The ANFs, also called spiral ganglion neurons (SGNs), have their somata in the spiral ganglion, situated inside the cochlea (Fig. 1.4, lower right). They are bipolar neurons, and their axons project to the cochlear nucleus, located in the lower brainstem.

Each ANF contacts multiple postsynaptic neurons. Four main types of cells are present in the cochlear nucleus (Cao and Oertel, 2010): small spherical bushy cells, globular bushy cells, T stellate cells and octopus cells. Each of them receives input from a different number of ANFs and enhances different features of the sound representation. For example, stellate cells represent stimulus intensity over a wider dynamic range than the ANFs, and globular bushy cells phase lock with greater precision.

Higher stages of the auditory system carry out further analysis and processing of the auditory information.

Biophysical specializations of synapses and neurons have evolved to meet the high demands on temporal precision in the auditory system. In the next section, we will focus on synaptic transmission from IHCs to ANFs.