The model network we used assumes sharp frequency tuning, in which frequency channels do not interact with one another. Cortical neurons, however, have been found to exhibit a range of frequency tuning sharpness (Sen et al. 2001), and model performance may depend on tuning width. For these reasons, we explored the effects of frequency tuning curves on network performance for single-target reconstructions. We modeled the spread of information across frequency channels with a Gaussian-shaped weighting function, centered at the center frequency (CF) of each frequency channel:
$$ {w}_{i,j}=\exp \left(-\frac{{\left(C{F}_j-C{F}_i\right)}^2}{2{\sigma}_i^2}\right) $$
where \(i\) and \(j\) are the indices of frequency channels and \({\sigma}_i\) is the standard deviation. The spread of information is modeled by having the relay neurons centered at \(C{F}_i\) receive inputs from their neighboring frequency channels, centered at \(C{F}_j\), weighted by \({w}_{i,j}\). The values of \({\sigma}_i\) used in this simulation were determined by introducing the variable \(Q\), defined as the ratio of CF to the full-width at half-maximum (FWHM) of a tuning curve (Sen et al. 2001). Here, we formulate \(Q\) in terms of the Gaussian weighting function's FWHM, which can then be related to \({\sigma}_i\): \( Q=\frac{C{F}_i}{\mathrm{FWHM}}=\frac{C{F}_i}{2\sqrt{2\ln (2)}\,{\sigma}_i} \). We tested
\(Q\) values ranging from \(Q = 0.85\) (broad tuning) to \(Q = 23\) (sharp tuning). For reference, \(Q\) values from field L in the zebra finch have been reported to range from 0.4 to 7.8 (Sen et al. 2001). This is the only simulation in which frequency channels interact. Because of this cross-frequency interaction, we re-trained the reconstruction filter for each \(Q\), using the same training sentences previously described as the training stimulus and the corresponding spike trains for each \(Q\) as the training target.
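To make the weighting scheme concrete, the following sketch computes the Gaussian cross-frequency weight matrix for a given \(Q\). The channel center frequencies and channel count are hypothetical values chosen for illustration, not the ones used in the study:

```python
import numpy as np

# Hypothetical log-spaced channel center frequencies (Hz) for illustration.
CF = np.geomspace(250.0, 8000.0, num=32)

def sigma_from_Q(cf_i, Q):
    """Invert Q = CF_i / FWHM = CF_i / (2*sqrt(2*ln 2)*sigma_i) for sigma_i."""
    return cf_i / (2.0 * np.sqrt(2.0 * np.log(2.0)) * Q)

def gaussian_weights(CF, Q):
    """Weight matrix w[i, j] = exp(-(CF_j - CF_i)^2 / (2 * sigma_i^2))."""
    sigma = sigma_from_Q(CF, Q)          # one sigma per receiving channel i
    diff = CF[None, :] - CF[:, None]     # CF_j - CF_i for every (i, j) pair
    return np.exp(-diff**2 / (2.0 * sigma[:, None]**2))

w_broad = gaussian_weights(CF, Q=0.85)   # broad tuning: wide cross-channel spread
w_sharp = gaussian_weights(CF, Q=23.0)   # sharp tuning: nearly diagonal matrix
```

With sharper tuning (larger \(Q\)), the off-diagonal weights shrink and each relay neuron receives input almost exclusively from its own frequency channel, recovering the non-interacting network as a limiting case.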