Auditory Nerves
In the past two weeks, I’ve focused on building the remaining audio CMU components after the IHC. This includes the spiral ganglion cell and neurotransmitter release in the auditory nerve.
Development Activity - https://github.com/akhil-reddy/beads/graphs/commit-activity
Building the Auditory Nerve components - https://github.com/akhil-reddy/beads/blob/main/beads/core/cmu/transportation/audio.py
Please note that some code (class templates, function comments, etc.) is AI-generated so that I can spend more of my productive time thinking and designing. However, I cross-verify each block of generated code against its corresponding design choice before moving ahead.
Algorithms / Development
Push implementation and transportation
Auditory Nerve Fiber implementation v1
- Define biologically accurate constants for the auditory nerve fibers
- If the AN fiber is in the refractory (cool-down) period, do nothing
- Else,
    - The change in potential, dV, is defined according to the leaky integrate-and-fire model
    - Add a stochastic component to dV (optional)
    - Add dV to the instantaneous membrane potential
    - If the spike condition is met, generate a spike!
- Generate spike trains for all vesicle releases
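The steps above can be sketched as a compact leaky integrate-and-fire loop. This is a minimal illustration, not the actual implementation in `audio.py`; the constants and class/function names (`AuditoryNerveFiber`, `spike_train`) are hypothetical, chosen only to show the refractory check, the dV update, the optional noise term, and the spike condition in order:

```python
import random

# Hypothetical constants for illustration only; the real values live in
# beads/core/cmu/transportation/audio.py
V_REST = -65.0        # resting membrane potential (mV)
V_THRESHOLD = -50.0   # spike threshold (mV)
V_RESET = -70.0       # post-spike reset potential (mV)
TAU_M = 1.0           # membrane time constant (ms)
R_M = 10.0            # membrane resistance (arbitrary units)
REFRACTORY_MS = 0.8   # ~800 microsecond refractory period
DT = 0.05             # simulation time step (ms)

class AuditoryNerveFiber:
    def __init__(self, noise_sigma=0.0):
        self.v = V_REST
        self.refractory_left = 0.0
        self.noise_sigma = noise_sigma

    def step(self, input_current):
        """Advance the fiber by one time step; return True if it spikes."""
        # If the fiber is in its refractory (cool-down) period, do nothing
        if self.refractory_left > 0.0:
            self.refractory_left -= DT
            return False
        # Leaky integrate-and-fire update: dV = (-(V - V_rest) + R*I) / tau * dt
        dv = (-(self.v - V_REST) + R_M * input_current) / TAU_M * DT
        # Optional stochastic component
        if self.noise_sigma > 0.0:
            dv += random.gauss(0.0, self.noise_sigma)
        self.v += dv
        # Spike condition: reset the membrane and enter the refractory period
        if self.v >= V_THRESHOLD:
            self.v = V_RESET
            self.refractory_left = REFRACTORY_MS
            return True
        return False

def spike_train(fiber, currents):
    """Generate a spike train from a sequence of vesicle-release currents."""
    return [fiber.step(i) for i in currents]
```

Driving a fiber with a sustained postsynaptic current produces regularly spaced spikes whose minimum interval is set by the refractory period, which is the property the volley principle (discussed below) works around at high frequencies.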
Next Steps
Building the Environmental Response System (ERS)
- Building the visual cortex
- Building the auditory cortex
- Neurotransmitters - Fed by vision’s bipolar and amacrine cells, for example, to act on contrast and/or temporal stimuli. Neurotransmitters (and their lifecycles) can be added later during ERU development
Deployment
- Overlaying video frames onto the retina, including code optimization for channel processing
- Overlaying audio clips onto the cochlea, including optimization for wave segment processing
- Parallelization / streaming of cellular events via Flink or equivalent
Phase Locking and Auditory Nerve Spikes
As the mammalian ear evolved over time, and especially with the introduction of artificial sounds (electronic music, media, city noise, etc.), the ability to discern subtle changes in pitch / frequency has become paramount. Phase locking is a key mechanism that preserves these subtleties until they reach the auditory cortex. It handles sounds at different frequencies as follows -
- Phase locking for frequencies up to 1 kHz - the IHC receptor potential follows the “bending” of its stereocilia as the basilar membrane moves. As the potential rises and falls, so does the Ca2+ current that drives vesicle departure in the IHC. Although vesicle departure from the ribbon synapse is stochastic, over time it synchronizes with the sound wave on the basilar membrane. Once the vesicles release their neurotransmitter (glutamate, specifically) into the synaptic cleft, a postsynaptic current is created and the auditory nerve fiber’s membrane potential crosses the threshold at the same phase as the basilar membrane waveform. The phase is therefore “locked” to the incoming waveform, and pitch subtleties are preserved
- Phase locking for frequencies up to 4-5 kHz - For these frequencies, everything up to vesicle release stays the same. However, since individual spiral ganglion cells cannot fire above ~1 kHz (their refractory / recovery period is around 800 microseconds), the ear uses the volley principle to ensure that spike generation isn't affected - a group of nerve fibers fires on consecutive cycles but at the same phase angle, so the auditory cortex perceives these frequencies in the same way as those under 1 kHz
- Tonotopy for frequencies > 5 kHz - As these frequencies are less subtle, the brain interprets them through tonotopy (which was discussed in an earlier blog post)
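The volley principle in the second bullet can be illustrated with a toy round-robin model. This is a deliberate simplification (real fiber recruitment is stochastic, not a strict rotation), and the function name `volley_spike_times` is hypothetical:

```python
def volley_spike_times(frequency_hz, n_fibers, n_cycles):
    """Toy volley principle: fibers take turns firing on consecutive
    cycles of the waveform, all locked to the same phase angle (phase 0)."""
    period = 1.0 / frequency_hz
    trains = [[] for _ in range(n_fibers)]
    for cycle in range(n_cycles):
        fiber = cycle % n_fibers              # round-robin assignment (simplified)
        trains[fiber].append(cycle * period)  # spike at phase 0 of this cycle
    return trains

# A 3 kHz tone exceeds a single fiber's ~1 kHz firing limit, but 4 fibers
# together cover every cycle: each fiber fires at only 3000 / 4 = 750 Hz,
# yet the pooled spike train marks every cycle of the 3 kHz waveform.
trains = volley_spike_times(3000.0, 4, 12)
pooled = sorted(t for train in trains for t in train)
```

The key invariant is that each individual fiber's inter-spike interval stays above its refractory limit, while the pooled population still encodes the stimulus period.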