Data

In the past two weeks, I’ve focused on the data required to test the current architecture / system - its availability, snapshots, frequency, and the transformations expected as it moves through the system’s subcomponents. I’ve also continued to fix bugs and bridge components in the audio and visual pipelines.


Data

Availability

Free online videos (like the sample below) help define the range of possible stimuli. However, a video of activity outside my window adds realism - stretches of inactivity, jumps in attention, glare, bad angles, odd / unexplained sounds, etc. This data is important for mirroring the fidelity of human senses. Here’s a sample frame from each video -

Courtesy - Eduardo Lewis https://www.pexels.com/video/video-aereo-15966208/

Taken from my phone camera at 6 PM

Stored vs. streamed

Ideally, a live stream would be wired in for hyperrealistic, grounded data, but stored footage is easier to manage while tuning the components.
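A minimal sketch of that trade-off, assuming OpenCV as the capture layer (the stream URL and file name are placeholders):

```python
import cv2

def open_source(use_stream: bool) -> cv2.VideoCapture:
    """Sketch only: switch between a live stream and stored footage."""
    if use_stream:
        # Live stream - hyperrealistic and grounded, but harder to replay
        return cv2.VideoCapture("rtsp://example.local/window-feed")  # placeholder URL
    # Stored footage - deterministic and replayable while tuning components
    return cv2.VideoCapture("window_6pm.mp4")  # placeholder file

cap = open_source(use_stream=False)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... feed `frame` into the visual pipeline here ...
cap.release()
```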

Why is this data helpful?

This data defines the raw stimulus the brain uses to make sense of the world. Without testing these transforms (despite the limited explainability of spike-train data past the cortex), the noise / entropy of the system grows before the interneurons can even process the stimulus. This step is therefore crucial for improving the signal-to-noise ratio.

Expected transformation(s) of this data when it passes through the visual pipeline

Using the earlier video taken from my phone as a reference, I’ve split it into frames to approximately match the “frame rate” of rods.
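A minimal sketch of that extraction, assuming OpenCV and ~10 Hz as a rough stand-in for rod temporal resolution (file names and the target rate are placeholders, not values from the pipeline):

```python
import os

import cv2

TARGET_HZ = 10.0  # assumed stand-in for rod temporal resolution
os.makedirs("frames", exist_ok=True)

cap = cv2.VideoCapture("window_6pm.mp4")  # placeholder file
native_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
step = max(1, round(native_fps / TARGET_HZ))  # keep every Nth frame

index = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % step == 0:
        cv2.imwrite(f"frames/frame_{saved:04d}.png", frame)
        saved += 1
    index += 1
cap.release()
```

Given a subset of sample frames, here is the expected behavior after each component -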

Frame 63

  • After receive - For a still image, photoisomerizations are stable for rods and cones; the fovea shows stronger color opponency while the outer edges are integrated over time
  • After combine - Contrasting elements, like the sunset-lit facade of the building or the bright cars and trees, stand out, gradually enhancing the stimulus (see the contrast sketch after this list)
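As an illustration of the combine stage, a difference-of-Gaussians (center minus surround) filter is a classic stand-in for how retinal circuitry emphasizes contrast. This is only a sketch of the idea - the sigmas and file names are assumptions, not the CMU’s actual implementation:

```python
import cv2
import numpy as np

# Sketch only: center-surround (difference-of-Gaussians) contrast map
frame = cv2.imread("frames/frame_0063.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

center = cv2.GaussianBlur(frame, (0, 0), sigmaX=1.0)    # narrow "center"
surround = cv2.GaussianBlur(frame, (0, 0), sigmaX=4.0)  # wide "surround"
dog = center - surround  # positive where a region is brighter than its surround

# High |dog| marks contrasting elements like the sunlit facade or bright cars
contrast_map = np.abs(dog) / (np.abs(dog).max() + 1e-8)
cv2.imwrite("frame_0063_contrast.png", (contrast_map * 255).astype(np.uint8))
```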

Frame 68

  • After transform - The visual CMU notices the red car in the left corner through changing contrast and starburst amacrine cell activity, immediately raising spike frequency (a frame-differencing sketch follows this list). In subsequent frames, as the car moves along the paved road, spike frequency remains elevated throughout that strip of the visual field
  • After transport - Ganglion cells package spike trains and transport them through the optic nerve
  • After the cortex - Using population coding, the cortex constructs shapes and objects from the stimulus. While it may not initially understand the significance of those objects, the spike train signature / “memory” is stored for future association and creativity by the inner brain. This is akin to teaching someone the labels for objects they’ve seen before
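To make the transform stage’s motion sensitivity concrete, the frame-differencing sketch below shows how a moving object like the red car would raise activity along its path. The real mechanism involves starburst-amacrine-style circuitry, so this is only a crude proxy; the frame names, gain, and baseline rate are assumptions:

```python
import cv2
import numpy as np

# Sketch only: temporal contrast via frame differencing
prev = cv2.imread("frames/frame_0067.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
curr = cv2.imread("frames/frame_0068.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

temporal_contrast = np.abs(curr - prev)  # large where the car has moved

BASELINE_HZ, GAIN = 5.0, 0.5  # placeholder firing parameters
spike_rate = BASELINE_HZ + GAIN * temporal_contrast  # per-pixel "spike frequency"

# The strip the car traverses shows elevated rates across successive frames
peak = np.unravel_index(spike_rate.argmax(), spike_rate.shape)
print(f"peak rate {spike_rate.max():.1f} Hz at pixel {peak}")
```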

Algorithms / Development

CMU Development

Minor tweaks across the CMU and cortex to streamline cell methods into distinct phases - cell creation, organization, and functioning. Please find the changes in the commits (linked below for reference).

Development Activity - https://github.com/akhil-reddy/beads/graphs/commit-activity
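To make the refactor concrete, here is a hypothetical sketch of the creation / organization / functioning split. The class and method names are invented for illustration and do not mirror the repository’s actual API:

```python
class Cell:
    """Hypothetical sketch - names are not taken from the beads repository."""

    @classmethod
    def create(cls, cell_type: str) -> "Cell":
        # Creation: construct the cell and initialize its state
        cell = cls()
        cell.cell_type = cell_type
        cell.neighbors = []
        cell.last_output = 0.0
        return cell

    def organize(self, neighbors) -> None:
        # Organization: wire the cell into its local circuit
        self.neighbors = list(neighbors)

    def function(self, stimulus: float) -> float:
        # Functioning: respond to a stimulus during a simulation step
        drive = stimulus + 0.1 * sum(n.last_output for n in self.neighbors)
        self.last_output = drive
        return drive

rod = Cell.create("rod")
bipolar = Cell.create("bipolar")
bipolar.organize([rod])
print(bipolar.function(stimulus=0.8))
```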

Please note that some code (class templates, function comments, etc.) is AI-generated so that I can spend more of my productive time on thinking and designing. However, I cross-verify each block of generated code against its corresponding design choice before moving ahead.


Next Steps

Deployment

  1. Code optimization for channel processing
  2. Post processing in the visual cortex
  3. Overlaying audio clips onto the cochlea, including optimization for wave segment processing
  4. Post processing in the auditory cortex
  5. Parallelization / streaming of cellular events via Flink or equivalent (see the sketch after this list)
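For item 5, a minimal PyFlink sketch of what streaming cellular events might look like is below. The event tuples and the map step are placeholders; a real job would read from a live source rather than a static collection:

```python
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

# Placeholder events: (cell type, activation)
events = [("rod", 0.8), ("cone", 0.3), ("amacrine", 0.9)]
stream = env.from_collection(events)

# Placeholder transform: flag strongly activated cells for downstream stages
stream.map(lambda e: (e[0], e[1], e[1] > 0.5)).print()

env.execute("cellular-event-streaming-sketch")
```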

Building the Environmental Response System (ERS)

  1. Building the ERUs
  2. Neurotransmitters - Fed by vision’s bipolar and amacrine cells, for example, to act on contrasting and/or temporal stimuli (a representational sketch follows this list)
  3. Focus - Building focus and its supporting mechanisms (of which acetylcholine is one)
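As a sketch of how a neurotransmitter signal from item 2 might be represented, the dataclass below models a release event fed by bipolar / amacrine activity. All names and fields are invented for illustration, not taken from the repository:

```python
from dataclasses import dataclass

@dataclass
class NeurotransmitterEvent:
    """Hypothetical ERS event; fields are illustrative assumptions."""
    transmitter: str   # e.g. "acetylcholine" for focus-related modulation
    source_cell: str   # e.g. "bipolar" or "amacrine"
    trigger: str       # "contrast" (bipolar) or "temporal" (amacrine)
    magnitude: float   # release strength, arbitrary units

event = NeurotransmitterEvent("acetylcholine", "amacrine", "temporal", 0.7)
print(event)
```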

Created Oct 31, 2025