Something I’ve been baking in my head for a while is how to do cognitive assessments that don’t carry subjective tester biases, don’t rely on test security (so they can be verified outside of their domain), and directly test functional units rather than requiring some level of post-processing.
My brain has been tinkering with the idea of measuring colliculi response through saccade measurements to peripheral visual stimuli (using eye-tracking cameras). We’d be looking for the rate of change in saccade return and general smoothness/tracking rate in response to new stimuli.
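To make that concrete, here’s roughly what I’d compute from a raw gaze trace (timestamps plus horizontal gaze angle, as numpy arrays). This is a minimal sketch; the function names and the 30 deg/s velocity threshold are placeholder assumptions on my part, not any real tracker’s API.

```python
import numpy as np

def saccade_latency_ms(t_ms, gaze_x_deg, stim_onset_ms, vel_thresh=30.0):
    """Latency from stimulus onset to the first sample whose gaze velocity
    (deg/s) exceeds a saccade threshold (~30 deg/s is a rough heuristic)."""
    vel = np.abs(np.diff(gaze_x_deg)) / (np.diff(t_ms) / 1000.0)
    mask = (t_ms[1:] >= stim_onset_ms) & (vel > vel_thresh)
    if not mask.any():
        return None  # no saccade detected after onset
    return t_ms[1:][mask][0] - stim_onset_ms

def pursuit_gain(eye_vel_deg_s, target_vel_deg_s):
    """Smooth-pursuit gain: eye velocity over target velocity.
    ~1.0 means clean tracking; low gain means catch-up saccades."""
    return np.mean(eye_vel_deg_s) / np.mean(target_vel_deg_s)
```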
I just saw this: https://imgur.com/gallery/wocYPP1 and holy crap is that a wonderful idea. Maybe even throw in some diagonally-lined objects for good measure.
Getting a differential between full-color tracking and this would tell us a lot about dorsal vs. ventral visual network performance (and general network balance). We should also be able to infer really interesting things about memory performance from this. I suspect this could provide an objective benchmark of “raw” memory performance before behavioral detractors/enhancers (e.g. trauma or memory palaces) get involved.
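As a sketch of what I mean by “differential”: run the same tracking task under both stimulus conditions and compare the per-condition metrics. The condition names and the metric dict below are hypothetical, and the magno/parvo interpretation in the docstring is the standard textbook story, not something this code establishes.

```python
def dorsal_ventral_differential(metrics_full_color, metrics_equiluminant):
    """Each argument: dict of metric name -> value (e.g. saccade latency,
    pursuit gain) from one stimulus condition. Returns the relative change
    per metric. If the usual magno/parvo story holds, a big degradation on
    the color-only stimuli suggests tracking was leaning on the
    luminance-driven (dorsal-leaning) pathway."""
    return {
        k: (metrics_equiluminant[k] - metrics_full_color[k]) / metrics_full_color[k]
        for k in metrics_full_color
    }
```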
Similarly, my brain is still churning on the idea of using brainstem auditory responses in much the same way, segregating dorsal and ventral processing (music and lyrics) and tweaking the loads to get a sense of brainstem performance.
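The stimulus side could look something like this: a dichotic mix with a synthesized melody in one ear, a pre-recorded speech/lyrics track in the other, and a single “load” knob scaling the melody’s note rate. All of it is placeholder, just the shape of the idea, not a validated brainstem-response paradigm.

```python
import numpy as np

def dichotic_stimulus(speech, fs=44100, load=1.0, base_tempo_hz=2.0):
    """Left ear: tone melody whose note rate scales with `load`.
    Right ear: the supplied speech track (mono float array at `fs`).
    Returns an (n, 2) stereo float array."""
    n = len(speech)
    t = np.arange(n) / fs
    # Simple 4-note loop; a faster note rate means a higher tracking load.
    notes = np.array([440.0, 494.0, 554.0, 587.0])
    idx = (t * base_tempo_hz * load).astype(int) % len(notes)
    melody = 0.3 * np.sin(2 * np.pi * notes[idx] * t)
    return np.stack([melody, speech], axis=1)
```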
Thinking about this more, I should probably buy one of those Apple Vision Pro headsets. I need to find out how much access they allow to the actual hardware: can we access the eye-tracking cameras directly, or are they behind an abstraction layer?
The ultimate goal of this conceit is that you’d pop on the headset, it would flash some images and audio for a couple of minutes (hopefully without triggering epileptiform activity), and, once mature enough, it would produce a somewhat detailed cognitive description of the individual. No lengthy test batteries or anything, just five minutes. With that level of convenience, we could expand this type of assessment into standard practice and build histories that could pre-emptively catch issues.
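For a sense of scale, the whole five-minute battery could just be a declarative schedule a headset app steps through. The block names and durations below are made up; the point is that the protocol is plain data.

```python
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    duration_s: int

# Hypothetical five-minute battery; every entry here is a placeholder.
PROTOCOL = [
    Block("calibration", 30),
    Block("full_color_saccade", 60),
    Block("equiluminant_saccade", 60),
    Block("pursuit_tracking", 60),
    Block("dichotic_audio", 60),
    Block("rest_baseline", 30),
]

assert sum(b.duration_s for b in PROTOCOL) == 300  # five minutes total
```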
Since it doesn’t appear that imaging will be available in this way any time soon, this might be the next best thing. Some really interesting catches would be things like detecting strokes or concussions/hemorrhages.
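The stroke-catching version might be as simple as watching for lateral asymmetry, e.g. comparing saccade latencies into the left vs. right hemifield across sessions. The 1.5x cutoff below is an arbitrary placeholder, and this would be a “go see a doctor” flag, not a diagnosis.

```python
import numpy as np

def hemifield_asymmetry(latencies_left_ms, latencies_right_ms, flag_ratio=1.5):
    """Ratio of median saccade latency between hemifields.
    Returns (ratio, flagged); flag_ratio=1.5 is a made-up cutoff."""
    left_med = np.median(latencies_left_ms)
    right_med = np.median(latencies_right_ms)
    ratio = max(left_med, right_med) / min(left_med, right_med)
    return ratio, ratio > flag_ratio
```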
Dammit, wish there wasn’t so much already on my plate; the food’s starting to rot.
edit: Should note that despite the image’s title being about color-blind accessibility, this would make a horrible color-blindness mode.