If we were going to solve some of the problems that the Neuralink team is trying to address, what pathways show the most promise?
First, from a construct perspective, the device itself needs to philosophically be viewed as an astrocyte/oligo package. In conditions like Parkinson’s (side rant: I really wish medicine would get over naming things after people and adopt a more descriptive naming convention. I can decode “Amyotrophic Lateral Sclerosis” into something somewhat tangible, but what the fuck is “Parkinson’s” or “Alzheimer’s”?), for example, the issue isn’t that the signal doesn’t exist, it’s that the metabolic modulation of the signal is lost. We can induce more neurons all day, or, even worse, induce glial conversion to neurons, but “grasping” is more than just a static signal; it’s a bidirectional signal that must be modulated in real time. Every single node of Ranvier (ugh) and Schwann cell (UGH) has astrocytes which modulate the metabolic feedback for these actions at the local level.
Imagine a “global” instruction is sent down from the brainstem with a target; then, along the chain, astrocytes modify the signal itself to match that target goal. This allows localized adjustments to the signal, engaging the rest of the nervous system only when an error state goes outside the bounds of what local adjustment can handle. It’s a metabolically efficient way to increase “slop” correction.
So our target isn’t conducting the signal itself, it’s adjusting the signal to match brainstem targets. As astrocytes degrade in metabolic efficiency, their “slop” response range also shrinks (since that’s what they’re doing to the signal: signalling oligos to juice/decay the chain to match the target transmitted over the calcium side chain). Even “disorders of perception” can be viewed in a similar way: a signalling chain that allows either too much or too little “slop” relative to brainstem targets.
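A toy sketch of that loop, just to make the argument concrete (the node count, correction ranges, and noise figures below are assumptions I’m inventing for illustration, not physiology):

```python
import random

# Toy model of "slop" correction: each astrocyte-like node nudges the passing
# signal toward a brainstem-set target, but only within its own correction
# range. If the residual at the end of the chain still exceeds that range,
# the loop has to escalate back upstream. Shrinking the range stands in for
# metabolically degraded astrocytes. All numbers are illustrative assumptions.

def local_adjust(signal, target, max_step):
    """Nudge the signal toward the target, clipped to this node's range."""
    error = target - signal
    return signal + max(-max_step, min(max_step, error))

def run_chain(signal, target=1.0, n_nodes=10, slop_range=0.15, noise=0.02):
    """Propagate a signal through a chain of locally correcting nodes."""
    for _ in range(n_nodes):
        signal = local_adjust(signal, target, slop_range)
        signal += random.gauss(0, noise)   # per-segment metabolic noise
    residual = abs(target - signal)
    escalate = residual > slop_range       # out of local bounds -> engage upstream
    return residual, escalate

if __name__ == "__main__":
    random.seed(0)
    for slop in (0.15, 0.05, 0.01):        # healthy -> degraded astrocytes
        misses = sum(run_chain(0.4, slop_range=slop)[1] for _ in range(1000))
        print(f"slop_range={slop:.2f} -> escalations: {misses}/1000")
```

With a wide correction range the chain absorbs noise locally; as the range shrinks, nearly every pass has to escalate upstream, which looks a lot more like a disorder of modulation than a missing signal.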
A big part of the reason paralysis hasn’t been solved (thus far), IMO, is that we are trying to match two completely different sets of metabolic instructions together and wondering why they don’t work. Astrocyte chains “learn” from each other, and modulate their modulations based on the modulations of other astrocytes. Yay bidirectionality. When we try to jump from local group A18492 to A18994, for example, we lose all of the metabolic adjustments between those two groups which previously modulated that signal.
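A minimal way to see why the splice fails, assuming (purely for illustration) that each local group applies its own learned transform to the signal:

```python
# Sketch of why jumping from group A straight to group B loses information:
# the end-to-end behaviour is the composition of every learned local transform,
# so splicing the first group to the last drops everything in between.
# The transforms are arbitrary toy functions, not measured biology.

chain = [
    lambda s: s * 1.10,          # group A18492: slight gain
    lambda s: s - 0.20,          # intermediate group: learned offset
    lambda s: s * 0.85 + 0.05,   # intermediate group: learned damping
    lambda s: s + 0.10,          # group A18994: final trim
]

def through(signal, groups):
    for g in groups:
        signal = g(signal)
    return signal

signal_in = 1.0
full = through(signal_in, chain)                       # what B actually expects
spliced = through(signal_in, [chain[0], chain[-1]])    # A jumped directly to B

print(f"full chain:  {full:.3f}")
print(f"A->B splice: {spliced:.3f}  (mismatch: {abs(full - spliced):.3f})")
```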
A Neuralink device would be useful here in examining the signalling environment at point A and modifying the signal to match what point B expects (and vice versa). Unfortunately, they are still so focused on the bleep bloops that they have no tools to accurately measure calcium signalling, let alone the more complex peptide/protein interactions required to pull this off. They could probably pull it off with really discrete calcium measurements (maybe), but this is hard because each signal may be bespoke to every astrocyte (or maybe cluster) – e.g. a peptide which “decreases” the signal in a specific way on one part of the chain may “increase” it in another, depending on how the body developed and “learned”.
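If the device’s job is exactly that translation, one crude way to picture it is below: record what arrives at point A, record what point B’s side turned out to expect during a calibration window, and fit a per-site mapping between the two. Everything here (the linear form, the calibration pairs, the polarity flip between sites) is a made-up stand-in for the calcium/peptide machinery, not how any real device works.

```python
# Hypothetical "bridge": during calibration it observes pairs of
# (signal measured at A, signal B's side expected), fits a per-site mapping,
# then rewrites live signals through that mapping. Site 2's polarity is
# flipped relative to site 1 -- same input, opposite correction -- echoing
# the "decreases here, increases there" point. All data are invented.

def fit_linear(xs, ys):
    """Ordinary least-squares fit of y ~ a*x + b for one site."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

calibration = {
    "site_1": ([0.2, 0.5, 0.8], [0.30, 0.60, 0.90]),   # B wants a small boost
    "site_2": ([0.2, 0.5, 0.8], [0.85, 0.55, 0.25]),   # B wants the inverse
}

bridge = {site: fit_linear(xs, ys) for site, (xs, ys) in calibration.items()}

def translate(site, signal_at_a):
    """Rewrite a live signal measured at A into what B expects at this site."""
    a, b = bridge[site]
    return a * signal_at_a + b

for site in bridge:
    print(site, [round(translate(site, s), 2) for s in (0.3, 0.6)])
```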
There has been some success over short time spans by rebridging the gaps before glial apoptosis occurs, and some success where induced radial/pluripotent cells sometimes allow glia to “relearn” the metabolic signalling chain over a gap, but simply bridging a closed gap has never worked. Nervous systems aren’t frog legs, bros.
The second challenge is way harder: restoring sight to the blind. First, the visual cortex is DOWNSTREAM of vision. The phosphenes being injected into the stream at that point are contextual, not actual vision. And honestly, I’d be “shocked” (christ I’m clever) if any of the macaques tolerated that shit for long.
Saccades are generated before anything even hits the visual cortex, and at the very least we’d be looking at LGN or collicular targets if we were going to truly inject anything close to actual vision with accurately targeted saccades. Just not going to get there fucking around in cortical areas.
So humans already process color and orientation without the use of the visual cortex. In fact, by the time the signal does reach V1 for further processing, it has already been “mapped” and “decisions” have been made about the data. Stream processing (particularly the dorsal stream) starts before V1 activation, which comes some 50-60 ms later.
So for vision itself, we have upstream targets we can inject data directly into, and frankly, the best target might be the optic chiasm? It would look weird as hell until density got high enough, but the further down the chain we follow the nerves, the more “condensed” the signal gets, and the more likely we are to get away with far fewer probes to produce a “usable” image. This would let the brain use already existing processing areas, similar to the mechanics of “blindsight”. We would likely also need to intercept and modify the tectal pathway, which might itself be a pretty decent target.
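Rough numbers behind the “condensed signal” point (the population counts below are ballpark figures, and the channel count is just an assumed order of magnitude for a present-day implant, not any device’s spec):

```python
# Back-of-envelope feel for why injecting further along the nerve needs fewer
# probes. Population counts are rough textbook ballparks; the implant channel
# count is an assumed order of magnitude, not a spec for any real device.

stages = {
    "photoreceptors (per retina)": 100_000_000,
    "optic nerve / chiasm axons (per eye)": 1_200_000,
    "LGN neurons (per side)": 1_000_000,
    "V1 neurons (per hemisphere)": 140_000_000,
}

implant_channels = 1_000   # assumed channel count for a current-generation device

for stage, population in stages.items():
    coverage = implant_channels / population
    print(f"{stage:40s} ~{population:>12,}   coverage per channel: {coverage:.5%}")
```

Same channel count, but at the chiasm/LGN bottleneck each channel spans a roughly hundred-fold larger slice of the pathway than it would at the retina or in V1, which is the whole “condensed signal” argument in two lines of arithmetic.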
The question is, even if we inject a really high quality visual signal, can we loop that signal back well enough for the context phase of the vision system?