Cognitive processes seem to have two discrete stages, “input/output” and “integration”.
The I/O stage is well preserved and exists in almost all animals (sponges gotta sponge), and is the simple process of linking stimuli with “memory/behavior”.
In mammals this I/O occurs in the first three cortical layers (birbs as well, but their palliums have a different enough organization that there are too many caveats to include them here).
Specific to humans, the metabolic efficiency of these outer layers determines how we perceive memory, in both the speed and breadth of recall.
The deeper layers V-VI (some people use “neocortex” for the entire six-layer structure, but I prefer reserving the term for just these two layers) are where the integration stage occurs. Integration appears to work as a buffer that stacks multiple behavioral responses on top of each other, allowing behaviors to be combined rather than executed in the strictly sequential fashion of the older/outer cortical layers.
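To make the sequential-vs-stacked distinction concrete, here is a toy sketch (not a biological model; all names and the "+"-joining are invented for illustration) contrasting one-stimulus-one-response output with a buffer that combines several responses into a single composite behavior:

```python
# Toy illustration of the I/O stage vs. the integration stage.
# Invented names and data; the "stack" is just a list buffer.

def sequential_io(stimuli, response_map):
    """Outer-layer style: each stimulus maps to one response, in order."""
    return [response_map[s] for s in stimuli]

def integrated(stimuli, response_map):
    """Deep-layer style: buffer responses, then emit one combined behavior."""
    buffer = [response_map[s] for s in stimuli]
    return "+".join(buffer)  # crude stand-in for "stacking" behaviors

responses = {"light": "orient", "sound": "startle", "touch": "withdraw"}
print(sequential_io(["light", "sound"], responses))  # ['orient', 'startle']
print(integrated(["light", "sound"], responses))     # 'orient+startle'
```

The point of the sketch is only the shape of the difference: the I/O stage emits responses one at a time, while the integration buffer can fuse them before anything is expressed.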
Looking at conditions like dementias (particularly cortical dementias like FTD or Alzheimer’s), we usually see metabolic issues first arising in layers I-IV, well before the deeper V-VI layers.
The important point here is that “memory” issues are distinct from “cognitive” issues, but these can be confused because cognitive processes require “memory” to execute behavior (behavior is expressed memory).
Memory circuits are asynchronous. The level of asynchronicity can vary depending on other local activity, circuit insult, local lactate levels, distance from the initiating nuclei, etc.
One of the mechanisms nervous systems use to ensure consistent behavioral output is reciprocal circuits, connected to “time keeping circuits” in the issuing nuclei and local astrocytes.
In EEG, when we see regional synchrony occur (e.g. hippocampal “theta waves” are pretty well studied), what is being observed is metabolic clock synchronization between regions, which allows proper ordering of the stimuli/behavior responses.
The speed at which these clocks can sync largely determines the performance of the I/O stage of cognition.
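As a concrete (and very reduced) picture of “clocks syncing,” here is a minimal Kuramoto-style sketch of two coupled phase oscillators. Everything here is an assumption for illustration — the coupling constant, step size, and tolerance are arbitrary — but it shows the one property the paragraph leans on: stronger coupling means the clocks align faster.

```python
import math

# Toy two-oscillator synchronization sketch (Kuramoto-style coupling).
# Arbitrary parameters; only meant to show that coupling strength
# controls how quickly two "clocks" converge in phase.

def steps_to_sync(k, dt=0.01, tol=0.01, max_steps=100_000):
    """Steps until two coupled phase oscillators come within
    `tol` radians of each other, given coupling strength k."""
    a, b = 0.0, 2.0  # start 2 radians apart
    for step in range(max_steps):
        if abs(a - b) < tol:
            return step
        # Identical natural frequencies; only coupling pulls them together.
        a += dt * k * math.sin(b - a)
        b += dt * k * math.sin(a - b)
    return max_steps

print(steps_to_sync(k=1.0) > steps_to_sync(k=5.0))  # stronger coupling syncs faster
```

In this framing, the “performance of the I/O stage” corresponds to how small `steps_to_sync` is between the regions involved.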
The integration stage relies on this synchrony to determine the weight and order of each behavior being integrated into the stack. Just as with our top-level stack, the order and weight determine exactly how the “memory” is constructed.
Ultimately, memory is stored (and behavior is expressed) as a set of sequential metabolic responses to stimuli, whether it’s a raw “involuntary” response or a complex stream which is the integrated processing product of many functional modules.
(apologies if this is over-explaining)
We should be able to “piggyback” on this dorsal/ventral input-output/integration mechanism in brains to either build new “synchronization” behaviors or replace them altogether, using a combination of stimulation and sensory “manipulation” (AR/VR, audio, tactile feedback, etc.).
The key element is leveraging the stimulation to provide the metabolic push toward the behavior whose association we want to modify. Rather than a blanket “dump stimulation in this area for x amount of time”, we bind to the timescale and location of the stimulus-processing point, retraining the synchronization clock and more strongly weighting the “desired” behavior.
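The blanket-vs-bound contrast can be sketched as a scheduling difference. This is purely illustrative — the reference frequency, phase window, and pulse logic are invented, not a protocol — but it shows what “binding to the timescale” means: pulses land only inside a target phase window of a reference rhythm instead of continuously.

```python
import math

# Toy sketch: blanket stimulation vs. stimulation bound to the phase
# of a reference oscillation. All parameters are invented.

def blanket_schedule(n_steps):
    """Stimulate every step, regardless of the local clock."""
    return [True] * n_steps

def phase_locked_schedule(n_steps, dt=0.001, freq_hz=6.0,
                          window=(0.0, 0.5)):
    """Stimulate only when the reference (theta-band-ish) phase
    falls inside the target window, in radians."""
    pulses = []
    for step in range(n_steps):
        phase = (2 * math.pi * freq_hz * step * dt) % (2 * math.pi)
        pulses.append(window[0] <= phase < window[1])
    return pulses

n = 1000
print(sum(blanket_schedule(n)))       # 1000 pulses, clock be damned
print(sum(phase_locked_schedule(n)))  # far fewer, clustered at the target phase
```

A real closed-loop version would estimate the phase from a live signal rather than generating it, but the scheduling idea is the same.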
The challenge here is that we need to figure out how to simulate “ideal” behavior in order to create the timing and sequence pattern we want to encode. Can this be generalized in any way? Can we cheat and manipulate the encoding over multiple stimulus pathways, e.g. associate a particular behavior with a smell, sound, and visual cue?
Could we, for example, retrain someone with an “out of sequence” stack of functional regions, behavior by behavior, toward better output? Could we inject necessary missing stimuli in cases of degradation to produce more stable behavior in dementia? For individuals with missing or insulted functional modules, can we “program” that functionality into other modules by re-routing through different stimulus pathways?
Wondering if we can create a stable enough fUS rig to support such a thing?
Edit: Trying to work around the difficulty of figuring out exactly what peptides/proteins are associated with any particular “memory” or “behavior”. Because each can have multiple associations depending on the functional module processing it, that route seems really, really hard without some type of invasive monitoring. We can maybe test “macro” organizational states, but I can’t figure out how to get any type of real granularity with something like a blood test (at least not over reasonably short time scales, even with ridiculously sensitive equipment).