One of the strangest artifacts of perception, to me, is how our brains transform two-dimensional input into a three-dimensional “feeling” for space. My past assumption was that brains use pairs of place and space maps to generate coordinates in 3D space, like standard Euclidean spatial representations. This seems an intuitive solution, especially since it conforms easily with a two-dimensional, holographic “reality”.
There are a couple of significant issues with this approach, however. The first is that we simply don’t see any evidence of these overlapping fields anywhere, even in animal models (zebrafish) where we can map a good portion of activity simultaneously. While imaging at this scale is still relatively young, synchronous data against a noisy background tends to pop out pretty quickly. I’ve also yet to see any work showing evidence of brains simultaneously accessing multiple maps of the same type, regardless of the maps’ content (context shifts may initiate new maps, but those are accessed independently). The lack of physiological evidence for a Euclidean representation seems pretty odd.
Shifting up the scale a bit, we also don’t see much difference in activation when viewing optical illusions. Brains don’t generally seem to process two-dimensional plane manipulations any differently than “three-dimensional” representations. It’s actually kind of weird: once we accept that our visual stream is fixed to a single two-dimensional plane, and that we have no mechanism to observe truly three-dimensional stimuli, it becomes pretty obvious that we’ve never “seen” anything three-dimensional. We can manipulate this pretty obviously with forced-perspective effects, but the advent of TV and computer screens probably demonstrates it even more clearly. Whether it’s the spinning ballerina or the rotating rings, our visual system is clearly converting two-dimensional stimuli into a 3D representation in some way that is imperceptible to the individual.
But how?
If we consider all of these effects together, a possible explanation is that our hippocampus generates a prediction/expectation map which modifies our perception in a pretty fundamental way. It’s also possible that this map is generated at lower levels, but the lack of differential processing for illusions seems to argue against that. I propose that the hippocampal CA2 region functions primarily as an integration center for predictive/expectation functions in brains.
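As a toy illustration only (nothing here is a claim about actual hippocampal circuitry; the vectors, learning rate, and “scene” are all made up), the prediction/expectation idea can be sketched as a simple error-minimization loop in the style of predictive coding: perception settles on whatever top-down prediction best cancels the incoming error signal.

```python
import numpy as np

# Toy predictive-coding loop. The "map" is just a learned prediction
# vector that is nudged toward the input until prediction error is small.
rng = np.random.default_rng(0)

true_scene = np.array([1.0, 0.5, -0.3])  # hypothetical hidden "cause" of the input
prediction = np.zeros(3)                 # top-down expectation, initially empty
lr = 0.2                                 # update rate for error correction

for step in range(50):
    sensory = true_scene + rng.normal(0, 0.05, 3)  # noisy bottom-up input
    error = sensory - prediction                   # prediction error signal
    prediction += lr * error                       # adjust expectation to reduce error

# After enough iterations the prediction tracks the hidden cause,
# and the residual error carries only the noise.
print(np.round(prediction, 2))
```

The point of the sketch is just that “perception” here is the converged prediction, not the raw input, which is the flavor of mechanism the illusion evidence above seems to point at.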
The effect of this map is pretty profound in generating our longitudinal sense of time (this must be recalled memory, as opposed to short-term/working memory, which uses local clocks). All temporal memory recollection is essentially a context evaluation: brains compare points in stored data to generate a prediction of chronological time. When constructing episodic memories we use this comparison method pretty freely, referencing other “fixed” pieces of data to set “time”.
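The context-comparison idea can be sketched as a toy ranking (the “contexts”, their feature vectors, and the cosine-similarity measure are all invented for illustration, not a model of real recall): memories whose stored context overlaps more with the current context get judged as closer in time.

```python
import numpy as np

def context_similarity(a, b):
    # Cosine similarity between two made-up context feature vectors.
    a, b = np.asarray(a, float), np.asarray(b, float)
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

current = [0.9, 0.1, 0.2]  # hypothetical "now" context
memories = {
    "breakfast": [0.8, 0.2, 0.3],      # context close to now -> feels recent
    "last vacation": [0.1, 0.9, 0.7],  # dissimilar context -> feels distant
}

# Order memories by overlap with the current context: a crude stand-in
# for "setting time" by comparison rather than by reading a clock.
ranked = sorted(memories,
                key=lambda m: context_similarity(current, memories[m]),
                reverse=True)
print(ranked)  # most "recent-feeling" memory first
```

Nothing about the numbers matters; the sketch only shows how a pure comparison operation can yield an ordering that behaves like chronology without any stored timestamps.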
We also use this prediction data to generate and guide social interactions and perception, as social behavior is entirely an engrammatic prediction/expectation response to external behavior. Essentially, we behave a certain way because we expect certain types of responses.
This makes CA2’s role similarly global to that of the other hippocampal CA fields. The concept is agnostic to the type of stimuli: it performs a consistent mechanical transform on all data.
Research follow-up points: Is there physiological/imaging data available for individuals estimating distance, particularly focused on the hippocampus or connected association cortices like the parahippocampal or perirhinal cortex? Is there similar work for individuals who have very detailed recollection? Are there examples in our autism models which explore temporal or social distortions in multiple directions, which we could use to get a sense of the range of function here? What can we pull from Alzheimer’s or other hippocampal dementias to examine function?
Edit: What if we viewed hallucinations in general as prediction/expectation effects? Whoa, might be on to something here.
Huh, this model seems like it could cleanly reconcile predictive-coding models of brain function with other frameworks?