One of the most confounding and frustrating things in brain research has to be watching a hypothesis that looks very promising in animal models fail to translate to humans (or even to other animals). Why is there such a consistent disconnect between promising research and real-world application?
To survive challenges over replicability, teams rely on absurdly specific study construction paired with increasingly bespoke or esoteric technology requirements. Reading the study setup in many high-impact neuroscience articles, it feels like the difficulty of replication is a feature at times. Ultimately, the overwhelming majority of this work fails to provide any actual clarity on the mechanics of brains that can be demonstrated at any level.
An example of our addiction to overspecification is the reliance on cloned animal models. We understand intuitively that the brains of animals vary in natural populations, sometimes dramatically. Yet we set up tests that rely on an extremely narrow construction within a species. Many studies assume that the study construction will be generalizable, and of course it rarely is.
I think neuroscience locked itself into this conceit because of its misconceptions about neurons and neuron function. By assuming neurons are the base quanta of computation in brains, we run into the problem of wildly heterogeneous results from similar neuron configurations. Two species of mice can have extremely similar structural configurations when looking at neurons alone, but produce wildly different results in practice.
Because neuroscience is built tree style (that is, all understanding depends on prior knowledge) instead of map style (associations are made by reconciling current data), it is especially prone to carrying poor assumptions forward without critical examination of faults. Critically examining a major fault would undermine the integrity of the entire tree, which makes it a catastrophic approach to knowledge for most researchers, since it would require rebuilding the tree from scratch.
We lose a lot in translation from research because of this reticence about challenging the integrity of the tree at its more fundamental levels.
I think a good solution is moving toward more naturalistic everything, not just environments. Not just "wild type," but a model population that reflects as much diversity as possible. This has the side effect of recognizing that humans are just as diverse in construction, and of enforcing generalizability of results at the model level instead of relying on post hoc reconciliation efforts like reviews.