Remodeled Education

The second most common question I get asked in private is: what is all this for? What’s the purpose?

And most people are surprised or skeptical when my response is something on the order of “I’m not really interested in this topic, but it needs to be done.”

To be a bit more specific: this field caught my attention when my daughter was diagnosed with “ASD”, and it left me with the question “well, what does that actually mean?” What does that mean for me? What does that mean for her? And in the surest sign ever that something is absolute bullshit, the more questions I asked the more convoluted the “meaning” got.

Questions should always serve the purpose of clarifying because they establish the property boundaries of a system. When they don’t, it’s because the questions are flawed or the construct is flawed.

While examining the evidence to determine whether the questions were flawed, I found that others were asking all the same questions, and, even worse, every set of questions seemed to muddy rather than clarify the boundaries of the system.

So the next step becomes: is the construct flawed? Sure enough, there are very few actually “syndromic” conditions, and for all the evidence offered for how brains work, none of it can be used to accurately predict or model behavior in even the simplest organism. And if the models don’t work, they don’t work, no matter how reliably we can replicate a particular theoretical construct.

What this is for, and why it is so important to me, has nothing to do with how nervous systems process information. That could remain a black box forever, as long as f(x) gives consistent results. It has everything to do with creating an educational environment for my kids that respects their unique processing preferences.

My desire is to create an environment that doesn’t merely teach them how to most effectively conform to “normal”, but which optimizes itself around their particular construction. And in order to do that, there must be a genuine basis which describes them as individual humans, rather than some macro theoretical construct. Currently, there are no such models.

While ostensibly this path is about understanding how nervous systems process information from a consistent, universally applicable framework, really it’s about figuring out how to optimize social programming to the “greatest benefit” of us and life as a whole (for now, pretty please believe this is a language issue rather than a utilitarian argument).

Based on the data I have, I’m really leaning toward the Wikipedia-style approach. We introduce a macro concept, whole and unencumbered by the need to understand the specifics. Then, with each subsequent session, we devolve these macro concepts until we ultimately get down to practical quanta. With these quanta, we introduce a similar construct, and guide reconstruction back up to a top level concept using the principles we learned while deconstructing the first one.
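To make the shape of that loop concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the concept names, the tree representation, and the deconstruct/reconstruct helpers are only meant to illustrate the walk down to quanta and back up, not to stand in for an actual curriculum.

```python
# Hypothetical sketch of the "Wikipedia-style" loop: start from a macro
# concept, devolve it step by step into quanta, then guide reconstruction
# back up to the top level concept.

# A concept is a name plus the sub-concepts it devolves into; empty dicts are quanta.
concept_tree = {
    "fractions": {
        "division": {"sharing equally": {}, "repeated subtraction": {}},
        "notation": {"numerator": {}, "denominator": {}},
    }
}

def deconstruct(tree, depth=0):
    """Walk down from the macro concept, recording each devolution step."""
    steps = []
    for name, subtree in tree.items():
        steps.append("  " * depth + name)
        steps.extend(deconstruct(subtree, depth + 1))
    return steps

def reconstruct(steps):
    """Walk back up: quanta first, ending at the macro concept."""
    return list(reversed(steps))

down = deconstruct(concept_tree)  # macro concept -> quanta
up = reconstruct(down)            # quanta -> macro concept
print("Deconstruction:", down)
print("Reconstruction:", up)
```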

We already see inklings of this being proposed in schools, particularly with the dreaded Common Core math. Traditional educational practice drills quanta, then steadily integrates more complex rulesets. And that system is strictly focused on the rulesets instead of the results, in an effort to maintain consistency in the top level constructs.
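As a toy illustration (my numbers, not taken from any curriculum): the traditional column algorithm grades the learner on reproducing a fixed carrying ruleset, while the decomposition style starts from the whole quantities, breaks them into quanta, and rebuilds the answer from those pieces.

47 + 38
= (40 + 7) + (30 + 8)
= (40 + 30) + (7 + 8)
= 70 + 15
= 85

Both routes land on 85; the difference is whether consistency is demanded of the ruleset or of the result.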

This is a really fragile way of learning, and it almost always induces two bad artifacts. First, gaps in rule acquisition are fatal to higher level understanding: even if an individual has the capacity to perform higher level operations, every single ruleset becomes a hard roadblock to further progress along the path. Often, these rulesets are knowingly flawed, but we accept the flaws because they induce consistency in the top level construct. And not everyone can accept a flawed ruleset, even if they can produce consistent top level constructs using other methods.

Second, it leads to siloing of information, because these rulesets are bespoke to a particular topic, a kind of “knowledge debt” (like technical debt) that we accumulate without correcting. A ruleset derived from the study of a specific topic will only produce consistent results for that topic. Learning other topics then requires learning completely different sets of rules, because the original ruleset was only designed to support data related to its own topic.

The ultimate drive here is to instill the understanding that if the universe is mechanically consistent, then we can use any underlying mechanics we want to describe that universe, as long as they provide consistent results. Any individual can apply whatever mechanics they are most optimized for, as long as those mechanics produce output that is consistent with and predictive of the universe.

f(x) can be a black box, and we can optimize those functions for the individual, rather than forcing particular algorithms that don’t work for all phenotypes. Examining the universe as it is, allowing step-by-step deconstruction so that each learner generates their own consistent mechanics, drilling down to quanta, and then optimizing those mechanics during reconstruction seems like it might be a great way to optimize for a broader array of phenotypes.
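Here is a minimal sketch of that idea in code, with hypothetical function names throughout: two very different mechanics for the same black-box f(x), checked only on whether their outputs agree, not on how they work internally.

```python
# Two different "mechanics" for the same black box f(x) = 9 * x.
# Which internals a learner uses doesn't matter; only consistent,
# predictive output does.

def f_repeated_addition(x):
    # One set of mechanics: build the product by adding x nine times.
    total = 0
    for _ in range(9):
        total += x
    return total

def f_decomposition(x):
    # A different set of mechanics: 9x = 10x - x.
    return 10 * x - x

# Treat both as black boxes and check only that their outputs agree
# across the inputs we care about.
for x in range(-50, 51):
    assert f_repeated_addition(x) == f_decomposition(x)

print("Both mechanics give consistent results, so either is acceptable.")
```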

This is still fragile, as “belief” can hijack any portion of it.

Edit: The most frequently asked question is “How do I smarter?”
