Despite being one of the core tools in CogNeuro, and a wildly popular pop-science concept, “brain waves” are about as “made up” as Scientology’s E-Meter.
To be fair, a lot of my early research into the mechanics of brains leaned heavily on the concept of EEG waves and their correlates. I’ve even worked on the technical side, building specialized recording equipment for two studies. Ultimately, though, the more I looked at “brain waves”, the more uncomfortable I became with the assumptions underlying them.
One of my primary concerns about “brain waves” is that they don’t actually exist natively in nervous systems. If you’ve had the chance to play with an EEG cap or a neurofeedback device, you’ll notice that the “raw” output looks really chaotic, with little consistent periodicity. Even worse, modifying the sampling method can dramatically change the results, even in the same subject at (nearly) the same time. This isn’t an illusion; it’s exactly what “brain waves” look like.
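If you want to convince yourself, here’s a toy sketch in Python. Everything here is invented for illustration: the “recording” is synthetic 1/f-style noise standing in for real EEG, and the sampling rate, epoch lengths, and band edges are just plausible-looking numbers. Even so, the same fake signal hands you different “alpha power” values depending on which epoch you grab and how you window the analysis:

```python
# Toy sketch: synthetic 1/f-style noise standing in for real EEG.
# All parameters (sampling rate, band edges, epoch lengths) are invented.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 256                               # assumed sampling rate, Hz
n = fs * 60                            # one minute of fake "recording"

# Crude pink-ish noise: shape white noise by 1/f in the frequency domain.
spectrum = np.fft.rfft(rng.standard_normal(n))
freqs = np.fft.rfftfreq(n, 1 / fs)
spectrum[1:] /= np.sqrt(freqs[1:])     # impose ~1/f power falloff
signal = np.fft.irfft(spectrum, n=n)

# Same "subject", (nearly) the same time: adjacent 10 s epochs disagree.
for start in (0, 10, 20):
    seg = signal[start * fs:(start + 10) * fs]
    f, pxx = welch(seg, fs=fs, nperseg=fs * 2)
    print(f"epoch {start:2d}-{start + 10:2d} s: "
          f"8-12 Hz power {pxx[(f >= 8) & (f <= 12)].mean():.3g}")

# And the analysis window alone changes the answer for the full minute.
for nperseg in (fs, fs * 16):
    f, pxx = welch(signal, fs=fs, nperseg=nperseg)
    print(f"window {nperseg:4d} samples: "
          f"8-12 Hz power {pxx[(f >= 8) & (f <= 12)].mean():.3g}")
```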
Before we get to the nice categorizations like delta, theta, alpha, and gamma (and a few other sub-categories), before we even get a “raw” EEG reading, we nearly always transform the actual discrete voltage changes with a technique called the Fast Fourier Transform (FFT).
The FFT is arguably the cornerstone of signal processing: it takes seemingly random points on a time series and transforms them into a frequency spectrum, and by sliding it across successive windows we get information about how frequency content changes over time. The result is not the signal itself, but a representation of the signal shaped to fit the specific thing we are looking for.
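For the curious, here’s roughly what that move looks like. A minimal sketch with made-up frequencies and amplitudes, not anyone’s real data: a time series that looks like pure noise to the eye comes out of the FFT with its hidden components laid bare.

```python
# Minimal FFT sketch; the frequencies and amplitudes are made up for
# illustration, not taken from any real recording.
import numpy as np

fs = 256                                  # assumed sampling rate, Hz
t = np.arange(fs * 4) / fs                # 4 seconds of samples
# Hidden structure: 10 Hz and 40 Hz components buried in noise.
x = (np.sin(2 * np.pi * 10 * t)
     + 0.5 * np.sin(2 * np.pi * 40 * t)
     + np.random.default_rng(1).standard_normal(t.size))

spectrum = np.fft.rfft(x)                 # complex amplitude per frequency bin
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(spectrum) ** 2

# The two strongest bins recover the components the eye can't see in x.
print("strongest frequencies:", np.sort(freqs[np.argsort(power)[-2:]]), "Hz")
```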
After we do this, and here’s the killer, we apply a series of filters called “band-passes”, which sort that transformed signal into the arbitrarily defined categories we call brain waves. A band-pass filter discards everything above and below its chosen frequency range. During this process, a LOT of assumptions are made about which parts of the signal belong to which of these “brain waves”, and the whole thing is a lossy process.
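In code, the band-splitting step looks something like this. A sketch assuming one common set of band edges (other sources draw these lines differently, and the edges are exactly the assumption I’m complaining about), plus a quick check that the process really is lossy:

```python
# Sketch of the band-splitting step, assuming one common set of band
# edges. Other sources draw these lines differently; the edges ARE the
# assumption being criticized here.
import numpy as np
from scipy.signal import butter, sosfiltfilt

BANDS = {
    "delta": (0.5, 4),
    "theta": (4, 8),
    "alpha": (8, 12),
    "beta":  (12, 30),
    "gamma": (30, 80),
}

def split_into_bands(x, fs):
    """Band-pass one signal into the canonical 'brain wave' bands."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out[name] = sosfiltfilt(sos, x)  # everything outside [lo, hi] is discarded
    return out

fs = 256
raw = np.random.default_rng(2).standard_normal(fs * 10)  # stand-in "raw" EEG
bands = split_into_bands(raw, fs)

# Lossiness check: the bands don't re-sum to the original recording.
residual = raw - sum(bands.values())
print(f"energy lost or distorted: {np.sum(residual**2) / np.sum(raw**2):.1%}")
```

Whatever falls outside or between those edges is simply gone; nothing downstream ever sees it.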
A pretty close analogy, IMO, is a pivot table in a spreadsheet, which collapses the underlying raw data into a set of pre-defined categories that we think are interesting. The result is NOT fully representative of the raw data; it’s a “close enough” approximation that we use to infer other things about it.
How did we come up with the categories? They are literally arbitrary. The first commonly used band, alpha, just happened to be the range the discoverer’s equipment was band-passed to. That’s it. And we carved out mostly arbitrary follow-on categories from that starting point. Over time, we’ve piled more and more assumptions onto what these arbitrary categories actually represent.
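You can watch the arbitrariness do its work in the numbers themselves. “Alpha” is defined as 8-12 Hz in some papers and 8-13 Hz in others, and the band power you report moves with whichever edges you happened to inherit. A toy sketch (white noise standing in for a recording; the third set of edges is just there to show the spread):

```python
# Same fake recording, three plausible definitions of "alpha". 8-12 Hz
# and 8-13 Hz are both in genuinely common use; the third is illustrative.
import numpy as np
from scipy.signal import welch

fs = 256
x = np.random.default_rng(3).standard_normal(fs * 30)  # stand-in recording
f, pxx = welch(x, fs=fs, nperseg=fs * 2)

for lo, hi in [(8, 12), (8, 13), (7.5, 12.5)]:
    band = pxx[(f >= lo) & (f <= hi)].sum()
    print(f"'alpha' defined as {lo}-{hi} Hz -> power {band:.3g}")
```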
Now, the obvious objection: don’t these signals consistently replicate across a wide range of experiments? Not really. As you’d expect by now, the categories don’t consistently line up between individuals, so we “correct” them, either by averaging many inputs together or by shifting a particular input into the expected range. After enough averaging and exclusion of “outliers”, what usually happens is that the signals end up matching our expectations rather than providing useful information about function.
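Here’s a quick simulation of that “correction” step, with every number invented: give 100 fake subjects spectral peaks scattered from 6 to 14 Hz, slide each spectrum so its peak lands on the canonical 10 Hz before averaging, and the group average grows a crisp alpha peak:

```python
# Simulated "correction": 100 fake subjects, each with a spectral peak at
# their own frequency, shifted onto the canonical 10 Hz before averaging.
# Every number here is invented.
import numpy as np

rng = np.random.default_rng(4)
freqs = np.linspace(1, 40, 400)

def subject_spectrum(peak_hz):
    # 1/f background plus a bump at this subject's own peak frequency.
    return 1 / freqs + 2.0 * np.exp(-((freqs - peak_hz) ** 2) / 2.0)

peaks = rng.uniform(6, 14, size=100)          # wide raw variability
raw = np.array([subject_spectrum(p) for p in peaks])

# "Correct" each subject by sliding their spectrum so the peak lands at 10 Hz.
corrected = np.array([np.interp(freqs, freqs + (10.0 - p), s)
                      for p, s in zip(peaks, raw)])

bump = freqs >= 5  # ignore the 1/f floor when measuring the group peak
for label, group in [("raw average", raw), ("corrected average", corrected)]:
    mean = group.mean(axis=0)
    peak = freqs[bump][mean[bump].argmax()]
    prominence = mean[bump].max() - mean[bump].mean()
    print(f"{label:18s} peak {peak:4.1f} Hz, prominence {prominence:.2f}")
```

The raw average smears out into a broad plateau; the “corrected” average shows a sharp 10 Hz peak that almost no individual subject actually had.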
So why do EEGs seem to give such consistent results? Why, for example, can we infer things about sleep from slow wave/delta output? Well, first: we can’t, not from the raw data. Individual variability, especially in humans, is WAY higher in the raw data than in the processed signals. Look at 100 EEGs of people in the same stage of sleep and none of them will look the same without heavy processing.
There’s no universal “theta at this power” always going on in the hippocampus, or “delta at this power (or even range)” going on during deeper sleep. It’s only after band-passing and averaging the processed signals that these patterns get drawn out.
When we observe changes in an EEG, what we are really observing is that certain types of cells share similar metabolic processes, and that similarity generates the appearance of consistency. For example, grey matter cortical astrocytes have pretty similar metabolic profiles, and when activated they produce a response near the “gamma” power range. These short-range processes don’t require much metabolic push to get the signal to the next stop, and thus appear as gamma waves. Not always, but often enough that we can average them into “gamma”.
The same goes for all of these bands: long-range interneurons, particularly those driven by brainstem processes, usually have a chunkier metabolic profile due to the longer distances their signals travel, and they show up in the “delta” range.
So, how is it that we can make assertions about brain activity along these specific “brain waves” if they’re really just averaged arbitrariness? For example, why do certain phenotypes of “autism” seem to reliably present with specific EEG patterns?
Partially, it’s the same as above: averages of averages (just like MRI). But because of what EEG is actually measuring, the shared metabolic processes of similar cells, it can still give us information about the underlying cellular activity. Similar types of brain construction will produce similar results.
For example, “dorsal” brain constructions will show “higher than average” delta and slow wave activity in the frontal hemispheres because their default signalling paths bias toward tectal brainstem connections. “Ventral” phenotypes, on the other hand, bias toward tegmental interactions.
And brains which process information similarly tend to have more-similar-than-not behavioral output, which is the basis of behavioral phenotypes. In a world where psychiatry didn’t pathologize behavior so aggressively, these would just be “personality”.
EEG can give us a general idea about metabolic activity in a brain, but there are no static “brain waves” that are consistent across a population.