A few months ago I was thinking about making a headset that monitored frontal alpha during “concentration” or “attention” tasks and provided feedback when it detected changes indicating mind-wandering. I abandoned that because I didn’t want to figure out how to write the training software, since a model would need to be generated for every user. The last few days I’ve been kicking around a much more generalized version of this idea, something like a “reflex” trainer that could be used for a much wider set of behaviors.
This expansion started from an idea for an automatic dog-training device. Many labs employ automated training processes for animal models, and these should be adaptable to naturalistic environments with some work. An automated “pet” trainer should generalize up to humans for simple stimulus-response training as well, and if we can figure out a good set of stimulus/behavior combinations, it should be possible to produce an effect similar to the EEG monitoring without needing the complex software side.
One bit of added complexity: we’d be using the salience/valence model. This means two sets of stimuli/responses for each behavior, a “want/high valence” set and a “don’t want/low valence” set. I need to see if there are any existing models where this type of approach has been implemented.
The hope is that we can instantiate a stimulus state (e.g. when certain music is playing) that induces a want/high valence response for our target task and a don’t-want/low valence response for all others. This should allow brains to “naturally” self-correct toward the target task.
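As a very rough sketch of the pairing logic, assuming we had some way to classify the current activity and emit one of two cue sets, it could be as simple as the following. Everything here (`select_cue`, the cue names, the activity labels) is hypothetical scaffolding, not an existing system:

```python
# Hypothetical sketch of the stimulus-state idea: pair the target task
# with a "want/high valence" cue and everything else with a
# "don't want/low valence" cue, per the salience/valence model above.

TARGET_TASK = "writing"

HIGH_VALENCE_CUE = "focus_playlist"  # e.g. certain music playing
LOW_VALENCE_CUE = "neutral_tone"     # mildly aversive/neutral cue

def select_cue(current_activity: str, target: str = TARGET_TASK) -> str:
    """Map the detected activity onto one of the two cue sets."""
    return HIGH_VALENCE_CUE if current_activity == target else LOW_VALENCE_CUE

def run_session(activity_samples):
    """A training session is then just: sample activity, emit cue, repeat."""
    return [select_cue(a) for a in activity_samples]
```

The hard parts, of course, are the activity classifier and the cue delivery, not this mapping; the sketch is only meant to show how little machinery the stimulus-pairing side itself would need compared to the per-user EEG models.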
I still need to figure out how to determine the safe boundaries of attention, however; overuse could be pretty harmful.
Edit: As for the mechanics of this, we’d be looking at simultaneous dorsal NAc (?) stimulation along with ventral habenula (?) stimulation, and maybe dorsal/medial amygdala stimulation to induce “novelty” gating. Thinking about this is suddenly getting scary.