Brainjacking: A Step Beyond VR, A Step Too Early

One of the most memorable things I learned in my freshman-year Psych 1 class was the concept of perception: how our brain turns everyday sensory input into thoughts and emotions. If I take what I learned and try to fit it into a framing of objectivity and subjectivity (probably not a new framing, but I’d like to think so haha), we get the following:

Our memories and emotions are the result of raw, objective sensory input (sight, sound, touch, etc.) after it has been processed through a subjective perception filter.

To express it more visually:

Objective sensory input → Subjective perception filter → Thoughts, opinions, memories, emotions

Consider, for example, two different people seeing the same dog walking around a park. Person A feels extremely saddened and begins to cry, while Person B yells “AWWW” and proceeds to hug the dog and pet it incessantly. How does this happen?

First, we know that the raw input is the same; both Person A and Person B saw the exact same situation unfold. At this point, before the “data” from each person’s eyes and ears reaches their mental perception layer, both technically possess the same emotional potential.

Second, that objective data instantly reaches the perception filter, and that’s where the two individuals drastically differ. Person A’s perception of dogs is heavily negatively biased because his childhood pet passed away just last week; the sight of the dog in the park immediately triggered those memories, so the outcome (emotion, thoughts) rendered NEGATIVE. Person B, on the other hand, loves dogs and just adopted a new puppy, so the sight of the dog in the park passed through a positively biased filter and rendered POSITIVE.

And thus, we have a super basic (and probably oversimplified) equation for how our brains form reactions (responses triggered by outside input).
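
Just for fun, here’s what that toy equation might look like as code. This is a minimal sketch in Python; the class names, the “bias” dictionary, and the numbers are all made-up illustrations of the idea, not anything resembling actual neuroscience:

```python
# Toy model of the input -> filter -> reaction pipeline described above.
# Everything here (classes, bias values) is illustrative, not neuroscience.

from dataclasses import dataclass

@dataclass
class SensoryInput:
    """Raw, objective input: identical for everyone at the scene."""
    stimulus: str

@dataclass
class PerceptionFilter:
    """Subjective layer: per-person biases shaped by past experience."""
    biases: dict  # stimulus -> signed weight

    def react(self, inp: SensoryInput) -> str:
        score = self.biases.get(inp.stimulus, 0.0)
        if score > 0:
            return "POSITIVE"
        if score < 0:
            return "NEGATIVE"
        return "NEUTRAL"

dog_in_park = SensoryInput("dog")           # same objective input...
person_a = PerceptionFilter({"dog": -0.9})  # ...pet just passed away
person_b = PerceptionFilter({"dog": +0.9})  # ...just adopted a puppy

print(person_a.react(dog_in_park))  # NEGATIVE
print(person_b.react(dog_in_park))  # POSITIVE
```

Same SensoryInput in, opposite reactions out; all the divergence lives in the filter.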

Given this context, the major takeaway is that it’s impossible to isolate each step of this process; the whole thing is one bundled process that loops continuously in our brain whenever we’re awake. Every waking moment we’re seeing/hearing/touching, perceiving, and feeling (actively or passively).

But wait, what if we could indeed isolate the holistic process and “hijack” it between the first and second stages? What if we could supplant the whole objective sensory input stage and plug in our own data that would reach the subjective perception stage just as it naturally would?

Technically, we already know how to do that, and it’s called dreaming. If you think about it, when we’re in a dream, most of the time we have no idea that we’re dreaming. On top of that, we experience emotions, conscious thoughts, and even pain while we’re actively dreaming.

OK, so I guess there technically is a way to hijack this process. The existence of dreams alone proves that the sensory input layer is detachable from (and replicable within) the rest of the process. But it’s still something we can’t actively control: it’s not like I can compose my own dream while I’m awake and play it back when I’m sleeping. So what if we could?

Obviously this isn’t possible today, since we have yet to decode, let alone encode, the language of our brains. All we know is that brain data is represented by synapses and neurochemicals rather than electrons and binary bits. But if we did figure it out, “brainjacking” – the process of replacing sensory input with synthesized data – would be an inevitable technology and practice. It would be huge! It would replace photos, videos, VR, AR, everything. At that point it’s not “virtual” or a replication or a synthesis; as far as our brain knows, it’s reality. (One could also argue that it would be incredibly dangerous, but that’s a debate for another time.)

So why aren’t we paying more attention to this? Why are we spending more time and money improving a technology (I’m talking about AR/VR) that’s fundamentally limited in its ability to trick the brain?

In my opinion, modern-day VR is like a bank robber taping a phone screen playing video of an idle environment in front of the surveillance camera, when he could be hijacking the data cables and digitally feeding that idle footage straight into the master surveillance network. No matter how fast our graphics processing and sensor read rates get, we’ll never truly match the body’s natural response rate to sensory input. In other words, there will always be feedback lag, and that ruins total immersion.
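
To put rough numbers on that feedback lag, here’s a back-of-the-envelope sketch. The individual milliseconds are assumptions I’m making for illustration, not measurements; the ~20 ms comfort threshold is the figure commonly cited in VR circles:

```python
# Back-of-the-envelope motion-to-photon latency budget for a head-mounted
# display. All numbers are illustrative assumptions, not measurements.

budget_ms = {
    "head tracker sampling": 2.0,   # reading the IMU/sensors
    "game logic + render":   11.1,  # one frame at 90 Hz (1000 / 90)
    "display scan-out":      5.0,   # pixels actually lighting up
}

total = sum(budget_ms.values())
print(f"motion-to-photon: ~{total:.1f} ms")  # ~18.1 ms

# The commonly cited comfort threshold is ~20 ms, so even a well-tuned
# pipeline is skating near the edge. Writing directly to the perception
# layer would sidestep this whole budget, which is the point above.
```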

I’m sure there’s a ton of research on this concept of “brainjacking” going on right now in the broader scientific community (probably on a tiny budget compared to private enterprise). I just feel that with all the hype around VR, we’re looking too short-term. Why don’t companies like Google and Facebook dedicate billions of dollars to this research rather than buying up expensive VR startups? Perhaps it’s too long-term a bet for investors expecting immediate revenue? I have no idea; all I know is that whoever cracks this science will become incredibly rich and powerful.

Until then, we’ll just blindly leave the complicated shit to our researchers and scientists and laugh at people flailing around in their Oculus Rift headsets. Haha.

 
