Reading minds with AI and an EEG machine - The horrors of neuroscience


grand larsony

data science autist, kingcobrawiki.org
I've come to show you a new technological horror. The age of mind-reading is upon us, and it's accessible even to regular consumers like you and me.

1. EEG machines
An EEG machine is an electroencephalogram machine. In layman's terms, it's a machine that reads the brain's electrical activity through electrodes on your scalp and converts it into a form your computer can treat like any other data. There are a number of EEG machines available on the market, but until recently, these machines have been pretty inaccessible. They're mostly used in medical settings, and so most of them carry the inflated price tag that you would expect from medical machinery. However, there have been some efforts to "democratize" (for lack of a better word) these machines in recent years.
The OpenBCI project is probably the most notable, and has been around for a little while now. It's basically a guide on how to produce your own EEG machine, along with software to run the machine. This has definitely made EEGs more accessible to a hobbyist audience, but there are still a number of challenges in working with a tool like this. You have to manufacture it yourself, which is already enough of a pain in the ass, but because this is working with noisy data taken directly out of the human body, that means if you fuck up, the machine might malfunction in unexpected ways. You could end up with a machine that appears to work properly, but is actually feeding in garbage data.
For this project I've spent the last few months on, I chose to use a different device, the Neurosity Crown. This is a simple yet functional EEG machine that works straight out of the box and gives developers direct access to the data it collects (something that, bafflingly, most other commercially available consumer EEG machines don't offer). The device carries a somewhat hefty price tag given that it's not really useful for very much; I paid about $1200 for mine. I could've built a cheaper device myself, but buying one likely saved me a lot of tedious debugging work.

2. Data & model architecture
For this project I wrote my own novel neural network architecture to take the input data and transform it into predictions about the state of my brain. First, I'll explain what the data looks like, and then I'll explain how this architecture processes the data into prediction data. (Please note though, I'm not a neuroscientist, and so my explanations of what the data means might not be entirely accurate.)
The Crown produces four types of data: raw, raw unfiltered, power spectral density (PSD), and power by band (PBB).
The raw and raw unfiltered data are pretty much exactly what they sound like: electrical signals taken from the skin on my head through electrodes and converted to numbers. The "raw" data isn't truly raw though; it has some noise filtering applied to make it simpler to interpret, hence the distinction between raw and raw unfiltered.
The PSD data describes how the signal's power is distributed across different frequencies. As I understand it, it's a breakdown of the raw signal into its frequency components.
The PBB data breaks down the data into specific frequency bands. These represent delta, theta, alpha, beta, and gamma waves, which generally indicate things like alertness, sleepiness, focus, coordination, attention, etc.
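If you want to poke at the same streams yourself, here's a rough sketch of what the subscription code looks like with Neurosity's Python SDK. I'm going from memory on the method names and credential setup, so treat them as assumptions and double-check against the official docs.

```python
# Rough sketch: subscribing to the Crown's four data streams via the
# Neurosity Python SDK. Method names are from memory and the credentials
# are placeholders - verify against the SDK docs before relying on this.
from neurosity import NeurositySDK

neurosity = NeurositySDK({"device_id": "YOUR_DEVICE_ID"})
neurosity.login({"email": "you@example.com", "password": "YOUR_PASSWORD"})

def on_raw(data):
    # filtered samples, one list of values per electrode channel
    print("raw:", data)

def on_pbb(data):
    # per-channel power in the delta/theta/alpha/beta/gamma bands
    print("power by band:", data)

unsub_raw = neurosity.brainwaves_raw(on_raw)
unsub_unfiltered = neurosity.brainwaves_raw_unfiltered(lambda d: print("raw unfiltered:", d))
unsub_psd = neurosity.brainwaves_psd(lambda d: print("psd:", d))
unsub_pbb = neurosity.brainwaves_power_by_band(on_pbb)
```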

Because the data is very complex, and because for any given feature you want to predict most of the input data is meaningless, the training process for this project was broken into two stages.
In the first stage of learning, the model works as an encoder-decoder model. The full data from the EEG is fed in as an (808,) array of floats. It's then split into its constituent parts, and each part is compressed by the neural network into an even more inscrutable internal state. Then, the model must learn to reconstruct the data back into a state similar to the one it started with. In a sense, it's learning how to compress the data, keeping the important features and discarding the unimportant ones. This general setup is an autoencoder; the version I used is a variational autoencoder.
The architecture I designed has a bit of a twist though. In my experimentation I found that the model performed better if, during this pretraining stage, rather than reconstructing the original data, it had to 1) denoise a fuzzed version of the input data back into a "clean" version and 2) predict the next state of the input, i.e. predict what my brain activity will look like a fraction of a second later.
Here's a diagram which shows how the model looks during the pretraining stage.
[Diagram: the model during the pretraining stage]


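To make the diagram concrete, here's a toy PyTorch sketch of the same idea. This isn't my actual code: the layer sizes, latent width, and noise level are made up, and only the 808-wide input comes from the real setup. It just shows the shared-encoder, two-decoder shape of the pretraining stage.

```python
# Toy sketch of the pretraining stage: a shared encoder feeding a denoising
# decoder and a next-state decoder. Sizes are made-up placeholders; only the
# 808-float input width matches the real setup.
import torch
import torch.nn as nn

INPUT_DIM, LATENT_DIM = 808, 64  # 808 from the EEG; 64 is an arbitrary guess

encoder = nn.Sequential(
    nn.Linear(INPUT_DIM, 256), nn.ReLU(),
    nn.Linear(256, LATENT_DIM),
)
denoise_decoder = nn.Sequential(  # reconstructs a clean version of the input
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, INPUT_DIM),
)
next_state_decoder = nn.Sequential(  # predicts the next timestep's data
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, INPUT_DIM),
)

params = [*encoder.parameters(), *denoise_decoder.parameters(),
          *next_state_decoder.parameters()]
optimizer = torch.optim.Adam(params, lr=1e-3)

def pretrain_step(x_t, x_next, noise_std=0.1):
    """One step: denoise a fuzzed x_t and predict x_(t+1) from the latent."""
    z = encoder(x_t + noise_std * torch.randn_like(x_t))
    loss = (nn.functional.mse_loss(denoise_decoder(z), x_t)
            + nn.functional.mse_loss(next_state_decoder(z), x_next))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```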
After the first stage of training, the two decoders that denoise the input and predict the next state are discarded. If the pretraining stage has gone well, then the "encoder out" layer contains dense and useful information about the state of my brain activity.
Then, a simple classification model can be attached to the end of the encoder, which can efficiently predict things about my brain activity based on very little training data.
[Diagram: classification head attached to the pretrained encoder]


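Continuing the toy sketch from above (reusing `encoder` and `LATENT_DIM` from it), the second stage just freezes the encoder and trains a small head on its output:

```python
# Second stage of the toy sketch: freeze the pretrained encoder and train a
# small classifier on its latent output. Two classes here, matching the
# left-arm/right-arm experiment below; sizes are again placeholders.
classifier = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, 2),  # e.g. left-arm vs right-arm imagery
)
clf_optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)

for p in encoder.parameters():
    p.requires_grad = False  # keep the pretrained features fixed

def classify_step(x, labels):
    with torch.no_grad():
        z = encoder(x)  # dense "brain state" features
    loss = nn.functional.cross_entropy(classifier(z), labels)
    clf_optimizer.zero_grad()
    loss.backward()
    clf_optimizer.step()
    return loss.item()
```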
This type of architecture is ideal, because it allows me to use unlabelled data to teach the model what a "typical" brain state looks like, so later on it can distinguish between minute details with ease. Again, most of the input data at any given time is completely irrelevant to whatever task I'm trying to accomplish, so this process significantly lowers the barrier for training downstream classifiers.

3. Mind reading
In total, I used about 2 million pretraining samples to train the encoder part of the model. Now I only need a couple thousand samples of data to make predictions on downstream tasks. Here are some of the projects I did with this.

[Scatter plot: left-arm (blue) vs right-arm (red) motor imagery]

In this scatter plot, the blue dots are samples where I was thinking about moving my left arm, and the red dots are samples where I was thinking about moving my right arm. Note that this isn't me actually moving my arms; it's me imagining moving them. The plot is a t-SNE visualization of the encoder's output, reduced to 2 dimensions for easier interpretation. You can see that there is a clear separation, and the model performed at 100% accuracy on this task.
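For anyone wanting to reproduce this kind of plot, the projection is just scikit-learn's t-SNE run over the encoder outputs. A hypothetical sketch (`encoder`, `left_samples`, and `right_samples` are stand-ins, not my actual variable names):

```python
# Hypothetical sketch of the plot above: encode labelled samples (tensors of
# shape (n, 808)), then project the latents to 2D with t-SNE.
import numpy as np
import torch
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

with torch.no_grad():
    z_left = encoder(left_samples).numpy()    # imagined left-arm movement
    z_right = encoder(right_samples).numpy()  # imagined right-arm movement

coords = TSNE(n_components=2).fit_transform(np.vstack([z_left, z_right]))
n = len(z_left)
plt.scatter(coords[:n, 0], coords[:n, 1], c="blue", label="left arm")
plt.scatter(coords[n:, 0], coords[n:, 1], c="red", label="right arm")
plt.legend()
plt.show()
```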

[3D scatter plot: right-arm imagery (red) vs control (blue)]

Here's another visualization of similar data, this time in 3D. Red dots are me thinking about raising my right arm, and blue dots are control data.

This is another project I did with the EEG. In this, I used control data (purple on the ground truth graph) and data where I had ingested modafinil (yellow on the ground truth graph) and visualized the results using PCA. On the left, you can see the model's predictions about whether or not I was on modafinil, and on the right, you can see the ground truth data. This model predicts a value between 0 and 1, 1 indicating modafinil and 0 indicating sobriety. I used 0.5 as the cutoff point, and in this project, I achieved 99.57% accuracy. You can see, towards the left end of the cluster, the couple of data points where the model made some false negative predictions, and towards the center, where it made some false positive predictions.
[Scatter plots: model predictions (left) vs ground truth (right), PCA projection]
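The thresholding itself is trivial. Assuming the model's outputs and the ground truth are NumPy arrays (the names below are hypothetical), it's just:

```python
# Sketch of the 0.5 cutoff and error breakdown described above. `preds` and
# `labels` are hypothetical arrays of model outputs (0..1) and ground truth
# (0 = sober, 1 = modafinil).
import numpy as np

predicted = (preds >= 0.5).astype(int)
accuracy = (predicted == labels).mean()

false_negatives = int(np.sum((predicted == 0) & (labels == 1)))
false_positives = int(np.sum((predicted == 1) & (labels == 0)))
print(f"accuracy={accuracy:.2%}, FN={false_negatives}, FP={false_positives}")
```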

These models don't seem to generalize well, at least with the amount of training data I have. However, it stands to reason that if trained on unlabelled training data from a large number of people, the model could likely make even better predictions, even across individuals it has never encountered before.

4. Conclusion & future projects
Taking this back around to the subject of most threads on KF, my next project is going to attempt to measure the emotional impact that specific types of content have on me. I'd like to see what consumption of lolcow content does to me in a measurable way. Not just lolcow content of course, all types of content, but particularly lolcow content, since I love involving lolcow data in all of my hobby projects. If you have any suggestions, I'd love to hear them.
Working with this device has taught me a lot and been a lot of fun, but my prediction for the future is somewhat bleak. My ability to do this as a hobbyist was limited, both by the comparatively low-end hardware I'm using versus medical technology, and by my limited knowledge of how the brain works. I'm happy to report that with external reading devices like this, the data is unavoidably noisy, which likely means we won't see true sci-fi mind-reading technology that can reconstruct images or speech directly from your thoughts, at least not without implanted electrodes. However, the progress of things like Neuralink is troubling. I'm stunned at how feasible it is to make useful, accurate predictions about mental states with this data, and with cleaner data, the sky is the limit. Getting a brain implant means your thoughts are an open book which anyone could thumb through at their leisure.
I am not looking forward to being arrested for wrongthink, ripped straight from my brain without my consent. How much longer will "hide your power level" even be possible? I'm not sure. Just don't let the government man chip your head; that's all I can say for sure.
 

Per a 2009 report, an EEG could already tell that you recognized a location. I think later on they were able to reconstruct images from brain data, i.e. you look at an image or video while getting scanned, and a computer can give an approximation of it.

It remains to be seen how far this can go without requiring invasive implants, or whether invasive implants like Neuralink will catch on. It could take decades before implants are commonplace in healthy people instead of quadriplegics. Maybe a little alien technology infusion will help.
 
Could this be used to determine innocence or guilt in a crime, and how accurate would it be?
It could be used to determine that a suspect recognizes a crime scene or a person, or other things they might lie about or withhold.

Problem is its usage will likely be deemed a Fifth Amendment violation in the US.
 
Could this be used to determine innocence or guilt in a crime, and how accurate would it be?

It could be used to determine that a suspect recognizes a crime scene or a person, or other things they might lie about or withhold.

Problem is its usage will likely be deemed a Fifth Amendment violation in the US.
i wonder if a suspect could fool the recognition tools by intentionally filling his mind with unrelated thoughts
 
Could this be used to determine innocence or guilt in a crime, and how accurate would it be?
Just speaking personally, I had trouble with fine details. I ran other experiments where I tried to distinguish between very similar things, like thinking about raising my index finger vs thinking about raising my ring finger. These experiments still kinda worked, but certainly not to a "beyond a reasonable doubt" level of accuracy. I guess you could do a simple lie detector, but even that seems pretty challenging, because instructing someone to "tell a lie into the microphone now" (as part of the training data collection, I mean) probably produces very different patterns in the brain than when someone is lying because their entire future is on the line if they aren't believable.
I think the bigger issue though is that the model is a black box. For example, in my modafinil experiment, I have no clue what the model is actually looking for. High beta and gamma waves and low delta waves probably play some part in it if I had to guess, but what if it turns out that having some particular mental condition also triggers those inputs in a way that the model will score highly for? Any defense lawyer could tear this model to shreds, even if it generally scored extremely high in test environments.

After typing this all out I got curious and decided to check the PBB data for the control data and the couple of different altered states I'd collected. I know it's a bit of a tangent from your question, but I thought it was kinda cool, so I'll share it here.
[Charts: power-by-band breakdowns for the control data and the altered states]
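If anyone wants to run the same comparison on their own recordings, the gist is just averaging each band across the samples of each condition. A sketch, assuming each condition's PBB samples are stacked into an array (the variable names are hypothetical):

```python
# Sketch of the band-power comparison, assuming each condition's PBB samples
# form an array of shape (n_samples, n_bands), bands ordered
# delta/theta/alpha/beta/gamma. `control_pbb` and `modafinil_pbb` are
# hypothetical placeholders.
import numpy as np

BANDS = ["delta", "theta", "alpha", "beta", "gamma"]

def mean_band_power(samples: np.ndarray) -> dict:
    """Average power per band across all samples of one condition."""
    return dict(zip(BANDS, samples.mean(axis=0)))

for name, samples in [("control", control_pbb), ("modafinil", modafinil_pbb)]:
    print(name, mean_band_power(samples))
```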
 
Just speaking personally, I had trouble with fine details
I guess you'd know this, but EEG is a fairly crude measurement. It measures the aggregate activity of large populations of neurons and necessarily lacks the sensitivity to pick out individual neurons or small clusters. A lot of discrete thinking and fine motor control is going to happen in those smaller clusters. If you want to measure finer things, you'll need an fMRI. If you want to properly read minds, you'll need something a lot more invasive.

That's not to downplay your experiment. You clearly know what you're doing, it's really impressive for something done at home, and I think combining it with AI to try to measure emotional states is a really interesting direction. This is shit that could get published. Main thing I'd recommend at this stage is take handwritten and dated notes. As my PI said: The difference between science and fucking around is notation.

If you want advice or to talk more about it DM me. I just finished up an MS in neuroscience. It was more molecular science than what this is but I can probably be of help.
 
EEGs and fMRIs do not have the ability to produce enough data for anything tangible. even something like Neuralink is limited to very basic functions that require immense amounts of training data from the user to correlate results. we are decades away from this technology ever being a thing, at least on the consumer or commercial level. we know almost nothing about the brain and with modern academia it's going to remain that way for some time.

governments don't need a brain implant when you carry something that parallels your exact thoughts every day, all day. all of your conversations, location history, search history. everything you do is tracked, stored, and monitored and no one seems to give a fuck because I guess it's not a scary brain implant or whatever.
 
governments don't need a brain implant when you carry something that parallels your exact thoughts every day, all day. all of your conversations, location history, search history. everything you do is tracked, stored, and monitored and no one seems to give a fuck because I guess it's not a scary brain implant or whatever.
and it's not even clandestine hacking they do to get that data (although that happens too) - people agree to give this data away.
 
Hey @grand larsony, have you ever heard of DARPA’s N3 program and IoB tech?

The military wants nonsurgical, craniotomy-free BCIs that use nanoparticles that cross the BBB and enter neurons to sensitize them to wireless energy and increase readout accuracy:



Meanwhile, there are people who basically want to turn the human body into an IoT device:



With the algorithms we have now, it is possible to extract so much information from people’s brain data, and they have absolutely no idea how easy it is. As you just showed, even DIYers are managing it. If there was a mass surveillance system that took into account, for instance, what someone was typing or saying while also reading their brain data, huge sets of annotated data could be produced which associate certain neural patterns with specific words, and so on. Some real Black Mirror shit.
 
Taking this back around to the subject of most threads on KF, my next project is going to attempt to measure the emotional impact that specific types of content have on me. I'd like to see what consumption of lolcow content does to me in a measurable way. Not just lolcow content of course, all types of content, but particularly lolcow content, since I love involving lolcow data in all of my hobby projects. If you have any suggestions, I'd love to hear them
I want to see what reactions pictures of niggers produce in you compared to pictures of humans.
 
I’ve been saying this! I agree 100%!
Also, welcome to the publishing world available here on the kiwi farms.
Could this be used to determine innocence or guilt in a crime, and how accurate would it be?
I'm hoping that one day AI will do all the lawyering, like all of it. Although then the problem might be access to a decent AI lawyer, as a decent one may take a lot of energy or compute, or still be expensive. Anyway, the mind reader would need to be sufficiently advanced, but yes, any single part of the mind should be accessible. Now how hard of a question is that? It's like tasking a computer with a terabyte-sized issue when the computer only does kilobytes.
 
About a year ago I posted a couple of horrific WEF-style globalist retreat videos, one of which featured some smiling, disingenuous, hyperevil swine tech-witch lady who stood on stage happily describing work farms, possible with current tech, where the supervisor lords over the cubicles of his employees, actively reading their minds for things like "productivity" and "dissenting thoughts", as well as identifying "problematic ideas, racism or hatespeech".

Wish I could find those again. I think one of them was from the actual WEF. That was one video presentation that legitimately made me hate the human speaking, which hardly ever happens to me.

edits for grammar+readability because i are ritard
 
im using my tech to read every kf users mind right now.

Aaah!!! Its all gay porn!!! What the hell!!!
 