
Scientists say AI can infer your politics from a brain scan – we say it’s BS

A team of researchers using what they call “state-of-the-art artificial intelligence techniques” has reportedly created a system capable of identifying a person’s political ideology by viewing their brain scans.

Wow! This is either the most advanced AI system in the entire known universe, or it’s a total sham.

Unsurprisingly, it’s a sham: there’s little cause for excitement. You don’t even have to read the researchers’ paper to disprove their work. All you need is the phrase “politics change” and we’re done here.


But for fun, let’s dive into the paper and explain how prediction models work.

The experiment

A team of researchers from Ohio State University, the University of Pittsburgh, and New York University gathered 174 U.S. college students (median age 21) — the vast majority of whom identified themselves as liberal — and performed brain scans on them while running a short battery of tests.

According to the research paper:

Each participant underwent 1.5 hours of functional MRI recording, which consisted of eight tasks and resting-state scans using a 12-channel head coil.

Essentially, the researchers grabbed a bunch of young people, asked them about their politics, and then designed a machine that flips a coin to “predict” someone’s politics. Only, instead of flipping a coin, it uses algorithms to supposedly parse brainwave data to do what is essentially the same thing.

The problem

The AI has to predict either ‘liberal’ or ‘conservative’; in systems like this, there is no option for ‘neither.’

So straight away: the AI does not predict or identify politics. It is forced to choose between the data in column A or the data in column B.

Let’s say I sneak into the Ohio State University AI center and scramble all their data. I replace all the brainwaves with Rick and Morty memes and then cover my tracks so nobody can tell.

As long as I don’t change the labels on the data, the AI will still predict whether the subjects are conservative or liberal.

You either believe the machine has magical data powers that can arrive at a ground truth no matter what data it gets, or you recognize that the illusion stays the same no matter what kind of rabbits you put in the hat.
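To make that concrete, here’s a minimal Python sketch of a forced two-way choice. It uses a generic scikit-learn classifier on pure random noise, which is purely an illustration of the principle and has nothing to do with the researchers’ actual pipeline. Fed meaningless data with arbitrary labels, the model still dutifully answers with one of the two labels, at roughly coin-flip accuracy.

```python
# Hypothetical illustration: a generic binary classifier trained on pure
# noise with arbitrary labels. It has no "neither" option, so it always
# answers with one of the two labels, at roughly chance accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(174, 50))        # 174 "scans" of meaningless noise
y = rng.integers(0, 2, size=174)      # arbitrary 0/1 labels ("liberal"/"conservative")

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

preds = clf.predict(X_test)
print(set(preds.tolist()))            # only ever 0s and/or 1s, never "neither"
print(accuracy_score(y_test, preds))  # hovers around 0.5
```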

That 70% accuracy number is incorrect

A machine that is 70% accurate at guessing a person’s politics is still 0% accurate at determining it. This is because human political ideologies do not exist as ground truths. There is no conservative brain or liberal brain. Many people are neither, or an amalgam of both. Moreover, many people who consider themselves liberal actually hold conservative views and mindsets, and vice versa.

So the first problem we run into is that the researchers don’t define “conservatism” or “liberalism.” They let the subjects they study decide for themselves – let’s keep in mind that the students have a median age of 21.

What that ultimately means is that the data and the labels bear no reliable relationship to each other. The researchers essentially built a machine that always has a 50/50 chance of guessing which of the two labels they placed on a given data set.

It doesn’t matter whether the machine looks for signs of conservatism in brainwaves, homosexuality in facial expressions, or whether someone is likely to commit a crime based on their skin color: these systems all work in exactly the same way.

They have to brute force an inference, so they do. They are only allowed to choose from prescribed labels, so they do. And the researchers have no idea how it all works because they’re black box systems, so it’s impossible to pinpoint exactly why the AI is making a certain inference.

What is Accuracy?

These experiments don’t exactly pit humans against machines. They really just set two different benchmarks and then conflate them.

The scientists will give multiple people the prediction task once or twice (depending on the controls). Then they give the AI the prediction task hundreds, thousands, or millions of times.

The scientists don’t know how the machine will arrive at its predictions, so they can’t just put in the ground truth parameters and call it a day.

They have to train the AI. This means giving it the exact same job — parsing the data from a few hundred brain scans, for example — and making it run the exact same algorithms over and over.

If the machine inexplicably got 100% on the first try, they would call it a day and declare it perfect! Even though they wouldn’t have a clue why – remember, this all happens in a black box.

And, as is more often the case, if it doesn’t meet a significant threshold, they keep tweaking the algorithm’s parameters until it does better. You can visualize this as a scientist tuning in a radio signal through the static, without looking at the dial.
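Here’s a rough sketch, again hypothetical, of that radio-through-static problem: keep tweaking hyperparameters against the same small, signal-free dataset and the best run you find will drift above chance, simply because you kept the luckiest configuration.

```python
# Hypothetical illustration: repeatedly "tuning" a model on a small,
# purely random dataset. Keeping the best of many configurations inflates
# apparent accuracy above the 50% baseline even though there is no signal.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(174, 50))    # noise standing in for brain-scan features
y = rng.integers(0, 2, size=174)  # arbitrary labels

best = 0.0
for C in [0.01, 0.1, 1, 10, 100]:
    for gamma in [0.001, 0.01, 0.1, 1]:
        score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()
        best = max(best, score)

print(f"best 'accuracy' found by tweaking: {best:.2f}")  # creeps above 0.5
```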

BS in, BS out

Now consider that this particular machine only gets it right about 7 times out of 10. That’s the best the team could do. They couldn’t tune it any better than that.

There are fewer than 200 people in its dataset, and it already has a 50/50 chance of guessing correctly without any data at all.

So giving it all this nice brainwave data buys it a measly 20 percentage points of accuracy above the base probability. And that comes only after a team of researchers from three prestigious universities pooled their efforts to create what they call “state-of-the-art artificial intelligence techniques.”

In comparison, if you were to give a human a dataset of 200 unique, unlabeled symbols, each with a hidden label of 1 or 0, the average person could probably memorize the whole dataset after a relatively small number of passes, using nothing but whether they guessed right as feedback.

Think of the biggest sports fan you know: how many players throughout the history of the sport can they recall by team name and jersey number alone?

Humans could achieve 100% accuracy at memorizing a binary label across a dataset of 200, given enough time.

But the AI and humans would have exactly the same problem if you gave them a new dataset: they would have to start all over again. Given an entirely new dataset of brainwaves and labels, it’s almost certain that the AI wouldn’t reach the same level of accuracy without further tweaking.
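A crude way to see both points at once, sketched below with a toy lookup table rather than any real model: memorizing 200 binary labels is trivial given enough passes, but hand the same table a fresh set of labels and it is back to coin-flip territory until it starts over.

```python
# Hypothetical illustration: a lookup table "memorizes" 200 binary labels
# perfectly, but a brand-new set of labels puts it back at chance until
# it is memorized from scratch.
import numpy as np

rng = np.random.default_rng(2)
ids = list(range(200))
old_labels = rng.integers(0, 2, size=200).tolist()
lookup = dict(zip(ids, old_labels))   # "training" = memorization

# 100% accuracy on the memorized dataset...
print(sum(lookup[i] == old_labels[i] for i in ids) / 200)  # 1.0

# ...but roughly 50% on an entirely new set of labels.
new_labels = rng.integers(0, 2, size=200).tolist()
print(sum(lookup[i] == new_labels[i] for i in ids) / 200)  # ~0.5
```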

Benchmarking this particular prediction model is just as useful as measuring the accuracy of a tarot card reader.

Good research, bad framing

That is not to say this research has no merit. I have nothing against a research team dedicated to uncovering the flaws inherent in artificial intelligence systems. You don’t get mad at a security researcher for discovering a problem.

Unfortunately, that’s not how this research is framed.

According to the paper:

While the direction of causality remains unclear – whether people’s brains reflect the political orientation they choose, or whether people choose their political orientation because of their functional brain structure – the evidence here motivates further research and subsequent analyses into the biological and neurological roots of political behavior.

This is borderline quackery, in my opinion. The implication here is that, like homosexuality or autism, a person may not be able to choose their own political ideology. Alternatively, it seems to indicate that our brain chemistry can be reconfigured simply by adopting a predefined set of political viewpoints – at the tender age of 21, no less!

This experiment is based on a tiny bit of data from a tiny pool of people who, as far as we can tell, are demographically similar. Moreover, its results cannot be validated in any sense of the scientific method. We will never know why or how the machine made its predictions.

We need this kind of research to test the limits of exploitation when it comes to these predictive models. But pretending this research has led to something more sophisticated than the “Not Hotdog” app is dangerous.

This is not science, it is prestidigitation with data. And framing it as a potential breakthrough in our understanding of the human brain only carries water for all the AI scams — like predictive policing — that rely on the exact same technology to wreak havoc.
