MIT, Harvard scientists find AI can recognize race from X-rays — and nobody knows how - The study found that an artificial intelligence program trained to read X-rays and CT scans could predict a person’s race with 90 percent accuracy.


A doctor can’t tell if somebody is Black, Asian, or white, just by looking at their X-rays. But a computer can, according to a surprising new paper by an international team of scientists, including researchers at the Massachusetts Institute of Technology and Harvard Medical School.

The study found that an artificial intelligence program trained to read X-rays and CT scans could predict a person’s race with 90 percent accuracy. But the scientists who conducted the study say they have no idea how the computer figures it out.

“When my graduate students showed me some of the results that were in this paper, I actually thought it must be a mistake,” said Marzyeh Ghassemi, an MIT assistant professor of electrical engineering and computer science, and coauthor of the paper, which was published Wednesday in the medical journal The Lancet Digital Health. “I honestly thought my students were crazy when they told me.”

At a time when AI software is increasingly used to help doctors make diagnostic decisions, the research raises the unsettling prospect that AI-based diagnostic systems could unintentionally generate racially biased results. For example, an AI (with access to X-rays) could automatically recommend a particular course of treatment for all Black patients, whether or not it’s best for a specific person. Meanwhile, the patient’s human physician wouldn’t know that the AI based its diagnosis on racial data.

The research effort was born when the scientists noticed that an AI program for examining chest X-rays was more likely to miss signs of illness in Black patients. “We asked ourselves, how can that be if computers cannot tell the race of a person?” said Leo Anthony Celi, another coauthor and an associate professor at Harvard Medical School.

The research team, which included scientists from the United States, Canada, Australia, and Taiwan, first trained an AI system using standard data sets of X-rays and CT scans, where each image was labeled with the person’s race. The images came from different parts of the body, including the chest, hand, and spine. The diagnostic images examined by the computer contained no obvious markers of race, like skin color or hair texture.

Once the software had been shown large numbers of race-labeled images, it was then shown different sets of unlabeled images. The program was able to identify the race of people in the images with remarkable accuracy, often well above 90 percent. Even when images from people of the same size or age or gender were analyzed, the AI accurately distinguished between Black and white patients.
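The workflow the article describes — fit a model on race-labeled images, then ask it to classify unlabeled ones and measure accuracy — is an ordinary supervised-learning loop. A toy sketch of that loop, using a nearest-centroid classifier and synthetic pixel vectors purely as stand-ins (the paper itself used deep networks on real X-rays; nothing below is their code or data):

```python
# Illustrative sketch only: a nearest-centroid classifier on synthetic
# "images", mimicking the train-on-labeled / predict-on-unlabeled workflow.
import random

def train_centroids(images, labels):
    """Average the pixel vectors for each label (the 'training' step)."""
    sums, counts = {}, {}
    for img, lab in zip(images, labels):
        acc = sums.setdefault(lab, [0.0] * len(img))
        for i, px in enumerate(img):
            acc[i] += px
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [v / counts[lab] for v in vec] for lab, vec in sums.items()}

def predict(centroids, img):
    """Assign the label whose centroid is closest in squared distance."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: sqdist(centroids[lab], img))

# Synthetic data: two classes whose per-pixel statistics differ slightly.
random.seed(0)
def make_image(shift):
    return [random.gauss(shift, 1.0) for _ in range(64)]

train_imgs = [make_image(0.0) for _ in range(100)] + [make_image(0.5) for _ in range(100)]
train_labs = ["A"] * 100 + ["B"] * 100
test_imgs = [make_image(0.0) for _ in range(50)] + [make_image(0.5) for _ in range(50)]
test_labs = ["A"] * 50 + ["B"] * 50

model = train_centroids(train_imgs, train_labs)
acc = sum(predict(model, img) == lab
          for img, lab in zip(test_imgs, test_labs)) / len(test_labs)
print(f"held-out accuracy: {acc:.2f}")
```

The point of the toy: if any class-correlated statistic exists in the pixels, even a trivial model will pick it up on held-out data — which is why the paper's result can be real even though nobody has identified which image feature the network is using.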

But how? Ghassemi and her colleagues remain baffled, but she suspects it has something to do with melanin, the pigment that determines skin color. Perhaps X-rays and CT scanners detect the higher melanin content of darker skin, and embed this information in the digital image in some fashion that human users have never noticed before. It’ll take a lot more research to be sure.

Could the test results amount to proof of innate differences between people of different races? Alan Goodman, a professor of biological anthropology at Hampshire College and coauthor of the book “Racism Not Race,” doesn’t think so. Goodman expressed skepticism about the paper’s conclusions and said he doubted other researchers will be able to reproduce the results. But even if they do, he thinks it’s all about geography, not race.

Goodman said geneticists have found no evidence of substantial racial differences in the human genome. But they do find major differences between people based on where their ancestors lived.

“Instead of using race, if they looked at somebody’s geographic coordinates, would the machine do just as well?” asked Goodman. “My sense is the machine would do just as well.”

In other words, an AI might be able to determine from an X-ray that one person’s ancestors were from northern Europe, another’s from central Africa, and a third person’s from Japan. “You call this race. I call this geographical variation,” said Goodman. (Even so, he admitted it’s unclear how the AI could detect this geographical variation merely from an X-ray.)

In any case, Celi said doctors should be reluctant to use AI diagnostic tools that might automatically generate biased results.

“We need to take a pause,” he said. “We cannot rush bringing the algorithms to hospitals and clinics until we’re sure they’re not making racist decisions or sexist decisions.”

https://www.bostonglobe.com/2022/05...i-can-recognize-race-x-rays-nobody-knows-how/
 
I saw a cope on twitter by somebody claiming it was because AI was being "developed by straight white men in Silicon Valley".

They literally think it's intentionally programmed that way.
I’ve seen that mentality from leftist programmers themselves. There was a team of programmers with one black guy who tried to develop facial recognition software, and after it identified everyone except the black guy, he accused AI of being racist. It’s not that he fucked up his program by training it on biased data, or that he’s a bad programmer, or even that the other non-black programmers were secretly conspiring against him, but the concept of artificial intelligence itself is racist.
 
Bone structure can differ slightly based on race. It's how we can look at old ass bones from humans that died centuries ago but notice that the individual had more in common with European or Asian ancestry from today.
No. You and your hatefacts and well thought out posts. Those are microaggressions, donchaknow?

You wrong. Machine Racist. AI Bad. All wypipo fault.
 
I want everyone to remember this article when they think STEM is safe from liberal rot.

“Instead of using race, if they looked at somebody’s geographic coordinates, would the machine do just as well?” asked Goodman. “My sense is the machine would do just as well.”
"Well Mr Awaale you have a Swedish passport so we can simply rule out sickle cell anemia."
"Oh Thank God, thank you Doctor"

In any case, Celi said doctors should be reluctant to use AI diagnostic tools that might automatically generate biased results.

“We need to take a pause,” he said. “We cannot rush bringing the algorithms to hospitals and clinics until we’re sure they’re not making racist decisions or sexist decisions.”
Imagine dying of a disease they could have cured ages ago, but decided against it because AI is racist and shouldn't be used in drug research or diagnostics. The best part of all the fear of biological differences is that if they had just committed to studying them, and not been afraid to really get into the biology, racial issues COULD have vanished completely in the future as people start selecting different genes for their designer babies based on fashion trends, personal taste, and hopes for the future, GATTACA style. Not gonna happen now.
 
I saw a cope on twitter by somebody claiming it was because AI was being "developed by straight white men in Silicon Valley".

They literally think it's intentionally programmed that way.
It's such a retarded cope too. Hypothetically, let's say all bones are the same and the programmers are all leaders in their local KKK chapters. How did they distinguish them? What black (heh) magic did they use to accurately distinguish between the skeletons if they are the same? They would not be able to. The fact that the program was accurately able to determine race is proof there is a difference, period. There is no way to fake this shit; you can't write a program to detect something that's not there.

The only way this could be fake is if they pulled the results out of their ass, but for some reason l*ftoids never use that argument. In their pea-brains the AI is just psychic and racist.
 
Everyone knows why, they just can't say anything because retards will bitch endlessly about "all humans are the same". Maybe, just maybe, the problem here is cognitive dissonance forcing common sense out of lefties' brains.

Maybe, just maybe...all humans are NOT objectively the same.

The sooner these people come to terms with this fact, the better off we'll all be.
 
A doctor can’t tell if somebody is Black, Asian, or white, just by looking at their X-rays.
Any forensic medical examiner worth shit can, and has been able to for... idk, generations.

The article buries the lede here a little, but it's from stuff like hand X-rays, chest X-rays, etc., not from the skull.
Well-known differences there too.

The fun part is, a good MRI tech can as well, from brain scans, even without taking external skull shape into account.
 
Forensic scientists already do this. Alongside anthropology, it's their job to correctly identify and reconstruct what a person looked like from a pile of unearthed bones, unsorted and most often incomplete. If you think "daas raysiz" because you live in a parallel dimension where humans are batch-made clones, then go off to twitter about it.
 
“When my graduate students showed me some of the results that were in this paper, I actually thought it must be a mistake,” said Marzyeh Ghassemi, an MIT assistant professor of electrical engineering and computer science, and coauthor of the paper,

I'm amazed she's allowed to work with men

The research effort was born when the scientists noticed that an AI program for examining chest X-rays was more likely to miss signs of illness in Black patients. “We asked ourselves, how can that be if computers cannot tell the race of a person?” said Leo Anthony Celi, another coauthor and an associate professor at Harvard Medical School.
What are those signs, exactly? How would a human identify them?

“Instead of using race, if they looked at somebody’s geographic coordinates, would the machine do just as well?” asked Goodman. “My sense is the machine would do just as well.”
Alan Goodman, a professor of biological anthropology at Hampshire College
You can tell he's not a real biologist (who is already at the fringes of real STEM) by how he doesn't do the science thing. He formulates a hypothesis, but he doesn't come up with an experiment attempting to disprove it in order to strengthen said hypothesis.
 
The only way this could be fake is if they pulled the results out of their ass, but for some reason l*ftoids never use that argument. In their pea-brains the AI is just psychic and racist.
Ah, I see the liberal arts degrees are serving them well.

Could the test results amount to proof of innate differences between people of different races? Alan Goodman, a professor of biological anthropology at Hampshire College and coauthor of the book “Racism Not Race,” doesn’t think so. Goodman expressed skepticism about the paper’s conclusions and said he doubted other researchers will be able to reproduce the results. But even if they do, he thinks it’s all about geography, not race.

Goodman said geneticists have found no evidence of substantial racial differences in the human genome. But they do find major differences between people based on where their ancestors lived.
I thought that innate differences between ethnic groups were confirmed, from skin color to disease risk. It's just mental things like IQ, impulse control, and aggression that are controversial. Also, his hypothesis about geography seems like a pretty long-winded way of saying the same thing. Race is tied to geography, and it takes eons for significant changes to occur. Hence why we distinguish people in groups like Native American, Caucasian, African, Asian.
 
They need to feed this AI some crime stats, just give it all the terabytes and maybe it can finally prove all those white nationalists wrong, let all of personkind see that it's really the white republicans (sometimes in skin-suits) doing all the robbing raping and murdering.
 
I find it funny how it's like "scientists confused: AI can tell races apart by bones!" Like no shit, Sherlock. If it's all just white skeletons, you aren't gonna need skin to tell where they came from genetically; it's literally doing what a doctor does with extra steps. What kinda fucking quacks do they got that don't even know basic fucking grade-school bone-structure lore? This shit's framing it like it's something only a probably expensive AI can do, which is fucking funny and horrible to me all at the same time.
 
I want everyone to remember this article when they think STEM is safe from liberal rot.
You're right, but it's not as far along as you might fear. The 'AI' and machine learning spaces are full of hacks and jackasses taking pre-developed TensorFlow models built by better researchers, stacking them on top of each other not unlike a special needs child playing with Lego bricks, buying as much cloud computing resources as their uninterested slavemaster professor can afford in the research stipend, and then running their Godless monstrosity on Facebook datasets. They then shit out mongoloid-tier papers about how well their model 'worked' (or, if it didn't, about how it's racist and AI is the doom of us all).

Nobody actually doing serious respectable research on AI/ML has any time to entertain the clowns that have overrun the field. With any luck the deep learning bubble will pop and suffocate these worthless pseudo-academics when it does.
 