AI models still racist, even with more balanced training

tom_mai78101

AI algorithms can still come loaded with racial bias, even if they're trained on data more representative of different ethnic groups, according to new research.

An international team of researchers analyzed how accurately algorithms could predict various cognitive and health measures – such as memory, mood, and even grip strength – from brain fMRI scans. Medical datasets are often skewed: they aren't collected from a diverse enough sample, and certain groups of the population are left out or misrepresented.

It's not surprising if predictive models that try to detect skin cancer, for example, are less effective when analyzing darker skin tones than lighter ones. Biased datasets are often the reason the AI models trained on them are biased too. But a paper published in Science Advances has found that these unwanted behaviors can persist even when algorithms are trained on fairer, more diverse datasets.

The team performed a series of experiments with two datasets containing tens of thousands of fMRI scans of people's brains – including data from the Human Connectome Project and the Adolescent Brain Cognitive Development Study. To probe how racial disparities affected the predictive models' performance, they tried to minimize the impact that other variables, such as age or gender, might have on accuracy.
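As a rough illustration of this kind of evaluation – a minimal sketch, not the paper's actual pipeline – one can train a single regression model on fMRI-derived features, regress confounds like age and sex out of the target, and then score out-of-sample error separately per group. The data and variable names below are synthetic and hypothetical:

```python
# Minimal sketch (synthetic data, hypothetical names) of per-group evaluation:
# train one model to predict a behavioral score from fMRI-derived features,
# then report out-of-sample error separately for each group.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
n, d = 600, 100                       # subjects, fMRI-derived features
X = rng.normal(size=(n, d))           # e.g., flattened functional connectivity
confounds = rng.normal(size=(n, 2))   # e.g., age and sex, to be controlled for
y = X[:, 0] + 0.5 * confounds[:, 0] + rng.normal(scale=0.5, size=n)
group = rng.choice(["WA", "AA"], size=n)

# Regress confounds out of the target so they can't drive apparent accuracy.
y_resid = y - LinearRegression().fit(confounds, y).predict(confounds)

train, test = np.arange(n) < 400, np.arange(n) >= 400
pred = Ridge(alpha=1.0).fit(X[train], y_resid[train]).predict(X[test])

# The gap between these two numbers is the disparity of interest.
for g in ("WA", "AA"):
    mask = group[test] == g
    print(g, np.mean(np.abs(pred[mask] - y_resid[test][mask])))
```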

"When predictive models were trained on data dominated by White Americans (WA), out-of-sample prediction errors were generally higher for African Americans (AA) than for WA," the paper reads.

That shouldn't raise any eyebrows, but what is interesting is that those errors didn't go away even when the algorithms were trained on datasets with equal numbers of WA and AA participants, or on samples from AAs only.

Algorithms trained solely on data samples from AAs were still less accurate at predicting cognitive behaviors for that group than models trained on WAs were for WAs, going against the common understanding of how these systems normally work. "When models were trained on AA only, compared to training only on WA or an equal number of AA and WA participants, AA prediction accuracy improved but stayed below that for WA," the abstract continues. Why?
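In code, the training-composition comparison the paper describes looks roughly like the sketch below: hold the test set fixed, vary who the model is trained on, and compare per-group error. This is a hedged illustration with synthetic data and hypothetical names, so it won't reproduce the paper's finding; the point of the study is that on real fMRI data the AA error stayed higher even in the AA-only condition:

```python
# Sketch of comparing training compositions (WA-only, AA-only, balanced) on a
# fixed test set. Synthetic data and hypothetical names; purely illustrative.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n, d, n_train = 2000, 50, 400
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) * 0.1 + rng.normal(size=n)
group = rng.choice(["WA", "AA"], size=n)

test = np.arange(n) >= 1200            # fixed held-out subjects
pool = np.flatnonzero(~test)           # candidates for the training set

def sample_train(composition):
    """Draw a fixed-size training set with the requested group mix."""
    parts = [rng.choice(pool[group[pool] == g],
                        size=int(frac * n_train), replace=False)
             for g, frac in composition.items()]
    return np.concatenate(parts)

compositions = {"WA only": {"WA": 1.0},
                "AA only": {"AA": 1.0},
                "balanced": {"WA": 0.5, "AA": 0.5}}

for name, comp in compositions.items():
    tr = sample_train(comp)
    pred = Ridge(alpha=1.0).fit(X[tr], y[tr]).predict(X[test])
    errs = {g: float(np.mean(np.abs(pred[group[test] == g]
                                    - y[test][group[test] == g])))
            for g in ("WA", "AA")}
    print(name, errs)
```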

 

The Helper

I wonder if this is in the program or just a part of becoming "Intelligent"?
 