These new tools could make AI vision systems less biased


Traditionally, skin-tone bias in computer vision is measured using the Fitzpatrick scale, a six-point scale that runs from light to dark. The scale was originally developed to measure the tanning of white skin but has since been widely adopted as a tool to determine ethnicity, says William Thong, an AI ethics researcher at Sony. It is used to measure bias in computer vision systems by, for example, comparing how accurate AI models are for people with light and dark skin.
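
To make that comparison concrete, here is a minimal sketch of this kind of evaluation, assuming each test sample has already been assigned to a coarse "light" or "dark" bin; the function name and the toy labels are illustrative, not taken from any specific benchmark.

```python
import numpy as np

def accuracy_gap(y_true, y_pred, groups, group_a, group_b):
    """Compare a model's accuracy between two skin-tone groups."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    acc = {}
    for g in (group_a, group_b):
        mask = groups == g  # select samples assigned to this bin
        acc[g] = float((y_true[mask] == y_pred[mask]).mean())
    return acc, acc[group_a] - acc[group_b]

# Toy illustration with made-up labels and predictions.
acc, gap = accuracy_gap(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 1, 0],
    groups=["light", "light", "light", "dark", "dark", "dark"],
    group_a="light",
    group_b="dark",
)
print(acc, gap)  # e.g. {'light': 0.67, 'dark': 0.33} and a gap of 0.33
```

A large gap in either direction indicates that the model serves one group worse than the other.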

But describing people’s skin with a one-dimensional scale is misleading, says Alice Xiang, the global head of AI ethics at Sony. By classifying people into groups based on this coarse scale, researchers miss biases that affect, for example, Asian people, who are underrepresented in Western AI data sets and can fall into both light-skinned and dark-skinned categories. The scale also fails to account for the fact that people’s skin tones change: Asian skin becomes darker and yellower with age, while white skin becomes darker and redder, the researchers point out.

Thong and Xiang’s team developed a tool—shared exclusively with MIT Technology Review—that expands the skin-tone scale into two dimensions, measuring both skin color (from light to dark) and skin hue (from red to yellow). Sony is making the tool freely available online.
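
The article doesn’t spell out the tool’s internals, but one standard way to get exactly these two axes is the CIELAB color space, where perceptual lightness L* runs from dark to light and the hue angle separates redder skin (smaller angles) from yellower skin (larger angles). The sketch below, using standard sRGB/D65 conversion constants, shows how such a two-dimensional measurement could be computed from an average skin-pixel color; the function names and the sample color are assumptions for illustration.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triplet in [0, 1] to CIELAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma curve to get linear RGB.
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ using the standard sRGB/D65 matrix.
    m = np.array([
        [0.4124564, 0.3575761, 0.1804375],
        [0.2126729, 0.7151522, 0.0721750],
        [0.0193339, 0.1191920, 0.9503041],
    ])
    xyz = m @ lin
    # Normalize by the D65 reference white, then apply the Lab transfer function.
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b

def skin_tone_2d(rgb):
    """Return (lightness, hue angle in degrees) for an average skin color.

    L* captures the light-to-dark axis; the hue angle captures red (near
    0 degrees) versus yellow (near 90 degrees), the second dimension.
    """
    L, a, b = srgb_to_lab(rgb)
    return L, float(np.degrees(np.arctan2(b, a)))

print(skin_tone_2d([0.80, 0.60, 0.50]))  # made-up average skin-pixel color
```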

Thong says he was inspired by the Brazilian artist Angélica Dass, whose work shows that people who come from similar backgrounds can have a huge variety of skin tones. But representing the full range of skin tones is not a novel idea. The cosmetics industry has been using the same technique for years. 

“For anyone who has had to select a foundation shade … you know the importance of not just whether someone’s skin tone is light or dark, but also whether it’s warm toned or cool toned,” says Xiang. 

Sony’s work on skin hue “offers an insight into a missing component that people have been overlooking,” says Guha Balakrishnan, an assistant professor at Rice University, who has studied biases in computer vision models. 

Measuring bias

Right now, there is no single standard way for researchers to measure bias in computer vision, which makes it hard to compare systems against one another.

To streamline bias evaluations, Meta has developed a new way to measure fairness in computer vision models, called Fairness in Computer Vision Evaluation (FACET), which can be used across a range of common tasks such as classification, detection, and segmentation. Laura Gustafson, an AI researcher at Meta, says FACET is the first fairness evaluation to span many different computer vision tasks, and that it incorporates a broader range of fairness metrics than other bias tools.
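
Meta’s release documents FACET’s exact annotation schema and metrics; as a rough illustration of the underlying idea, which is to compute one performance metric per perceived-attribute group and report the spread between the best- and worst-served groups, here is a hedged sketch for a binary task. The function name and label setup are illustrative, not FACET’s API.

```python
import numpy as np

def recall_disparity(y_true, y_pred, groups):
    """Per-group recall and the spread between best and worst groups.

    `groups` holds a perceived-attribute label per sample (hypothetical
    bins); recall is the fraction of true positives the model finds.
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    recalls = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        if positives.any():  # skip groups with no positive samples
            recalls[str(g)] = float((y_pred[positives] == 1).mean())
    disparity = max(recalls.values()) - min(recalls.values())
    return recalls, disparity
```

The same pattern extends to detection and segmentation by swapping recall for a task-appropriate metric such as average precision or IoU.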


