Before her book came out, says O’Neil, “people didn’t really understand that the algorithms weren’t predicting but classifying … and that this wasn’t a math problem but a political problem. A trust problem.”
O’Neil showed how every algorithm is optimized for a particular notion of success and is trained on historical data to recognize patterns: e.g., “People like you were successful in the past, so it’s fair to guess you will be successful in the future.” Or “People like you were failures in the past, so it’s fair to guess you will be a failure in the future.”
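To make that mechanism concrete, here is a minimal sketch in Python of the pattern O’Neil describes: a classifier “trained” on historical outcomes simply reproduces whatever bias those outcomes encode. This is an invented illustration, not one of the models from her book; the group names and data are hypothetical.

```python
# A toy classifier trained on historical outcomes. The "success" labels
# reflect past decisions, not intrinsic merit, so the model's predictions
# recycle the past. All data here is invented for illustration.

from collections import defaultdict

# Hypothetical historical records: (group, past_outcome)
history = [
    ("uptown", "success"), ("uptown", "success"), ("uptown", "failure"),
    ("downtown", "failure"), ("downtown", "failure"), ("downtown", "success"),
]

# "Training": tally the historical success rate per group.
counts = defaultdict(lambda: [0, 0])  # group -> [successes, total]
for group, outcome in history:
    counts[group][0] += outcome == "success"
    counts[group][1] += 1

def predict(group: str) -> str:
    """Classify a new individual purely by their group's historical rate."""
    successes, total = counts[group]
    return "success" if successes / total >= 0.5 else "failure"

# A new applicant from "downtown" is labeled a likely failure, not because
# of anything they did, but because of how people "like them" fared before.
print(predict("downtown"))  # failure
print(predict("uptown"))    # success
```

The sketch shows why the logic feels reasonable and still misfires: the prediction is really a classification of the group, inherited wholesale from history.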
This might seem like a sensible approach. But O’Neil’s book revealed how it breaks down in notable, and damaging, ways. Algorithms designed to predict the chance of rearrest, for example, can unfairly burden people, typically people of color, who are poor, live in the wrong neighborhood, or have untreated mental-health problems or addictions. “We are not really ever defining success for the prison system,” O’Neil says. “We are simply predicting that we will continue to profile such people in the future because that’s what we’ve done in the past. It’s very sad and, unfortunately, speaks to the fact that we have a history of shifting responsibilities of society’s scourges to the victims of those scourges.”
Gradually, O’Neil came to recognize another factor that was reinforcing these inequities: shame. “Are we shaming someone for a behavior that they can actually choose not to do? You can’t actually choose not to be fat, though every diet company will claim otherwise. Can you choose not to be an addict? Much harder than you think. Have you been given the opportunity to explain yourself? We’ve been shaming people for things they have no choice or voice in.”
I spoke with O’Neil by phone and email about her new book, The Shame Machine: Who Profits in the New Age of Humiliation, which delves into the many ways shame is being weaponized in our culture and how we might fight back.
The trajectory from algorithms to shame isn’t immediately apparent. How did you connect these two strands?
I investigated the power behind weaponized algorithms. Often, it’s based on the idea that you aren’t enough of an expert to question this scientific, mathematical formula, which is a form of shaming. And it was even more obvious to me, I think, because I have a math PhD, so it didn’t work on me at all and in fact baffled me.
The power of bad algorithms is a violation of trust, but it’s also shame: the message is that you don’t know enough to ask questions. For example, I interviewed a friend of mine, a principal whose teachers were being evaluated by New York City’s Value Added Model for Teachers, and I asked her to get her hands on the formula they were being judged by. It took her many layers of requests, and each time she asked she was told, “It’s math—you won’t understand it.”