ChatGPT: Did Big Tech Set Up the World for an AI Bias Disaster?


ChatGPT’s buzzy debut has made for a rough few months for Google. Close watchers of the tech giant say: It didn’t have to go this way.

Essentially scooped by a competitor on its home turf, Google has scrambled to release its own artificial intelligence (AI) mega-system Bard in response to OpenAI’s ChatGPT, the remarkable AI chatbot garnering attention worldwide. The rollout comes amid mounting concerns that AI could perpetuate cultural, racial, and gender biases, sparking intense debate over the uses—and misuses—of this powerful technology.

If things had gone differently, Google might have held the high ground, allowing it to influence norms and policies to mitigate AI bias. But the tech giant’s decision to push out pioneering AI researcher and ethicist Timnit Gebru set the company on a rocky course, contends Harvard Business School professor Tsedal Neeley. Neeley wrote a case study last year detailing Gebru’s efforts within Google to urge caution with AI and her argument that tech companies shouldn’t race to launch systems without weighing the potential risks and harms they could cause. Gebru warned that, without effective oversight or regulation, the datasets these systems are trained on can be riddled with biases that compound at scale.

“You have to slow down to ensure that the data that these systems are trained on aren’t inaccurate or biased.”

While many people are now aware that bias can be baked into AI systems, from credit reporting to facial recognition, Gebru was instrumental early on in bringing the problem to the public’s attention. Now, Neeley sees many of Gebru’s insights and warnings, which are highlighted in the case study, taking on newfound importance amid Big Tech’s race to build ever-larger AI datasets.

“We now see, once again, that she is ahead of everyone. Not only does she have a deep understanding of the underlying technology that powers these systems, she is ringing the alarm: You have to slow down to ensure that the data that these systems are trained on aren’t inaccurate or biased,” says Neeley, who is senior associate dean of faculty development and research strategy at HBS and co-author of The Digital Mindset: What It Really Takes to Thrive in the Age of Data, Algorithms, and AI.

The case, and its reverberations as ChatGPT’s success escalates the AI race, offers lessons for anyone deploying AI, in the tech industry and beyond, about the importance of listening to voices urging caution, even when doing so might cool corporate profits.

An early tech star reveals bias in AI

In May 2022, Neeley and HBS research associate Stefani Ruper shined a spotlight on the ethical concerns raised by the breakneck development of AI technology in the case “Timnit Gebru: ‘Silenced No More’ on AI Bias and the Harms of Large Language Models.” The case study examines the pioneering researcher’s efforts to warn Google’s executive suite about the technology’s risks.

Born in Addis Ababa to Eritrean scientists, Gebru emigrated to the US with her mother, settling in the Boston area. Gebru was a math and science standout at her high school, “despite teachers’ disbelief that a Black refugee could be intellectually successful,” Neeley notes. Gebru would go on to earn a Ph.D. as part of the Stanford Artificial Intelligence Laboratory.

AI uses data to teach computer systems to recognize text and images. While at Stanford, Gebru collaborated with AI researcher Fei-Fei Li, who had hired people to tag the contents of 15 million images, noting whether each included, say, a cat or a cello. This labeled dataset would teach an AI system to automatically recognize cats and cellos in new images.
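The mechanics behind that kind of dataset are ordinary supervised learning: humans pair each image with a label, and a model adjusts itself until it can reproduce those labels on its own. The sketch below is a hypothetical illustration of the idea, using tiny synthetic “images” and scikit-learn rather than anything from the actual ImageNet pipeline.

```python
# Minimal sketch of supervised image classification: labeled examples in,
# a model that predicts those labels out. Synthetic data, not ImageNet.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in "images": 8x8 pixel grids flattened to 64 features.
# Class 1 ("cello") images are slightly brighter on average than class 0 ("cat").
n = 400
labels = rng.integers(0, 2, size=n)              # the human-assigned tags
images = rng.normal(loc=labels[:, None] * 0.5,   # pixel intensities
                    scale=1.0, size=(n, 64))

X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.25, random_state=0)

# The model learns to associate pixel patterns with the tags people supplied.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The model only ever learns what the images and the human-supplied labels encode; whatever the taggers got wrong, it will faithfully reproduce.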

In 2017, Gebru took this research a step further, using AI not only to identify vehicle makes in Google Street View images but also to connect them to demographic, crime, and voting data. For example, more Buicks corresponded to more Black residents in a neighborhood, and more vans corresponded to more crime in a particular area. Her findings, which she presented at a prominent investor conference, captured the imaginations of industry titans and academics.

Computer scientists long assumed that AI systems would become more accurate and objective as they gathered more data, but Gebru soon challenged that theory. Her Gender Shades project with Joy Buolamwini found that facial recognition services offered by IBM, Microsoft, and other companies misidentified Black women as much as 35 percent of the time while performing nearly perfectly with white men.
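The Gender Shades finding rests on a step that aggregate benchmarks skip: measuring accuracy separately for each demographic group instead of reporting a single overall number. Here is a minimal sketch of that kind of disaggregated audit; the prediction records are invented for illustration and do not come from any real system.

```python
# Minimal sketch of a disaggregated accuracy audit (in the spirit of
# Gender Shades). The records below are invented for illustration only.
from collections import defaultdict

# Each record: (demographic group, true label, model's predicted label)
records = [
    ("darker-skinned women", "female", "male"),
    ("darker-skinned women", "female", "female"),
    ("darker-skinned women", "female", "male"),
    ("lighter-skinned men",  "male",   "male"),
    ("lighter-skinned men",  "male",   "male"),
    ("lighter-skinned men",  "male",   "male"),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, truth, predicted in records:
    totals[group] += 1
    errors[group] += (predicted != truth)

# A single blended accuracy number would hide the gap that shows up here.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")
```

A respectable-looking overall accuracy can coexist with a large per-group disparity, which is why reporting only the aggregate conceals the problem.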

In a foreshadowing of her later work, Gebru called out “algorithmic bias” as one of the “most important emergent issues plaguing our society today.”

Google turns its back on Gebru

Gebru joined Google in 2018 as co-lead of the company’s Ethical Artificial Intelligence unit, shifting her focus from advancing the technology to assessing it for fairness.

She was particularly concerned about so-called large language models, such as Google’s BERT model, which was trained on 3.3 billion words, and OpenAI’s GPT-3, built on a half-trillion words. Google and others were staking profits on the success of these models. But if the data that go into an AI system contain bias, she contended, its outputs will contain the same bias, a problem that multiplies as the size of the dataset grows.
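The argument is mechanical rather than speculative: a language model’s outputs reflect the statistics of its training text, so skews in the corpus become skews in what the model generates, and a bigger corpus simply reproduces the skew at greater scale. The toy example below makes the point with invented sentences and simple co-occurrence counts standing in for a trained model.

```python
# Toy illustration of how corpus statistics become model behavior.
# The "corpus" is invented; co-occurrence counts stand in for a trained model.
from collections import Counter

corpus = [
    "the doctor said he would review the chart",
    "the doctor said he was running late",
    "the doctor said she would review the chart",
    "the nurse said she would update the chart",
    "the nurse said she was running late",
    "the nurse said she had finished the shift",
]

def pronoun_counts(role):
    """Count which pronoun follows '<role> said' in the corpus."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - 2):
            if words[i] == role and words[i + 1] == "said":
                counts[words[i + 2]] += 1
    return counts

# A model trained on this text would reproduce (not correct) the skew
# in the data whenever it generates text about doctors and nurses.
for role in ("doctor", "nurse"):
    print(role, dict(pronoun_counts(role)))
```

Nothing in the training objective corrects the imbalance; the counts are the model’s picture of the world, which is why the composition of the dataset matters so much.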

In 2020, Gebru and Emily Bender, a linguistics professor at the University of Washington, led the submission of a major paper to an academic conference calling for a slowdown in AI development. The paper questioned whether language models trained on hundreds of billions of words had grown too large for researchers to effectively address their biases.

“And we saw this race of just trying to train larger and larger and larger language models,” Gebru recalls in Neeley’s case study. “Just seeing this race for the sake of it. And so we wanted to write a paper to help slow it down and just think through the risks and harms.”

While the paper had already been reviewed by Google and submitted for peer review outside the company, Gebru’s supervisor ordered her to retract it. This triggered a dramatic chain of events—detailed in Neeley’s case study—that led to Gebru leaving Google. (Gebru maintains that she was fired, while Google says she resigned.)

An uproar over Gebru’s departure ensued, triggering a public relations nightmare for the tech giant. Thousands of Google employees and supporters in academia, industry, and civil society groups signed a petition calling Gebru’s “termination” an “act of retaliation” that “heralds danger for people working for ethical and just AI—especially Black people and People of Color—across Google.”

A week after Gebru left the company, Google CEO Sundar Pichai issued an apology.

Learning from Google’s mistakes

The takeaways from Gebru’s story are hardly unique to Google as Big Tech scrambles to build ever-larger AI datasets with seemingly few restraints or safeguards, Neeley says. And since ChatGPT hit the scene late last year, news coverage of both its potential and its ominous risks has highlighted the concerns Gebru raised years earlier.

“If we don’t have the right strategies in place to design and sanitize our sources of data, we will propagate inaccuracies and biases that could target certain groups.”

“The biggest message I want to convey is that AI can scale bias in ways that we can barely understand today,” Neeley says. “It takes millions and billions of data points for them to operate. If we don’t have the right strategies in place to design and sanitize our sources of data, we will propagate inaccuracies and biases that could target certain groups.”

The world is now so heady about ChatGPT, developed with backing from Microsoft, and about AI generally that US regulators have cautioned companies to rein in grandiose claims. Leaders should get ready to confront increasingly thorny ethics questions with potentially higher stakes. How can they prepare? Neeley recommends that leaders ask themselves:

Do our ethics watchdogs and controls have authority and autonomy? One of Google’s biggest mistakes was not giving Gebru the authority and independence to be an effective AI ethics researcher, Neeley’s case suggests. For example, an internal document issued by the tech giant asked employees to use a “positive” tone in reports, Neeley writes, citing a Reuters report.

Gebru, who has argued that such directives undermine research objectivity, preferred academia’s rigorous peer review approach for evaluating her team’s findings. Without protection from corporate interference, she reasoned, research is little more than propaganda.

Do we have people with diverse perspectives who can challenge the status quo? Gebru’s story is also a call to hire diverse talent, Neeley says. According to the case, Gebru suggests that some of the bias in AI could be mitigated by including people from historically marginalized communities in its design.

“The people who will really, really know how tools are being used are refugees or incarcerated people or heavily policed communities,” Gebru says in the case. “And the issue is that, at the end of the day, those are also the communities with the least amount of power.”

Is short-term hubris clouding my judgment? As a market leader, Google might have felt invincible enough to ignore major shortcomings in key products, but ChatGPT is a reminder that no company is safe from disruption.

While hindsight may be 20/20, the emergence of ChatGPT makes it clear that Google, in failing to give Gebru the independence to do her job, might have sacrificed an opportunity to become a global leader in responsible AI development, Neeley says. Google also lost a computer science star who could have been instrumental in helping the company reach its long-term AI goals.
