Stop me if you’ve heard this one before: “This new technology will change everything!”
It’s a phrase regurgitated endlessly by analysts and tech executives, with the buzzword of the moment plugged in. And in 2023, that buzzword is AI. ChatGPT has taken the world by storm, Microsoft redesigned its Edge browser around an AI chatbot, and Google is rushing to integrate its AI model deeply into search.
I don’t blame you if you think AI is just another fad. I understand the skepticism (and frankly, the cynicism) around claiming any technology is some revolution when so many aren’t. But where augmented reality, the metaverse, and NFTs have faded into relative obscurity, AI isn’t going anywhere — for better and worse.
This isn’t new
Let’s be clear here: AI impacting everyday life isn’t new; tech companies are just finally bragging about it. It has been powering things you use behind the scenes for years.
For instance, anyone who’s interacted with Google search (read: everyone) has experienced a dozen or more AI models at play with only a single query. In 2020, Google introduced an update that leveraged AI to correct spelling, identify critical passages in articles, and generate highlights from YouTube videos.
It’s not just Google, either. Netflix and Amazon use AI to generate watching and shopping recommendations. Dozens of AI support chat programs power customer service from Target to your regional internet provider. Navigation programs like Google Maps use AI to identify roadblocks, speed traps, and traffic congestion.
Those are just a few high-level examples. Most things that could previously be done with a static algorithm — if ‘this,’ then ‘that’ — can be done now with AI, and almost always with better results. AI is even designing the chips that power most electronics today (and doing a better job than human designers).
Companies like Google and Microsoft are simply pulling back the curtain on the AI that’s been powering their services for several years. That’s the critical difference between AI and the endless barrage of tech fads we see every year.
Better over time
AI’s staying power hinges on the fact that we’re all already using it, but there’s another important element here. AI doesn’t require an investment from you. It absolutely requires a ton of money and power, but that burden rests on the dozens of companies caught up in the AI arms race, not on the end user.
It’s a fundamental difference. Metaverse hype tells you that you need to buy an expensive headset like the Meta Quest Pro to participate, and NFTs want you to cough up cold cash for code. AI just asks whether you want the tasks you’re already performing to be easier and more effective. That’s a hell of a lot different.
AI doesn’t have the growing pains of that other emerging (soon-to-be-dead) tech, either. It has problems of its own, which I’ll dig into next, but generative AI has already been refined to the point that it’s ready for primetime. You don’t have to hassle with expensive, half-baked hardware that has few practical applications.
It also holds a promise. AI models like the ones now powering search engines and web browsers use reinforcement learning. They’ll get things wrong, but every one of those missteps is fed back into a feedback loop that improves the AI over time. Again, I understand the skepticism around believing that AI will magically get better, but I trust that logic much more than I trust a tech CEO telling me a buzzword is going to change the world.
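The feedback loop described above can be sketched in a few lines. This is a deliberately toy illustration of the general idea, not any vendor's actual training pipeline: the model answers, missteps are flagged against known corrections, and those corrections are folded back into what the system knows for the next round.

```python
# Toy sketch of a correction-driven feedback loop (illustrative only --
# real systems like ChatGPT use reinforcement learning from human
# feedback on a neural network, not a lookup table).

def answer(question, knowledge):
    """Return the stored answer if one exists, else admit ignorance."""
    return knowledge.get(question, "unknown")

def feedback_loop(questions, corrections, knowledge):
    """Fold every flagged misstep back into the knowledge base."""
    for q in questions:
        if answer(q, knowledge) != corrections[q]:
            knowledge[q] = corrections[q]  # the misstep becomes training data
    return knowledge

knowledge = {}
corrections = {"capital of France?": "Paris"}  # hypothetical user feedback
knowledge = feedback_loop(corrections.keys(), corrections, knowledge)
print(answer("capital of France?", knowledge))  # prints "Paris"
```

Each pass through the loop shrinks the set of questions the system gets wrong, which is the intuition behind the claim that these models improve with use.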
A warning sign
Don’t get it twisted; this is not a resounding endorsement of AI. For as many positives as AI can bring, it also brings some sobering realities.
First and most obviously: AI is wrong a lot of the time. Google’s first demo of its Bard AI showed an answer that was disproven by the first search result. Microsoft’s ChatGPT-powered Bing has also proven that complex, technical questions often throw the AI off, resulting in a copy-paste job from whatever website is the first result in the search engine.
That seems tame enough, but a constantly learning machine can perpetuate problems we already have online — and come to treat those problems as valid. For instance, graphics card and processor brand AMD recently announced in an earnings call that it was “undershipping” chips, which led many outlets to initially report that the company was price fixing. That isn’t the case. The term simply refers to the number of products AMD ships to retailers and signals that demand is lower. Will an AI understand that context? Or will it run with the same misunderstanding that normally reliable sources are already erroneously repeating?
It’s not hard to see a negative feedback loop of misinformation around these complex topics, nor how these AIs can learn to reinforce negative stereotypes. Studies from Johns Hopkins show the often racist and sexist bias present in AI models, and as the study reads: “Stereotypes, bias, and discrimination have been extensively documented in machine learning methods.”
Safeguards are in place to protect against this type of bias, but you can still skirt these guardrails and reveal what the AI believes underneath. I won’t link to the examples to avoid perpetuating these stereotypes, but Steven Piantadosi, a professor and researcher of cognitive computer science at UC Berkeley, revealed half a dozen inputs that would produce racist, sexist responses within ChatGPT just a couple of months ago — and none of them were particularly hard to come up with.
It’s true that AI can be prodded into submission on these fronts, but it hasn’t been yet. Meanwhile, Google and Microsoft are caught up in an arms race to debut their rival AIs first, all carrying the same underpinnings that have been present in AI models for years. Even with protections, it’s a matter of when, not if, these models will deteriorate into the same rotten core we’ve seen in AIs since their inception.
I’m not saying this bias is intentional, and I’m confident Microsoft and Google are working to remove as much of it as possible. But the momentum behind AI right now pushes these concerns into the background and ignores the implications they could have. After all, the AI revolution is upon us, and it won’t quickly fade into obscurity like other tech fads. My only hope is that the never-ending drive to compete isn’t enough to uproot the need for responsibility.