The Real AI Revolution Will Be Invisible


Key Takeaways

  • Qualcomm, NVIDIA, AMD, Intel, and other companies are developing hardware for faster on-device AI.
  • On-device AI is already powering impressive audio mixing software, accessibility tools, and other software.
  • As AI hardware becomes more accessible, expect more useful AI features to be integrated into applications, replacing the current trend of obnoxious AI chatbots and suggested replies.



Tech companies are really excited about artificial intelligence, often to the point of creating unhelpful AI-related services and features just to prove they’re doing something with the technology. Underneath the vapid marketing and useless functionality, there are some genuinely impressive AI features that will make a difference in your day-to-day life.

Qualcomm invited me to its headquarters in San Diego, California this week to show off its ongoing work on AI technology. You might not have heard of Qualcomm, but it’s the company that builds the core chipsets for countless phones and tablets, from high-end devices like the Galaxy S24 Ultra to budget models like the Moto G 5G. The company’s modems are found in most iPhones, and it’s building VR and AR hardware for use in the Meta Quest and other headsets. Most recently, Qualcomm started building high-end System-on-a-Chip (SoC) designs for Windows laptops, in direct competition with CPUs from Intel and AMD.


Qualcomm is developing a lot of AI hardware and software, built up from the company’s experience with mobile image processing and other earlier implementations of on-device machine learning. The new Snapdragon X chipsets for PC laptops have a dedicated neural processing unit (NPU) for on-device AI tasks. The company’s newer mobile chips, like the Snapdragon 8s Gen 3, can handle some large language models (LLMs) without help from an external server over an internet connection. Qualcomm isn’t alone here, to be clear—the latest laptop CPUs from AMD and Intel also have NPUs, and consumer NVIDIA GPUs can also handle many on-device AI workloads.



Outside The Hype

I know what you’re thinking. You’re tired of hearing every tech company ramble about AI like it’s the magical solution to all the world’s problems. You’re sick of the AI features popping up in your favorite apps. Maybe you’re an artist, writer, or some other creator who heard OpenAI’s CTO say that AI could kill some creative jobs that “shouldn’t have been there in the first place,” and you’re ready to burn it all down. I get that, and I agree most implementations of “AI” right now are solutions in search of a problem, or actively harmful.

Underneath the AI hype-cycle nonsense and the executives excited about replacing countless workers with cheaper automation, there are some genuinely useful features that have only become feasible with recent hardware from Qualcomm, Intel, AMD, NVIDIA, and other companies.


Cephable, a company that builds a camera-based input tool for people with disabilities, showed off an updated version of its software running on a Snapdragon X Elite laptop. It uses a webcam to monitor head movements and facial expressions, translating them into key presses or other actions in desktop software (for example, turning your head to change slides in a PowerPoint presentation). The new version for Snapdragon laptops runs all of its machine learning on the dedicated NPU, reducing battery usage, improving processing speed and accuracy, and freeing up CPU and GPU resources for your other applications.

Another demonstration showed djay Pro splitting songs into separate instrumental and vocal tracks for real-time DJ mixing, something that is only practical with on-device AI. The latest Logic Pro update on Mac and iPad has similar functionality for audio production.

Live demo of DJ software running on a Snapdragon X Elite laptop.
Corbin Davenport / How-To Geek


The ability to run large language models on a more typical smartphone, tablet, or PC opens up other interesting use cases. For example, the upcoming “Apple Intelligence” on iPhones, iPads, and Macs will use on-device AI to sort notifications and better understand spoken language in Siri. There are some features that are harder to build and scale when they require a powerful datacenter somewhere, and that’s what hardware makers are trying to change right now.

There aren’t many applications and services that use on-device AI right now, because they can behave differently across different devices and operating systems, and not everyone has a phone or PC with the required processing power. Newer developer tools, like NVIDIA’s TensorRT-LLM and Qualcomm’s AI Engine Direct SDK, are slowly making that part more accessible for software developers. Eventually, adding a feature that requires a powerful LLM won’t be much more complex than adding a feature that needs any other system function, and I expect that’s when we’ll see more apps adding useful features.
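To give a rough sense of what that looks like from a developer’s side, the sketch below models the kind of hardware-selection logic these tools handle: an app prefers a dedicated NPU when one exists and falls back to the GPU or CPU otherwise. The backend names are borrowed from ONNX Runtime’s real execution providers (“QNNExecutionProvider” is its Qualcomm NPU backend), but the helper function itself is a hypothetical illustration, not any SDK’s actual API.

```python
# Illustrative sketch of picking the best available AI backend on a device.
# The provider names match ONNX Runtime's execution providers; the
# pick_provider helper is hypothetical, for illustration only.

# Backends in order of preference: Qualcomm NPU, then NVIDIA GPU, then CPU.
PREFERENCE = ("QNNExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider")

def pick_provider(available):
    """Return the most capable backend this device offers, or None."""
    for provider in PREFERENCE:
        if provider in available:
            return provider
    return None

# A Snapdragon X laptop would report its NPU alongside the CPU fallback:
print(pick_provider(["QNNExecutionProvider", "CPUExecutionProvider"]))
# An older machine without an NPU or GPU simply falls back to the CPU:
print(pick_provider(["CPUExecutionProvider"]))
```

The point of SDKs like TensorRT-LLM and AI Engine Direct is that developers write against one interface and the runtime sorts out which silicon actually does the work, which is what makes shipping the same AI feature across wildly different devices tractable.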


These advancements are pointing towards a future where more on-device AI features will be possible, and they will be implemented just like any other functionality in your favorite applications. The trend of obnoxious AI chatbots or AI-suggested replies on social media posts will eventually fade away (hopefully), but we’ll be left with the features that are actually useful. That’s the real AI revolution: not a giant Copilot button in Microsoft Edge, but your apps and devices becoming smarter and accomplishing specific tasks much quicker and more efficiently.

What’s Old Is New

The word “AI” has lost most of its meaning over the past year or two, much like “crypto” became meaningless during the last cryptocurrency bubble. It might mean large language models that require expensive datacenters, or it might be the image processing algorithms used when taking a photo on a modern smartphone. There are also many AI-enabled devices that are clearly just using the term for the hype value, like rice cookers with “AI Smart Cooking Technology.”


The term “AI” is also often used to describe the same functionality that was called “machine learning” a few years ago, such as object recognition in photos or translating text between languages. Many of those machine learning features were genuinely useful, like Google Photos letting you search for specific people or pets in your photo library, or Google Lens identifying what type of bug you just found. That functionality was never as obnoxious and in-your-face as many modern AI features, and much of it doesn’t need big, expensive servers.

The real AI revolution won’t be annoying popups or chatbots everywhere, or ugly AI-generated images all over social media. It will just be another step in the decades-long evolution of software, making your devices more useful. That’s the AI I’m excited about.


Disclosure: My trip to San Diego, California to visit the AI Analyst & Media Workshop was paid for by Qualcomm, including travel and accommodations. Qualcomm did not review this article before it went live.



