The term “AI” is inescapable right now, spreading its way across every facet of every market, whether it actually makes sense or not. And as it turns out, the US Federal Trade Commission (FTC) is as sick of all this AI-related branding as everyone else.
A recent regulatory push dubbed “Operation AI Comply” now has the FTC cracking down on some of the more notable AI implementations, including at least three alleged scams.
A press release from last week detailed five new cases that the FTC is taking on, specifically targeting firms that sprinkle AI-related claims into their businesses. It quotes chairperson Lina M. Khan, who’s been particularly proactive since her appointment in 2021:
“Using AI tools to trick, mislead, or defraud people is illegal. The FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books. By cracking down on unfair or deceptive practices in these markets, FTC is ensuring that honest businesses and innovators can get a fair shot and consumers are being protected.”
One company has already settled its case, but four others remain under active investigation, according to the press release:
- DoNotPay: A well-known company that offers semi-automated assistance for disputing parking tickets and other procedural fines in the US. The FTC takes issue with the company’s newest product, billed as “the world’s first robot lawyer,” a claim the Commission says wildly overrepresents its capabilities. DoNotPay has already settled, paying a $193,000 fine and agreeing to alert affected customers.
- Rytr: A company that offered AI-powered text generation specifically tuned for writing reviews and testimonials. The FTC claims the tool was used to churn out thousands of “reviews” with details that didn’t correspond to any real customer’s experience.
- Ecommerce Empire Builders, Ascend Ecom, and FBA Machine: These three businesses all operated variations of passive income generation schemes, promising ridiculous returns on initial investments as hooks to get consumers buying into their programs.
The last three are arguably only tangentially related to AI. The dream of “passive income” has been a frequent online siren song since The 4-Hour Workweek was published in 2007, and all three are examples of this thinking being applied to sell more “courses” and pre-built online stores. (Pro tip: If someone is selling you a money-making method, ask yourself why they aren’t just using that method to make even more money and why they would give away their secrets.)
But these three companies dressed up well-worn pitches with claims of new AI generation tools to make them seem trendier and more plausible. All three charged their customers thousands — or even tens of thousands — of dollars for “training” and implementation of online stores, which allegedly needed almost no work in order to generate massive, never-ending profits. All three have been halted by federal courts pending further investigation and legal action.
This news follows a push to outlaw AI-generated product reviews and fake social media clout, as the FTC bears down on the effects that large language models are having on American consumers and businesses. The Commission isn’t unanimous on all these actions, but it’s pretty close: the two Republican-appointed Commissioners objected only in the Rytr case, while the other four cases were 5-0 votes.
To be clear, the FTC isn’t outlawing the use of AI; rather, it’s targeting the misleading use of AI-related claims to embellish products. It’s a refreshingly swift action from regulators who generally struggle to keep up with the latest tech. But it’ll take a lot of diligence to keep pace with the growing flood of AI scams and other shenanigans as the technology spreads.