Don’t overstate your AI claims, FTC warns

The Federal Trade Commission, not generally known for flowery rhetoric or philosophical musings, took a moment Monday to publicly ponder, "What exactly is 'artificial intelligence,' anyway?" in a blog post by Michael Atleson, an attorney in the FTC's Division of Advertising Practices.

After summarizing humanity's longstanding tendency to tell stories about things "imbue(d) with power beyond human possibility," he asks, "Is it any wonder that we can be so ready to accept what marketers say about new tools and devices that purport to reflect the capabilities and benefits of artificial intelligence?"

(Related: ChatGPT Quietly Co-signs Books on Amazon.)

Although Atleson ultimately leaves the broader definition of "AI" largely open to debate, he did make one thing clear: the FTC knows what it definitely isn't, and grifters are officially on notice. "'AI' is a marketing term. It's really hot right now," Atleson continued. "And at the FTC, one thing we know about hot marketing terms is that some advertisers won't be able to stop themselves from overusing and abusing them."

The FTC's official statement, while somewhat unusual, is certainly in keeping with the new Wild West era of artificial intelligence, an era in which every day brings new headlines about Big Tech's latest large language models, "hidden personalities," dubious sentience claims and the inevitable frauds that follow. As such, Atleson and the FTC go so far as to draw up a clear list of things to watch for as companies continue to breathlessly issue press releases about their supposed breakthroughs in artificial intelligence.

"Are you exaggerating what your AI product can do?" asks the Commission, warning businesses that such claims could be labeled "misleading" if they lack scientific evidence or apply only to extremely specific users and situations. Companies are also encouraged to refrain from touting AI as a means of justifying higher product costs or labor decisions, and to take serious risk-assessment precautions before making products available to the public.

(Related: No, AI chatbots are not (yet) sentient.)

Don't blame third-party developers for biases and unwanted effects, and don't retroactively bemoan "black box" programs you don't understand; those won't be viable excuses to the FTC and could cause you serious headaches. Finally, the FTC asks perhaps the most important question of the moment: "Does the product actually use any artificial intelligence at all?" Which… fair enough.

While this isn't the first time the FTC has issued industry warnings, or even warnings about AI claims specifically, it's a pretty strong indication that federal regulators are reading the same headlines as the rest of us, and they don't seem happy.
