Last week, OpenAI CEO Sam Altman published a blog post about how, he says, the company will use superhuman artificial general intelligence (AGI), the point at which AI systems can compete with and even surpass human intellect, for “the benefit of all mankind.”
The company has garnered plenty of attention lately with its AI chatbot ChatGPT, a versatile tool that has seen an exponential rise in popularity since its launch just a few months ago.
The tool, which is based on the company’s large language model (LLM) called GPT, can expertly construct responses to a surprising array of prompts, an ability that has tricked users into thinking it’s sentient or has a personality.
In reality, however, LLMs have a long way to go before they can compete with a human’s intellect — which is why several experts are blasting Altman’s recent blog post as meaningless and misleading.
After all, AGI is a vague term, borrowed from the realm of science fiction, that refers to something that simply doesn’t exist yet. In fact, we haven’t even settled on a common definition.
“The term AGI is so loaded, it’s misleading to throw it around as if it were a real thing with real meaning,” Bentley University mathematics professor Noah Giansiracusa argued in a tweet. “It’s not a scientific idea, it’s a science fiction marketing ploy.”
“Artificial intelligence will steadily improve; there is no magic moment when it becomes ‘AGI,'” he added.
In a thread on Twitter, University of Washington linguistics professor Emily Bender took Altman’s blog post apart piece by piece.
“From the beginning, this is just gross,” she argued. “They think they’re really in the business of developing ‘AGI.’ And they think they are in a position to decide what ‘benefits all mankind.'”
Bender also pointed out Altman’s rhetorical ploy of starting out by treating AGI as hypothetical, then immediately turning it into something that has “capabilities.”
At the end of his blog post, Altman goes so far as to claim that AGI cannot be prevented and “would also entail serious risks of misuse, drastic accidents, and social disruption.”
In short, Bender says, Altman is getting way ahead of himself, positioning his company as having already laid the groundwork for an early AGI.
“Your system is not AGI, it is not a step towards AGI, and yet you dismiss it as if the reader were supposed to just nod along,” Bender argued.
For the linguist, OpenAI’s recent decision to transform itself from an open-source platform into a profit-maximizing capitalist entity is truly telling.
In his blog post, Altman argues that we should “enact regulations,” which could allow “society and artificial intelligence to co-evolve, and for people to collectively figure out what they want while the stakes are relatively low.”
But that’s beside the point, Bender argued, especially given OpenAI’s status as a private company that has no obligation to make its motives apparent to the world.
“The problem is not the regulation of ‘AI’ or future ‘AGI,'” she wrote. “It’s protecting individuals from corporate and government overreach using ‘AI’ to cut costs and/or deflect accountability.”
“There are harms NOW: to privacy, theft of creative product, harms to our information ecosystems, and harms from scaling bias reproduction,” Bender added. “An organization that cared about the ‘benefit of humanity’ would not develop/proliferate technology that does these things.”
More on OpenAI: Elon Musk Recruits Team to Build His Own Anti-“Woke” AI to Rival ChatGPT