OpenAI seems like a pretty crappy company to be creating world-changing AGI

Let’s just say, hypothetically, that someone builds Artificial General Intelligence (AGI) — human-level artificial intelligence that, if realized, would undoubtedly change everything.

The world economy would likely be turned upside down overnight, while radical changes in social structures, political systems, and even international power dynamics would follow close behind. What it means to be human would suddenly feel a lot less concrete, and meanwhile those building the system would quickly amass economic and political power.

If one piece of technology could do all that, you might have some thoughts about which person or group would be the ideal candidate to build such a machine, ideal as in least likely to put the world at risk.

Operating as a non-profit would make sense. Money would certainly be needed to build this or any other world-changing machine, but once you need to generate steady revenue, objectivity and safety can both go out the window pretty quickly. You'd probably also want the technology to be open source, and you probably wouldn't want any big legacy tech company to hold a controlling stake.

And yet, none of these characteristics apply to OpenAI, which just laid out its AGI ambitions in a lengthy blog post titled "Planning for AGI and Beyond," written by doomsday-prepping CEO Sam Altman.

OpenAI is for-profit, closed-source, and deeply in bed with legacy tech mammoth Microsoft, which has poured billions into the buzzed-about AI leader. But unlike its AI competitors, many of which were for-profit from the start, OpenAI has very different roots.

Indeed, as Vice writer Chloe Xiang memorably put it, the current iteration of OpenAI is everything the company once "promised not to be." That's a pretty unflattering detail, especially when you consider that these might be the very people who bring AGI, if it's ever really possible, into existence.

When the outfit launched in 2015 as the brainchild of SpaceX and Tesla chief Elon Musk, alleged vampire Peter Thiel, and Y Combinator co-founder Jessica Livingston, among other big industry players, it was open source (hence the name) and firmly nonprofit, arguing that a revenue-dependent model would compromise the integrity of the technology.

"OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return," reads OpenAI's introductory statement, posted back in 2015. "Since our research is free from financial obligations, we can better focus on a positive human impact."

That statement rings a little hollow considering where the company is today. Corporate and fiscally motivated, driven by Silicon Valley's familiar (and familiarly flawed) "move fast and break things" approach, it has strayed so far from its original goal that even co-founder Elon Musk has become a staunch critic of the company.

And while it claims its goal for its technology is to "ensure that AI … benefits all of humanity," as Altman wrote in his latest blog post, corporate profit and the good of humanity don't always go hand in hand. (Honestly, if we were to indulge in a little psychoanalysis, it's starting to feel like OpenAI is trying to convince itself that it means well as much as it's trying to convince the rest of us.)

"There is a misalignment between what the company stands for publicly and how it operates behind closed doors," wrote Karen Hao for MIT Technology Review back in 2020, as Xiang notes. "Over time, it has allowed fierce competition and increasing pressure for ever-greater funding to erode its founding ideals of openness, transparency, and cooperation."

And that, really, is what makes OpenAI so troubling, not to mention disappointing, as a leader in the field.

Like any other profitable market, the tech industry is full of lousy outfits, mostly run by lousy figures. But the fact that OpenAI did a total 180 to chase the pot o' gold while still making the same claims about looking out for humanity's best interests is troubling. The old OpenAI wouldn't have been the worst possible Dr. Frankenstein, but the company's current iteration (flip-floppy, high-speed, and generally unreliable) might just be the crappiest option out there.

“We want AGI to empower humanity to flourish to its fullest potential in the universe,” Altman wrote in his new blog post, published just last week. “We don’t expect the future to be an unqualified utopia, but we want to maximize the good and minimize the bad, and for AGI to be an enhancer of humanity.”

READ MORE: OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (Vice)

More about OpenAI: Experts criticize OpenAI's 'nonsensical' new AGI promises
