Report: Microsoft Cuts Core Ethical AI Team

An entire team responsible for ensuring that Microsoft’s artificial intelligence products ship with safeguards to mitigate social harm has been cut during the company’s most recent layoff of 10,000 employees, Platformer reported.

Former employees said the ethics and society team was a critical part of Microsoft’s strategy to reduce the risks associated with using OpenAI technology in Microsoft products. Before it disappeared, the team developed an entire “responsible innovation toolbox” to help Microsoft engineers predict what harms AI might cause and then mitigate those harms.

Platformer’s report came shortly before OpenAI released what is likely its most powerful AI model yet, GPT-4, which already helps power Bing search, Reuters reported.

In a statement provided to Ars, Microsoft said it remains “committed to developing AI products and experiences safely and responsibly” and that it does so by “investing in people, processes, and partnerships that prioritize that.”

Describing the ethics and society team’s work as a “frontline” effort, Microsoft said the company had focused more over the past six years on investing in and expanding the Office of Responsible AI. That office remains active, along with Microsoft’s other responsible AI working groups, the Aether Committee and Responsible AI Strategy in Engineering.

Emily Bender, a University of Washington expert in computational linguistics and ethical issues in natural language processing, joined other critics on Twitter in denouncing Microsoft’s decision to disband the ethics and society team. Bender told Ars that, as an outsider, she thinks Microsoft’s decision was “short-sighted.” She added, “Given how difficult and important this work is, any significant cuts to the people doing the work are damning.”

Brief history of the ethics and society group

Microsoft began building teams dedicated to exploring responsible artificial intelligence in 2017, CNBC reported. By 2020, that effort included the ethics and society group, which at its largest had about 30 members, Platformer noted. But as the AI race with Google heated up, Microsoft began moving most of the ethics and society team’s members onto specific product teams last October. That left just seven people dedicated to carrying out the group’s “ambitious plans,” former employees told Platformer.

That was too much work for such a small team, and former team members told Platformer that Microsoft did not always act on their recommendations, such as the mitigation strategies recommended for Bing Image Creator that might have prevented it from copying living artists’ brands. (Microsoft disputed that claim, saying it modified the tool before release to address the group’s concerns.)

When the team was downsized last fall, Platformer reported, Microsoft’s corporate vice president of artificial intelligence, John Montgomery, said there was enormous pressure to “take these latest OpenAI models and what’s coming next and get them into the hands of customers at a very high speed.” Workers warned Montgomery about “significant” concerns they had over the potential negative effects of this speed-first strategy, but Montgomery insisted that “the pressures remain the same.”

Still, even as the ethics and society group shrank, Microsoft told its members that the team would not be eliminated. That changed on March 6, when the remnants of the team were told during a Zoom meeting that it had been deemed “business critical” to disband the team entirely.

Bender told Ars that the decision is particularly disappointing because Microsoft has “managed to bring together some really great people working on AI, ethics, and the social impact of technology.” But Bender said that with the move, Microsoft is “basically saying” that if the company perceives the ethics and society group’s recommendations “as contrary to what’s going to make us money in the short term, then they have to go.”

To experts like Bender, it appears that Microsoft is no longer interested in funding a team dedicated to telling the company to slow down when its AI models may pose risks, including legal risks. One employee told Platformer that they wondered what would happen to both the brand and users now that there was seemingly no one left to say “no” when potentially irresponsible plans were pushed out to users.

“The worst thing is that we have put the business at risk and human beings at risk,” a former employee told Platformer.

The shaky future of responsible artificial intelligence

When the company relaunched Bing with AI, users quickly discovered that the Bing Chat tool behaved in unexpected ways, spinning up conspiracy theories, spewing misinformation, and even seemingly defaming people. So far, tech companies like Microsoft and Google have relied on self-regulation when releasing AI tools, identifying risks and mitigating harms on their own. But Bender, who co-authored with former Google AI researcher Timnit Gebru the paper criticizing the large language models that many AI tools depend on, a paper whose publication preceded Gebru’s firing from Google, told Ars that “self-regulation as a model is not going to work.”

“There needs to be external pressure” to invest in responsible AI teams, Bender told Ars.

Bender argues that regulators need to get involved at this point if society wants more transparency from companies amid the “current wave of AI hype.” Otherwise, users risk adopting popular tools, as they have with AI-powered Bing, which now has 100 million monthly active users, without fully understanding how those tools could harm them.

“I think that any user that comes across this needs to have a really clear idea of what it is that they’re working with,” Bender told Ars. “And I don’t see any company doing a good job at it.”

Bender said it’s “scary” that companies seem consumed by the AI hype, which claims AI “will be as big and disruptive as the Internet.” Instead, companies have a duty to “think about what can go wrong.”

At Microsoft, that task now falls to the Office of Responsible AI, a spokesperson told Ars.

“We’ve also increased the scale and scope of the Office of Responsible AI, which provides cross-company support for things like looking at sensitive use cases and supporting policies that protect customers,” the Microsoft spokesperson said.

For Bender, a better solution than depending on companies like Microsoft to do the right thing is for society to support regulation, not to “micromanage specific technologies, but rather to establish and protect rights” in a lasting way, she wrote on Twitter.

Until there are proper regulations, more transparency about potential harms, and better information literacy among users, Bender recommends that users “never accept AI medical advice, legal advice, psychotherapy” or other sensitive applications of AI.

“It seems very, very short-sighted to me,” Bender said of the current AI hype.
