Microsoft has laid off its entire ethics and society team within its artificial intelligence organization as part of recent layoffs affecting 10,000 employees across the company, Platformer has learned.
The move leaves Microsoft without a dedicated team to ensure AI principles are closely tied to product design as the company leads the way in bringing AI tools to the mainstream, current and former employees said.
Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company’s AI initiatives. The company says its overall investment in responsible AI work is growing despite the recent layoffs.
“Microsoft is committed to developing AI products and experiences safely and responsibly, and it does so by investing in people, processes and partnerships that prioritize this,” the company said in a statement. “Over the past six years, we’ve grown the number of people on our product teams and in the Office of Responsible AI, who, along with all of us at Microsoft, are responsible for making sure we’re implementing our AI principles. […] We appreciate the pioneering work Ethics & Society has done to help us on our ongoing journey with responsible artificial intelligence.”
But employees said the ethics and society team played a critical role in making sure the company’s responsible AI principles were truly reflected in the design of the products it shipped.
“Our job was to create rules in areas where there were none.”
“People would look at the principles coming out of the Office of Responsible AI and say, ‘I don’t know how this applies,’” says one former employee. “Our job was to show them and to create rules in areas where there were none.”
In recent years, the team designed a role-playing game called Judgment Call that helped designers envision potential harms that could arise from AI and discuss them during product development. It was part of a larger “responsible innovation toolkit” that the team released publicly.
More recently, the team has been working to identify the risks posed by Microsoft’s adoption of OpenAI technology across its product line.
The ethics and society team was at its largest in 2020, when it had around 30 employees, including engineers, designers and philosophers. In October, the team was reduced to about seven people as part of a reorganization.
In a meeting with the team after the reorg, John Montgomery, corporate vice president of AI, told employees that company leaders had instructed them to move quickly. “The pressure from [CTO] Kevin [Scott] and [CEO] Satya [Nadella] is very, very high to take these latest OpenAI models and the ones that come after them and get them into the hands of customers at a very high speed,” he said, according to audio of the meeting obtained by Platformer.
Because of that pressure, Montgomery said, much of the team was going to be transferred to other areas of the organization.
Some members of the group pushed back. “I’m going to be bold enough to ask you to reconsider this decision,” one employee said on the call. “While I understand there are business issues… what this group has always been deeply concerned about is how we impact society and the negative impacts we’ve had. And they are significant.”
Montgomery declined. “Can I reconsider? I don’t think I will,” he said. “Because unfortunately the pressures remain the same. You don’t have the view I do, and you can probably be thankful for that. There are many things that go into the sausage.”
In response to questions, however, Montgomery said the team would not be eliminated.
“It’s not that it’s going away — it’s that it’s evolving,” he said. “It’s moving towards putting more energy into the individual product teams that build the services and the software, which means that the central hub that’s been doing some of the work is handing over its capabilities and responsibilities.”
Most team members were transferred elsewhere within Microsoft. Afterward, those who remained on the ethics and society team said the smaller crew made it difficult to carry out their ambitious plans.
The move leaves a foundational gap in the holistic design of AI products, one employee says
About five months later, on March 6, the remaining employees were told to join a Zoom call at 11:30 a.m. PT to hear a “business critical update” from Montgomery. During the meeting, they were told that their team was being eliminated after all.
One employee says the move leaves a foundational gap in the user experience and holistic design of AI products. “The worst thing is that we have put the business at risk and human beings at risk by doing this,” they explained.
The conflict highlights an ongoing tension for tech giants that are creating divisions dedicated to making their products more socially responsible. At their best, they help product teams anticipate potential misuses of the technology and fix any problems before shipping.
But they also have the job of saying “no” or “slow down” inside organizations that often don’t want to hear it — or spelling out risks that could lead to legal headaches for the company if they surface in legal discovery. And the resulting friction sometimes boils over into public view.
In 2020, Google fired ethical AI researcher Timnit Gebru after she published a paper critical of the large language models that would soar in popularity two years later. The resulting furor resulted in the departure of several more top leaders within the division and reduced the company’s credibility on responsible AI issues.
Microsoft focused on shipping AI tools faster than its competitors
Ethics and society team members said they generally tried to support product development. But they said that as Microsoft focused on shipping AI tools faster than its competitors, the company’s leadership became less interested in the kind of long-term thinking the team specialized in.
It is a dynamic that bears close scrutiny. On the one hand, Microsoft may now have a chance to gain significant traction over Google in search, productivity software, cloud computing and other areas where the giants compete. When it relaunched Bing with artificial intelligence, the company told investors that every 1 percent of market share it could take away from Google in search would result in $2 billion in annual revenue.
This potential explains why Microsoft has so far invested $11 billion in OpenAI and is currently scrambling to integrate the startup’s technology into every corner of its empire. It appears to be having some early success: the company said last week that Bing now has 100 million daily active users, with a third of them new since the search engine was relaunched with OpenAI technology.
On the other hand, everyone involved in the development of artificial intelligence agrees that the technology poses potent and possibly existential risks, both known and unknown. The tech giants have gone to great lengths to say they take these risks seriously — Microsoft alone has three separate teams working on the issue, even after the ethics and society team was eliminated. But given the stakes, any cuts to groups focused on responsible work seem remarkable.
The demise of the ethics and society team came just as the group’s remaining employees had trained their focus on arguably their biggest challenge yet: predicting what would happen when Microsoft released OpenAI-powered tools to a global audience.
Last year, the team wrote a memo detailing the brand risks associated with Bing Image Creator, which uses OpenAI’s DALL-E system to generate images based on text prompts. The image tool launched in a handful of countries in October, making it one of Microsoft’s first public partnerships with OpenAI.
While text-to-image technology has proven wildly popular, Microsoft researchers correctly predicted that it could also threaten artists’ livelihoods by allowing anyone to easily copy their style.
“In testing Bing Image Creator, it was discovered that with a simple prompt that included only the artist’s name and a medium (painting, print, photography, or sculpture), the images created were almost impossible to differentiate from the original works,” the researchers wrote in the note.
“The risk of brand damage … is real and significant enough to require remediation.”
They added: “The risk of brand damage, both to the artist and their financial stakeholders, and the negative PR to Microsoft resulting from artist complaints and public backlash is real and significant enough to require remediation before it harms Microsoft’s brand.”
Additionally, last year OpenAI updated its terms of service to give users “full ownership rights to the images you create with DALL-E.” The move left Microsoft’s ethics and society team concerned.
“If an AI image generator mathematically reproduces images of works, it is ethically suspect to suggest that the person who submitted the prompt has full ownership rights to the resulting image,” they wrote in the memo.
Microsoft researchers created a list of mitigation strategies, including blocking Bing Image Creator users from using the names of living artists as prompts and creating a marketplace to sell an artist’s work that would surface if someone searched for their name.
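As a purely illustrative sketch (the memo does not describe how such a filter would actually be built, and the blocklist entries below are hypothetical placeholders), the first strategy amounts to a simple check on incoming prompts:

```python
# Illustrative sketch only: not Microsoft's implementation.
# The artist names below are hypothetical placeholders.
BLOCKED_ARTIST_NAMES = {"jane doe", "juan q. painter"}

def is_prompt_allowed(prompt: str) -> bool:
    """Reject any prompt that mentions a blocked living artist's name."""
    lowered = prompt.lower()
    return not any(name in lowered for name in BLOCKED_ARTIST_NAMES)

# A prompt pairing an artist's name with a medium would be refused:
print(is_prompt_allowed("a lighthouse painting by jane doe"))  # False
print(is_prompt_allowed("a lighthouse painting at dusk"))      # True
```

A real filter would of course need far more than exact substring matching (spelling variants, nicknames, and indirect style descriptions all evade a simple blocklist), which hints at why mitigations like this are harder to ship than they sound.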
Employees say neither of these strategies was implemented, and Bing Image Creator launched in test countries anyway.
Microsoft says the tool was modified before release to address concerns raised in the memo and that it prompted additional work from its responsible AI team.
But legal issues surrounding the technology remain unresolved. In February 2023, Getty Images filed a lawsuit against Stability AI, makers of the Stable Diffusion AI art generator. Getty accused the AI startup of improperly using more than 12 million images to train its system.
The accusations echo concerns raised by Microsoft’s own AI ethicists. “It is likely that few artists have consented to allow their works to be used as training data, and possibly many are still unaware of how generative technology enables the production of online variants of their work in seconds,” the employees wrote last year.