Microsoft makes cuts to AI ethics team amid ChatGPT work
Microsoft says it hasn't "de-invested" in its ethics team as it pursues its ChatGPT AI project. Reports pointed to layoffs in that area.
Microsoft's recent layoffs affected some 10,000 employees, about 5% of the company's global workforce. Those cuts included employees in Microsoft's ethics and society department.
The outlet Platformer, which covers Silicon Valley's role in the country and how the industry operates within U.S. legislation, reported that as Microsoft continues to build up its ChatGPT AI project, it laid off its "entire ethics and society team within the artificial intelligence organization."
Microsoft responded to the report with a statement shared with Scripps News: "As we shared with Platformer, the story as written leaves the impression that Microsoft has de-invested broadly in responsible AI, which is not the case."
As Ars Technica reported, citing Platformer's sourcing, the cuts eliminated the "entire team" responsible for ensuring Microsoft's AI products ship with safeguards that try to mitigate harms to society.
The team's work included creating a "Responsible Innovation Practices Toolkit," a set of practices built on the team's findings, with more practices to be added over time to help mitigate potential harms, according to a post from Mira Lane, who worked with the Ethics and Society team at Microsoft when the post was published.
The toolkit included principles for responsible AI such as "fairness," "reliability and safety" and "privacy and security," among other items that would be expected from a tech company seeking to keep innovative tools from harming users.
As Reuters reported, Platformer published its report on the cuts to Microsoft's ethics team just ahead of OpenAI's release of GPT-4, a powerful model that helps power the Bing search engine.
But Microsoft told Scripps News that the "initial work" of the Ethics and Society team helped "spur the interdisciplinary way in which" the company works "across research, policy, and engineering across Microsoft." The statement suggests the company considered the team's work to that point sufficient as it moved in a new direction.
The statement said, "The Ethics and Society team played a key role at the beginning of our responsible AI journey, incubating the culture of responsible innovation that Microsoft’s leadership is committed to."
Microsoft says it currently has "hundreds" of employees working to build "responsible AI considerations into" its "engineering systems and processes."
As Ars Technica noted, critics of the cuts aren't convinced.
Emily Bender, a scholar of computational linguistics and ethics issues at the University of Washington, says what struck her most about the Platformer report is the claim that Microsoft executives described "the urgency to move 'AI models into the hands of customers.'"
The belief is that powerful artificial intelligence models like GPT-4 are the first building blocks in an inevitable proliferation of human-like technology, and that companies must race to compete.
As Lane wrote, "As a profession known for prioritizing human needs, design thinking is vital for building an ethical future. Expanding human-centered design to emphasize trust and responsibility is equally vital."
According to Platformer, John Montgomery, Microsoft's corporate vice president of AI, told employees after the reorganization that he wanted to see them take the most recent OpenAI models and "move them into customers' hands at a very high speed."
The report said at least one member of the team said on a call, “I'm going to be bold enough to ask you to please reconsider this decision.”
The employee reportedly said what the "team has always been deeply concerned about is how we impact society and the negative impacts that we've had. And they are significant."