Microsoft AI engineer warns FTC about Copilot Designer safety concerns

Illustration: The Verge

A Microsoft engineer is bringing safety concerns about the company’s AI image generator to the Federal Trade Commission, according to a report from CNBC. Shane Jones, who has worked for Microsoft for six years, wrote a letter to the FTC, stating that Microsoft “refused” to take down Copilot Designer despite repeated warnings that the tool is capable of generating harmful images.

When testing Copilot Designer for safety issues and flaws, Jones found that the tool generated “demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use,” CNBC reports.

Additionally, Copilot Designer reportedly generated images of Disney characters, such as Elsa from Frozen, in scenes at the Gaza Strip “in front of wrecked buildings and ‘free Gaza’ signs.” It also created images of Elsa wearing an Israel Defense Forces uniform while holding a shield with Israel’s flag. The Verge was able to generate similar images using the tool.

Jones has been trying to warn Microsoft about DALL-E 3, the OpenAI model that powers Copilot Designer, since December, CNBC says. He posted an open letter about the issues on LinkedIn, but Microsoft’s legal team reportedly contacted him to take the post down, which he did.

“Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones wrote in the letter obtained by CNBC. “Again, they have failed to implement these changes and continue to market the product to ‘Anyone. Anywhere. Any Device.’” Microsoft didn’t immediately respond to The Verge’s request for comment.

In January, Jones wrote to a group of US senators about his concerns after Copilot Designer generated explicit images of Taylor Swift, which spread rapidly across X. Microsoft CEO Satya Nadella called the images “alarming and terrible” and said the company would work on adding more safety guardrails. Last month, Google temporarily disabled its own AI image generator when users found that it created pictures of racially diverse Nazis and other historically inaccurate images.