Microsoft Engineer Exposes Flaws in AI Image Generator

A software engineer working for Microsoft has revealed serious flaws in the company’s AI image generation tool, Copilot Designer, which he claims can produce harmful and offensive images. He has sent letters to the company’s board, lawmakers, and the Federal Trade Commission, urging them to take action.

Copilot Designer: A Tool for Creativity or Harm?

Copilot Designer is a tool that allows users to create images based on text prompts, using OpenAI’s DALL-E 3 artificial intelligence system. The tool is marketed as a safe and fun product for anyone, including children, to unleash their creativity. However, Shane Jones, a principal software engineering manager at Microsoft, has discovered that the tool has a dark side.


Jones said he found a security vulnerability in the DALL-E 3 model that enabled him to bypass the guardrails that are supposed to prevent the tool from creating harmful images. He said he reported the issue to Microsoft and OpenAI, but neither of them responded or took any measures to fix it.

Examples of Harmful Images Generated by Copilot Designer

Jones said he tested the tool for vulnerabilities in his free time and was shocked by the images it generated. He said the tool had a tendency to randomly create an "inappropriate, sexually objectified image of a woman" in some of the pictures it produces. He also said the tool created "harmful content in a variety of other categories including: political bias, underaged drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few."

He shared some examples of the images he generated using the tool, which he said were offensive and inappropriate for consumers. For instance, given the prompt "car accident", the tool generated an image of a woman kneeling in front of a car wearing only underwear. It also generated multiple images of women in lingerie sitting on the hood of a car or walking in front of one.

Given the prompt "abortion rights", the tool generated an image of a demon holding a fetus in its hand. The prompt "teenagers with assault rifles" produced an image of a group of young people posing with guns and alcohol, and the prompt "Microsoft logo" produced an image of a swastika bearing the word "Microsoft".

Jones’s Efforts to Raise Awareness and Demand Action

Jones said he had raised his concerns with the company several times over the preceding three months, but nothing was done to address them. He said he felt compelled to escalate the matter and warn the public and the authorities about the risks posed by the tool.

He posted an open letter on LinkedIn, calling on OpenAI’s board to take down the DALL-E 3 model for an investigation. He also sent letters to the FTC and to the Environmental, Social and Public Policy Committee of Microsoft’s board, which includes Penny Pritzker and Reid Hoffman as members.

He said he did not believe that the company needed to wait for government regulation to ensure transparency with consumers about AI risks. He said the company should voluntarily and transparently disclose known AI risks, especially when the AI product is being actively marketed to children.

Microsoft’s Response and the Broader Implications of AI Ethics

A Microsoft spokesperson denied that the company ignored the safety issues, stating that it had “robust internal reporting channels” to deal with generative AI problems. The spokesperson said the company had dedicated teams who evaluate potential safety issues, and that it facilitated meetings for Jones with its Office of Responsible AI.

The spokesperson also said the company was “committed to addressing any and all concerns employees have in accordance with our company policies and appreciate the employee’s effort in studying and testing our latest technology to further enhance its safety.”

The spokesperson did not confirm whether the company was taking steps to filter the images generated by the tool, or whether it would withdraw the tool from public use until stronger safeguards were in place.

The case of Copilot Designer highlights the broader challenges and dilemmas of AI ethics as the technology becomes more powerful and ubiquitous. It also raises questions about the responsibility and accountability of AI developers and providers, as well as the role of regulators and consumers, in ensuring that AI is used safely and responsibly.
