Issue #107: Combating deepfakes on the internet
Microsoft collaborates with StopNCII to tackle intimate image abuse
Welcome to Issue #107 of One Minute AI, your daily AI news companion. This issue discusses a recent announcement from Microsoft.
Microsoft teams up with StopNCII to tackle intimate image abuse
Microsoft is ramping up its efforts to combat intimate image abuse, particularly non-consensual intimate imagery (NCII), a problem exacerbated by AI-generated deepfakes. Partnering with StopNCII, a platform that enables victims to create digital fingerprints of intimate images, Microsoft aims to prevent such content from appearing in Bing search results. Since March 2024, the company has acted on nearly 269,000 images. Microsoft continues to advocate for updated policies, global collaboration, and more robust safety measures to protect users, particularly women and girls, from the harms of NCII.
The company also points to its policies prohibiting the creation, sharing, or distribution of NCII, including synthetic content, across its services. Microsoft supports victims through a centralized reporting system that removes such content globally, and it is engaging with NGOs and policymakers to address the evolving threat, with a focus on safeguarding vulnerable groups such as children and teens.
Want to help?
If you liked this issue, help spread the word and share One Minute AI with your peers and community.
You can also share feedback with us, along with news from the AI world you’d like to see featured, by joining our chat on Substack.