Amid mounting concerns over the influence of AI-generated content on elections around the world this year, Microsoft-backed OpenAI said on Tuesday it is developing a tool that can identify images created by its text-to-image generator, DALL-E 3.
In internal tests, the company said, the tool correctly identified images generated by DALL-E 3 roughly 98% of the time, and it continued to perform well when those images had been subjected to common alterations such as compression, cropping, and changes in saturation.
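As a rough illustration of what that kind of robustness check might look like, the sketch below applies those three alterations to an image and re-runs a detector on each copy. The detector itself is only a placeholder callable supplied by the caller, since OpenAI has not released a public API for its classifier; the image transformations use the Pillow library.

```python
# Minimal sketch of a robustness check: run a detector on an original image
# and on copies altered by compression, cropping, and saturation changes.
# The detector is a hypothetical callable; Pillow handles the alterations.
import io
from typing import Callable
from PIL import Image, ImageEnhance

def altered_copies(image: Image.Image):
    """Yield (label, image) pairs for a few common benign alterations."""
    # Re-compress as a low-quality JPEG.
    buf = io.BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=40)
    buf.seek(0)
    yield "compressed", Image.open(buf)

    # Centre-crop to roughly 80% of the original width and height.
    w, h = image.size
    yield "cropped", image.crop((w // 10, h // 10, w * 9 // 10, h * 9 // 10))

    # Boost colour saturation by 50%.
    yield "saturated", ImageEnhance.Color(image).enhance(1.5)

def robustness_report(image: Image.Image,
                      detect: Callable[[Image.Image], bool]) -> dict:
    """Run a provenance detector on the original and each altered copy."""
    results = {"original": detect(image)}
    for label, altered in altered_copies(image):
        results[label] = detect(altered)
    return results

# Example usage (with a stand-in detector):
# report = robustness_report(Image.open("sample.png"), detect=my_detector)
```

A detector that is robust in the sense described above would return the same verdict for the original image and for each altered copy.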
Beyond detection, the maker of ChatGPT also plans to add tamper-resistant watermarking, which would label digital content such as images and audio with a signal that is designed to be difficult to remove or alter.
OpenAI has also joined the Coalition for Content Provenance and Authenticity (C2PA), an industry group that includes Google, Microsoft, and Adobe, and is contributing to a standard for tracing the origin of different kinds of media.
Fake videos circulating online, including clips that appeared to show two Bollywood actors criticizing Prime Minister Narendra Modi during India's ongoing general election, underscore how deepfakes and AI-generated content are increasingly surfacing in elections around the world, from India to the United States, Pakistan, and Indonesia.
Separately, OpenAI and Microsoft announced a $2 million fund dedicated to AI education, aimed at promoting AI literacy and societal resilience.