Despite widespread concerns that generative artificial intelligence (AI) could significantly influence major elections around the world this year, Meta Platforms reported that the technology had a limited impact across its apps, including Facebook and Instagram.
The tech giant shared these findings on Tuesday, offering reassurance that AI-generated misinformation did not pose the significant threat some had anticipated.
Nick Clegg, Meta’s president of global affairs, addressed the media, stating that coordinated networks of accounts attempting to spread propaganda or false content largely failed to gain substantial traction on Meta’s platforms.
These attempts were largely ineffective, and the overall volume of AI-generated misinformation remained low. Meta's content moderation systems were quick to detect such material and either label or remove it, according to Clegg.
This response is in line with the company's continued commitment to limiting harmful misinformation on its platforms.
Despite concerns over AI’s potential to sway public opinion, experts in the field of misinformation have observed that AI-generated content, including notable deepfake videos and audio, has not managed to significantly influence voters.
For example, an audio deepfake imitating President Joe Biden's voice was swiftly debunked, showing the efficacy of efforts to counteract AI-driven misinformation.
However, Clegg also noted that while Meta's platforms remained relatively unaffected, misinformation actors were increasingly moving their operations to other social networks and messaging apps, or creating their own websites, where content moderation is lighter.
While Meta has successfully taken down around 20 covert influence operations this year, it has also faced some backlash regarding its content moderation policies.
The company has softened the stricter moderation standards it applied during the 2020 US presidential election. Feedback from users, particularly in conservative circles, who argued that certain content had been removed unfairly, led Meta to reconsider its approach.
In response, Clegg stated that the company plans to strike a balance, ensuring free expression while also maintaining the precision and accuracy of its content moderation efforts.
This shift in policy also responds to political pressure, particularly from Republican lawmakers who have raised concerns about perceived censorship on social media.
In an August letter to the US House of Representatives Judiciary Committee, Meta CEO Mark Zuckerberg acknowledged that some content removal decisions were made under pressure from the Biden administration, and he expressed regret over those actions.
