Meta has apologised after Instagram users reported a surge of violent and graphic content in their Reels feeds, despite having sensitive content controls set to the highest level.
The company confirmed that the disturbing content was caused by a technical error and assured users that the issue had been fixed.
“We have corrected an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologise for the mistake,” a Meta spokesperson said.
The glitch sparked widespread frustration on social media, with many users reporting that they had encountered graphic and violent imagery despite opting for Instagram’s strictest content moderation settings.
Meta’s policies prohibit content featuring graphic violence, such as dismemberment or visible suffering, unless it is posted to raise awareness about human rights abuses or terrorism; even then, such posts are flagged with warning labels.
The company uses a combination of artificial intelligence and 15,000 human moderators to detect and remove harmful imagery. The incident has nevertheless raised concerns about the platform’s ability to prevent inappropriate content from reaching users, particularly younger audiences.
The error comes shortly after Meta announced changes to its content moderation approach in January, shifting its automated systems to prioritise high-risk violations such as terrorism and child exploitation while relying more heavily on user reports for minor infractions.
The company’s decision to scale back the demotion of political content has also sparked speculation about its ties to political figures, including Donald Trump.
Meta said it remains committed to protecting users and will continue refining its moderation systems.
