
Instagram Faces Backlash Over Self-Harm Content Aimed at Teens


A recent study conducted by Danish researchers at Digitalt Ansvar (Digital Accountability) has revealed alarming shortcomings in Instagram’s moderation of self-harm content.

The Meta-owned platform is reportedly failing to remove harmful posts effectively, potentially contributing to the spread of such content among teenagers.

The researchers created a simulated self-harm network on Instagram, posting 85 progressively graphic images and messages over a one-month period.

Shockingly, despite Meta’s claims of utilizing artificial intelligence to remove 99% of harmful content, none of the images were flagged or removed during the experiment.

Digitalt Ansvar’s findings underscore a troubling discrepancy. The organization’s own AI tool was able to identify a significant portion of the harmful content, indicating that Instagram possesses the technological capacity to detect such material but appears to be falling short in implementation.

Even more concerning is the platform’s algorithmic behavior. The study found that Instagram actively promoted the expansion of the simulated self-harm network by connecting fake profiles, including accounts registered as belonging to 13-year-olds, to all members of the group.

This points to an unsettling reality: the platform’s algorithms may be inadvertently facilitating the growth of self-harm communities rather than curbing them.

Experts have raised the alarm over these findings. A psychologist who previously served on Meta’s suicide prevention advisory group expressed grave concern about the platform’s inaction, emphasizing its potentially life-threatening consequences.

The criticism highlights Instagram’s apparent prioritization of user engagement and algorithmic reach over the safety and well-being of its users.

The implications of this study extend beyond Instagram’s internal policies, raising serious questions about its compliance with the European Union’s Digital Services Act (DSA).

This legislation mandates that large platforms identify and mitigate systemic risks to user well-being, including the spread of harmful content.

Instagram’s failure to moderate self-harm posts could constitute a violation of these regulations, potentially exposing Meta to legal and financial repercussions.

The findings call for urgent action from Meta to reevaluate its moderation practices, strengthen its algorithms, and prioritize user safety to mitigate the risk posed by harmful content on its platform.

Written By

I am a dynamic professional, specializing in Peace and Conflict Studies, Conflict Management and Resolution, and International Relations. My expertise is particularly focused on South Asian Conflicts and the intricacies of the Indian Ocean and Asia Pacific Politics. With my skills as a Content Writer, I serve as a bridge between academia and the public, translating complex global issues into accessible narratives. My passion for fostering understanding and cooperation on the national and international stage drives me to make meaningful contributions to peace and global discourse.
