By Ayesha Babar
In February 2026, the Government of Pakistan organized Indus AI Week as a national artificial intelligence (AI) movement to enable structured coordination across sectors, raise awareness, and connect policy and skills with industry, while committing to invest $1 billion in Pakistan's AI ecosystem by 2030. During the inauguration ceremony, Prime Minister Shahbaz Sharif said, "Pakistan is absolutely ready to accept the challenge." However, in the evolving technological and AI landscape, no country is fully prepared to grapple with the complex and emerging challenges brought on by AI. The Government AI Readiness Index 2025, which assesses a country's AI readiness on six pillars (policy capacity, AI infrastructure, governance, public sector adoption, development and diffusion, and resilience), ranks Pakistan 81 out of 195, suggesting that the country has comparatively weaker technical infrastructure and AI sector maturity.
Between AI Ambition and Reality
One of the main challenges surrounding Pakistan's AI readiness is societal transition and adaptation to AI, a pillar on which the country scored 33.75 on the Government AI Readiness Index. Societal transition and adaptation refers to a country's ability to integrate AI into governance, the economy, and public life in a way that maximizes technological benefits while mitigating risks. This includes digital literacy and awareness of AI, guidelines on the ethical use of AI, and the responsible use of AI by citizens and institutions. Countries that score higher on this pillar have stronger guidelines for AI regulation and harm mitigation, greater institutional preparedness, and higher digital literacy; countries that rank lower have limited guidelines regulating AI misuse and limited institutional preparedness and awareness. According to Gallup, only 31% of people in Pakistan have a good understanding of AI, which helps explain the country's low score on this pillar.
A low score on this pillar has far-reaching consequences, especially in Pakistan, where digital spaces are politically polarized: echo chambers reinforce specific viewpoints and deepen divides among the masses, while disinformation spreads with a single click, all amplified by the country's 66.9 million social media users. This creates a spawning ground for digital disinformation and deepfakes, which are deployed in particular against women to spark controversy and tarnish their reputations. Deepfakes are synthetic or manipulated images, audio, or videos generated using AI for malicious purposes, including non-consensual content and political propaganda. Common types include face swapping, voice cloning, and lip syncing. The 2023 State of Deepfakes Report reveals that deepfake pornography makes up 98% of all deepfake videos online, and that women account for 99% of the individuals targeted. Over the years, producing synthetic media has become much cheaper and faster: the same report finds that users can create a sixty-second deepfake pornographic video of anyone, using a single clear image of their face, for free, in less than twenty-five minutes.
A New Frontier of Gendered Harm
For many women in Pakistan, deepfakes manifest as a form of technology-facilitated gender-based violence (TFGBV) used to threaten, intimidate, and silence them. TFGBV refers to the use of technology to cause physical, economic, psychological, and sexual harm to women, girls, and vulnerable communities. In the same month that the Government of Pakistan organized Indus AI Week, the Government of Punjab acquired a Gulfstream GVII-G500 aircraft, which the administration first claimed was part of a proposed airline project and later said would be used for VIP transport. In Pakistan's highly polarized digital spaces, AI-generated pictures and videos of Punjab's Chief Minister, Maryam Nawaz, rapidly surfaced across social media platforms to spread political propaganda, crafted to imply a misleading association between the government and the armed forces. In response, Punjab's Senior Minister, Marriyum Aurangzeb, announced that the provincial government would take strict action under the Defamation Act 2024 against all those involved in the malicious campaign.
This is not the first time a female politician or public figure has been targeted with AI-generated images and videos that endanger her reputation and safety. In August 2024, a manipulated video of Punjab's Information Minister, Azma Bukhari, emerged in which her face was superimposed on a video of a woman kissing an unidentified man. Similarly, in September 2024, a doctored video of Hina Pervaiz Butt, Member of the Provincial Assembly of Punjab, surfaced showing her in an explicit act. Soch Fact Check determined that the video was manipulated and that the politician's face had been superimposed on an unidentified woman intimately engaged with a man.
In recent years, there has also been a surge in highly coordinated, politically motivated online abuse against women journalists and activists. In August 2025, a doctored image of human rights lawyer Imaan Mazari emerged, showing her next to Usman Qazi, a lecturer at the Balochistan University of Information Technology, Engineering, and Management Sciences, Quetta. Similarly, in December 2025, multiple AI-generated images and videos of a prominent Pakistani journalist, Benazir Shah, went viral, some showing her dancing and others showing her dining with a media mogul. Both incidents underscore the coordinated nature of attacks on women in public life, carried out to tarnish their professional image and cast doubt on their character.
At a time when the administration is launching initiatives such as Indus AI Week to harness AI for development and signalling full preparedness for any challenge, a parallel reality is unfolding: the government itself has become a target of AI-generated propaganda and must now expend additional effort to combat AI-based harassment and disinformation. Moreover, in the wake of any political event, women are the first to become targets of AI-generated abuse, highlighting how digital spaces are becoming a vehicle for perpetrating gender-based violence against women and girls.
This online abuse often escalates into offline violence, with life-threatening consequences for victims. In 2023, an 18-year-old girl was killed in the name of honor in a remote area of Kohistan, Pakistan, after a picture of her with a man went viral. A subsequent police investigation found that the picture had been photoshopped and posted from a fake social media account. The incident underscores the extreme consequences of TFGBV: pictures and videos go viral instantly, without geographical boundaries, before facts can be verified, creating a dangerous loop of online and offline violence.
Availability and Accessibility of Image-Based Abuse Tools
It is also imperative to scrutinize the apps that produce non-consensual AI-generated images. Generative AI (GenAI) has seen an exponential rise, with the top 40 GenAI apps attracting nearly three billion monthly visits from hundreds of millions of users as of March 2024. Women make up one third of this user base, while young men aged 18 to 24 are the most active users, especially of video GenAI tools. The lower uptake of these tools among women reflects gendered inequalities in digital access and participation, as well as the risks these technologies pose, which disproportionately affect women and girls.
Nudify apps make up a rapidly growing segment of AI tools. The two main mobile application stores, the Apple App Store and the Google Play Store, host 47 and 55 nudify apps respectively, despite both platforms claiming to be dedicated to protecting the safety and security of users. These apps fall into two categories: those that use AI to generate sexualized images from a user prompt, and those that use AI to swap faces, superimposing one person's face on another person's body. The Google Play Policy Center says it does not allow apps that contain or promote sexual content, including content that attempts to threaten people sexually and apps that claim to undress people, even if labelled as entertainment or a prank. Similarly, Apple's App Store Guidelines ban overtly sexual and pornographic material, including apps that may include pornography. Collectively, these nudify apps have been downloaded more than 705 million times worldwide, generating $117 million in revenue, a share of which both app stores retain.
These nudify services also exist beyond mobile applications. A study analyzing 85 nudify services found that most are available through third-party affiliate sites that monetize them, drawing 18.6 million visitors monthly. That figure excludes people who share deepfakes on messaging platforms such as Telegram. Nearly 150 channels on Telegram, a platform known for encrypted communication, were identified offering nudified photos and videos for free, while other channels provide curated galleries of AI-generated nude images and sexualized videos of celebrities and ordinary women. One channel with 25,000 subscribers hosted men sharing videos of their love interests stripped via AI. This reflects the grave irresponsibility of platforms that fail to uphold and enforce their own policies, exacerbating TFGBV.
Advertising Synthetic Abuse
Advertisements are a major reason these apps attract millions of users monthly. A study by an AI security expert at Cornell Tech discovered more than 8,000 ads for a single nudify app in Meta's Ad Library. Similarly, Meta displayed 2,500 ads for "AI kissing apps" across Facebook and Instagram, of which 1,000 were still active as of January 2025. Some of these ads show celebrities kissing each other, while others depict ordinary people kissing (whether AI-generated or real is unclear), promising that with AI you can kiss your celebrity crush or your ex. Meta has also displayed around 1,200 ads for "AI hugging apps," which depict children hugging their favorite cartoon characters or people hugging younger or older versions of themselves. These apps may be advertised as harmless, but they normalize non-consensual imagery and can pave the way for tools that generate other forms of image-based abuse.
Reimagining Policy Priorities in the Age of AI
With TFGBV rapidly intensifying amid the rise of generative AI apps and a lack of accountability on the part of platforms, perpetrators are finding new ways to bypass guardrails and create harmful, non-consensual content. According to the UN, 1.8 billion women and girls remain vulnerable to TFGBV and lack legal protection, as fewer than 40% of countries have laws against online harassment and all forms of TFGBV, leaving perpetrators at large. Pakistan is, however, the only country in South Asia with specialized procedures in place for cyber harassment. While that is commendable, a deeply entrenched digital gender divide persists: women in Pakistan are 25% less likely than men to use mobile internet and remain largely unaware of their digital rights and the protections available under the law. A study on online violence revealed that 72% of women in Pakistan are unaware of cyber laws and of how to file a complaint to seek protection, while more than 45% of women find it embarrassing to report harassment.
Given the risks that accompany an AI revolution across sectors, Pakistan cannot claim AI readiness until the disproportionate impact of these harms on women and vulnerable communities has been assessed, real-world cases of those affected have been analyzed, and remedies have been put in place.
When it comes to manipulated images and videos, technology companies must move beyond reactive moderation towards stricter policies against generating and posting any form of non-consensual sexualized imagery of women and girls. App stores, including Google Play and the Apple App Store, should restrict all apps that generate nude media. Alongside this, legal protection against cyber harassment, including specialized laws and procedures covering all vulnerable groups so that perpetrators cannot evade punishment, is critical. Lastly, digital literacy should be prioritized so that women and girls are aware of new forms of online abuse and of the legal protections available under the law. These measures are imperative to ensure that innovation does not disproportionately harm those already vulnerable to digital abuse.
The writer is an Incident Response and Research Analyst at Digital Rights Foundation. She can be reached at ayesha_babar@digitalrightsfoundation.pk
