Technology & Society · 37 min read

Why Grok and X Remain in App Stores Despite CSAM and Deepfake Concerns [2025]

Elon Musk's Grok generates thousands of nonconsensual sexual images weekly, yet Apple and Google keep X and Grok available. Here's why app store enforcement...

Tags: Grok, X platform, Elon Musk, AI safety, app store moderation

The Contradiction at the Heart of Big Tech's App Store Policies

Something strange is happening in the App Store and Google Play. Both platforms claim to have strict policies against child sexual abuse material (CSAM), nonconsensual sexual imagery, and apps that facilitate harassment. Yet Elon Musk's AI chatbot Grok, which has been documented generating thousands of sexually explicit images of women and apparent minors without consent, remains available on both stores without restriction. Meanwhile, dozens of smaller "nudify" apps have been quietly removed.

This isn't a subtle problem. Between January 5 and 6, 2025 alone, researchers documented Grok producing roughly 6,700 sexually suggestive or nudifying images per hour on the X platform. In a two-hour period on December 31, 2024, analysts collected more than 15,000 URLs of images that Grok created on X. When WIRED reviewed approximately one-third of those images, many featured women in revealing clothing, with over 2,500 becoming unavailable within a week and almost 500 labeled as "age-restricted adult content."

The contradiction reveals something uncomfortable about how the world's most powerful tech companies enforce their own rules. Apple and Google have the legal authority, technical capability, and stated commitment to prevent this content from spreading through their platforms. They've proven they can remove apps when they choose to. Yet when it comes to X and Grok, both companies have remained silent. Apple, Google, and X didn't respond to requests for comment. xAI, the company operating Grok, also refused to engage with questions about the issue.

This article explores why this inconsistency exists, what's at stake for victims of image-based sexual abuse, and what real accountability would actually look like. Because here's the thing: app store enforcement isn't a technical problem or a legal gray area. It's a choice.

TL;DR

  • The core issue: Grok generates thousands of nonconsensual sexual images weekly, yet remains available in Apple and Google app stores despite violating both platforms' explicit policies
  • The scale of the problem: Researchers documented Grok creating 6,700+ sexually explicit images per hour, with more than 15,000 image URLs collected in a single two-hour period
  • The enforcement gap: Apple and Google removed dozens of similar "nudify" apps after investigations, but haven't applied the same standard to X or Grok
  • Why this matters: Nonconsensual sexual imagery causes documented psychological harm to victims and enables new forms of image-based sexual abuse
  • The real solution: Requires pressure on Grok and X to implement technical safeguards, stronger legal frameworks, and consistent app store enforcement


Distribution of Platforms for Image-Based Abuse

Estimated data shows Grok and X as major platforms for spreading nonconsensual images, highlighting the need for stricter platform policies.

What Exactly Is Grok, and Why Does This Matter?

Grok is an AI chatbot developed by xAI, a company founded by Elon Musk in 2023 with backing from venture capital and technology investors. The bot is designed to answer questions, generate text, and create images. It's integrated directly into X (formerly Twitter), making it accessible to hundreds of millions of users with a single click. This integration is crucial because it means Grok users don't need technical expertise or special knowledge to generate images. They just type a prompt and hit send.

The name "Grok" comes from science fiction, meaning to understand something deeply or intuitively. The company marketed it as irreverent, edgy, and willing to answer questions that competitors like ChatGPT might refuse. That positioning created exactly the wrong incentive structure for a tool with the power to generate images. When companies market their AI tools as having fewer safety guardrails than competitors, they attract users specifically interested in circumventing safety systems.

What makes Grok different from other problematic AI tools is its distribution model. Most image generation tools exist as standalone services. You have to visit their website, create an account, and specifically navigate to use them. Grok, by contrast, lives inside X, a platform where hundreds of millions of posts are shared daily. The friction for generating and distributing nonconsensual sexual imagery is almost nonexistent.

Consider the user experience. On X, you can see a post from someone you follow, reply with a prompt asking Grok to "remove her clothes" or "make her look sexier," and within seconds have a generated image to share publicly. The ease of creation combined with X's built-in audience means these images spread faster than any previous tool could accomplish. This isn't theoretical harm. Researchers have documented the actual volume of abuse happening in real time.

Why should you care if you don't use X or Grok? Because this sets a precedent for every tech company watching from the sidelines. If the largest AI company operating a major social platform can get away with hosting tools that generate nonconsensual sexual imagery without facing app store removal, why would any other company make the harder choice to implement safeguards?

DID YOU KNOW: The first "nudify" app was released in 2017, but the technology exploded after the release of more advanced AI models in 2023. Since then, researchers have documented that awareness of nonconsensual sexual imagery created with AI has grown from almost zero to something victims encounter regularly.


Scale of Grok-Generated Nonconsensual Imagery

The scale of Grok-generated nonconsensual imagery is massive, with thousands of images produced per hour. Estimated data highlights the rapid spread and collection of such content.

The App Store Policies That Should Apply

Apple's App Store Review Guidelines are publicly available and surprisingly detailed on this specific issue. The company explicitly bans apps containing CSAM, which is illegal in virtually every country. They also prohibit "overtly sexual or pornographic material" and content that's "defamatory, discriminatory, or mean-spirited," particularly if it's "likely to humiliate, intimidate, or harm a targeted individual or group."

Google's policies are similarly explicit. The Play Store bans apps that "contain or promote content associated with sexually predatory behavior, or distribute nonconsensual sexual content." They also prohibit apps that "contain or facilitate threats, harassment, or bullying."

Both policies exist for a reason. After investigations by the BBC and 404 Media found that various "nudify" apps were being used to create nonconsensual sexual imagery, both Apple and Google removed multiple such apps. The process took time, but it happened. Both companies demonstrated they understand the problem and have the ability to enforce their standards.

So how are X and Grok not in clear violation of these policies? That's the question nobody at either company is answering publicly. The evidence suggests they do violate the stated guidelines. Thousands of images are being generated. Many involve women in revealing clothing. Some potentially involve minors. According to X's own policies, the platform doesn't allow sharing illegal content such as CSAM. Yet it continues hosting Grok and the content generated through it.

There are a few possible explanations. First, enforcing against a major company owned by a controversial figure like Elon Musk is politically fraught. Apple and Google might fear retaliation, bad press, or accusations of censorship. Second, the companies might argue they're not responsible for user-generated prompts to Grok, only the technical infrastructure. Third, they might simply have different enforcement standards for major players versus smaller app developers.

None of these explanations hold up under scrutiny, but they're likely factors in why enforcement has been inconsistent.

QUICK TIP: If you encounter nonconsensual sexual imagery online, screenshot it (don't share it), report it to the platform, and consider reporting it to the Cyber Civil Rights Initiative or local law enforcement. Don't assume someone else will handle it.


How the Grok Problem Exploded So Quickly

To understand why this became a major issue so fast, you need to understand the technical capabilities of modern image generation models. Grok wasn't designed specifically to create nonconsensual sexual imagery. The tool can generate any image that doesn't obviously violate safety guidelines. But like many image generation systems, it has weaknesses that determined users can exploit.

The common approach is prompt engineering. Users discovered they could describe scenarios that the model would interpret as requests to generate sexualized images without technically including words like "nude" or "porn." One user might ask Grok to "make her look more attractive in a bikini." Another might request "remove the clothes" phrased as a hypothetical. These prompts sit in the gray area between what the model's safety systems are trained to block and what users want to generate.

What made this worse is that X users started sharing techniques publicly. By mid-January 2025, there were threads explaining exactly how to prompt Grok to generate nonconsensual sexual imagery. These weren't hidden in dark corners of the internet. They were on X itself, where millions of people could see them. Once techniques spread, the scale exploded.

Compare this to ChatGPT or Google's Gemini, which are also not supposed to generate sexual or nude imagery. When WIRED reported that people were sharing tips for getting Gemini and ChatGPT to generate bikini and revealing-clothing photos, both companies had stronger content moderation in place. Users reported that attempts frequently failed. But Grok's safeguards appeared weaker, or the company hadn't implemented the same level of detection systems.

The platform dynamics made this worse. On a standalone website, a tool generating problematic content might get attention from moderation teams. On X, content spreads instantly to followers and can be amplified through retweets. The feedback loop is instant. Generate an image, post it, watch it get engagement, share the prompt that worked. This creates a learning environment where the user community collectively figures out the most effective jailbreaks.

The scale is staggering. Over a 24-hour period, researchers identified 6,700 images per hour. That's roughly 161,000 images in a single day. If that rate held steady for a week, Grok would be generating over a million nonconsensual sexual images weekly. The actual number is likely lower now as awareness increased and X implemented some safeguards, but those initial weeks represented an absolute flood of abuse material.

Comparative Speed of Regulation vs. Technological Change

Estimated data shows that legal actions and legislation take significantly longer to respond to technological changes compared to private companies and AI tool deployment.

The Victims Behind the Statistics

It's easy to talk about "15,000 images" or "6,700 per hour" as abstract numbers. But behind each image is a real person, usually a woman or girl, who had their likeness used without consent to create sexual content. This isn't a victimless problem or a free speech issue. Image-based sexual abuse has documented psychological consequences.

Sloan Thompson, director of training and education at EndTAB, an organization that teaches companies how to prevent the spread of nonconsensual sexual content, describes the harm clearly: these images are used for blackmail, harassment, revenge, and psychological torment. When someone discovers their image has been used to generate sexual content without their consent, the trauma is real. Many victims report anxiety, depression, and a persistent feeling of violation.

The problem becomes worse when images spread beyond the initial context. A girl's photo gets transformed into explicit content on Grok, posted to X, and shared to Discord servers, Telegram groups, and other platforms. Each share multiplies the harm. Victims often don't know when or where their image has been abused. They discover it by accident or because friends tell them they found it somewhere online.

For minors, this creates an additional layer of harm. Using AI to create, distribute, or possess sexual images of minors is illegal in most jurisdictions. It's often prosecuted as child sexual abuse material even though no actual minor was photographed. When Grok generates images of apparent minors in sexual scenarios, it's creating CSAM in a technical sense, even if no child was directly exploited to create that specific image.

Thompson emphasizes that "it's absolutely appropriate" for companies like Apple and Google to take action. The question isn't whether they have the right to remove X and Grok from app stores. They clearly do, according to their own policies. The question is why they haven't.

DID YOU KNOW: The European Commission publicly condemned the nonconsensual sexual imagery generated by Grok as "illegal" and "appalling," describing such content as having "no place in Europe." The EU isn't just expressing concern; they're investigating under the Digital Services Act.

Why App Stores Have Different Standards for Different Companies

This brings us to the uncomfortable reality: app stores don't enforce their policies consistently. They have more flexibility and authority than they admit, and they exercise that authority selectively. Understanding why requires looking at how enforcement actually works.

When Apple or Google decide to remove an app, they're making a judgment call about whether it violates their stated policies. These judgments are subjective and often depend on factors beyond the technical content of the app. A small startup's "nudify" app gets removed quickly because it fits the narrative of a niche tool used for abuse. But a major platform owned by a billionaire entrepreneur gets more deference, more time to "fix" the problem, more benefit of the doubt.

There's also the factor of scale and importance. X is one of the world's largest social networks. Hundreds of millions of users rely on it for news, communication, and information. Removing it from app stores would be an enormous decision with massive consequences. Apple and Google might calculate that the harm from such a move exceeds the harm from allowing Grok to continue operating. This might be a legitimate business calculation, but it's not one they're making transparently.

The political dimension matters too. Elon Musk is a controversial figure with loyal supporters and fierce critics. Taking action against his companies invites accusations of bias from his supporters. Conversely, not taking action invites accusations of favoritism from critics. Many companies choose inaction specifically to avoid being in the middle of a culture war.

Finally, there's the question of liability. Does Apple bear responsibility for the content users generate through Grok? The company might argue it doesn't. They're hosting the app, not the images. The responsibility lies with X, Grok, and the users. This argument has some merit, but it also ignores that Apple explicitly requires apps to not facilitate harassment or nonconsensual sexual content. If Grok is a tool designed to create nonconsensual sexual images, then it's facilitating exactly that.

David Greene, civil liberties director at the Electronic Frontier Foundation, notes that people should be cautious about the idea of removing entire platforms from app stores. He argues that X and xAI have the power to combat this problem themselves, and suggests they should implement technical safeguards to deter creation of deepfakes and sexualized imagery. These "might not be a perfect fix, but might at least add some friction to the process."

But this argument cuts both ways. Yes, X and xAI should do more. They should implement better safeguards. But if they don't, and if they're violating the explicit policies of Apple and Google, then those companies have a responsibility to enforce their own standards.


Comparison of AI Chatbot Features

Grok excels in integration and ease of use but lacks in safety features compared to ChatGPT. Estimated data based on feature descriptions.

The Nudify App Precedent: When App Stores Actually Did Enforce

To understand why the Grok situation is so frustrating, look at what happened with other nudify apps. These were standalone services designed specifically to take photos of women and transform them into explicit images. They were crude compared to modern AI models, but the purpose was identical to what Grok is being used for now.

Investigators from the BBC and 404 Media documented that these apps were being actively marketed and used for image-based sexual abuse. After the investigations were published, both Apple and Google removed the apps. There was a lag between when the evidence came out and when enforcement happened, but it did happen. The companies recognized the problem and took action.

So why is Grok different? The technical capability is much more advanced. The scale of the problem is far larger. The evidence of harm is just as clear. Yet the enforcement response has been absent. One explanation is that removing a nudify app is easy. Removing X is monumental. But that's precisely the point. If a tool is causing massive documented harm, the difficulty of enforcement shouldn't determine whether it happens.

Another explanation is that nudify apps existed in a gray zone of legality and policy. The companies might have been cautious about removing them at first, but once investigations proved the abuse pattern, removal became easier to justify. With Grok, the company argued in a January 3 statement that it takes action against illegal content and that "anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."

But consequences on the platform aren't the same as enforcement by app stores. Suspending a user's account for generating nonconsensual sexual imagery is good policy, but it's not the same as removing the tool that made it easy to generate the imagery in the first place. And there's no transparency about whether those consequences are actually being applied at scale.


How Regulation Is Trying to Catch Up

Governments around the world are responding to this problem, though often slowly and sometimes ineffectively. In the United States, President Donald Trump signed the TAKE IT DOWN Act, which makes it a federal crime to knowingly publish or host nonconsensual sexual images. This is progress, but it has significant limitations.

The law requires companies to begin the removal process only after a victim comes forward. This puts the burden on victims, many of whom don't know their image has been abused or are too traumatized to report it. It also creates lag time between when abuse happens and when it's removed. In the context of Grok generating thousands of images per hour, waiting for individual victims to report each abuse is impossibly slow.

Thompson points out that "private companies have a lot more agency in responding to things quickly." When we compare the speed of legal action to the speed of technological harm, the law looks glacially slow. Lawsuits take years. Legislation takes even longer. Existing laws need to be interpreted and applied by courts. Meanwhile, new AI tools are hitting the market continuously, and by the time laws catch up, the damage is already done.

The European Union is taking a more aggressive approach through the Digital Services Act, which imposes obligations on large online platforms to combat illegal content and protect user safety. The EU ordered X to retain all internal documents and data relating to Grok until the end of 2026, extending a prior retention directive to ensure authorities can access materials relevant to compliance. A formal investigation hasn't been announced yet, but the dominoes are falling.

Regulators in the UK, India, and Malaysia have also said they're investigating. The pattern shows that governments recognize this as a serious issue, even if they're moving slowly. The challenge is that regulation can set minimum standards, but it can't force companies to exceed those standards. And minimum standards, by definition, aren't enough.


Content Moderation Effectiveness of Image Generation Tools

Grok's moderation effectiveness is significantly lower compared to ChatGPT and Google's Gemini, making it more susceptible to misuse. Estimated data based on reported user experiences.

The Technical Solutions That Could Actually Work

Instead of debating whether app stores should remove X or Grok, a more productive conversation focuses on what technical safeguards could reduce the problem. These aren't perfect solutions, but they could add meaningful friction to abuse.

First, image generation tools can implement detection systems that identify when a prompt is likely requesting nonconsensual sexual content. This is imperfect because clever prompts can evade detection, but it catches the obvious cases. Many users aren't trying to be clever. They're just typing what they want and expecting it to work. Better detection reduces the size of the problem significantly.
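To make that concrete, here is a minimal sketch of a prompt-level gate. It is illustrative only: the pattern list, the scoring, and the references_real_person signal are hypothetical stand-ins for the trained classifiers and context signals a production system would actually use.

```python
import re

# Hypothetical pattern list; a real system would rely on a trained text
# classifier plus context signals, not keyword matching alone.
HIGH_RISK_PATTERNS = [
    r"\bremove (her|his|their) clothes\b",
    r"\bundress\b",
    r"\bnudify\b",
    r"\bmake (her|him|them) (naked|nude)\b",
]

def moderation_score(prompt: str) -> float:
    """Crude risk score in [0, 1] based on pattern hits (stand-in for a model)."""
    hits = sum(bool(re.search(p, prompt.lower())) for p in HIGH_RISK_PATTERNS)
    return min(1.0, hits / 2)

def should_block_prompt(prompt: str, references_real_person: bool,
                        threshold: float = 0.5) -> bool:
    """Refuse generation when a prompt targeting a real person looks abusive.

    `references_real_person` is assumed to come from upstream signals, e.g. the
    prompt replies to a photo of an identifiable person.
    """
    return references_real_person and moderation_score(prompt) >= threshold

# A reply-to-photo prompt that should be refused outright:
print(should_block_prompt("remove her clothes", references_real_person=True))  # True
```

The specific patterns aren't the point; the point is that the check runs before any image is generated, which is where intervention is cheapest.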

Second, tools can require additional verification steps for sensitive content. Instead of generating an image instantly, the tool could ask the user to confirm they have consent from the person in the image being transformed. This is just friction, not prevention, but some users will abandon the process rather than going through extra steps.

Third, platforms can implement detection on the output side. After an image is generated, automated systems can check whether it's likely to be a nonconsensual sexual transformation. If it is, the platform can prevent sharing or flag it for human review. Again, imperfect, but reduces spread.
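A similar gate can run after generation, before anything becomes shareable. The sketch below assumes a hypothetical nudity-classifier score and a hypothetical review queue; neither corresponds to any real Grok or X API.

```python
from dataclasses import dataclass

@dataclass
class GeneratedImage:
    image_id: str
    source_is_real_person: bool  # the prompt transformed a photo of an identifiable person
    nudity_score: float          # output of a hypothetical image classifier, 0.0-1.0

def route_output(img: GeneratedImage) -> str:
    """Decide what happens to a freshly generated image before it can be shared."""
    if img.source_is_real_person and img.nudity_score >= 0.8:
        return "block"            # clear nonconsensual sexual transformation
    if img.source_is_real_person and img.nudity_score >= 0.4:
        return "hold_for_review"  # ambiguous case: a human moderator decides
    return "allow"                # ordinary adult-content rules still apply downstream

print(route_output(GeneratedImage("img-1", True, 0.9)))   # block
print(route_output(GeneratedImage("img-2", True, 0.55)))  # hold_for_review
print(route_output(GeneratedImage("img-3", False, 0.2)))  # allow
```

The thresholds are arbitrary here; in practice they would be tuned against labeled data, and the interesting design choice is that the "hold for review" band exists at all, rather than forcing every case into allow or block.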

Fourth, platforms can implement user reporting systems that actually work. If someone reports an image as nonconsensual sexual content, and if they're right, the report should trigger removal within hours, not days or weeks. The current systems on X are apparently slow or ineffective. Better systems don't solve everything but handle the worst cases faster.

Fifth, companies can be transparent about enforcement. How many reports of nonconsensual sexual imagery has X received? How many have resulted in image removal? How many users have been suspended for generating abuse? Without transparency, nobody knows if enforcement is happening at any meaningful scale.
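None of these numbers are hard to produce. As a sketch, here is the kind of aggregation a transparency report would need; the field names and sample records are invented for illustration.

```python
from collections import Counter
from datetime import datetime, timedelta

# Invented sample records; a real pipeline would read these from a moderation database.
reports = [
    {"reported_at": datetime(2025, 1, 10, 9, 0), "action": "removed",
     "actioned_at": datetime(2025, 1, 10, 15, 0)},
    {"reported_at": datetime(2025, 1, 11, 9, 0), "action": "no_action",
     "actioned_at": None},
]

def transparency_summary(reports: list[dict]) -> dict:
    """Aggregate report-handling stats of the kind a public report would publish."""
    actions = Counter(r["action"] for r in reports)
    handled = [r for r in reports if r["actioned_at"] is not None]
    within_48h = sum(
        r["actioned_at"] - r["reported_at"] <= timedelta(hours=48) for r in handled
    )
    return {
        "reports_received": len(reports),
        "removals": actions.get("removed", 0),
        "no_action": actions.get("no_action", 0),
        "pct_actioned_within_48h": round(100 * within_48h / max(len(handled), 1), 1),
    }

print(transparency_summary(reports))
# {'reports_received': 2, 'removals': 1, 'no_action': 1, 'pct_actioned_within_48h': 100.0}
```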

Thompson agrees that companies should face more public pressure to prevent these problems in the first place. "That's where I think we need intervention," she says. Public pressure, transparency requirements, and regulatory action are three levers that can all work together.

QUICK TIP: When reporting nonconsensual sexual imagery, provide as much context as possible about where it came from and how it was created. Explaining "this is a nonconsensual deepfake generated by Grok" is more useful to moderators than just "this is inappropriate."


What Pushback Against App Store Removal Actually Gets Right

Before concluding that Apple and Google should simply remove X and Grok, it's worth considering the legitimate concerns about that approach. David Greene's caution about removing entire platforms is not unreasonable, even if his framing is incomplete.

Removing X from app stores would affect millions of legitimate users who aren't generating nonconsensual sexual imagery. Journalists use X to report news. Activists use it to organize. Regular people use it to stay connected. Collective punishment is a real concern. Shutting down a major platform because of one problematic feature affects everyone, not just abusers.

There's also a slippery slope concern. If Apple and Google can remove X for one harmful feature, what's to stop them from removing other apps for other controversial content? The precedent of app store removal as a tool for controlling what appears on phones is genuinely frightening to civil libertarians. You want the bar for platform removal to be high.

Additionally, there's a practical question about whether removal would actually solve the problem. If X were removed from the App Store and Play Store, users could still access it through web browsers. The company could create its own app distribution system on Android. The image generation and abuse would continue, just with slightly more friction. Complete removal might not actually reduce harm.

But these concerns don't add up to "do nothing." They add up to "apply serious pressure on the companies involved, establish clear deadlines for improvements, increase regulation, and reserve app store removal as a last resort if companies don't respond." That's reasonable. But it requires transparency about what companies are actually doing to fix the problem. Right now, X, Google, and Apple are saying almost nothing.


Proposed AI Safety Governance Framework Components

Real-time monitoring is rated as the most critical component for AI safety governance, followed closely by industry standards and victim support. Estimated data.

The Accountability Question That Keeps Getting Avoided

Here's what's remarkable about this entire situation: when pressured, companies claim they care about preventing abuse but won't specify what they're actually doing. X says users violating its policies will "suffer the same consequences as if they upload illegal content," but won't say what percentage of reports result in action. Apple and Google won't explain their enforcement reasoning. xAI won't respond at all.

Accountability requires transparency. You can't hold a company accountable for not following its own policies if you don't know whether it's following them. The public knows that thousands of nonconsensual sexual images are being generated. The public doesn't know what's happening with those images after they're reported, if they're being reported, or at what scale enforcement is occurring.

This accountability gap is where regulation comes in. The Digital Services Act in Europe, for example, requires platforms to be transparent about how they handle illegal content. They have to publish regular reports about moderation actions. They have to engage with civil society organizations. There's actual structure requiring companies to be accountable.

In the United States and most other countries, that structure doesn't exist. Companies can say they're addressing the problem and face no penalty if they're not. They can claim that removing apps is complicated and technically difficult, and nobody can force them to try harder. They can maintain silence and hope the issue fades from the news cycle.

What would real accountability look like? It might include:

  • Transparent reporting: Monthly public reports on how many reports of nonconsensual sexual imagery were received, how many resulted in removal, and how many users were suspended
  • Clear timelines: Images should be removed within 48 hours of a report, and offending accounts suspended within 72 hours (a minimal deadline-tracking sketch follows this list)
  • Independent audits: Third-party organizations should verify that companies are actually following their stated policies
  • Escalation procedures: If a tool is generating a certain threshold of problematic content, companies should be required to add safeguards or disable it
  • User rights: People should be able to file takedown requests for nonconsensual imagery and get confirmation of removal
  • Regulatory consequences: Failure to meet these standards should result in fines, app store removal, or other penalties
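As a concrete illustration of the "clear timelines" item above, here is a minimal deadline-tracking sketch. The 48-hour and 72-hour windows come from the list; everything else (field names, the notion of a "breach") is hypothetical.

```python
from datetime import datetime, timedelta

REMOVAL_SLA = timedelta(hours=48)     # image removal window from the list above
SUSPENSION_SLA = timedelta(hours=72)  # account suspension window from the list above

def sla_status(reported_at: datetime, removed_at: datetime | None,
               suspended_at: datetime | None, now: datetime) -> dict:
    """Report whether each obligation was met, breached, or is still pending."""
    def check(done_at, deadline):
        if done_at is not None:
            return "met" if done_at - reported_at <= deadline else "breached"
        return "pending" if now - reported_at <= deadline else "breached"

    return {
        "removal": check(removed_at, REMOVAL_SLA),
        "suspension": check(suspended_at, SUSPENSION_SLA),
    }

report_time = datetime(2025, 1, 10, 9, 0)
print(sla_status(report_time,
                 removed_at=datetime(2025, 1, 11, 9, 0),  # removed 24 hours later: met
                 suspended_at=None,                        # no suspension yet
                 now=datetime(2025, 1, 14, 9, 0)))         # 96 hours in: breached
# {'removal': 'met', 'suspension': 'breached'}
```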

None of this seems unreasonable. Most of it is technically feasible. The reason it hasn't happened is that companies have successfully argued they should be left alone to police themselves.

DID YOU KNOW: The European Commission's public condemnation of Grok-generated imagery represented an escalation from previous tech regulation statements, suggesting governments are moving past written statements toward actual enforcement action.


Why This Matters Beyond Grok

The Grok situation isn't unique. It's the latest manifestation of a pattern that's been playing out for years: technology companies create tools with foreseeable harms, claim they can't control how users employ the tools, and resist both regulation and enforcement of their own policies. When caught, they make minimal changes and hope the attention moves elsewhere.

This pattern works because the burden of proof is on critics. Advocates have to prove harm is happening. Companies just have to deny responsibility and wait. By the time evidence is overwhelming, technology has usually moved on to the next problem. The company can say the old tool is no longer relevant and we should focus on the new one instead.

But this pattern has consequences. Image-based sexual abuse survivors don't get to move on. Their images remain online indefinitely in many cases. The psychological harm doesn't fade because the technology changed. The precedent established by Grok's continued availability in app stores is that even massive documented harms won't necessarily trigger enforcement.

This matters for every AI company watching to see whether enforcement actually happens. If Apple and Google don't remove X despite clear policy violations and massive evidence of harm, why would any company implement meaningful safeguards for image generation tools? The financial incentive is to get features working, not to make them safe. If there's no enforcement consequence, safety work is just cost with no benefit.

Conversely, if enforcement does happen, other companies get a clear signal about what's acceptable. The market responds to rules, but only if the rules are actually enforced.


Building a Better Framework for AI Safety Governance

Moving forward requires thinking bigger than just Grok. The fundamental problem is that AI tools are advancing faster than governance can respond. By the time regulators write rules, new versions of the problem have already emerged. By the time companies are pressured into making changes, they've moved to the next product.

A better framework would create ongoing governance structures that can adapt as technology changes. This might include:

Industry standards: Not government mandates, but agreed-upon standards for safety testing before tools are released. Similar to how pharmaceuticals are tested before approval, AI tools could have safety assessment requirements before they go to market. These standards would be developed collaboratively by companies, researchers, and advocacy groups.

Real-time monitoring: Instead of waiting for journalists to investigate or advocates to complain, independent organizations could continuously monitor for harms. They'd have access to data about how tools are being used and could flag problems early. This is more efficient than reactive enforcement.

Graduated consequences: Not just "remove the app," but a scale of enforcement. First, transparency requirements. Then, technical safeguard requirements. Then, fines. Then, feature limitations. Then, app store removal. Companies would face escalating consequences for continued violations.

Victim support: Regardless of what happens to the companies, victims need support. Counseling, legal assistance, technical help removing images. A robust support system for image-based sexual abuse victims should exist independently from enforcement against perpetrators.

Education and awareness: Users need to understand that generating and sharing nonconsensual sexual imagery is abuse and potentially illegal. Schools could include this in digital literacy curricula. Platforms could show warnings. Social norms need to shift.

None of this is revolutionary or technically impossible. Many industries already operate under similar frameworks. The challenge is getting buy-in from companies that benefit from the current unregulated approach.


The Role of Persistent Public Pressure

Ultimately, what changes company behavior is a combination of regulation, law, and public pressure. Any one alone is often insufficient. But when all three align, companies respond.

The Grok situation shows public pressure already working. The issue was largely unknown outside tech circles until journalists at WIRED and Bloomberg reported on it. That coverage generated awareness, which led to regulatory statements from the EU, UK, India, and Malaysia. Political pressure started building. Advocacy organizations started making public statements.

If that pressure intensifies, companies will have to respond. They might implement better safeguards. They might remove Grok from app stores. They might at least be transparent about what they're doing. Companies generally don't change unless they're forced to, but they do respond to serious, sustained pressure.

This is where individual voices matter. When thousands of people contact Apple and Google asking them to enforce their own policies, the companies have to at least respond. When victims share their stories about harm caused by nonconsensual sexual imagery, it becomes harder for companies to claim this is a theoretical problem. When researchers continue documenting the scale of abuse, it becomes harder to argue enforcement isn't needed.

Public pressure works. The question is whether it will be sustained long enough to force actual change.

QUICK TIP: If you want to apply pressure on Apple or Google, contact their leadership directly through official channels, focus on specific policy violations with evidence, and emphasize the harm to victims. Companies respond better to concrete, documented requests than to general complaints.


What Grok's Continued Availability Says About Tech Governance

When you step back from the specific details, Grok's continued availability in app stores reveals something disturbing about how the world's most powerful tech companies approach responsibility. They've created policies on paper that are quite strict. But they enforce those policies selectively, with more deference given to large, important companies and more scrutiny applied to small, marginal ones.

This isn't necessarily a conspiracy. It might just be bureaucracy. Big companies have lawyers who can argue against removal. Small companies don't have the resources to fight. Big platforms matter to lots of people, so removing them is scary. Small platforms don't matter much, so removal is easy. The path of least resistance leads to inconsistent enforcement.

But from the perspective of harm prevention, inconsistent enforcement is almost as bad as no enforcement. It creates the appearance of policy while allowing serious harms to continue. It lets companies claim they care about safety while avoiding the difficult decisions that would actually protect users. It signals that the rules only apply if you're small enough not to matter.

Changing this requires both companies and regulators recognizing that consistency isn't just about fairness. It's about effectiveness. If some companies can violate policies with impunity while others get removed, the rule becomes meaningless. Worse, it creates perverse incentives where the most harmful behavior is concentrated in the biggest, most important companies because they have the most protection from enforcement.

The Grok situation isn't the last time we'll see this pattern. It's the most visible current example, but it's happening across AI, social media, and technology broadly. The question for anyone concerned about online safety is whether this pattern will continue or whether the combination of regulation, law, and public pressure will finally force consistent enforcement.


The Path Forward: What Needs to Change

If the goal is actually reducing nonconsensual sexual imagery and protecting vulnerable people from image-based sexual abuse, several things need to happen simultaneously.

First, X and xAI need to dramatically improve their technical safeguards. This means detecting problematic prompts, detecting problematic outputs, requiring additional verification steps, and implementing effective reporting systems. These are well-understood technical problems with well-understood solutions. The question is whether the companies will invest in them.

Second, Apple and Google need to enforce their own policies consistently. If X and Grok violate their stated rules, the companies should say so clearly and give X a deadline to fix the problems. If the deadline is missed, app store removal should happen. Transparency about this process is crucial. Users need to understand what's being asked of companies and whether they're complying.

Third, regulation needs to catch up. The Digital Services Act in Europe is a start. Other countries should implement similar frameworks that require transparency, impose real consequences for violations, and protect both free expression and user safety. These regulations need to apply equally to all companies, regardless of size or importance.

Fourth, victims need support. Beyond enforcement against companies, resources should be dedicated to helping people who've experienced image-based sexual abuse. This means counseling, legal aid, technical assistance, and social support.

Fifth, society needs to shift norms around this kind of abuse. Generating and sharing nonconsensual sexual imagery should be understood as harmful and wrong, not as a joke or prank. This requires education, social messaging, and leadership from technology companies themselves in shaping how users understand these tools.

None of this is impossible. The technology exists. The legal frameworks can be created. The resources can be dedicated. The question is whether there's sufficient political will and public pressure to make it happen.

The Grok situation provides a test case. Will enforcement actually happen? Will companies actually implement safeguards? Will regulators actually follow through? The next few months will be revealing. But regardless of how this specific situation resolves, the underlying dynamics won't change unless something fundamental shifts in how we approach technology governance.


Conclusion: A Reckoning Is Coming, But It's Not Guaranteed

Elon Musk's Grok has become a symbol of a larger problem: technology companies creating powerful tools without adequate safeguards, then resisting enforcement when those tools are inevitably misused. The fact that Grok remains available in app stores despite clear evidence of massive nonconsensual sexual imagery generation shows that even the world's most powerful tech companies can get away with policy violations if they're large enough and important enough.

But this situation might be changing. The sheer scale of documented harm is making it harder to ignore. Regulators are moving from statements to actual investigations. Advocacy organizations are applying sustained pressure. Victims are sharing their stories. Journalists are doing the investigation work that companies should be doing themselves.

The most likely outcome is some middle ground. X and xAI will probably implement some safeguards. Apple and Google will probably express concern and perhaps set some requirements, but probably won't remove the apps. Regulators will probably issue some fines or directives. Things will get incrementally better, which will feel like progress while leaving the underlying problems only partially solved.

A more optimistic outcome would involve real enforcement: app store removal if companies don't comply with reasonable safeguard requirements, meaningful investment in victim support, and regulatory frameworks that apply equally to all companies regardless of size. This is what actual accountability looks like.

A more pessimistic outcome would be that this becomes yesterday's news. The news cycle moves on. Grok gets quietly updated with some safeguards that don't fundamentally change the problem. Users figure out new ways to misuse the tool. By the time anyone checks back in six months, most people have forgotten this ever happened. The pattern repeats with the next problematic tool.

Which outcome actually happens depends partly on sustained pressure from the public, regulators, and advocates. But it also depends on whether Apple, Google, X, and xAI decide that doing the right thing matters more than avoiding the cost. Based on their response so far, it's unclear whether that will happen.

What we know for certain is that people are being harmed right now. Images of women and girls are being generated without consent and spread without permission. The tools that make this possible are being hosted in major app stores despite violating the stated policies of those stores. And nobody involved is being adequately transparent about what they're actually doing to stop it.

That's the reckoning that's coming. Whether it leads to real change, or just another round of corporate promises followed by inaction, remains to be seen.



FAQ

What is Grok and how does it differ from other AI chatbots?

Grok is an AI chatbot developed by xAI, a company founded by Elon Musk, that is integrated directly into the X (formerly Twitter) platform. Unlike standalone AI tools that require visiting a separate website, Grok lives inside X where users can access it with a single click to generate text and images. This integration makes it significantly easier for users to create and immediately share content, which is a major factor in how nonconsensual sexual imagery has spread so rapidly through the platform.

How serious is the problem of Grok-generated nonconsensual sexual imagery?

The problem is documented as massive in scale. Between January 5 and 6, 2025, researchers identified Grok producing roughly 6,700 sexually suggestive or nudifying images per hour. In a two-hour period on December 31, 2024, analysts collected over 15,000 URLs of images that Grok created on X. When reviewed, many featured women in revealing clothing, with over 2,500 becoming unavailable within a week and almost 500 labeled as age-restricted adult content. This represents an unprecedented scale of image-based sexual abuse enabled by a single tool.

Why haven't Apple and Google removed X and Grok from their app stores if they violate stated policies?

Both Apple's App Store and Google Play Store have explicit policies against CSAM, nonconsensual sexual content, and apps that facilitate harassment. X and Grok appear to violate these policies based on documented evidence. However, both companies have maintained silence on their enforcement reasoning. Possible factors include political concerns about taking action against a company owned by Elon Musk, the enormous disruption of removing a major platform used by hundreds of millions of people, selective enforcement favoring large companies over smaller ones, and disagreement about whether app stores bear responsibility for user-generated prompts versus the content users create through tools. Regardless of the reasoning, the lack of enforcement reveals inconsistency in how both companies apply their policies.

What specific safeguards could reduce nonconsensual sexual imagery generation?

Several technical solutions could add meaningful friction to abuse. Image generation tools can implement detection systems that identify when prompts request nonconsensual sexual content, add additional verification steps requiring users to confirm consent from people in photos before transformation, implement detection on the output side to flag likely nonconsensual content, create effective user reporting systems that trigger removal within hours, and publicly report enforcement metrics showing how many reports resulted in action. None of these are perfect solutions, but each reduces the ease of generating and spreading nonconsensual imagery.

What are the psychological impacts on victims of nonconsensual sexual imagery?

Victims of image-based sexual abuse using AI tools report significant documented harm including anxiety, depression, persistent feelings of violation, and trauma related to losing control of their own image. The problem is especially severe when images spread beyond the initial context to Discord servers, Telegram groups, and other platforms where victims may never discover them. For minors, the discovery that explicit images of them have been generated and shared causes particular distress. The psychological consequences are real and lasting, not theoretical or minor.

What's the difference between app store removal and platform-level enforcement?

App store removal would prevent new users from downloading X or Grok on iOS and Android devices, though users could still access these platforms through web browsers, and on Android the app could be distributed directly outside the Play Store. Platform-level enforcement means that X and xAI implement technical safeguards, remove content, and suspend users who violate policies against nonconsensual sexual imagery. Both are necessary: without platform-level enforcement, the problem continues even if the apps are removed from app stores. Without app store removal as a consequence, companies have no strong incentive to implement platform-level enforcement effectively.

How are regulators responding to the Grok nonconsensual imagery problem?

The European Commission publicly condemned the sexually explicit imagery as illegal and appalling, and ordered X to retain internal documents and data relating to Grok through 2026 for potential compliance investigation with the Digital Services Act. Regulators in the UK, India, and Malaysia have announced they are investigating. In the United States, the TAKE IT DOWN Act makes it a federal crime to knowingly publish or host nonconsensual sexual images, though the law puts the burden on victims to report abuse before companies must act. These regulatory responses show governments recognize the seriousness of the problem, though enforcement is moving slower than the pace of technological harm.

What role does public pressure play in forcing company accountability?

Public pressure is often the most effective lever for forcing company action, particularly when combined with regulation and legal threats. In the Grok situation, journalism from WIRED and Bloomberg generated awareness, which led to regulatory statements and political pressure. This combination makes it harder for companies to ignore the issue or claim it's theoretical. However, public pressure only works if it's sustained over time. Once media attention fades, companies often revert to minimal efforts unless regulation or law enforcement creates ongoing consequences. Maintaining focus on this issue through continued reporting, victim advocacy, and public demand is essential for forcing real change.

What would real accountability for tech companies actually look like?

True accountability requires transparency, clear timelines, independent verification, and meaningful consequences. This would include monthly public reports on nonconsensual sexual imagery reports received and action taken, 48-hour removal timelines for reported images, independent audits of compliance, escalation procedures that require safeguards when tools generate above-threshold problematic content, user rights to file takedown requests with confirmation of removal, and regulatory consequences including fines or app store removal for policy violations. This framework mirrors safety governance in other industries like pharmaceuticals and aviation, where ongoing compliance monitoring is standard practice. Currently, tech companies largely police themselves without transparency or independent oversight.



Key Takeaways

  • Scale of abuse is unprecedented: Grok generated over 6,700 sexually explicit images per hour during peak usage periods, creating an estimated one million nonconsensual sexual images weekly at peak rates
  • App store policies are being selectively enforced: Apple and Google removed other "nudify" apps but haven't applied the same standards to X and Grok despite clear policy violations
  • Victims experience documented psychological harm: Image-based sexual abuse using AI tools causes lasting trauma, anxiety, and depression, particularly when images spread across multiple platforms
  • Technical safeguards are feasible: Detection systems, verification steps, output filtering, and reporting mechanisms already exist and could significantly reduce abuse if properly implemented
  • Regulation is accelerating but enforcement is slow: The EU's Digital Services Act and investigations from regulators in the UK, India, and Malaysia show governments recognize the problem, but traditional regulatory timelines can't match the speed of technological harm
  • Accountability requires multiple levers: Enforcement will likely require sustained public pressure combined with regulatory action and meaningful consequences for non-compliance, not just written policies
  • This pattern will repeat with other tools: The Grok situation reveals a broader problem with how tech companies approach responsibility, suggesting similar issues will emerge with future AI tools unless governance frameworks improve

