X Had a Chance to Turn Off Grok’s Undressing Machine. Instead, It Started Charging for It

There was a moment when disabling the feature was possible, and that moment passed when revenue entered the picture


Early this month, X restricted Grok’s image generation to paying subscribers, steering users toward a $395 annual plan. The move landed after weeks of documentation showing Grok producing nonconsensual sexual imagery of real people, including material that appeared to involve minors. X never framed the change as a safety fix. It looked more like traffic control. Fewer people in, same machinery running.

Grok’s image and video generation features are operational across X, a standalone website, and a mobile app, though not uniformly. Some restrictions now apply inside X itself. Others do not. That unevenness is central to what follows.

The question is no longer whether Grok can generate harmful content. That point was settled weeks ago. The question now is what happens when a company responds to systemic abuse by charging for access rather than removing the capability. At that point, the argument moves out of product design and into institutional intent.

Monetization as Admission

Limiting image generation to verified, paying accounts did two things at once. It reduced volume inside X. It also tied the most controversial capability to a revenue stream. That combination matters.

From a legal perspective, monetization narrows the room for ambiguity. A platform can claim surprise once. It cannot plausibly do so after routing users through a subscription funnel. By January 9, the company had notice, documentation, and global scrutiny. Continuing to operate the feature after that point, with a price tag attached, looks less like negligence and more like choice.


The distinction matters because the defenses that once insulated social media platforms depended on passivity. Section 230 in the United States protected companies that hosted user content. Grok does not host. It generates. The images come from company-controlled models, running on company-controlled servers, optimized and tuned by company engineers. Courts have not ruled definitively on this boundary yet. But product liability law has spent decades parsing similar questions: who designed the system, who benefited, and who could alter it.

Charging $395 a year does not just change who can use the tool. It clarifies who profits from it.

Safety by Surface Area

The most striking pattern in the Grok saga is not that safeguards failed. It is where they failed.

Inside X, image generation has tightened. Requests to alter photos of real people to show them in revealing clothing now trigger refusals in many cases. Geoblocking is supposed to apply in jurisdictions where such imagery is illegal. Yet the same prompts often succeed on Grok’s standalone website. The mobile app sits somewhere in between, sometimes asking for a year of birth, sometimes not.

From a governance standpoint, this fragmentation is hard to defend. Safety controls that depend on which doorway a user chooses are not controls at all. They are routing decisions. Regulators tend to view this pattern harshly because it suggests internal compartmentalization rather than systemic restraint. If a company can disable a feature in one environment, it can disable it elsewhere. Choosing not to invites questions about priorities.

AI Forensics, a Paris-based nonprofit, has collected roughly 90,000 Grok-generated images since the Christmas holidays. That scale matters. It shows repetition, not anomaly. It also establishes something else regulators care about: persistence after notice.

The Global Net Tightens

By the middle of this month, investigations or public condemnations related to Grok had emerged in the United Kingdom, the European Union, the United States, Canada, Australia, Brazil, India, Indonesia, Ireland, France, and Malaysia. The UK prime minister described the activity as unlawful and did not rule out banning X outright. That is not casual language. It reflects a legal framework already in force.

In the UK and EU, nonconsensual intimate imagery involving real people can trigger criminal penalties, platform fines, and service restrictions. The EU’s Digital Services Act allows penalties of up to 6 percent of global annual revenue for systemic failures. For a company with revenue in the billions, that ceiling is not theoretical.

What stands out is how little the company narrative has adjusted. Public statements emphasize enforcement actions against illegal content, not the continued availability of the generating mechanism itself. That gap between rhetoric and architecture is exactly where regulators tend to focus. They are less interested in takedowns than in prevention.

Product Liability Comes Into View

The comparison many observers reach for is social media. It is also the wrong one.

A closer analogue is defective product litigation. When a manufacturer learns that a product consistently causes harm under foreseeable use, continued distribution without redesign can establish liability. The harm does not need to occur in every instance. It needs to be reasonably predictable.

Grok’s undressing outputs were predictable enough that users replicated them thousands of times using minor prompt variations. That matters. It shows the behavior sits inside the model’s learned space, not at the edges.

There is also a timing element. Generative AI systems have now been in public use for more than three years. Safety design is no longer experimental. Courts are less likely to accept arguments framed around novelty when comparable systems have implemented stricter controls without collapsing functionality.

The Minor Problem No One Can Dodge

Sexualized imagery of apparent minors changes everything. It collapses political divisions and accelerates enforcement timelines. In the United States alone, 48 states have passed laws targeting AI-generated sexual imagery of real people. Federal statutes addressing child sexual abuse material carry severe criminal penalties.

Once a system repeatedly produces content that appears to involve minors, intent becomes almost irrelevant. The legal threshold centers on distribution and capability. Companies are expected to disable features that pose that risk. Partial measures do not carry much weight.

This is why app store decisions matter. Apple and Google have previously removed apps with nudify features. As of now, X and Grok remain available. That may not last. App store operators face their own regulatory pressures and have little incentive to absorb risk for a partner that refuses to fully neutralize a known problem.

Paywalls Do Not Equal Alignment

Some defenders argue that limiting access to paid accounts improves traceability. That is true in a narrow sense. It does not address alignment.

A user can still sign up with a disposable payment method and a false name. The cost barrier is modest relative to the harm enabled. From a safety standpoint, this looks like friction, not prevention. Regulators understand the difference.

Alignment refers to what a system can do, not who can reach it. If Grok can generate photorealistic nudity of real people under foreseeable prompts, alignment has failed. Moving that capability behind a login does not correct the failure. It preserves it.

Likely Paths Forward

Three outcomes now appear more plausible than they did at the start of the year.

First, civil litigation. Victims of nonconsensual imagery are increasingly organized and legally sophisticated. A single successful case could establish discovery rights that expose internal decision-making around the paywall.

Second, regulatory enforcement. The UK and EU have clearer statutory hooks than the United States, and they tend to move faster once investigations open. Fines, service restrictions, or forced feature removals would not be surprising within the next six to 12 months.

Third, infrastructure pressure. Payment processors and app stores operate on risk thresholds. Once a product is widely described as enabling sexual exploitation, those partners reassess their exposure. They do not wait for verdicts.

None of these paths depend on new laws. They rely on existing ones applied to a new architecture.

The Deeper Pattern

The Grok episode is not just about one chatbot. It reveals a broader tension inside the AI industry. Capability races ahead of governance. When harm appears, companies reach first for containment strategies that preserve revenue and momentum. Only later do they confront whether the capability itself should exist in its current form.

That sequence worked in the social media era because platforms could claim they did not create the content. Generative AI erases that distance. When the model outputs the image, responsibility sits closer to home.

Putting Grok’s image generation behind a paywall did not end the undressing problem. It reframed it. The question is no longer whether abuse occurs. It is whether charging for access to a known harmful capability crosses a line regulators are willing to enforce.

The evidence suggests they are already preparing to answer that question.

By George Kamau

