I Teach 13-Year-Olds About Digital Ethics. Tech Leaders Keep Pretending It’s Complicated.
By Soni Albright

Grok, Nudify Apps, and Media Literacy

There is a Cyber Civics lesson I teach to middle school students about AI-manipulated images, specifically so-called “nudify” or “deepnude” apps. The lesson is not edgy or radical. It focuses on basic rules and clear boundaries, and asks students to think through fictional scenarios from multiple perspectives so they understand both harm and responsibility.
We discuss a real case in which students at a Beverly Hills middle school became involved in a criminal investigation after creating and sharing nude deepfakes of classmates. In our class discussions, students consistently raise thoughtful concerns: why these tools are so easily available to young people; why individuals can face criminal consequences while platforms often do not; and whether kids should be punished for receiving these images, alongside the senders and creators. Many students also note that these conversations often happen only after harm has occurred, rather than proactively, when education could prevent it.
The core ideas of the lesson are simple:
Don’t use technology to humiliate people.
Don’t create sexual images of someone without consent—and never of anyone under 18.
“It was a joke,” “I didn’t know,” or “the AI did it” is not a legal defense.

Which brings us to Grok.
Over the past month, use of Grok—an AI image-generation tool developed by xAI and integrated into X—has surged. Users have prompted it to manipulate existing images so that women and minors (including the mother of one of Elon Musk’s children) appear nearly nude or in skimpy bikinis, or to pose individuals in sexually explicit scenarios. Reporting indicates that safeguards were weak, inconsistently applied, or deliberately loosened, making it easy for harmful content to be created and shared.
When confronted via email by several news outlets regarding the lapse in safeguards and the proliferation of these images on the X platform, xAI replied with the message: “Legacy Media Lies.”
This Is Not a “User Misuse” Problem
When platforms are confronted with harm, the response is familiar: blame bad actors, promise fixes (while insisting regulation isn’t necessary), invoke free speech, point to Section 230, and move on—after someone has already been hurt.
Grok breaks that pattern in an important way. It isn’t merely hosting user content; it is generating the images itself. When an AI system produces nonconsensual sexual imagery, especially involving minors, the platform is no longer a neutral intermediary. It becomes the creator. Section 230, by the way, protects social media companies from being sued over content created by their users, not content generated by the platform’s own tools.
What Kids Understand That Tech Bros Apparently Don’t
When I teach this lesson alongside others on sexting, digital consent, healthy online relationships, and digital footprints, students are often surprised by how much the law governs online behavior. Many don’t realize that sexting between minors, even with consent, can be illegal because it may fall under child sexual abuse material statutes covering the creation, sharing, or possession of explicit images. These same legal and ethical boundaries increasingly apply to AI-generated deepfakes and the “nudify” capabilities of tools like Grok.
In class, we talk through the real-world consequences students don’t always see at first:
Psychological harm to victims
Reputational damage (to perpetrators and victims) and digital permanence, including how sexualized images can resurface years later and affect education, employment, or personal safety
Power dynamics, particularly how girls and women are disproportionately targeted, which can discourage participation in public life, among other repercussions
Legal consequences, including the possibility of criminal charges
We also talk about consent. Not as an abstract idea, but as a practical rule for digital life. Even sharing an ordinary photo of a friend or family member requires consent. The Grok situation makes clear how quickly that boundary can be violated at scale.
And here’s the key: students understand this.
What students struggle with is the mixed messaging. They watch one of the most powerful tech figures in the world, Elon Musk, joke publicly about the tool, repost memes that minimize the harm, and treat the issue as trivial, while real people experience humiliation, fear, and long-term damage.

It is a stunning “Do As I Say, Not As I Do” moment, and that contradiction undermines norms faster than any single app ever could.
When “Going Viral” Is the Point
This wasn’t Grok’s first controversy, either. Earlier tweaks in the summer of 2025 that caused the chatbot to generate deliberately provocative, antisemitic, racist, and extremist content—including the widely reported “MechaHitler” episode—were not accidental. They followed public statements about loosening guardrails to make the system more “spicy” and more likely to go viral.
At the same time, reporting showed that trust-and-safety teams at X were reduced or sidelined, even as internal staff warned that Grok’s image tools could produce illegal and harmful content. When backlash followed, the response was not meaningful reform but a paywall. Turning a harmful feature into a paid one is a decision to monetize risk, not to reduce it. As international criticism and investigations mounted, Grok’s “undressing” capabilities were finally blocked on January 15, 2026.
From a media-literacy perspective, the pattern is familiar:
Sensationalism drives engagement
Engagement drives profit
Harm becomes collateral damage
This is what happens when engagement is prioritized over responsibility.
P.S. Grok is expected to be integrated into Pentagon systems later this month, per Defense Secretary Pete Hegseth, even as questions remain about its safeguards, training data, and security risks in defense and military environments.
Who Bears the Cost?
The real-world impact of nudify and undressing tools is not evenly distributed.
Women and girls are disproportionately targeted. In documented cases, women who expressed opinions online were met not with disagreement, but with sexualized, AI-generated images posted in response. This is not free speech; it is a silencing tactic.
Public figures such as Taylor Swift and elected officials like Alexandria Ocasio-Cortez have been subjected to this behavior, but the harm is often greater for non-public individuals—students, teachers, and journalists—who lack visibility, legal support, or institutional protection. Sexualizing women, regardless of status, functions to undermine credibility, diminish authority, and reassert power.
More broadly, any nonconsensual deepfake, particularly sexualized deepfakes and deepnudes, undermines consent, reputation, safety, and dignity. While anyone can be targeted, these harms are experienced disproportionately by women and have a chilling effect on participation in public life.
Media literacy teaches us to ask of every technology we use and encounter: Who benefits? Who is harmed? Who is pushed out of the conversation?
The Legal Reality Is Catching Up—Slowly
The Take It Down Act, signed into law in 2025, criminalizes the publication of nonconsensual intimate imagery, including AI-generated deepfakes. It requires platforms to remove such content within 48 hours of a victim’s request and grants the FTC enforcement authority.
The problem is not a lack of awareness; most people understand that creating or sharing nonconsensual sexual imagery—including revenge porn—is wrong. Yet there are numerous examples of adults, including prominent men and elected officials, who have done it anyway, using intimate images to punish, control, or humiliate former partners.
What’s missing is consistent, explicit education about digital consent and systems that reinforce it. Schools, platforms, and public discourse have not kept pace with how easily images can be manipulated, shared, and amplified. Media literacy education is a critical part of closing that gap.
When adults normalize or excuse these behaviors, or design tools that make them easy, or lack proper safety protections, young people are left navigating inconsistent boundaries and consequences. This reflects both gaps in youth judgment and a failure of adult responsibility, especially when adults model the very behavior they claim to condemn.
App Stores Share Responsibility
Grok and X are distributed through major app stores (Apple, Google) that assign age ratings and set content standards. These ratings are applied to rapidly evolving AI systems that continue to change after release through ongoing experimentation.
Assigning a 13+ rating to an AI tool capable of generating sexualized or “undressing” images of real people raises clear concerns about age appropriateness. Age ratings also offer limited protection, as app stores rarely require meaningful age verification and rely largely on self-reporting.
App stores regularly remove apps for policy violations. Choosing not to restrict or remove AI tools that enable nonconsensual sexual imagery reflects discretionary enforcement, not a lack of authority.
This Is Why Media Literacy Education Matters
Media literacy education is not about being “anti-tech.” It is about understanding systems, incentives, power, and consequences.
It teaches people to ask:
What was this tool designed to do?
What behaviors does it reward?
Who bears the risk when things go wrong?
These are the same questions educators like me use to help young people navigate their online choices regarding things like digital consent, reputation, and responsibility. What this moment makes clear is that those questions are just as necessary for the people building, deploying, and distributing these tools. When they are ignored at the design and policy level, the consequences are borne by everyone else.

Author Soni Albright is a teacher, parent educator, curriculum specialist, researcher, and writer for Cyber Civics with nearly 24 years of experience in education. She has taught the Cyber Civics curriculum for 14 years and currently works directly with students while also supporting families and educators. Her experience spans a wide range of school settings—including Waldorf, Montessori, public, charter, and homeschool co-ops. Soni regularly leads professional development workshops and is passionate about helping schools build thoughtful, age-appropriate digital literacy programs.