
Artificial Intelligence and Online Harassment
Jan 5
4 min read

In the upcoming documentary "Shattered", we explore the human cost of digital harassment through the story of a journalist whose life was upended by coordinated online abuse. Her experience raises urgent questions about workplace safety in an AI-driven world. As regulatory bodies like the eSafety Commissioner work to address online harassment, employers face new challenges: How will they protect staff from AI-amplified harassment? What are the implications for workers' compensation when psychological injuries stem from digital abuse? The intersection of AI technology and workplace safety creates complex territory for employers, insurers, and policymakers alike.
The specific risks of cyberbullying and harassment in the age of AI
As this technology and its capabilities continue to grow, here is some of what we know about the hazards of AI-generated harassment:
It spreads faster and farther – Harassment via automated troll bots dramatically increases both the reach of abusive content and the speed at which it spreads.
It's specific – AI-generated content can make attacks even more personal by drawing on the personal data that victims share through online platforms.
It's smart – This technology can create content that successfully evades automated content moderation systems.
It amplifies bias – Depending on the data used to train these systems, AI-generated harassment can expose victims to hate speech and racist tropes, compounding the abuse.
Drawing on the academic work of Danielle Keats Citron, this article delves into the question: does workers' compensation protect victims of cyber harassment, or further harm them?

About Danielle Citron
Danielle Keats Citron stands as a pioneering legal scholar in cyber law and digital privacy. When she wrote "Hate Crimes in Cyberspace", she was the Lois K. Macht Research Professor of Law at the University of Maryland. Her expertise in privacy law, civil rights, and constitutional law has made her a leading voice in cyber harassment legislation. She has served on Twitter's Trust and Safety Council and frequently advises tech companies and legislators on cyber privacy issues. Her work has been cited by courts and has shaped legal frameworks around cyberstalking and revenge porn legislation.
Core Arguments and Analysis
Citron's 2014 book, "Hate Crimes in Cyberspace", was prescient in identifying cyber harassment not as mere "pranks" but as serious civil rights violations that disproportionately impact vulnerable populations. She systematically dismantles the false dichotomy between "virtual" and "real" harm, demonstrating how online attacks can destroy careers, mental health, and lives.
Key insights from the book include:
1. The viral nature of online harassment, where content spreads uncontrollably across platforms
2. The permanent digital footprint that haunts victims long after initial attacks
3. The inadequacy of existing legal frameworks to address cyber harassment
4. The role of anonymity in emboldening attackers
5. The economic and psychological toll on victims who often must abandon online presence
Case Study: The Anna Mayer Story (from the 2014 Newsweek extract)
To illustrate these principles, Citron presents the compelling case of Anna Mayer (pseudonym), which serves as a crucial example of how cyber harassment operates and escalates.
Initial Attack and Escalation
The harassment began on AutoAdmit, a law school discussion board, when an acquaintance created a thread about Mayer. The situation quickly escalated from inappropriate comments to:
- False allegations about her academic performance
- Fabricated stories about professional misconduct
- Explicit sexual comments and threats
- Unauthorized sharing and manipulation of her photos
The Spiral Effect
The case demonstrated what Citron terms "Google bombing" - when harmful content dominates search results for a person's name:
- Content spread virally across multiple platforms
- Anonymous participants amplified the harassment
- Defamatory content topped search results
- Professional contacts encountered the harmful content first
Professional and Personal Impact
The consequences were severe and far-reaching:
- Multiple job offers rescinded
- Interview opportunities withdrawn
- Reputation damage in the legal community
- Development of anxiety and depression
- Social withdrawal
- Strained personal relationships
Contemporary Relevance in 2025
The rise of AI technology has dramatically amplified the concerns Citron raised in 2014:
AI-Enhanced Harassment Capabilities
- Deepfake technology enables creation of convincing false narratives
- AI language models can generate massive volumes of targeted harassment content
- Automated bot networks can spread disinformation at unprecedented scale
- AI tools can scrape and aggregate personal information more effectively
Modern Workplace Implications
For workplace injury victims, whistleblowers, and harassment survivors, the landscape has become increasingly treacherous. AI systems can:
- Rapidly disseminate workplace incidents across multiple platforms
- Generate convincing counter-narratives to discredit victims
- Create persistent online campaigns that follow victims across platforms
- Automate the process of finding and targeting vulnerable individuals
If Mayer's Case Happened Today
In 2025, similar harassment would likely be even more devastating:
- AI could generate countless variations of defamatory content
- Deepfake technology could create false video evidence
- Language models could write convincing false testimonials
- Automated systems could spread content across platforms instantly
- Digital footprints are more permanent and comprehensive
- AI recruitment tools might automatically flag controversial content
- Professional networking platforms are more integrated
- Remote work makes online reputation more crucial
Preventive Measures and Solutions
Based on both Citron's analysis and modern developments:
Legal and Institutional
1. Strengthened legal frameworks specifically addressing AI-enhanced harassment
2. Platform responsibility for AI content moderation
3. Corporate policies protecting whistleblowers from AI-enhanced retaliation
4. Digital forensics capabilities to track AI-generated harassment
Individual Protection Strategies
1. Regular monitoring of online presence
2. Immediate documentation of harassment
3. Early legal intervention
4. Professional support networks
5. Digital security measures
6. AI-powered content removal services
7. Professional reputation management services
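The documentation step above can be made concrete. Below is a minimal sketch, using only the Python standard library and hypothetical helper names, of how a victim or their adviser might keep a tamper-evident log of saved evidence: each screenshot is recorded with a cryptographic hash and a UTC timestamp, so its integrity can later be demonstrated if the material is needed in a legal or workers' compensation claim.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def document_evidence(path: str) -> dict:
    """Create a tamper-evident record of a saved piece of evidence,
    such as a screenshot of a harassing post."""
    data = Path(path).read_bytes()
    return {
        "file": str(path),
        # SHA-256 fingerprint of the exact file contents; any later
        # alteration of the file will produce a different hash.
        "sha256": hashlib.sha256(data).hexdigest(),
        # Timezone-aware UTC timestamp of when the record was made.
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "size_bytes": len(data),
    }

def append_to_log(record: dict, log_path: str = "evidence_log.jsonl") -> None:
    """Append one JSON record per line, so the log grows as incidents occur."""
    with open(log_path, "a") as log:
        log.write(json.dumps(record) + "\n")
```

This is illustrative rather than a substitute for professional digital forensics, but even a simple hash-and-timestamp log strengthens the "immediate documentation" strategy by fixing what the evidence looked like at the moment it was captured.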
Conclusion
"Hate Crimes in Cyberspace" remains a foundational text for understanding online harassment, with its insights becoming even more crucial in our AI-enhanced digital landscape. The Mayer case study, while predating current AI capabilities, perfectly illustrates the pattern of how cyber harassment campaigns develop and spread. In 2025, with AI systems capable of generating and spreading content autonomously, the potential for similar harassment campaigns to cause harm has increased exponentially.
The combination of Citron's theoretical framework and real-world examples like the Mayer case provides a crucial foundation for understanding and combating cyber harassment in the AI era. The book serves as both a warning about the dangers of unchecked online harassment and a call to action for developing comprehensive responses to these threats, particularly as AI technologies make such harassment more sophisticated and pervasive.