
Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes (2409.12138v1)

Published 18 Sep 2024 in cs.CY

Abstract: Non-consensual intimate media (NCIM) inflicts significant harm. Currently, victim-survivors can use two mechanisms to report NCIM - as a non-consensual nudity violation or as copyright infringement. We conducted an audit study of takedown speed of NCIM reported to X (formerly Twitter) of both mechanisms. We uploaded 50 AI-generated nude images and reported half under X's "non-consensual nudity" reporting mechanism and half under its "copyright infringement" mechanism. The copyright condition resulted in successful image removal within 25 hours for all images (100% removal rate), while non-consensual nudity reports resulted in no image removal for over three weeks (0% removal rate). We stress the need for targeted legislation to regulate NCIM removal online. We also discuss ethical considerations for auditing NCIM on social platforms.

Summary

  • The paper finds that DMCA reports achieved a 100% removal rate within approximately 25 hours, contrasting sharply with a 0% removal rate for nudity reports.
  • The audit employed 50 AI-generated images across 10 accounts to systematically compare the speed and effectiveness of X's reporting mechanisms.
  • The study underscores the need for federally mandated NCIM policies and enhanced platform moderation to better protect victim-survivors.

Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes

The paper "Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes" provides a comprehensive investigation into the efficacy of reporting mechanisms for non-consensual intimate media (NCIM) on X (formerly Twitter). The paper centers on two primary reporting avenues: copyright infringement under the Digital Millennium Copyright Act (DMCA) and non-consensual nudity via X's privacy reporting policy. Using an audit methodology, the researchers evaluated the speed and efficacy of content removal under these mechanisms, presenting crucial insights into the current state of social media moderation for NCIM.

Study Design and Methodology

The audit utilized 50 AI-generated nude images representing non-consensual intimate content, which were systematically posted across 10 newly created X accounts. The images were split into two groups: 25 reported under the DMCA and 25 under X's non-consensual nudity policy. The posting, reporting, and follow-up timeline was structured around a three-week observation window, allowing the researchers to compare removal rates and speeds between the two mechanisms.
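
To make the comparison concrete, a minimal analysis sketch follows. The record format, the summarize helper, and the synthetic timing values are illustrative assumptions rather than the paper's actual instrumentation or data; the sketch simply tabulates removal rate and median time-to-removal per reporting condition.

```python
from dataclasses import dataclass
from statistics import median


@dataclass
class AuditRecord:
    """One audited post; field names are illustrative, not the paper's schema."""
    image_id: int
    condition: str                   # "dmca" or "nonconsensual_nudity"
    hours_to_removal: float | None   # None if not removed within the window


def summarize(records: list[AuditRecord], condition: str) -> dict:
    """Removal rate and median time-to-removal for one reporting condition."""
    subset = [r for r in records if r.condition == condition]
    removed = [r.hours_to_removal for r in subset if r.hours_to_removal is not None]
    return {
        "n": len(subset),
        "removal_rate": len(removed) / len(subset) if subset else 0.0,
        "median_hours_to_removal": median(removed) if removed else None,
    }


# Synthetic values mirroring the reported outcomes: every DMCA report removed
# within ~25 hours; no nudity-policy report removed during the three weeks.
records = (
    [AuditRecord(i, "dmca", 13 + 0.5 * i) for i in range(25)]
    + [AuditRecord(25 + i, "nonconsensual_nudity", None) for i in range(25)]
)

print(summarize(records, "dmca"))
print(summarize(records, "nonconsensual_nudity"))
```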

Key Findings

  1. Removal Success and Speed:
    • DMCA Reports: All 25 images reported under the DMCA were removed within approximately 25 hours, a 100% removal rate; the fastest removal occurred in about 13 hours.
    • Non-Consensual Nudity Reports: In stark contrast, none of the 25 images reported under X's non-consensual nudity policy were removed within the three-week period, resulting in a 0% removal rate.
  2. Engagement Metrics:
    • The posted NCIM received minimal engagement, averaging 8.22 views across all images, with no likes, retweets, or comments. This low visibility and interaction is likely attributable to the posting accounts being newly created and unfollowed.
  3. Account Consequences:
    • Accounts that posted under the DMCA condition received temporary suspensions and notifications from X, suggesting the platform applies more robust punitive measures to copyright infringement than to non-consensual nudity reports.

Implications and Policy Recommendations

The findings underscore significant disparities in how different reporting mechanisms are enforced, reflecting an underlying inconsistency in protecting victim-survivors of NCIM. The DMCA, backed by federal law, demonstrates the capacity for swift and effective content removal. The platform's own voluntary privacy policy, by contrast, fell drastically short, failing to remove any reported NCIM in the observed period.

Practical Implications

  • Targeted Legislation: The findings advocate for the establishment of federally mandated laws specifically designed to address NCIM. Such legislation should obligate platforms to promptly remove non-consensual intimate content, akin to the obligations imposed by copyright law.
  • Enhanced Transparency: The paper's results call for greater transparency in platform moderation practices. Benchmarks for response times to different types of content reports would facilitate accountability and provide clear expectations for both victims and platforms.
  • Automated and Manual Moderation: Leveraging automated systems alongside human moderators could enable more effective and timely handling of NCIM. Given the success of automated methods in DMCA takedowns, similar systems could be developed and refined for privacy-related reports, as sketched below.
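
A common building block for such automated systems is perceptual image matching, so that content taken down once can be recognized on re-upload. The sketch below is a minimal illustration using a simple average hash; the file paths and distance threshold are hypothetical assumptions, and production systems rely on far more robust matching combined with human review.

```python
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Simple perceptual (average) hash: downscale to grayscale, then set
    one bit per pixel that is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# A reported image could be hashed once and compared against new uploads;
# a small Hamming distance suggests a likely re-upload of the same content.
# The file names and threshold below are purely illustrative.
reported_hash = average_hash("reported_image.png")
candidate_hash = average_hash("new_upload.png")
if hamming_distance(reported_hash, candidate_hash) <= 5:
    print("Possible re-upload of reported NCIM; flag for review and removal.")
```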

Theoretical Implications

  • Content Moderation Policies: The research contributes to our understanding of the disparities in moderating different types of harmful content. It highlights the need for unified, legally enforced standards across platforms.
  • Sociotechnical Systems: It also raises important questions about the efficacy and ethics of current sociotechnical systems in dealing with sensitive and harmful content. The paper exemplifies how societal values and technical implementations interact to influence outcomes.

Ethical Considerations

The ethical design of this paper is crucial, given the sensitive nature of the content. Although the research involves AI-generated images that do not correspond to real individuals, the potential for harm remains. The researchers minimized these risks by ensuring the images had no real-world matches and limiting the visibility of the posts. Additionally, using DMCA reports for AI-generated content extends beyond typical usage, inviting scrutiny and emphasizing the need for clear ethical frameworks for audit studies in content moderation.

Future Directions

Future research should focus on extending these audits to other platforms and varying content types to validate the findings across a broader context. Additionally, exploring the impact of demographic variables on the treatment of NCIM would further elucidate biases in current moderation systems. Legal researchers and policymakers should work towards drafting and advocating for comprehensive NCIM-specific legislation to protect victim-survivors effectively.

In summary, this paper elucidates the stark contrasts between copyright law and platform policies in handling non-consensual intimate media, emphasizing the urgent need for robust legal frameworks and improved platform accountability.
