Testing semi-automated moderation processes for compliance

Develop verifiable methodologies and criteria to test whether semi-automated, human-in-the-loop notice-and-action and content moderation processes comply with the Digital Services Act’s requirements for non-arbitrary and non-discriminatory enforcement.

Background

The DSA requires platforms to enforce their content moderation policies in a non-arbitrary and non-discriminatory manner. In practice, many moderation pipelines are semi-automated, combining algorithmic tools with human review, which complicates compliance assessment.
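Compliance testing of such hybrid pipelines presupposes decision logs that keep the automated stage and the human stage distinguishable. A minimal sketch of what an auditable decision record might contain (Python; the class and all field names are hypothetical illustrations, not a format prescribed by the DSA):

```python
# Minimal sketch of an auditable decision record for a semi-automated
# moderation pipeline. The class and field names are hypothetical; the
# point is that compliance testing presupposes logs that keep the
# automated and human stages distinguishable.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class ModerationRecord:
    item_id: str
    policy_cited: str               # rule relied on (cf. the DSA's
                                    # statement-of-reasons duty, Art. 17)
    classifier_score: float         # output of the automated stage
    classifier_decision: str        # e.g. "remove", "keep", "escalate"
    human_reviewed: bool            # whether a human saw the item
    human_decision: Optional[str]   # final call if a human reviewed it
    decided_at: datetime

record = ModerationRecord(
    item_id="item-42",
    policy_cited="hate-speech-policy-v3",
    classifier_score=0.87,
    classifier_decision="escalate",
    human_reviewed=True,
    human_decision="keep",          # human overrode the automated suggestion
    decided_at=datetime.now(timezone.utc),
)
```

Recording the automated suggestion and the final human call separately makes override rates measurable, which audit checks like those sketched below depend on.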

Robust testing methods are needed to evaluate such hybrid systems against these legal standards, including questions of arbitrariness, procedural justice, and auditability.
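One concrete starting point is statistical auditing of logged decisions. The sketch below (Python; the data, function names, and threshold are hypothetical, and scipy is assumed to be available) illustrates two such checks: a chi-squared independence test for outcome-rate disparities across audit strata as a non-discrimination signal, and a flip-rate measure on duplicate probes as a non-arbitrariness signal.

```python
# Minimal sketch of two audit checks over logged moderation decisions.
# All data, names, and thresholds are hypothetical illustrations, not a
# prescribed DSA compliance methodology; scipy is assumed available.

from collections import Counter
from scipy.stats import chi2_contingency

# Hypothetical decision log: (group, outcome) pairs, where `group` is an
# audit-relevant stratum (e.g. content language) and `outcome` is the
# final decision after any human review.
decisions = [
    ("de", "removed"), ("de", "kept"), ("de", "kept"),
    ("en", "removed"), ("en", "removed"), ("en", "kept"),
    # ... in practice, thousands of logged decisions
]

def disparity_test(decisions, alpha=0.05):
    """Chi-squared test of independence between stratum and outcome.

    A small p-value signals that outcome rates differ across strata more
    than chance would explain -- a trigger for deeper review, not proof
    of discriminatory enforcement on its own.
    """
    groups = sorted({g for g, _ in decisions})
    outcomes = sorted({o for _, o in decisions})
    counts = Counter(decisions)
    table = [[counts[(g, o)] for o in outcomes] for g in groups]
    chi2, p, dof, _ = chi2_contingency(table)
    return {"chi2": chi2, "p": p, "dof": dof, "flag": p < alpha}

# Hypothetical paired probes: the same (or a near-duplicate) item routed
# through the pipeline twice; conflicting outcomes are a signal of
# arbitrary enforcement.
paired_outcomes = [("removed", "removed"), ("kept", "removed"), ("kept", "kept")]

def flip_rate(pairs):
    """Fraction of duplicate probes that received conflicting decisions."""
    return sum(a != b for a, b in pairs) / len(pairs)

print(disparity_test(decisions))
print(f"flip rate on duplicate probes: {flip_rate(paired_outcomes):.2f}")
```

Neither statistic is legal proof on its own: a significant disparity may reflect legitimate differences in the underlying content, so flagged strata would need qualitative follow-up review.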

References

It is still unclear how to test such semi-automated processes for compliance; the problem links back to general questions of assurance in auditing algorithmically supported processes.

"There is literally zero funding": Understanding the Emerging Role of Trusted Flaggers under the EU Digital Services Act  (2603.29874 - Sekwenz et al., 31 Mar 2026) in Related Work (Flagging as content moderation measure)