Who should regulate NSFW AI? The question is urgent in a space evolving as quickly as tech, where AI development is outpacing regulatory structures. The Brookings Institution estimates the global AI industry will surpass $126 billion by 2025, with a substantial share tied to content moderation. Given those stakes, leaving regulation to the tech companies themselves would skew the rules toward whatever behavior maximizes profit, with insufficient regard for ethics. Meta, for instance, earned $114 billion from ad-targeting algorithms alone last year and still fumbles content moderation.
The EU’s AI Act offers a useful starting framework. It classifies AI applications by potential risk, and that classification dictates how NSFW controls might be configured. The law bans certain high-risk uses of AI outright and imposes stiff fines on violators, up to 6% of global revenue. That risk-based thinking could be extended globally to reinforce norms for how platforms handle inappropriate AI-generated content. China’s draft AI regulations, by contrast, outline an explicitly state-led framework focused on censorship and surveillance, predictably raising concerns about how a government would restrain its own overreach. That approach has, however, been effective at curbing the spread of pornography within China’s borders.
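To make the risk-tier idea concrete, here is a minimal sketch in Python of how a platform might translate a legal risk classification into moderation settings. The tier names, thresholds, and policy fields are illustrative assumptions; the AI Act does not prescribe NSFW configuration details.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical risk tiers loosely modeled on the EU AI Act's
# risk-based categories. The Act does not define NSFW moderation
# settings, so the mappings below are assumptions.
class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

@dataclass
class ModerationPolicy:
    human_review_required: bool
    nsfw_block_threshold: float  # classifier score above which content is blocked
    audit_logging: bool

# Illustrative mapping: stricter tiers get lower block thresholds
# and mandatory human review.
POLICY_BY_TIER = {
    RiskTier.MINIMAL: ModerationPolicy(False, 0.95, False),
    RiskTier.LIMITED: ModerationPolicy(False, 0.85, True),
    RiskTier.HIGH: ModerationPolicy(True, 0.60, True),
    RiskTier.UNACCEPTABLE: None,  # deployment prohibited outright
}

def policy_for(tier: RiskTier) -> ModerationPolicy:
    policy = POLICY_BY_TIER[tier]
    if policy is None:
        raise ValueError(f"{tier.value} applications may not be deployed")
    return policy
```

The design point is that a single legal classification can drive concrete, auditable configuration rather than ad-hoc judgment calls by individual platforms.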
Discussion of distributed governance is picking up as well. Tim Berners-Lee, the inventor of the World Wide Web, argues that regulation left exclusively to governments or corporations “closes down future development.” A hybrid model built on multi-stakeholder councils, by contrast, could let researchers continue their work without sidelining ethical concerns. One example is the AI Governance Board the OECD formed in 2023, which brings together representatives from government, academia, and industry to draft recommendations on emerging technologies such as NSFW AI.
Public input remains critical as well. A 2023 Pew Research survey found that respondents want AI regulation to prioritize public safety over innovation, and that collective opinion will shape how decisions get made. Meanwhile, MIT researchers argue that NSFW AI can only be controlled through technological countermeasures paired with unambiguous legal frameworks. For example, AI-driven age verification systems operating under stringent regulation could reduce minors’ access to adult content by over 80%.
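As a rough illustration of what such a countermeasure looks like in practice, here is a minimal sketch of an age gate. The `VerificationResult` record and the idea of a certified identity provider are assumptions for illustration, not a reference to any specific system.

```python
from datetime import date

# Hypothetical record returned by a regulated identity provider.
# Real systems would rely on a certified third-party verifier,
# never a self-reported birthdate.
class VerificationResult:
    def __init__(self, birthdate: date, verified: bool):
        self.birthdate = birthdate
        self.verified = verified

ADULT_AGE = 18

def is_adult(result: VerificationResult, today: date = None) -> bool:
    """Gate adult content on a verified age, never a self-declared one."""
    if not result.verified:
        return False
    today = today or date.today()
    years = today.year - result.birthdate.year
    # Subtract one year if the birthday hasn't occurred yet this year.
    if (today.month, today.day) < (result.birthdate.month, result.birthdate.day):
        years -= 1
    return years >= ADULT_AGE
```

The key design choice is that the gate trusts only a verified attribute from the provider; an unverified claim is rejected before the age is ever computed.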
Ultimately, AI may also play a role in regulating itself. Machine learning systems that scan for and flag inappropriate content can improve compliance by 30-40%, on par with results from Google’s content moderation AI. Critics counter that such systems merely reproduce the biases of their underlying data, which is itself another argument for clear regulatory oversight.
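Google’s moderation models are proprietary, so the following is only a toy sketch of the general pattern such systems follow: score content, auto-block the clear cases, and escalate the borderline ones to humans. The classifier, thresholds, and term list are stand-in assumptions.

```python
# Toy stand-in vocabulary; a real system would use a trained classifier.
NSFW_TERMS = {"explicit", "nsfw"}

BLOCK_THRESHOLD = 0.90   # auto-remove above this score
REVIEW_THRESHOLD = 0.60  # route to human reviewers above this

def nsfw_score(content: str) -> float:
    """Toy scoring function (assumption): fraction of flagged terms,
    scaled so a few hits push the score toward 1.0."""
    words = content.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in NSFW_TERMS)
    return min(1.0, hits / len(words) * 5)

def moderate(content: str) -> str:
    score = nsfw_score(content)
    if score >= BLOCK_THRESHOLD:
        return "blocked"
    if score >= REVIEW_THRESHOLD:
        # Escalation is what guards against the training-data bias
        # critics point to: humans audit the model's edge cases.
        return "human_review"
    return "allowed"
```

Routing mid-confidence scores to human reviewers is exactly the kind of oversight critics call for: it keeps the model’s inherited biases from silently deciding edge cases.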
So who gets to regulate NSFW AI? There is no single way to ensure ethics in artificial intelligence, only a layered approach: government oversight, corporate responsibility, and public participation, combined with technical safeguards. Left to any one organization, regulation of NSFW AI will fall short. It requires international cooperation from every corner of the tech industry to find the balance point between unconstrained innovation on one side and safety and ethical considerations on the other.