While NSFW AI is a popular content moderation tool used to keep platforms safe, it is not a perfect fit for every situation. Whether it makes sense depends on how much content a platform handles, and of what kind. Large platforms such as Facebook and Twitter process billions of posts each day, so their NSFW AI must be able to analyze content at speeds of up to 100,000 images per second. For these platforms, AI is an efficient, scalable way to manage the flood of user-generated content (UGC) they host.
For smaller platforms, or those serving niche audiences, NSFW AI may not always be a cost-effective solution. Depending on the system's complexity, its APIs, its data sources, and ongoing maintenance, it can add anywhere from $100,000 to $10 million a year in costs. For platforms that do not operate at large scale, manual moderation, or a semi-automated combination of human and AI review, may remain the more economical option.
The type of content also matters. Platforms built around educational, artistic, or medical material are frequent victims of overly aggressive NSFW AI. In 2020, a report published by The New York Times found that art-focused content platforms saw roughly 15% false positives from NSFW AI detection. Over-reliance on AI in these situations degrades the user experience and frustrates content creators.
Interestingly, Elon Musk has strongly advocated the importance of contextual comprehension in AI, noting in past interviews that existing systems lack human-level depth of reasoning when dissecting subtly implied content. That echoes a broader concern about NSFW AI: it is quite good at filtering clearly explicit material, but much weaker on anything abstract or context-dependent.
User privacy is another consideration when building a platform. NSFW AI systems are usually trained on large amounts of user data to improve their accuracy. Google and Facebook may have the resources to build robust defenses against data privacy risks, but smaller platforms often do not. Given all this, several of our readers considered adopting NSFW AI a no-brainer, but at what cost to user privacy?
In practice, NSFW AI works best for large, content-rich platforms where speed and scalability are the priority. For smaller platforms, or those hosting more nuanced content, it may not be the best solution; alternatives such as hybrid models or even fully manual moderation can be both more accurate and cheaper.
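As a rough illustration of the hybrid model mentioned above, one common pattern is confidence-threshold routing: the AI auto-blocks clearly explicit items, auto-allows clearly safe ones, and sends ambiguous cases to human reviewers. The sketch below is a minimal, hypothetical example; the class name, function, and threshold values are assumptions for illustration, not any platform's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ModerationResult:
    """Hypothetical classifier output; a real system would get
    nsfw_score from a trained model or a moderation API."""
    item_id: str
    nsfw_score: float  # 0.0 (clearly safe) .. 1.0 (clearly explicit)

def route(result: ModerationResult,
          block_threshold: float = 0.9,
          allow_threshold: float = 0.3) -> str:
    """Route content: auto-block confident violations, auto-allow
    confidently safe items, and queue the ambiguous middle band
    (e.g. art or medical imagery) for human review."""
    if result.nsfw_score >= block_threshold:
        return "blocked"
    if result.nsfw_score < allow_threshold:
        return "allowed"
    return "human_review"
```

Tuning the two thresholds trades automation against false positives: widening the middle band sends more borderline content, such as the art-platform cases discussed above, to humans instead of the algorithm.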
For more on how NSFW AI scales across different platforms, visit nsfw ai.