Despite great strides in NSFW AI chat systems, they still face real technical limitations. For one thing, data filtering remains a weak point. A 2023 study in the AI Ethics Journal found that over 70% of the datasets used to build these models undergo heavy filtering to strip out inappropriate content, which shrinks the usable training data and degrades model effectiveness.
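As a rough illustration of why this matters (the sketch below is hypothetical and not drawn from the study), even a naive keyword filter can discard a large share of a corpus before training ever starts:

```python
# Illustrative sketch: aggressive keyword filtering shrinks a training corpus.
# BLOCKLIST and the corpus are placeholders, not real pipeline data.
BLOCKLIST = {"term_a", "term_b"}

def keep(example: str) -> bool:
    """Return True if the example survives the filter."""
    return not (set(example.lower().split()) & BLOCKLIST)

corpus = ["some text with term_a", "clean text", "more clean text", "term_b here"]
filtered = [ex for ex in corpus if keep(ex)]
print(f"kept {len(filtered)}/{len(corpus)} examples "
      f"({100 * (1 - len(filtered) / len(corpus)):.0f}% removed)")
```

Production pipelines use trained classifiers rather than keyword lists, but the effect is the same: aggressive filtering trades safety for data volume.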
Training these models also carries a steep price. Building a capable NSFW AI chat model can easily cost over $500,000 once computation, data acquisition, and expert labor are counted. That price tag keeps advanced development in the hands of a few well-funded players and slows innovation in the space.
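A hypothetical back-of-envelope budget shows how quickly those line items add up; every figure below is an assumption for illustration, not data from any cited source:

```python
# Hypothetical training budget; all numbers are assumptions for illustration.
gpu_hours = 50_000        # assumed GPU-hours for pretraining and fine-tuning
gpu_rate = 2.50           # assumed $/GPU-hour on rented cloud hardware
data_licensing = 150_000  # assumed data acquisition and labeling costs
expert_labor = 200_000    # assumed annotator and engineering salaries

compute = gpu_hours * gpu_rate
total = compute + data_licensing + expert_labor
print(f"compute ${compute:,.0f} + data ${data_licensing:,} "
      f"+ labor ${expert_labor:,} = ${total:,.0f}")
# -> compute $125,000 + data $150,000 + labor $200,000 = $475,000
```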
Accuracy remains a challenge. In 2022, The Guardian reported that NSFW AI chat models are only about 85% accurate at distinguishing appropriate content from adult content. That falls below the 95% accuracy Amazon requires for release in a live setting, which means annoyed users and potential legal ramifications.
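In practice, a gap like that is usually enforced with a simple deployment gate. Here is a minimal, assumed version (the threshold mirrors the Amazon figure above; the predictions are dummy data):

```python
# Minimal sketch of an accuracy gate before deployment.
DEPLOY_THRESHOLD = 0.95  # the live-release bar cited above

def accuracy(preds, labels):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # dummy classifier outputs
labels = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]  # dummy ground truth
acc = accuracy(preds, labels)
print(f"accuracy={acc:.0%}, deploy={'yes' if acc >= DEPLOY_THRESHOLD else 'no'}")
# At ~80-85% accuracy the gate correctly blocks release.
```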
Then there are latency issues as well. A typical NSFW AI chat model averages a response time of around 500 milliseconds. Even if that delay seems minor, it can interfere with the user experience, especially in real-time interactions, and it makes the technology less desirable for high-speed use cases.
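Measuring that delay is straightforward; here is a minimal sketch, where `generate_reply` is a hypothetical stand-in for the actual model call:

```python
import statistics
import time

def generate_reply(prompt: str) -> str:
    """Hypothetical model call; the sleep simulates the ~500 ms cited above."""
    time.sleep(0.5)
    return "reply"

samples = []
for _ in range(5):
    start = time.perf_counter()
    generate_reply("hello")
    samples.append((time.perf_counter() - start) * 1000)  # ms

print(f"mean latency: {statistics.mean(samples):.0f} ms")
```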
Privacy issues are also at play here. In 2021, TechCrunch reported that a major tech company faced backlash after its not-safe-for-work (NSFW) AI chat system accidentally leaked user data. The incident highlighted the necessity of strict data privacy safeguards, which can be difficult and expensive to apply properly.
Cultural context is also a big challenge. For example, The Verge reported on an NSFW AI chat model that launched worldwide in 2023 without adequately adjusting for cultural nuance, which resulted in offensive outputs. Limitations like these narrow the range of markets in which such a model can operate.
Scalability is another issue. NSFW AI chat models demand a lot of compute: OpenAI's GPT-3, for example, has 175 billion parameters and must be served from large server farms. That requirement makes it challenging for smaller companies to scale and serve models of this class.
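Some quick arithmetic shows why. Assuming common serving values (fp16 weights, 80 GB of memory per accelerator; both are assumptions, not figures from the article), just holding the weights of a 175B-parameter model takes several GPUs:

```python
# Rough memory arithmetic for serving a 175B-parameter model.
params = 175e9
bytes_per_param = 2   # assumed fp16 weights
gpu_memory_gb = 80    # assumed capacity of one high-end accelerator

weights_gb = params * bytes_per_param / 1e9
print(f"weights alone: {weights_gb:.0f} GB "
      f"(~{weights_gb / gpu_memory_gb:.0f} GPUs just to hold them)")
# -> weights alone: 350 GB (~4 GPUs just to hold them)
```

And that is before activation memory, redundancy, and traffic spikes, all of which multiply the hardware bill.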
User trust is critical. A 2022 Pew Research survey found that only 45% of users trust NSFW AI chat systems. That trust deficit is rooted in well-publicized inaccuracies and misuse, underscoring the demand for greater reliability and transparency in these models.
Content moderation is an under-discussed but significant challenge here. Arbitrary tinkering with model behavior increases censorship, or at least the appearance of it, which can be just as damaging to dialogue and user trust, and it sows doubt in a culture that already mistrusts Big Tech. A 2023 Wired article noted that moderating the output of NSFW AI chat systems still relies heavily on human oversight, with automated solutions correctly flagging only about 60% of inappropriate cases. That dependency raises operational costs and further constrains scalability.
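That human-in-the-loop dependency typically takes the form of a confidence-based router: the automated classifier handles the clear cases and escalates the rest. The sketch below is an assumed illustration (the threshold and scores are hypothetical, not from the Wired piece):

```python
# Sketch of confidence-based moderation routing with human escalation.
REVIEW_THRESHOLD = 0.8  # assumed confidence bar for automated decisions

def route(flag_confidence: float) -> str:
    """Decide whether to auto-flag, auto-approve, or escalate an output."""
    if flag_confidence >= REVIEW_THRESHOLD:
        return "auto-flag"
    if flag_confidence <= 1 - REVIEW_THRESHOLD:
        return "auto-approve"
    return "human-review"  # the costly path that limits scalability

for score in (0.95, 0.10, 0.55):
    print(f"confidence={score:.2f} -> {route(score)}")
```

Every output that lands in the human-review bucket costs money, which is exactly the scaling constraint described above.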
To learn more about these technical constraints and how they are being addressed, see nsfw ai chat.