The current method is auto-deleting NSFW images. It doesn't matter how you got there: if the service detects NSFW, it dumps the image, and you never receive it. Beyond that, gating NSFW content generation behind a paywall or ID wall would stop a lot of teenagers. Not all, but it would put a dent in it. There are also AI models that will allow some NSFW if it's clearly in an artistic style, like a watercolor painting, but will reject NSFW realism: photography, rendered images, that sort of thing. These checks usually run in both prompt mode and paint in/out or image-reference mode, flagging likely-NSFW generations, plus a post-generation NSFW check before the image is delivered. AI services are anticipating serious legal consequences for allowing any NSFW image, or any realistic, photographic, or CGI image of a living person without their consent; it's easy to see that's what they're preparing for.
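Roughly, the flow looks like this. A minimal sketch of the multi-stage pipeline described above; every function name and classifier here is a hypothetical stand-in, not any real service's API, and the classifiers are dummy placeholders for actual models:

```python
from dataclasses import dataclass
from typing import Optional

STRICT_POLICY = True  # strict services dump all NSFW; looser ones allow artistic styles

@dataclass
class Image:
    pixels: bytes
    style: str  # e.g. "watercolor" or "photorealistic"

def prompt_is_nsfw(prompt: str) -> bool:
    """Hypothetical text classifier, run before any compute is spent."""
    return any(term in prompt.lower() for term in ("nsfw", "nude"))

def image_is_nsfw(image: Image) -> bool:
    """Hypothetical image classifier, run on the finished render."""
    return image.pixels.startswith(b"NSFW")  # stand-in for a real model

def is_realistic(image: Image) -> bool:
    """Realism/photography/CGI is what services are most worried about."""
    return image.style in ("photorealistic", "photo", "cgi", "render")

def generate(prompt: str) -> Image:
    """Stand-in for the actual generation step (diffusion, inpainting, etc.)."""
    return Image(pixels=b"...", style="watercolor")

def moderated_generate(prompt: str) -> Optional[Image]:
    # Stage 1: prompt-mode check -- refuse before generating anything.
    if prompt_is_nsfw(prompt):
        return None
    image = generate(prompt)
    # Stage 2: post-generation check before delivery. A flagged image is
    # simply never returned or stored -- that's the "auto-delete".
    if image_is_nsfw(image):
        # Some models carve out an exception for clearly artistic styles
        # and only dump NSFW realism; the strict policy dumps everything.
        if STRICT_POLICY or is_realistic(image):
            return None
    return image
```

The key point the sketch illustrates: the deletion happens server-side between generation and delivery, so there's nothing for the user to intercept.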
If you really must, you can simply have the AI auto-delete NSFW images; several services already do this. And for anyone who argues you can't just refuse to generate or hand out NSFW images at all: you can also gate NSFW content generation behind any number of hindrances that are highly effective against anonymous or underage use.
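The gate itself is trivial to implement; the deterrent is the hurdle, not the code. A minimal sketch with made-up account fields, just to show where the check would sit:

```python
from dataclasses import dataclass

@dataclass
class Account:
    has_payment_method: bool  # a card on file breaks anonymity and usually implies 18+
    id_verified: bool         # an explicit ID check: stronger, but more friction

def may_generate_nsfw(account: Account) -> bool:
    # Either hurdle alone deters most anonymous or underage use;
    # requiring both would be the stricter variant.
    return account.has_payment_method or account.id_verified
```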
11 days? How is that even enough to see if it could make money? It sounds like they were very eager to pull the plug at the first hiccup.