How can one configure their Lemmy instance to reject illegal content? And I mean the bad stuff, not just the NSFW stuff. There are some online services that will check images for you, but I’m unsure how they can integrate into Lemmy.
As Lemmy gets more popular, I’m worried nefarious users will post illegal content that I could be held liable for.
https://github.com/db0/fedi-safety can scan images for CSAM both before and after upload, including novel AI-generated material. For pre-upload scanning you will also need to run https://github.com/db0/pictrs-safety alongside your instance; it sits between pict-rs (the image server Lemmy uses) and fedi-safety, so images get checked before they are accepted. The scans do need a GPU, but a budget card is enough, and it can live on a separate machine such as your home PC.
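For the pre-upload path, the wiring is roughly this: pict-rs can be pointed at an external validation URL, and pictrs-safety is built to be that endpoint. Here’s a minimal docker-compose sketch of the idea — the service names, port, scan path, and build source are placeholders I’m assuming, so check the pictrs-safety README for the exact values:

```yaml
# Sketch only: service names, the port, the scan path, and the build
# source are assumptions -- the pictrs-safety README has the real values.
services:
  pictrs:
    image: asonix/pictrs:0.5
    environment:
      # pict-rs sends each new upload to this URL for validation;
      # a non-2xx response makes pict-rs reject the upload.
      # (env-var form of pict-rs's [media] external_validation setting)
      PICTRS__MEDIA__EXTERNAL_VALIDATION: "http://pictrs-safety:8080/scan"
    restart: always

  pictrs-safety:
    # Built from https://github.com/db0/pictrs-safety; I'm not assuming
    # an official prebuilt image name here.
    build: ./pictrs-safety
    ports:
      - "8080:8080"
    restart: always
```

As I understand the setup, fedi-safety itself then runs on the GPU machine (e.g. your home PC) and talks to pictrs-safety to pick up images to scan, so the GPU box doesn’t have to be the server hosting your instance.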