Moderation is such an important part of any community, and yet it's still an afterthought on so many platforms. Companies that are laser-focused on short-term growth might neglect moderation (if we ban users, then we have fewer users!), and the people working on the product may not fully empathize with the experiences of many people in the community. The bottom line is that it's still way too easy to be a jerk on the Internet and have a profoundly negative impact on someone else's life.
So, if I could change anything, I'd want to make technical and structural changes.
Quick, how many web frameworks can you name off the top of your head? Probably at least a few, right? Now, how many content moderation libraries can you immediately name? Probably none. The technical barrier to implementing ML-based moderation should simply be much lower than it is right now. It's certainly a hard technical problem, but one that's solvable. (And this isn't to say that ML is the only thing necessary to implement moderation, but it's a great starting point.)
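To make the point concrete, here's a rough sketch of what a low-friction moderation library's API could look like: train a tiny Naive Bayes text classifier on a handful of labeled examples, then score new content. The class name, method names, and training phrases are all hypothetical, and a real system would need far more data and a much better model; this only illustrates the shape of the developer experience.

```python
# Hypothetical sketch of a drop-in moderation classifier (not a real library).
# Uses a minimal Naive Bayes log-odds score with add-one smoothing.
import math
from collections import Counter

class ToxicityClassifier:
    def __init__(self):
        self.counts = {"ok": Counter(), "toxic": Counter()}
        self.totals = {"ok": 0, "toxic": 0}

    def train(self, text, label):
        # Count word occurrences per label.
        for word in text.lower().split():
            self.counts[label][word] += 1
            self.totals[label] += 1

    def score(self, text):
        # Sum of per-word log-probability ratios; > 0 leans "toxic".
        vocab = set(self.counts["ok"]) | set(self.counts["toxic"])
        log_odds = 0.0
        for word in text.lower().split():
            p_toxic = (self.counts["toxic"][word] + 1) / (self.totals["toxic"] + len(vocab))
            p_ok = (self.counts["ok"][word] + 1) / (self.totals["ok"] + len(vocab))
            log_odds += math.log(p_toxic / p_ok)
        return log_odds

# Toy training data, purely for illustration.
clf = ToxicityClassifier()
clf.train("you are an idiot and a loser", "toxic")
clf.train("go away nobody likes you", "toxic")
clf.train("thanks for the helpful answer", "ok")
clf.train("great post I learned a lot", "ok")

flagged = clf.score("you are a loser") > 0
```

The appeal of something like this isn't the model (which is deliberately naive here) but the ergonomics: if adding baseline moderation were a few lines like the above, far more products would ship with it on day one.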
Structurally, I wish more community products made strong content moderation a core pillar from the very start, as opposed to something that's tacked on at the end. Within organizations, moderation teams need to have a strong voice and ability to influence the product, which isn't the reality in many places today. Quora made moderation a top-level priority early on, and I don't think it would have grown to the same scale if it hadn't.
Thanks for the response!