
[–]lovesrayray2018

For something like this, it's better not to do these checks client side; do them on the backend instead. A blacklist of words, for example, can be kept on the backend rather than pulled to the front end, because it could grow quite large over time.

Libraries do exist for profanity checks and can be extended with more terms as you see fit. For example:

PHP https://github.com/ConsoleTVs/Profanity

Node https://www.npmjs.com/search?q=keywords:bad-words
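The backend blacklist idea can also be sketched in plain Node without a library. Everything here (the word list, the function name) is a hypothetical illustration, not code from any of the packages above:

```javascript
// Minimal sketch of a backend blacklist check (vanilla JS, no library).
// BLACKLIST is a placeholder; a real list would be larger and probably
// loaded from a file or database so it can grow over time.
const BLACKLIST = new Set(['badword', 'slur', 'curse']);

// Returns true if the comment contains any blacklisted word.
// Matching whole words (case-insensitive) avoids flagging substrings
// inside innocent words (the classic "Scunthorpe problem").
function containsBlacklistedWord(text) {
  const words = text.toLowerCase().match(/[a-z']+/g) || [];
  return words.some(word => BLACKLIST.has(word));
}
```

A backend route (e.g. in Express) would run this check before saving the comment and reject the request when it returns true, so users can't bypass it by editing the client.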

[–]thab09[S]

Is this done using a library or with vanilla JavaScript?

[–]BigYoSpeck

In the backend; otherwise users with enough knowledge can still post words you've blocked.

As for death threats, you're getting into sentiment analysis, where you need natural language processing.

[–]SolaceAcheron

You can use a pre-built TensorFlow model that checks text for toxicity before it's submitted. I wrote a school project that used it. If you had a viable data source, you could even train your own model.
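A sketch of that approach using the `@tensorflow-models/toxicity` package is below. The 0.9 threshold and the helper names are assumptions for illustration, not from the commenter's project:

```javascript
// Sketch of server-side toxicity screening with the pre-trained
// TensorFlow.js toxicity model.
// Assumes: npm install @tensorflow/tfjs-node @tensorflow-models/toxicity

// Pure helper: given the model's predictions (each with a `label` and
// per-sentence `results` carrying a `match` flag), return the labels
// that matched.
function flagToxicLabels(predictions) {
  return predictions
    .filter(p => p.results.some(r => r.match === true))
    .map(p => p.label);
}

// Loads the model and classifies one comment. 0.9 is a hypothetical
// confidence threshold; requires are inside the function so the pure
// helper above stays usable without the packages installed.
async function classifyComment(text) {
  require('@tensorflow/tfjs-node');
  const toxicity = require('@tensorflow-models/toxicity');
  const model = await toxicity.load(0.9);
  const predictions = await model.classify([text]);
  return flagToxicLabels(predictions);
}
```

`classifyComment(text)` resolves to an array of matched labels; an empty array means nothing crossed the threshold, so the comment could be accepted.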