Having long been under fire for hateful and even abusive content circulating on its platform, Twitter has stepped up its filtering efforts to nip the problem in the bud. To catch harmful posts before they go out, Twitter will start testing an algorithm that prompts users before they send out potentially “offensive or hurtful language”.
The problem is familiar closer to home: in a recent saga involving Singaporean YouTuber Jianhao Tan and his haters, the 27-year-old content creator took to Twitter to express his frustration with the very app itself, citing “a lot of negativity down here”. And Jianhao is far from alone in this sentiment.
In response to the widespread unhappiness over such unpleasantness, Twitter will now trial a new prompt: if the platform detects words in a tweet similar to those in previously reported posts, the user will be asked whether they would like to revise the tweet before sending it.
“We’re trying to encourage people to rethink their behaviour and rethink their language before posting because they often are in the heat of the moment and they might say something they regret,” said Sunita Saligram, Twitter’s global head of site policy for trust and safety, in an interview with Reuters.
Currently, Twitter’s policies disallow users from targeting individuals with slurs or with racist, sexist, or otherwise degrading language. The experiment is the first of its kind for Twitter and will run for at least a few weeks. It will be available globally, but only for English-language tweets.
Here’s to creating a more conducive online environment for all.