Long under fire for hateful and even abusive content circulating on its platform, Twitter has stepped up its filtering mechanisms in an effort to nip the problem in the bud. To catch offensive tweets before they are even sent, Twitter will start testing an algorithm that prompts users before they send out potentially “offensive or hurtful language”.
And this is why I don’t use twitter anymore. A lot of negativity down here. Do people like just come here when they are upset or unhappy?
— JianHao F² Tan (@thejianhaotan) May 2, 2020
As seen in the recent saga between Singaporean YouTuber JianHao Tan and his haters, the 27-year-old content creator took to Twitter to express his frustration with the app itself, citing “a lot of negativity down here”. And JianHao is far from alone in this sentiment.
In response to widespread unhappiness over such unpleasantness, Twitter will now try showing users a prompt when it detects words in their tweets that are similar to those in previously reported posts. Users will then be asked whether or not they would like to revise the tweet before sending it.
“We’re trying to encourage people to rethink their behaviour and rethink their language before posting because they often are in the heat of the moment and they might say something they regret,” said Sunita Saligram, Twitter’s global head of site policy for trust and safety, in an interview with Reuters.
Currently, Twitter’s policies disallow users from targeting individuals with slurs or racist, sexist, or degrading language. The experiment is the first of its kind for Twitter and will run for at least a few weeks. It will roll out globally, but only for English-language tweets.
Here’s to creating a more conducive online environment for all.
Twitter Offensive Tweet Prompt | Twitter