Twitter will soon test a new feature designed to discourage people from using “offensive or hurtful language” in their replies, the company said Tuesday.
In a tweet, the company explained its rationale for the two-week test:
When things get heated, you may say things you don't mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.
— Twitter Support (@TwitterSupport) May 5, 2020
When users hit “send” on their reply, the Twitter app for iOS will display a pop-up message telling them their tweet may contain offensive language and asking if they would like to revise it.
“We’re trying to encourage people to rethink their behavior and rethink their language before posting because they often are in the heat of the moment and they might say something they regret,” Sunita Saligram, Twitter’s global head of site policy for trust and safety, said in an interview with Reuters.
Twitter polices its platform
Twitter has been criticized for not taking action to clean up hateful and abusive content on its platform. The company’s policies do not allow users to target individuals with slurs, racist or sexist tropes, or degrading content.
In 2017, Twitter began hiding profiles that posted what the company deemed “sensitive” content. Shortly before that, Twitter started placing temporary restrictions on “abusive” users, preventing their tweets from appearing to anyone but their followers.
Saligram said the new test will target rule-breakers who are not repeat offenders. It will run globally but only for English-language tweets.