Twitter has updated its hateful conduct policy to combat the dehumanization of people on the platform based on their age, disability, or disease. The announcement comes amid the Covid-19 outbreak, which has sparked racist comments on social media platforms. Last year, Twitter updated the same policy to cover language that dehumanizes people based on their religion.
The company says old tweets that would violate the new policy will not lead to account suspension, because its rules did not disallow such content at the time. Instead, if those tweets are reported to Twitter, the poster will be required to delete them. Moving forward, however, an account that posts content violating the new policy may be banned once reported.
As part of its effort to keep the platform safe for everyone, Twitter formed the Trust and Safety Council, a group that acts as a watchdog on these issues. It deals with “dehumanizing speech around more complex categories like race, ethnicity and national origin.”
But that doesn’t mean the platform has a perfect record on moderation, let alone on consistently enforcing its own policies.
Unlike Facebook, which hires third-party moderators to proactively review content posted on its platform, Twitter moderates reactively. In essence, the company relies largely on users to flag inappropriate tweets for review. With the sheer volume of tweets flooding the platform every minute, this is a difficult job to sustain day after day.
Twitter says it will continue to update its policies to cover more categories. To make the platform safer, the company is providing more in-depth training for its reviewers and has extended the training period to better equip them to handle sensitive situations.