Deepfake technology is still relatively new, yet its real-world use has already raised alarms about the wellbeing of individuals and the major role it could play in spreading misinformation.
In the wake of this technology, social media companies hurriedly tried to formulate rules governing its use on their platforms. Worse still, deepfakes could amplify the ongoing global misinformation crisis popularly known as "fake news."
Facebook has its own rules, while Twitter took a more careful approach, involving its users in an open survey on the topic.
Now, starting 5th March, Twitter will take several steps to handle deepfake material, including labelling tweets "containing synthetic and manipulated media" and removing content entirely if it is likely to harm someone's wellbeing or privacy.
We know that some Tweets include manipulated photos or videos that can cause people harm. Today we’re introducing a new rule and a label that will address this and give people more context around these Tweets pic.twitter.com/P1ThCsirZ4
— Twitter Safety (@TwitterSafety) February 4, 2020
Before content can be flagged or removed, identifying it will be the crucial first step.
Twitter will identify manipulated content based on several factors:
- Whether the content has been substantially edited in a manner that fundamentally alters its composition, sequence, timing, or framing;
- Any visual or auditory information (such as new video frames, overdubbed audio, or modified subtitles) that has been added or removed; and
- Whether media depicting a real person has been fabricated or simulated.
Context is another thing the company will be looking at, according to the new rules.
The company will consider the whole context of a tweet: the text, the metadata of attached media, the poster's profile information, and any website linked in the poster's profile or tweet.
The context criterion is quite ambiguous, and the company admits the whole process will be a challenge. After all, the platform has failed in the past when trying to regulate tweets based on their context.
"This will be a challenge and we will make errors along the way — we appreciate the patience," said Yoel Roth, Twitter's Head of Site Integrity, and Ashita Achuthan, Group Product Manager, in a blog post.
“However, we’re committed to doing this right,” they added.