How Tech Giants Are Implementing Automated Tools in Content Moderation, and Their History

In the earlier blog we read about the types of content moderation used to moderate content. Here we will discuss how different technologies are used to take care of content and reduce human intervention.

As the world changes rapidly because of the internet, websites, blogs, social media platforms, online marketplaces and much more have come into existence at a huge scale, and COVID-19 has fuelled this transition to the next level. Billions of pieces of User Generated Content (UGC) are posted on online platforms every day. A social media giant alone can have billions of monthly active users and an enormous volume of content posted on its platform. Hence, reviewing or moderating all of this content through humans alone is not possible, and this is where the need for automation comes in.

The technology used for moderating content is generally a combination of different technologies, clubbed together to produce a definite outcome. Hence the entire ecosystem of technologies needs to be understood. The different technologies used by the industry are:

Digital Hash Technology

Hashing means giving an arbitrary amount of input to a system, applying an algorithm to it, and generating a fixed-size output. This technique is also a core building block of blockchain technology.
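The fixed-size property can be seen with a standard cryptographic hash such as SHA-256 (a minimal sketch using Python's standard library; content moderation systems use perceptual hashes instead, as described below):

```python
import hashlib

# Hashing maps input of any length to a fixed-size digest.
# SHA-256 always produces 32 bytes (64 hex characters),
# whether the input is one word or a whole file.
short_digest = hashlib.sha256(b"hello").hexdigest()
long_digest = hashlib.sha256(b"hello" * 100_000).hexdigest()

print(len(short_digest))  # 64
print(len(long_digest))   # 64

# A one-character change produces a completely different digest,
# which is why plain cryptographic hashes cannot catch slightly
# modified copies of an image or video.
print(hashlib.sha256(b"Hello").hexdigest() == short_digest)  # False
```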

In content moderation, what it does is convert the videos or images into a grayscale format, overlay the picture/video with a grid, and assign each square a numerical value.

The designated numerical values then convert the grid into a hash, or digital signature. This number works as an identity for the image/video in the matching process.
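The grid-and-numerical-value idea can be sketched as an "average hash", one simple perceptual-hash scheme (not Microsoft's actual PhotoDNA algorithm, which is proprietary; the grid values below are illustrative):

```python
def average_hash(grayscale_grid):
    """Turn a grid of grayscale values (0-255) into a perceptual hash.

    Each cell scores 1 if it is brighter than the grid's average,
    else 0; the resulting bit string is the image's digital signature.
    """
    pixels = [p for row in grayscale_grid for p in row]
    avg = sum(pixels) / len(pixels)
    bits = "".join("1" if p > avg else "0" for p in pixels)
    return hex(int(bits, 2))

# A tiny 4x4 "image" already downscaled to grayscale.
grid = [
    [200, 190,  30,  20],
    [210, 180,  25,  15],
    [ 40,  35, 220, 230],
    [ 30,  25, 240, 250],
]
print(average_hash(grid))  # 0xcc33
```

Because each bit only encodes "brighter or darker than average", small edits like re-compression or slight colour shifts leave most bits unchanged, so near-copies hash to nearly identical signatures.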

Digital hash technology is widely adopted for detecting CSAM (Child Sexual Abuse Material) and copyright-infringing material. Since child sexual exploitation material is the worst abusive content imaginable on the internet, top internet companies like Google, Facebook and Twitter, along with law enforcement organisations, NGOs and others, use this technology to remove such content from their platforms. The best-known implementation is PhotoDNA, one of the most powerful such tools, developed by Microsoft in 2009 with Hany Farid, a computer science professor, to combat child sexual abuse imagery.

Copyright infringement is another of the biggest problems in internet media, especially for Google and YouTube. The same content, with certain modifications in size, colour, manipulation or watermarking, gets posted from different accounts.

Recently, a few YouTubers have been warned or even received copyright strikes on their content. One of them is one of India's biggest YouTube stars, CarryMinati, whose own video "YALGAAR" received a copyright strike; Maithili Thakur is among the YouTubers who were warned over covering a Hindi track.

Now, what does YouTube do with this kind of hashing?

YouTube applies the same hashing idea in Content ID, its system for detecting copyright violations. Content ID enables YouTube users to create digital hashes of their video content to protect their material. Once you upload a video, the content gets screened against the database to check whether the same content has already been published by someone else. If the video is authentic and new, it is added to the database so that this particular video is protected in the future.
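That screen-or-register flow can be sketched roughly as follows (a toy model, not YouTube's actual Content ID implementation; near-duplicates are caught by allowing a few differing bits between hashes):

```python
def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return bin(h1 ^ h2).count("1")

def screen_upload(upload_hash, known_hashes, max_distance=3):
    """Check a new upload's perceptual hash against the database.

    Returns the ID of a matching (or near-matching) known work,
    or None, in which case the upload is registered as new content.
    """
    for content_id, known in known_hashes.items():
        if hamming(upload_hash, known) <= max_distance:
            return content_id
    # New, authentic content: register it so future copies are caught.
    known_hashes[f"id_{len(known_hashes)}"] = upload_hash
    return None

database = {"id_0": 0b1100110000110011}

# A re-encoded copy: one bit flipped, still within tolerance.
print(screen_upload(0b1100110000110111, database))  # id_0

# Genuinely new content gets registered instead of matched.
print(screen_upload(0b0000111111110000, database))  # None
```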

This is how the tool improves day by day, and it is one of the areas where machine learning has been deployed to study uploaded content against the previous content in the database.

Managing video and audio files is easy compared to text-based content, which is one of the highest concerns because of hate speech and mockery on social media platforms. Changing the length or encoding format of a video or audio file and uploading it online can be recognised fairly easily by a machine, whereas understanding and comprehending human language is a really big task.

Image Recognition

Digital hash technology uses image recognition techniques to understand an image based on data previously fed into the system; this is what we call machine learning. It is used in both pre-moderation and post-moderation.

Let me give an example.

If a person has a gun in their hand, the tool will detect the gun; but if there is a caption, it is very difficult for machines to understand the real meaning of the context, so the content gets flagged and then routed for human moderation.
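That flag-and-route flow can be sketched with a hypothetical rule (the labels, caption check and confidence threshold are all illustrative assumptions, not any platform's real policy):

```python
def route(detections, caption, confidence):
    """Decide what to do with an upload.

    detections: labels assumed to come from an image-recognition model.
    caption:    text posted alongside the image.
    confidence: the model's confidence in its detections (0.0-1.0).
    """
    if "gun" in detections and caption:
        # A caption may flip the context (news report, movie still,
        # satire), which the model cannot reliably judge on its own.
        return "human_review"
    if "gun" in detections and confidence > 0.95:
        # Unambiguous machine detection: act automatically.
        return "auto_remove"
    return "approve"

print(route(["gun", "person"], "Breaking: police training drill", 0.99))  # human_review
print(route(["gun"], "", 0.99))            # auto_remove
print(route(["dog"], "good boy", 0.80))    # approve
```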

Thus, with the help of image recognition we can really tell what is actually in the content, and this is done by repeatedly feeding various forms of pictures or videos of the same content to let the machine learn what it actually is.

One very common example we come across on Facebook: a box-like figure automatically pops up across your face in a group photo, asking whether you want to tag yourself.

Now, have you ever thought about how Facebook knows that it's actually you?

Because many times you have uploaded a picture and tagged yourself as, say, Tom or Michael. When you do this multiple times, you are actually letting the machine know that the person is really you; you automatically help the system learn. A similar thing happens in Google Photos: if you want to see all the pictures of a particular friend, you just tap that person's face and all the images containing your friend are shown from your gallery. This is what is called facial recognition.
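One common way such matching works is to turn each face into a numeric "embedding" vector and compare a new face to the vectors of people you have already tagged. A minimal sketch, with made-up three-dimensional embeddings (real systems use vectors with hundreds of dimensions produced by a neural network):

```python
import math

def cosine_similarity(a, b):
    """How similar two embedding vectors are (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: each tag you confirm stores a vector
# for that person.
known_faces = {
    "Tom":     [0.9, 0.1, 0.3],
    "Michael": [0.2, 0.8, 0.5],
}

def identify(new_face, threshold=0.9):
    """Match a new face embedding to the closest known person."""
    name, score = max(
        ((n, cosine_similarity(new_face, v)) for n, v in known_faces.items()),
        key=lambda pair: pair[1],
    )
    return name if score >= threshold else None

print(identify([0.88, 0.12, 0.28]))  # Tom
```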

eGLYPH is a game-changing tool like PhotoDNA that also works on hash technology. It was developed in 2016 by Hany Farid, the Dartmouth computer science professor behind PhotoDNA, to remove extremist content from platforms. It works on both video and images.

This was a success, but it eventually raised a concern: the definition of "extremist" is not clear and varies from platform to platform. Hence the tool must be trained specifically for each platform, and the database needs to be platform-specific.

In June 2017, Facebook, Microsoft, Twitter and YouTube formed the Global Internet Forum to Counter Terrorism (GIFCT) and created a shared hash database, with over 40,000 images, to screen extremist content out of their platforms using hash technology.

Metadata Filtering

Metadata means data about data. With respect to content moderation, it is the descriptive information attached to a digital file, known as file metadata. For example, an audio file carries descriptive fields such as the name of the audio track, the track length and the singer's name.

Based on the metadata, the tool filters the content and decides which items to keep and which to remove from the platform. The biggest challenge with this approach is that metadata can be manipulated very easily, making it difficult to get a clear picture. It is generally used for copyright infringement, but the reliability of the tool is low.
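A minimal sketch of metadata filtering, with a hypothetical list of protected works (the track names are only examples). The second call shows why the signal is weak: trivially edited fields slip through, which is why metadata checks are usually combined with hashing:

```python
# Hypothetical database of protected works, keyed by metadata fields.
protected_tracks = [
    {"title": "yalgaar", "artist": "carryminati"},
]

def metadata_flag(file_metadata):
    """Flag an upload if its descriptive fields match a protected work."""
    title = file_metadata.get("title", "").strip().lower()
    artist = file_metadata.get("artist", "").strip().lower()
    for track in protected_tracks:
        if title == track["title"] and artist == track["artist"]:
            return "flag_for_copyright_review"
    return "allow"

# An exact metadata match is caught...
print(metadata_flag({"title": "YALGAAR", "artist": "CarryMinati",
                     "length": "4:31"}))   # flag_for_copyright_review

# ...but a lightly edited title evades the filter entirely.
print(metadata_flag({"title": "Yalgaar (remix)", "artist": "someone"}))  # allow
```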

Natural Language Processing

Natural Language Processing (NLP) is a set of techniques used by computers to understand natural language in its various forms, such as speech/voice or text. The fundamental idea is how a computer can recognise your natural language and give the desired output. Natural language processing works together with image processing to understand text written on an image or, more complex still, the speech in a video.

In content moderation, NLP is used to parse the text that gets posted on a platform. To parse means to break a sentence down and analyse the content (strings of symbols, strings of words) so that the computer can evaluate its meaning. There is currently a wide variety of tools available on the market for spam detection, content filtering and translation.

What NLP text classifiers do is classify text and assign predefined tags or categories based on the content. Suppose you have a database containing slang and hate-speech words. If the post "He is a Bastard" appears on the platform, the classifier will analyse the text and check whether any of it matches the content of the database. If it matches, it assigns a particular predefined tag, which might be "Slang". In this way a sentence is classified and analysed. On social media platforms this is used to classify content as extremist or non-extremist and to understand the sentiment of the user; this is commonly done on Twitter and is called sentiment analysis.
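The classify-against-a-word-database idea can be sketched as a toy classifier (production moderation classifiers are trained machine-learning models, not keyword lists; the word database below is a placeholder):

```python
import re

# Hypothetical database mapping known slang words to a tag.
word_database = {
    "bastard": "Slang",
    "idiot": "Slang",
}

def classify(post):
    """Parse a post into words and assign predefined tags."""
    tokens = re.findall(r"[a-z']+", post.lower())  # break the sentence down
    tags = {word_database[t] for t in tokens if t in word_database}
    return sorted(tags) if tags else ["Clean"]

print(classify("He is a Bastard"))   # ['Slang']
print(classify("He is a nice guy"))  # ['Clean']
```

Keyword matching like this is cheap but brittle: it misses misspellings and, as the next paragraphs note, cannot tell satire from a literal insult.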

For social media platforms, NLP is mostly used to filter out hate speech and mocking content. For this, the tool is first trained on datasets that include a wide variety of hate speech, to let the machine understand what type of content constitutes hate speech.

Hate speech can relate to religion, caste, creed, gender, sexuality and so on. Even considering the hate speech domain alone, the data you would need to feed into the system is so wide that the tool cannot emphasise particular types of hate speech. It is also sometimes very difficult for a machine to understand whether a particular word is used in a satirical sense or a literal/humorous one. Comprehending the overall context is a real challenge for automated tools, and this is where the human touch is needed.
