2016 was the year in which fake news put Facebook's credibility in check, but this phenomenon of spreading false information is not limited to Mark Zuckerberg's social network. Over the last couple of years we have seen how incendiary, extremist content designed to bias people, even when there is nothing true in it, ends up becoming more viral than any real event.
Much of the problem is what Tim Berners-Lee, the father of the web, warns about: only a handful of companies control what is seen and shared on the web, and they are the ones who largely control which ideas spread on the Internet. While they are not the ones creating fake news, it is on their platforms where it originates, and where they need to exercise better control to detect and eliminate this type of content instead of letting it be promoted by defective algorithms.
There are already studies pointing out that 'fake news' is more likely to be shared than real news, and part of the reason is that it provokes emotional reactions in people, such as disgust or surprise, which leads users themselves to spread the information. There is also the fact that these platforms are designed to keep users engaged for as long as possible: their algorithms have disruptive biases that promote incendiary and false content because it generates more interaction.
In response to all this, the companies behind the main platforms where fake news has spread and continues to spread (Facebook, Google, YouTube and Twitter) have taken various measures. This is a summary of what these giants have done so far to detect 'fake news'.
The long road Facebook has traveled, and how far it still has to go
One of the first measures Facebook took to try to become a credible source of information again was to modify its trending topics, sacrificing users' personal interests to combat fake news and giving priority to topics that supposedly reflect "real world events".
That was not the only change the algorithm underwent. In February 2017 it was modified again, this time with small changes intended to penalize false and sensationalist news. In April, Facebook began testing a tool to help users learn to identify fake news, and also started showing warning messages to those who tried to publish fake news, to discourage people from spreading lies.
Facebook and 20 other technology companies also invested 14 million dollars to finance the "News Integrity Initiative", a project to fund research and experts in the communication industry aimed at combating fake news.
They also created the Facebook Journalism Project to work with journalists and try to "promote a healthy news ecosystem where journalism can thrive and prosper", and they decided to start showing related articles and fact-checks before users open links.
They even published a list of 10 tips for detecting fake news. And this year they took a controversial measure: a two-question survey that lets users determine how reliable an outlet is in the eyes of the social network.
Facebook continues to insist that it will prioritize news from "reliable, informative and local" sources, but it still does not know how to determine which media meet those conditions. Letting users decide is more than questionable, because they may well judge an outlet by their affinity with it rather than by its objectivity and credibility.
Google and YouTube
Google launched the First Draft Coalition in June 2015 with the support of Google News Lab, an initiative that Facebook and Twitter joined in September 2016. Its objective is to maintain a platform where questionable news can be verified, with the participation of 30 international media outlets and organizations.
Google also modified its algorithm so that autocomplete would stop suggesting search terms that could lead users to fake articles. In 2016 the company revealed that it had detected more than 1,300 advertisers posing as media outlets: the so-called "tabloid cloakers", ads that look like news and seek to attract users' attention, but are false.
In April 2017, Google released a fact-check label in its search results, something that is only available in Google News (and therefore not in Spain) and that indicates the information has been verified by fact-checking organizations and news publishers.
Just as Google's algorithm has been modified for promoting fake news, so has YouTube's. In October 2017 they had to change the video platform's algorithm after sources like 4chan appeared as reliable ones when people searched for information about the tragic Las Vegas shooting.
Google has also removed monetization from "inappropriate and incendiary" content, seeking to discourage those who upload this type of video. The videos remain on the platform, however; simply fewer people make money from them.
YouTube's algorithm has been widely criticized for having disturbing and dangerous biases and for promoting incendiary content so that users spend more time on the site, especially content about conspiracy theories and false information. The problem persists: YouTube still hosts anti-vaccine and cancer-cure videos that can lead people to die, yet Google still does not remove them.
Instead it has taken softer measures, such as offering workshops to teach teenagers to identify fake news.
Disturbing content such as that on YouTube Kids led Google to assign more than 10,000 people to review the platform, and to claim that its algorithms have done the work of 180,000 people working 40 hours a week to remove extremist videos. The reality, though, is that the problem is more serious than the one they think they are solving.
Twitter
On Twitter, part of the problem is fake accounts, bots, and trolls. Between 2016 and 2017, 632,248 accounts were expelled, thanks mainly to the company's internal team and only to a small extent to user reports. But this is still not enough.
Twitter has only just begun to take measures against harassment and violence on the platform. It took far too long to act against neo-Nazi accounts, some of which were even verified, and only in February 2018 did it announce measures against the use of multiple profiles and other types of manipulation such as tweetdecking.
Although Twitter is still trying to deal with bots, spam, and fake amplification of tweets, links and hashtags, the platform has not taken any specific measures to prevent the spread of fake news beyond its collaboration with the First Draft Coalition initiated by Google.
These platforms have a long way to go, and the problem of fake news is one that has unfortunately been generated by people and is amplified by the way search engines and social networks work. 'Fake news' continues to fuel social tensions; we are in a constant debate about the intervention of external agents in various democratic processes beyond the US presidential elections, and many believe it is no longer enough for these companies to self-regulate.
We must also remember that no company is taking action purely out of the goodness of its heart. Governments around the world have been giving them ultimatums, and after the Russian interference in the United States elections that Donald Trump won, that country's government is still conducting a long investigation to determine the impact of thousands of fake Russian accounts that spent thousands of dollars to influence voters on social networks.