- August 29, 2018
- Posted by: admin
- Category: Uncategorized
Be it online or in the real world, that scary Black Mirror episode is fast becoming a reality. Reputation rankings, social credit scores, embedded microchips: Big Brother is always watching.
When fiction turns to reality. Black Mirror, the popular Netflix show, started out as an outlet that cast a light on how our approach to technology and social media can open avenues for deeper surveillance, tracking, social ranking and misuse by the people in control, a potentially scarier society than the one we live in today. Unfortunately, all of that is coming true, slowly and steadily. The 2016 episode, called Nosedive, follows Lacie (Bryce Dallas Howard) in a world where people can rate each other on a scale of one to five stars, based on every interaction they have. Needless to say, there is a craving for the highest possible ratings. Your socioeconomic standing depends on how many stars you currently have, and if you fall on the wrong side of the ratings, life becomes a pain.
That was TV. That was a Netflix show. Time to get back to studies, children? Not so fast, because the direction we are headed in isn’t too different from the world that Black Mirror writer Charlie Brooker depicted in the show.
Facebook, as it turns out, is now assigning a reputation score to its users. The motive, or at least what we are being led to believe, is to identify and weed out malicious user accounts. A Facebook user’s reputation score isn’t designed to be the final verdict on a person’s credibility. Rather, it is one more metric among many that Facebook uses to build a virtual character sketch of a user and identify the risky ones. One of the reasons for this sort of metric is growing intolerance in general, which has led directly to misuse of the option to report content as incorrect, dangerous, fake or malicious. As it turns out, it is “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” said Facebook’s Tessa Lyons, in an interview with The Washington Post.
The social network is also monitoring which users tend to report as problematic content that actually comes from publishers generally deemed trustworthy by other users. False reporting, when done as a coordinated effort, can be used to game the systems tech companies have put in place to monitor and weed out genuinely unacceptable content. Consider an example: if you report an article you saw on Facebook as false or incorrect, and Facebook’s fact-checking process confirms that, it counts in your favor. In the future, your feedback about content on the social network will carry more weight than that of someone known for reporting content that doesn’t actually violate the platform’s content-sharing policies.
Read More Here
Article Credit: News18
The post Big Data And us: Are we All Being Given a Reputation Score? appeared first on erpinnews.