What Is Facebook's Implied Credit Rating And Where Should It Fall On The Credit Spectrum

The Washington Post recently reported that Facebook assigns “trustworthiness” ratings to each of its users. Among other things, the firm admitted to using the scores to decide whether content reported as false by users should be sent to fact checkers for examination or whether the user’s concerns should simply be disregarded. In contrast to its public pledges of transparency, the company keeps these user rankings under wraps, acknowledging their existence but refusing to comment on how they are determined or used, and refusing to allow any external evaluation of whether they harbor racial, sexual, or other demographic and cultural bias. What happens when Facebook’s new ranking algorithm becomes more widely used, and perhaps even exported as a service to other businesses and governments?

In breadth and emphasis, Facebook’s latest initiatives are strikingly comparable to China’s social credit system. Facebook, like China, wants to take the wide variety of behavioral and other signals it has long used to profile users for advertising and now openly use those profiles to rank and score its users, granting them different privileges and rights based on those scores.

In its public statements, Facebook mentioned using its scores only as part of its efforts to fight “false news.” The company admitted to using the scores to decide whether to disregard actions taken by a user to flag content they believe is fraudulent. Users with a high trust rating will have the posts and news items they flag reviewed by third-party fact checkers, while those with low trust ratings will be disregarded. Given the prominence of “opinion checking” and circular referencing among the major fact-checking websites, there is a high risk of reinforcement bias.
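To make the described mechanism concrete, here is a minimal sketch of what such trust-gated routing could look like in principle. Facebook has not disclosed its actual signals, thresholds, or code, so every name and value below is a hypothetical illustration, not the company’s implementation.

```python
# Hypothetical sketch of trust-gated flag routing as described above.
# All names, thresholds, and score ranges are illustrative assumptions.

from dataclasses import dataclass

TRUST_THRESHOLD = 0.6  # assumed cutoff; the real value is unknown


@dataclass
class FlagReport:
    user_id: str
    post_id: str
    user_trust_score: float  # assumed to be normalized to [0, 1]


def route_flag(report: FlagReport) -> str:
    """Decide what happens to a user's 'false news' flag.

    High-trust users get their reports forwarded to third-party fact
    checkers; low-trust users' reports are silently dropped, which is
    the behavior the article attributes to the system.
    """
    if report.user_trust_score >= TRUST_THRESHOLD:
        return "send_to_fact_checkers"
    return "disregard"


# Example: two users flag the same post, but only one report is acted upon.
print(route_flag(FlagReport("alice", "post_123", 0.82)))  # send_to_fact_checkers
print(route_flag(FlagReport("bob", "post_123", 0.31)))    # disregard
```

The troubling part is not the threshold itself but its opacity: users on the wrong side of it never learn that their reports are being discarded.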

However, it is the applications that go beyond basic fact checking that cause the most worry about Facebook’s new user ratings. Governments and businesses all around the globe have long sought methods to evaluate their people. Some nations, such as China, have invested heavily in building a cutting-edge technical dystopia, while others rely on more fundamental indicators such as contacts with the criminal justice system, credit ratings, and other basic data.

It is not difficult to envision governments across the globe taking notice of Facebook’s new ratings. Why create your own huge surveillance and rating system when Facebook can do it all for you and has access to data that your government can only dream of acquiring?

Facebook has already been experimenting with harnessing its enormous library of facial data on its two billion users and conducting real-time facial recognition through surveillance cameras to identify people as they move about the offline world. It even went so far as to imagine retail businesses receiving a “trust” score that would indicate which people strolling through their stores could be trusted with high-value goods and which ones security should monitor closely. When asked whether it would promise on the record never to provide such widespread facial recognition as a commercial service to other companies and countries for surveillance, espionage, and military purposes, the company refused.

The company also did not deny that foreign governments have used court orders to obtain lists of their citizens whom the company’s algorithms have determined to be homosexual or interested in anti-government topics, including in repressive regimes where such labels could result in the death penalty. In the company’s view, marketers’ need to precisely target users trumps those users’ right to be protected from categorizations that might result in their arrest, torture, or death.

Unlike China’s country-specific system, which applies only to its own people, Facebook has the unique opportunity to apply its ratings to more than a quarter of the world’s population. Governments will almost certainly come knocking with court orders to obtain such scores for their citizens, supplementing all of the personality and behavioral indicators they are also likely to seek.

While businesses have long computed “influencer” and similar scores for social media users, such scores have mainly been used by marketers to find accounts to pitch their goods or ad campaigns to. A low score would, at worst, mean a lack of commercial sponsorship opportunities. Facebook’s ratings, on the other hand, openly punish users by denying them rights and advantages enjoyed by others.

This raises the crucial question of what sorts of hidden biases may lurk in Facebook’s new ratings. Silicon Valley’s workforce is narrow in both demographics and experience, and it has historically been oblivious to its inability to see beyond its own prejudices.

For years, the company refused to disclose aggregate demographic data about its content moderators, including the number of moderators who spoke each language. Instead, it kept telling the public and politicians that it had enough reviewers for each language and culture and that they should simply trust it. When the company’s inability to properly filter material in Burma contributed to fanning violence there, the expected answer was that it believed it had plenty of reviewers and that no one could have anticipated that its handful of reviewers wasn’t enough. Similarly, after repeatedly claiming that its Trending Topics module was completely unbiased and fully represented the geographic diversity of its user base, the company was finally forced under immense pressure to release its media source list, and Africa was almost entirely absent. It wasn’t that Facebook purposefully skewed its algorithm against Africa; rather, its Trending Topics employees and the managers supervising them were unable to see beyond their own implicit prejudices to recognize the problem.

As with any sort of user “trust” rating, Facebook’s new scoring system is susceptible to a plethora of potential biases, particularly given the unknowns surrounding the full set of signals used to compute it.

The company said that it could not comment on the signals used to calculate its ratings, since doing so would enable users to game the system. However, the fact that Facebook’s system could be so easily gamed raises significant concerns about its resilience. A system that genuinely and holistically evaluated a person’s activities across the full spectrum of their interactions with Facebook’s services would be far more difficult to rig, since altering any small set of behaviors would have little effect on the totality and momentum of their score.
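To illustrate the point about momentum, here is a minimal sketch, assuming a simple exponentially weighted score over a handful of made-up signals. The signal names, weights, and smoothing factor are purely hypothetical and are not Facebook’s actual method; the sketch only shows why a long-run aggregate is hard to shift with a brief burst of manipulated behavior.

```python
# Minimal sketch of a "holistic, momentum-based" score: an exponentially
# weighted average over many signals and many days. Signal names, weights,
# and the smoothing factor are illustrative assumptions.

SIGNAL_WEIGHTS = {
    "accurate_flags": 0.5,   # fraction of past flags confirmed by fact checkers
    "account_age": 0.2,      # normalized account age
    "report_volume": 0.3,    # inverse of how indiscriminately the user flags
}


def daily_score(signals: dict) -> float:
    """Combine one day's normalized signals (each in [0, 1]) into a score."""
    return sum(SIGNAL_WEIGHTS[name] * value for name, value in signals.items())


def update_trust(previous_score: float, todays_signals: dict, alpha: float = 0.05) -> float:
    """Blend today's behavior into the long-run score.

    A small alpha gives the score 'momentum': a single day of manipulated
    behavior shifts it only slightly.
    """
    return (1 - alpha) * previous_score + alpha * daily_score(todays_signals)


# A user with a long-run score of 0.8 tries to game the system for one day
# with "perfect" behavior on every signal:
score = 0.8
score = update_trust(score, {"accurate_flags": 1.0, "account_age": 1.0, "report_volume": 1.0})
print(round(score, 3))  # 0.81 -- one day of perfect behavior barely moves it
```

If the real system were built this way, the secrecy argument would be weak: knowing the signals would not let a user meaningfully move a score accumulated over years of behavior.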

Furthermore, even if Facebook were genuinely worried that its algorithms were much weaker than it wanted the public to believe and could be easily gamed, it could at the very least enlist an independent panel of experts to review its algorithmic inputs for bias. Such a panel would in no way help bad actors learn how to rig the system, and it would give the public and politicians at least some basic reassurance that the algorithms were not so prejudiced as to directly punish particular races, genders, or cultures.

However, when asked whether it would at least agree to such an independent examination of its rating system, the company, predictably, refused.

Conclusion

In the end, we witness in Facebook a business transitioning from a passive data archive that hoovered up and stored all it could about us into an active profiling company that mines all of that data to build profiles of us that can be used for far more than advertising. From offline bulk facial-recognition surveillance to “trust” ratings, we see a company whose ambitions rival, and perhaps exceed, even China’s mass surveillance and societal-scale profiling aspirations.