Read part one of the report here.
Facebook may not be keen on fixing its algorithm despite advocates’ calls for lasting reforms. But in keeping its dangerous algorithm, the company has effectively chosen profit over users’ safety, internal company documents and a whistleblower’s testimony revealed.
The algorithm, implemented in 2018, supposedly aimed to encourage interactions among friends and family. It meant that posts from people close to a user would be ranked higher in their news feed. But the new algorithm has instead produced feeds where extremely partisan and divisive content constantly resurfaces, according to a Washington Post analysis of the leaked “Facebook papers” and the testimony of former company employee-turned-whistleblower Frances Haugen.
Facebook had long known about the effects of its dangerous algorithm, according to leaked internal research reported by the Wall Street Journal. The disclosure of Facebook’s internal documents and communications has precipitated intense scrutiny and a Congressional inquiry into the social media company in the US. Philippine lawmakers, however, have yet to act on these developments.
Haugen revealed to US lawmakers that Facebook’s news feed algorithm, called “meaningful social interactions,” encourages polarization and tends to group users into “echo chambers.” While all social media algorithms tend to create such echo chambers, the phenomenon is more pronounced on Facebook, a 2021 study by researchers from several Italian universities found.
On Facebook, this meant that users with similar biases could interact more with one another, allowing disinformation and malicious posts to repeatedly reach an ever larger audience as the echo chamber grows. And because the algorithm also collects data on users’ behavior, such as how they scroll through their feed, advertisers can use this data to target content at a specific audience.
“Big tech companies intentionally manipulate the algorithm to create a certain outcome, which in this case is to create a certain narrative favorable to a particular individual or group,” the internet rights advocacy group Computer Professionals’ Union (CPU) told the Collegian in an email.
CPU added that Facebook, being a business, has no incentive to reform its dangerous algorithm. Any significant change in the way content is presented to a user can have serious implications for advertisers and publishers on Facebook.
Philippine users, per a 2020 report by several advertising firms, spend an average of four hours and 15 minutes a day on social media, the highest figure globally and 22 minutes more than in 2019. Facebook noted a similar trend. As its users and the time they spent on its platforms grew, Facebook’s ad revenue as of September 2021 rose to USD 28.3 billion, up by almost a third from the same period the previous year.
“Facebook, like many social media sites, generates their income out of our data and our engagement with their platform. The more engagement, the more money Facebook makes,” CPU said.
The same logic applies to politically charged content, said Philip Jamilla of the rights group Karapatan.
“It (Facebook) incentivizes disinformation, incentivizes divisive content, the kind of content that really elicits strong, negative, and angry emotions,” he added. “[Because of the algorithm], content that is full of lies, violent, and hateful gets boosted.”
In June 2020, Karapatan asked Facebook to probe cases of online red-tagging in the country. In its letter, the group noted various forms of vilification and red-tagging, including those coming from official government accounts such as the National Task Force to End Local Communist Armed Conflict and the pages of local police and military units.
Per the leaked documents, Facebook maintains a different rulebook for high-profile users such as government channels, government officials, and celebrities. In effect, these users could post content that violates Facebook’s own community standards but may not be taken down because of the company’s concern about public backlash.
While Facebook has not confirmed whether Philippine state forces’ accounts are exempt from its moderation rules, the company did respond to Karapatan’s letter. Jamilla recalled that Facebook had taken down malicious accounts that sent threats to various users. But the company, in a statement, said it had yet to see proof of “coordinated or malicious activity” in the creation of these fake accounts.
For Jamilla, the steps Facebook has taken to improve content moderation and protect users’ data, such as creating a semi-independent oversight board and issuing regulatory policy recommendations, are mere band-aid solutions.
The oversight board, for instance, which consists of former politicians, journalists, and rights defenders, reviews only posts that have already been taken down. Meanwhile, content that users report but that is allowed to remain is reviewed by content moderators, who are typically contractors for Facebook.
“It’s not enough that they just took them down. Facebook needs to assess how its algorithm works, how its business model affects not just one country, but the entire world,” Jamilla said. “[Their actions] somehow whitewash the structural problem of Facebook’s algorithm.” ●
This is the second of a three-part series on how Facebook amplifies malicious content that potentially heightens risk to Filipino rights defenders’ safety and civil liberties. Read part three of the report here.