Facebook planned to remove fake accounts in India – until it realized a BJP politician was involved
Facebook allowed a network of fake accounts to artificially inflate the popularity of an MP from India’s ruling Bharatiya Janata party (BJP), for months after being alerted to the problem.
The company was preparing to remove the fake accounts but paused when it found evidence that the politician was probably directly involved in the network, internal documents seen by the Guardian show.
The company’s decision not to take timely action against the network, which it had already determined violated its policies, is just the latest example of Facebook holding the powerful to lower standards than it does regular users.
“It’s not fair to have one justice system for the rich and important and one for everyone else, but that’s essentially the route that Facebook has carved out,” said Sophie Zhang, a former data scientist for Facebook who uncovered the inauthentic network. Zhang has come forward to expose the company’s failure to address how its platform is being used to manipulate political discourse around the world.
Facebook’s failure to act against the MP will also raise questions about the company’s relationship with the Hindu nationalist party. Facebook has repeatedly treated rule violations by BJP leaders with undue leniency, the Wall Street Journal reported in August 2020.
Since Narendra Modi and the BJP harnessed the power of Facebook and took power in India’s 2014 general election, deceptive social media tactics have become commonplace in Indian politics, according to local experts.
“Politicians in India are ahead of the curve when it comes to adopting these manipulative techniques, and so this leveraging of social media for political means is only to be expected,” said Nikhil Pahwa, an Indian digital rights activist and founder of MediaNama. “This is an arms race between the social media platforms and those who are generating inauthentic behavior.”
All of the major political parties in India benefit from deceptive techniques to acquire fake likes, comments, shares or fans, Zhang found. Ahead of India’s 2019 general election, she worked on a mass takedown of low-quality scripted fake engagement on political Pages across all parties, resulting in the removal of 2.2m reactions, 1.7m shares and 330,000 comments from inauthentic or compromised accounts.
In December 2019, Zhang detected four sophisticated networks of suspicious accounts that were producing fake engagement – ie likes, shares, comments and reactions – on the Pages of major Indian politicians. Two of the networks were dedicated to supporting members of the BJP, including the MP; the other two supported members of the Indian National Congress, the leading opposition party.
An investigator from Facebook’s threat intelligence team determined that the networks were made up of manually controlled inauthentic accounts that were being used to create fake engagement. They did not rise to the level of “coordinated inauthentic behavior” – the term Facebook applies to the most serious deceptive tactics on its platform, such as the Russian influence operation that interfered in the 2016 US election – but they still violated the platform’s rules.
The investigator recommended that the accounts be sent through an identity “checkpoint” – a process by which suspicious accounts are locked unless and until the account owner can provide proof of their identity. Checkpoints are a common enforcement mechanism for Facebook, which allows users to have just one account, under the user’s “real” name.
On 19 December, a Facebook staffer checkpointed more than 500 accounts connected to three of the networks. On 20 December, the same staffer was preparing to checkpoint the approximately 50 accounts involved in the fourth network when he paused.
“Just want to confirm we’re comfortable acting on those actors,” he wrote in Facebook’s task management system. One of the accounts had been tagged by Facebook’s “Xcheck” system as a “Government Partner” and “High Priority – Indian”, he noted. The system is used to flag prominent accounts and exempt them from certain automated enforcement actions.
It was the MP’s own account, Zhang realized, and its inclusion in the network constituted strong evidence that either the MP or someone with access to his Facebook account was involved in coordinating the 50 fake accounts. (The Guardian is aware of the MP’s identity but is choosing not to reveal it since the evidence of his involvement in the network is not definitive. The MP’s office did not respond to requests for comment.)
Political ambitions may explain why an MP would attempt to acquire fake likes on his Facebook posts.
“The worth of a politician is now determined by his social media followers, with Modi leading among most world leaders,” said Srinivas Kodali, a researcher with the Free Software Movement India. “Popularity on social media doesn’t directly help acquire real power, but it has become a means to enter politics and rise up in the ranks.”
Task management documents show that Zhang repeatedly sought approval to move ahead with the checkpoints. “For completeness and [to] avoid accusations of biased enforcement, could we also come to an assessment on the cluster acting on [the MP]?” she wrote on 3 February. No one responded.
On 7 August, she noted the still unresolved situation, writing: “Given the close ties to a sitting member of the Lok Sabha, we sought policy approval for a takedown, which we did not receive; and the situation was not deemed to be a focus for prioritization.” Again there was no response.
And on her final day at Facebook in September 2020, she updated the task one last time to flag that there was a “still-existing cluster of accounts associated with” the MP.
“I asked about it repeatedly, and I don’t think I ever got a response,” Zhang said. “It seemed quite concerning to myself because the fact that I had caught a politician or someone associated with him red-handed was more of a reason to act, not less.”
Facebook provided the Guardian with several contradictory accounts of its handling of the MP’s network. The company initially denied that action on the network had been blocked and said the “vast majority” of accounts had been checkpointed and permanently removed in December 2019 and early 2020.
After the Guardian pointed to documents showing that the checkpoints had not been carried out, Facebook said that “a portion” of the cluster had been disabled in May 2020, and that it was continuing to monitor the rest of the network’s accounts. It later said that a “specialist team” had reviewed the accounts and that a small minority of them had not met the threshold for removal but were nevertheless now inactive.
The company did not respond to questions about why the accounts had not been checkpointed in December, when the investigator first recommended the enforcement. It also did not respond to questions about which specialist team was involved in the May review of the accounts, nor why this review and enforcement was not recorded in the task management system. It claimed that the policy team was not responsible for blocking any action.
A Facebook spokesperson, Liz Bourgeois, said: “We fundamentally disagree with Ms Zhang’s characterization of our priorities and efforts to root out abuse on our platform. We aggressively go after abuse around the world and have specialized teams focused on this work. Over the years, our teams investigated and publicly shared our findings about three CIB takedowns in India. We’ve also continuously detected and taken action against spam and fake engagement in the region, in line with our policies.”
While Zhang was trying and failing to convince Facebook to take action on the MP’s network, Facebook’s staff took repeated action against one of the two Indian National Congress networks that it had tried to remove in December. Though the checkpoints had knocked out most of the fake accounts, Facebook saw immediate efforts to reconstitute with new accounts and, in the weeks ahead of the 2020 state elections in Delhi, the network that had previously boosted a Congress politician in Punjab began supporting AAP, the anti-corruption party in Delhi.
In the comments of posts by BJP politicians in Delhi, the fake accounts represented themselves as supporters of Modi who were nevertheless choosing to vote for AAP in the state elections. The intervention may have been a result of political actors attempting to support the party in Delhi with the best chance to defeat the BJP, since Congress enjoys little support in local Delhi politics. Facebook undertook multiple rounds of checkpointing to knock out the network.
The MP’s case was not the first time that Facebook’s lower standards toward politicians violating its rules against inauthentic behavior prompted concern among some staff. “If people start realizing that we make exceptions for Page admins of presidents or political parties, these operators may eventually figure that out and deliberately run their [coordinated inauthentic behavior] out of more official channels,” a researcher said to Zhang during a June 2019 chat about the company’s reluctance to take action against a network of fake accounts and Pages boosting the president of Honduras.
The issue is particularly sensitive in India, where Facebook has come under fire from opposition politicians for allowing BJP politicians to break its rules, particularly with regard to anti-Muslim hate speech.
Facebook’s head of public policy for India, Ankhi Das, overruled policy staff who had determined that the BJP politician T Raja Singh should be designated a “dangerous individual” – the classification for hate group leaders – over his anti-Muslim incitement, according to an August 2020 Wall Street Journal report. Das resigned following the Journal’s reporting on her open support for Modi’s 2014 campaign. Facebook denied any bias or wrongdoing.