And much of it is as we’ve come to expect: Meta removed two clusters of hacker groups operating in Southeast Asia, as well as a Russia-based troll farm that targeted global public discourse about the war in Ukraine in an effort to seed pro-Russia sentiment.
Those are pretty much in line with Meta’s usual threat reports, but the company also took action on two new fronts, which could eventually have broader impacts.
In the first case, Meta took action against a group of accounts in India for ‘brigading’ – engaging in mass, coordinated action against certain users in order to effectively silence them online.
As explained by Meta:
“We took down a brigading network of about 300 accounts on Facebook and Instagram in India that worked together to mass-harass people, including activists, comedians, actors and other influencers. The individuals behind this activity relied on a combination of authentic and duplicate accounts, and would call on others to harass people who posted content that this group deemed offensive to Hindus. The members of this network would then post high volumes of negative comments under the targets’ posts. In response, some people would hide or delete their posts, leading to celebratory comments claiming a ‘successful raid.’”
That’s interesting because, depending on your definition, this likely happens a lot: groups of people come together to flood comment streams and attack the original poster and/or other users, in order to push their agenda and intimidate those with dissenting views.
That could also then extend to political tactics to ‘flood the zone’ with misinformation and rumor, in order to disorient audiences and sow distrust of the media in general. Such tactics are reliant on a form of brigading, which could also fall under this same enforcement approach, if it were to be extended.
It’ll be interesting to see whether brigading becomes a bigger focus for Meta’s team moving forward, and how, exactly, it defines brigading attacks, since the specific traits and trends it identifies will play a key role in dictating how such enforcement can be used to restrict this behavior online.
On another front, Meta also took action against a group of accounts for mass reporting, which seeks to turn Meta’s own moderation tools into a means of content suppression.
“In Q2 of 2022, we removed a network of about 2,800 accounts, Groups and Pages in Indonesia that worked together to falsely report people for various violations, including hate speech, impersonation, terrorism and bullying, in an attempt to have them and their posts wrongfully removed from Facebook. Most of these reports focused on people in Indonesia, primarily within the Wahhabi Muslim community. To conceal their activity and avoid detection, the individuals in this network would replace letters with numbers when posting about their targets. They, at times, created fake accounts that impersonated real people and then used them to report authentic users for impersonation.”
That’s another rising form of abuse, and it’s interesting to see how Meta is evolving its tactics to deal with these new threats, addressing each proactively rather than letting them become bigger concerns before quashing such practices.
The new enforcement elements provide some interesting perspective on the ever-changing threat landscape around online misinformation and political tactics, and as noted, if Meta were to expand these approaches, that could have a big impact on how users coordinate and focus their efforts in this way.
Could sharing a divisive post into a group, where you know that the members will disagree, also count as brigading, if those members then go on to leave comments on the post attacking the person and their claims?
That seems like much the same process, though perhaps not as intentional, and that’s the type of next-level enforcement we may see as Meta continues to refine its approach.
You can read Meta’s full Q2 2022 Adversarial Threat Report here.