Facebook is feeling the heat from remarks made recently by the UK’s Prime Minister Theresa May in the wake of the latest terrorist attack, and is pushing back. Facebook wants to be seen as on the offensive in the fight against terrorism, rather than fighting a defensive battle. It has begun a coordinated campaign to show that it responds promptly and proactively to anything that might be considered incendiary, terrorism-related content.
Facebook will employ sophisticated algorithms to mine words, images and videos in order to root out and remove extremists’ propaganda and messages. It will demonstrate its resolve by relying not just on AI, but on human assets as well.
Acknowledging that AI alone is insufficient to be effective, Facebook has assembled a cadre of 150 counterterrorism and technical experts, who are focused on tracking and taking down propaganda and other materials. This is an attempt to keep pace with, if not stay ahead of, the so-called Islamic State’s ever-changing tactics.
A bigger problem, of course, is the one articulated by Brian Fishman, lead policy manager for counterterrorism at Facebook and the author of The Master Plan: ISIS, al-Qaeda, and the Jihadi Strategy for Final Victory. He emphasises that “there is no switch you can flip. There is no find the terrorist button.” Thus the heavy lifting is being done by humans.
The Facebook counterterrorism team is headed up by former federal prosecutor Monika Bickert, and is tasked with overcoming that obstacle. Bickert is keen on keeping the terrorists off stride and off balance:
“Just as terrorist propaganda has changed over the years, so have our enforcement efforts. We are now really focused on using technology to find this content so that we can remove it before people are seeing it. We want Facebook to be a very hostile environment for terrorists and we are doing everything we can to keep terror propaganda off Facebook.”
Bickert’s team is employing new, sophisticated digital fingerprinting technologies called “hashes,” which are helping to flag and intercept extremist videos before they are posted. As good as this sounds, it isn’t enough to deal with the sheer volume of content being posted on the site and, as yet, is insufficient to keep terrorists from gathering on Facebook to recruit and communicate with followers.
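The general idea behind hash-based fingerprinting can be sketched in a few lines. This is a minimal, hypothetical illustration, not Facebook's proprietary system: it uses a simple cryptographic hash, which only catches exact re-uploads, whereas real content-matching systems use perceptual hashes that also match slightly altered copies. All names and data below are placeholders.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest that uniquely identifies this exact file."""
    return hashlib.sha256(data).hexdigest()

# A database of fingerprints of previously removed extremist media.
# (The bytes here stand in for real video/image files.)
known_bad = {fingerprint(b"previously-removed-video-bytes")}

def should_block(upload: bytes) -> bool:
    """Flag an upload whose fingerprint matches known removed content."""
    return fingerprint(upload) in known_bad
```

With a scheme like this, an exact re-upload of already-removed material is caught instantly without any human review, while genuinely new content passes through; that gap is one reason human reviewers remain essential.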
It is interesting to note that, on the heels of the recent attacks in the UK and politicians scolding the tech companies for not doing more, companies such as Facebook are trotting out examples of measures the police are asking for that they have already put in place. This seems to highlight the ignorance of government leaders, and implies that their frantic calls for technological action now are merely attempts to distract the public from the government’s own culpability and lack of leadership.
For its part, Facebook, fearing that governments may look to hold it legally liable if a failure to police extremist materials results in future attacks, has responded forcefully to May’s criticisms:
“They want to hear that social media companies are taking this seriously. We are taking it seriously. The measures they are talking about, we are already doing.”
This is a prickly proposition, as Facebook and other internet companies have had to balance the threat to free speech with their efforts to weed out terrorist propaganda. But with terrorist attacks occurring so frequently, it is feeling ever greater pressure from government officials – and not just from them. More and more, the public as a whole is tiring of the assaults and, alarmingly, seems increasingly willing to concede freedom to be made safer.
Personally, I think that politicians, with their antennae always attuned to public perception, are playing – nay, even preying – on these public perceptions to mine votes.
But it seems much like trying to tamp down a wildfire. Just as you put out one flame, another flares. So it is here with Facebook. While Facebook addresses extremist content on its main platform, much terrorist activity has left the site and migrated to encrypted messaging services such as Telegram and Facebook-owned WhatsApp – not to mention remaining active in Facebook private groups.
Artificial Intelligence, to be sure, has its limitations, and technology can only go so far in thwarting the terrorist advance. Therefore, Facebook is relying on its human participants to play a bigger role. Facebook, which relies on its nearly two billion users to alert the company to content that violates its rules, says it now finds more than half of the accounts it removes for terrorist activity on its own.
The human members of Facebook’s task force, headed by Bickert, are pivotal, as they can make judgement calls about, say, the context of an image, where AI perhaps cannot.
Facebook has many admirers of its anti-terrorism campaign. Probably the entire tech industry is rooting for it to succeed and deflect the attention of those bent on trying to coerce companies into eliminating end-to-end encryption. Facebook has drawn a line in the sand on that issue, arguing (with others) that strong encryption technology has legitimate uses, such as for human rights activists and journalists who need to know their communications can only be read by the sender and the recipient.
If Facebook’s campaign succeeds, it will have a ripple effect that not only will put a damper on terrorist activities, but also reduce the friction between the tech industry and law enforcement over the encryption issue. Then, and only then, will personal freedom be preserved and, at the same time, the citizens of the world be made safer.