2020’s news cycle has already been exhausting to follow. For the InfoSec community, the COVID-19 pandemic brought with it a mass of malware campaigns looking to exploit the crisis as a lure. Silent Night, Astaroth, Zeus Sphinx, and a vast number of other known malware threats have emerged looking to distribute malware on the back of the current pandemic. Yet despite the pandemic and those seeking to take malicious advantage of it, several interesting developments went largely unreported during the first quarter of 2020. Google’s Threat Analysis Group (TAG) recently published its first-quarter report detailing some of these trends, which would have received little attention had they not been analyzed by a major tech giant.
Of particular interest was the number of disinformation campaigns backed by various governments that occurred throughout January, February, and March. This is the first time that TAG has released data and specifics detailing these campaigns, which it describes as coordinated social media and political influence campaigns. Many of them took place across Google’s network of sites, such as YouTube, the Play Store, AdSense, and the rest of its advertising platforms, while many others were seen taking advantage of platforms such as Twitter and Facebook.
In January, TAG successfully took down three YouTube channels linked to the activities described above, which sought to promote Iranian interests. The campaign was linked to the Iranian state-sponsored International Union of Virtual Media (IUVM) network and was reproducing IUVM content covering Iran’s strikes into Iraq and U.S. policy on oil. February’s activity was summarized by Google as follows,
“We terminated 1 advertising account and 82 YouTube channels as part of our actions against a coordinated influence operation linked to Egypt. The campaign was sharing political content in Arabic supportive of Saudi Arabia, the UAE, Egypt, and Bahrain and critical of Iran and Qatar. We found evidence of this campaign being tied to the digital marketing firm New Waves based in Cairo. This campaign was consistent with similar findings reported by Facebook.”
The campaigns shut down in February were also discussed in reports published by both Facebook and Graphika. The latter report went into greater detail on what the security firm called Operation Red Card, involving an Indian advertising firm called aRep Global. In total, a network of 37 Facebook accounts, 32 Pages, 11 Groups, and 42 Instagram accounts was removed from Facebook and the other platforms under the social media giant’s umbrella.
Facebook stated, “This activity originated in India and focused on the Gulf region, the United States, the United Kingdom, and Canada. Although the people behind this network attempted to conceal their identities and coordination, our investigation found links to aRep Global, a digital marketing firm in India.”
In comparison to TAG’s actions in January and February, March can only be described as far busier for the researchers. Working in conjunction with other social media platforms, researchers successfully cracked down on five separate campaigns seeking to increase influence, mainly for political ends. The first incident was closely linked to the campaign investigated by Facebook and Graphika detailed above. The second incident involved the posting of political content in Arabic supportive of Turkey and critical of the UAE and Yemen.
Twitter also took action in this campaign, according to TAG. In total, the team terminated one Play Store developer and 68 YouTube channels as part of a coordinated influence operation. The third incident involved a campaign posting political content in Arabic supportive of Saudi Arabia, the UAE, Egypt, and Bahrain and critical of Iran and Qatar. Google terminated one advertising account, one AdSense account, and 17 YouTube channels, and banned one Play developer as part of a coordinated effort with Twitter. It was determined that the influence campaign was linked to Egyptian-backed interest groups. Twitter noted,
“We removed 2,541 accounts in an Egypt-based network, known as the El Fagr network. The media group created inauthentic accounts to amplify messaging critical of Iran, Qatar, and Turkey. Information we gained externally indicates it was taking direction from the Egyptian government.”
Not all the campaigns were linked to Middle Eastern geopolitical interests. The fourth campaign targeted by the crackdown was linked to Serbia. Actions taken by the TAG team included banning one Play developer and terminating 78 YouTube channels. This operation was also coordinated with Twitter, which noted,
“Toward the end of last year, we identified clusters of accounts engaged in an inauthentic coordinated activity which led to the removal of 8,558 accounts working to promote Serbia’s ruling party and its leader.”
The last campaign involved Google shutting down 18 YouTube channels that were part of a coordinated influence operation linked to Indonesia. TAG said the campaign targeted the Indonesian provinces of Papua and West Papua with messaging opposing the Free Papua Movement. Twitter likewise banned 795 fake accounts, stating,
“Following an investigation originating from a @Bellingcat report on an information operation in Indonesia targeting the West Papuan independence movement, we removed 795 fake accounts pushing content from suspicious “news” websites and promoting pro-government content.”
Why the fuss?
TAG researchers did point out that hacking and phishing attempts leveraging the current global pandemic have taken center stage. These included concerted efforts by the threat group Charming Kitten, which conducted campaigns targeting medical and healthcare professionals, including World Health Organization (WHO) employees. Amongst the mass of other COVID-related campaigns, however, one new development did spark researchers’ interest: hack-for-hire firms based in India. Many of these firms appear to be creating Gmail accounts for the express purpose of spoofing WHO accounts and emails. TAG researchers determined that,
“The accounts have largely targeted business leaders in financial services, consulting, and healthcare corporations within numerous countries including, the U.S., Slovenia, Canada, India, Bahrain, Cyprus, and the UK. The lures themselves encourage individuals to sign up for direct notifications from the WHO to stay informed of COVID-19 related announcements, and link to attacker-hosted websites that bear a strong resemblance to the official WHO website. The sites typically feature fake login pages that prompt potential victims to give up their Google account credentials, and occasionally encourage individuals to give up other personal information, such as their phone numbers.”
While these hack-for-hire groups do pose a significant threat to business interests, why did the researchers compiling the report dedicate so much effort to stopping disinformation campaigns? One of the main reasons is that governments around the world have been using these campaigns, which leverage social media platforms and other technologies that connect us, to control how populations perceive governments and the real or imagined threats dominating political discussions. Disinformation, whether called fake news or modern propaganda, has become the new normal. When news broke that Russia had employed a bot army to bombard social media platforms with content supportive of now-President Donald Trump, it became clear that democracy was under threat. Since then, important questions have been asked about whether social media platforms have been doing enough to police themselves. The collective answer was a definite no.
Collaboration between Facebook, Google, and Twitter to combat the spread of government-backed disinformation is certainly a move in the right direction, but more needs to be done. Twitter should be applauded for recently fact-checking President Trump, and it should continue to do so despite threats from arguably the world’s most powerful man to curtail the government protections offered to the platform. However, this is only one of a minority of instances where the platform reacted fast enough to help curtail the spread of disinformation.
In 2019, a report was published detailing how smaller countries, not just Russia, China, and the US, are conducting disinformation campaigns. Much of the article’s focus was placed on recent Vietnamese-backed disinformation campaigns. The article’s findings are supported by Google’s recent report, as smaller nations like Serbia are also looking to control what people think and feel about topics central to the government’s sphere of influence. The article concluded by highlighting how widespread the issue is,
“The tactics are no longer limited to large countries. Smaller states can now easily set up internet influence operations as well. The Oxford researchers said social media was increasingly being co-opted by governments to suppress human rights, discredit political opponents and stifle dissent, including in countries like Azerbaijan, Zimbabwe, and Bahrain. In Tajikistan, university students were recruited to set up fake accounts and share pro-government views. During investigations into disinformation campaigns in Myanmar, evidence emerged that military officials were trained by Russian operatives on how to use social media.”