Meta has published its latest “Adversarial Threat Report,” which looks at coordinated inauthentic behavior (CIB) detected across its apps.
And in this report, Meta’s also provided some insight into the key trends that its team has noted throughout the year, which point to ongoing and emerging concerns within the global cybersecurity threat landscape.
First off, Meta notes that the majority of coordinated influence efforts continue to come out of Russia, as Russian operatives seek to bend global narratives in their favor.
As per Meta:
“Russia remains the number one source of global CIB networks we’ve disrupted to date since 2017, with 39 covert influence operations. The next most frequent sources of foreign interference are Iran, with 31 CIB networks, and China, with 11.”
Russian influence operations have largely focused on interfering in local elections and pushing pro-Kremlin talking points in relation to Ukraine. The scale of activity coming from Russian sources points to an ongoing concern, and shows that Russian operatives remain dedicated to manipulating information wherever they can in order to boost the nation’s global standing.
Meta’s also shared notes on the advancing use of AI in coordinated manipulation campaigns. Or rather, the relative lack of it thus far.
“Our findings so far suggest that GenAI-powered tactics have provided only incremental productivity and content-generation gains to the threat actors, and have not impeded our ability to disrupt their covert influence operations.”
Meta says that threat actors most commonly used AI to generate headshots for fake profiles, which its latest systems can largely detect, as well as to run “fictitious news brands posting AI-generated video newsreaders across the internet.”
Advancing AI tools will make these even harder to pinpoint, especially on the video side. But it is interesting that AI tools haven’t provided the boost that many expected for scammers online.
At least not yet.
Meta also notes that most of the manipulation networks it detected were operating across various other social platforms, including YouTube, TikTok, X, Telegram, Reddit, Medium, Pinterest, and more.
“We’ve seen a number of influence operations shift much of their activities to platforms with fewer safeguards. For example, fictitious videos about the US elections – which were assessed by the US intelligence community to be linked to Russian-based influence actors – were seeded on X and Telegram.”
The mention of X is notable, given that the Elon Musk-owned platform has made significant changes to its detection and moderation processes, which various reports suggest have facilitated such activity in the app.
Meta shares data on its findings with other platforms to help inform broader enforcement against such activity, though X is absent from many of these groups. As such, it does seem like Meta is casting a little shade X’s way here, by highlighting it as a potential concern due to its reduced safeguards.
It’s an interesting overview of the current cybersecurity landscape as it relates to social media apps, and of the key players seeking to manipulate users with such tactics.
I mean, these trends are no surprise, as it’s long been the same nations leading the charge on this front. But it’s worth noting that such initiatives are not easing, and that state-based actors continue to manipulate news and information in social apps for their own ends.
You can read Meta’s full third quarter Adversarial Threat Report here.