In brief: Meta identified around 2,000 Facebook accounts during the previous quarter that violated its policy against coordinated inauthentic behavior (CIB). Most were fake accounts posting political messages or running influence campaigns spreading propaganda, and the largest identifiable country of origin was Israel.
Meta’s quarterly Adversarial Threat Report includes research into six new covert influence operations that it disrupted: ones originating in Bangladesh, China, Croatia, Iran, and Israel, plus a CIB network that targeted Moldova and Madagascar.
While Meta removed 1,326 Facebook accounts of unknown origin, Israel was the identified country where the largest number of accounts originated: 510 Facebook accounts, 11 Pages, one Group, and 32 Instagram accounts were removed.
The targeted audience was primarily in the US and Canada. The fake accounts, which posed as Jewish students, African Americans, or “concerned citizens,” posted mostly about the Israel-Hamas war, including calls for the release of hostages; praise for Israel’s military actions; and criticism of campus antisemitism, the United Nations Relief and Works Agency (UNRWA), and Muslims, claiming that “radical Islam” poses a threat to liberal values in Canada.
Meta writes that while the operators of the accounts attempted to hide their identities, it found links to Stoic, a political marketing and business intelligence firm based in Tel Aviv, Israel. Meta banned the group and issued a cease-and-desist order.
Stoic’s website states that it uses generative AI to “create targeted content and organically distribute it quickly to the relevant platforms.” Similar posts were discovered on YouTube and X, most of them AI-generated responses to other users’ often-unrelated content.
“This campaign appeared to have purchased inauthentic engagement (i.e. likes and followers) from Vietnam in an attempt to make its content appear more popular than it was,” Meta writes.
Despite the use of generative AI in that campaign, Meta says it has not seen threat actors use photo-realistic AI-generated media of politicians as a broader trend. The technology has so far been limited to photo and image creation, AI-generated video news readers, and text generation.
“Right now we’re not seeing gen AI being used in terribly sophisticated ways,” said David Agranovich, Meta’s policy director of threat disruption.
China was another identified country of origin, accounting for 37 of the inauthentic accounts. While none of the posted content Meta deleted was related to the upcoming US elections, Microsoft warned last month that threat actors in China are using generative AI to sow disruption in the United States during this election year.