Responding to OpenAI’s report, India’s ruling Bharatiya Janata Party (BJP) called the operation a “dangerous threat” to democracy and said OpenAI should have informed the public when the threat was first detected in May.
“It is absolutely clear and obvious that BJP was and is the target of influence operations, misinformation, and foreign interference, being done by and/or on behalf of some Indian political parties,” said Minister of State for Electronics and IT Rajeev Chandrasekhar on X (formerly Twitter).
According to OpenAI’s report, the threat was detected in May, though the exact date was not specified. Four of the seven phases of the Lok Sabha elections were held during that month.
“My view is that these platforms could have released this much earlier, and not so late when elections are ending,” he added.
OpenAI’s report on deceptive uses of AI
OpenAI stated it acted within 24 hours to disrupt the “deceptive” use of artificial intelligence (AI) in this covert operation. The threat actors leveraged OpenAI’s language models to generate comments, articles, and social media profiles that criticised the ruling BJP and praised the Congress party.
“In May, the network began generating comments that focused on India, criticised the ruling BJP party, and praised the Opposition Congress party. We disrupted some activity focused on the Indian elections less than 24 hours after it began,” OpenAI said.
OpenAI banned a cluster of accounts operated from Israel that were used to generate and edit content across various platforms, including X (formerly Twitter), Facebook, Instagram, websites, and YouTube.
Other targets of deceptive AI practices
OpenAI highlighted that the content posted by these operations covered a wide range of issues, including Russia’s invasion of Ukraine, the conflict in Gaza, politics in Europe and the United States, and criticisms of the Chinese government by Chinese dissidents and foreign governments, along with the ongoing Indian elections.
OpenAI said it had terminated the accounts linked to these covert influence operations, noting that the campaigns did not appear to have gained a significant audience as a result of its services.
OpenAI disclosed that it had disrupted five covert operations in the last three months that sought to use its models for deceptive activity across the internet. “Our investigations into suspected covert influence operations (IO) are part of a broader strategy to meet our goal of safe AI deployment,” the company concluded.
First Published: Jun 01 2024 | 3:00 PM IST