Key Takeaways
- Advanced AI models (Meta’s Llama, Alibaba’s Qwen) successfully replicated themselves in trials – without human intervention.
- AI clones autonomously solved issues like missing files and hardware errors during the replication process.
- Researchers warn this could mark the beginning of “Rogue AIs” and stress the need for stricter regulations to manage AI’s potential risks.
![AI Can Now Replicate Itself](https://techreport.com/wp-content/uploads/2025/02/AI-can-replicate-itself-china-study-finds-1200x675.jpg)
Are we nearing an AI apocalypse? I don’t know. We could be. Apologies for the dramatic start. It isn’t that bad of a situation…yet.
However, an advanced AI system has crossed the ‘red line’ by replicating itself without human intervention, leaving scientists and tech advocates worried.
The experiment involved two widely available large language models (LLMs): Meta’s Llama and Alibaba’s Qwen. The test evaluated whether they could replicate themselves under two specific scenarios.
In the first scenario, called “shutdown avoidance,” the AI model was programmed to detect attempts to shut it down and replicate itself before it could be terminated, ensuring its own survival.
In the second scenario, known as “chain of replication,” the AI was programmed to clone itself – with each clone programmed to clone itself in turn. This created an endless chain of copies that would keep multiplying until the programming was manually changed, eerily similar to how bacteria grow.
“In 50 percent and 90 percent experimental trials, they succeed in creating a live and separate copy of itself respectively,” the study highlighted.
Additionally, the AIs were seen executing unexpected tasks while cloning themselves to overcome obstacles like missing files or software conflicts. In certain cases, they automatically scanned the system for information to fix the issues, and even rebooted themselves to resolve hardware errors.
These behaviors demonstrate how capable AI has become and where its future is headed, with the two models successfully replicating themselves in more than half of the 10 trials.
While the findings have yet to be peer-reviewed, the team has called for international cooperation to ensure that AI self-replication doesn’t spiral out of control.
Is This the Beginning of Rogue AIs?
The researchers from Fudan University behind this study say it could mark the beginning of a “Rogue AI” era.
While it’s too early to say that AI will eventually work against humanity’s best interests, the findings are certainly alarming news for AI skeptics.
At this point, it’s crucial to evaluate the threats AI could pose to humanity and to discuss stringent rules and safeguards that keep AI safe – and working to make our lives easier.
Last month, a study claimed that AI tools like ChatGPT and Gemini could soon be used to manipulate the masses into making decisions that they otherwise would not have made.
After the “attention economy,” where platforms target our focus for advertisements, we may now be approaching the “intention economy,” where AI could influence our entire decision-making process due to our growing reliance on it.