When fiction surpasses reality, the line between information and manipulation narrows dangerously. That is what emerged from an investigation by the American weekly TIME into Veo 3, the new AI-based video generator developed by Google DeepMind. The magazine tested the tool's capabilities with unsettling results: in a matter of minutes, the model produced highly realistic deepfake videos depicting riots, election fraud, interethnic violence, and geopolitical disinformation.
According to the investigation, the platform allowed the creation of content such as a Hindu temple being set on fire by a crowd in Pakistan, Chinese researchers in a suspicious laboratory, and a fake arrest in Liverpool with racial overtones. Although the videos contained minor flaws, experts consulted by TIME warned that, if shared online with a manipulated context, they could plausibly inflame public opinion or even trigger social unrest.
What makes Veo 3 particularly alarming is its visual and audio fidelity: unlike earlier generators, the sequences include dialogue, ambient sound, and cinematic cuts that appear convincing even to the most attentive viewers. In many cases, even the security watermarks, visual or digital elements embedded in an image to attest to its authenticity, were judged too small to be effective. The software is currently available for about $249 per month to Google AI Ultra subscribers in 73 countries, including the United States and the United Kingdom.
Google stated that it had built in protective mechanisms and was working on a tool to detect generated videos, called SynthID Detector, but these barriers currently appear insufficient. Some attempts to produce sensitive content were blocked, yet others, such as videos of manipulated elections or fictitious disasters, were created without significant obstacles.
Connor Leahy, CEO of the startup Conjecture, said that the availability of such powerful systems without strict controls should serve as a warning to the world. In his view, the tech industry's failure to manage known and predictable risks shows it is not ready to handle even more advanced AI.
The risk, now plain to see, is not only technical but also social: the erosion of collective trust in digital content. Nina Brown, a professor at Syracuse University in New York, explained that the real threat is that no one will be able to tell what is real anymore.
Amid copyright violations, misleading viral content, and growing media confusion, TIME's experiment with Veo 3 is therefore not merely a warning but an open window onto a future in which truth, lacking safeguards, risks becoming irrelevant.