Headline: These AI images are warping people’s view of London
Short headline: AI images are warping people’s view of London
Standfirst: Increasingly sophisticated AI models are presenting the capital as a dangerous destination.
You’ve probably seen them – AI-generated videos spreading across your feed. Their next target: London.
Our social media feeds are saturated with artificial intelligence: automated voices, synthetic artwork, and videos that increasingly look as though they were shot in person on an iPhone.
Apps like Sora and Veo let consumers become producers, creating videos that express their views without the bureaucracy or negotiation of a film crew. But AI video generation hasn’t stopped at replicating cinematic imagery.
Low-quality, grainy videos of the London Underground have proliferated on X and TikTok, depicting gangs of masked men filling the Tube.
London is the latest victim of this new wave of technology, and lately this content has portrayed the city as a threatening, criminal and chaotic place.
A generative AI image, created with ChatGPT, depicting crime on the London Underground
More alarming still, text-to-video AI models are improving rapidly. Nick Ridley, director of FIT Digital, says videos generated by Sora 2 are “already one-hundred percent believable. You have to really analyse it many times to work out if it’s real.”
“I think the average consumer is not able to make a distinguishing decision about whether something is AI or not.”
Quint Boa, founder of video agency Synima, expressed his concerns to City News. “The ability of AI to produce high-quality video has increased exponentially.”
“I would say this is an emergency – full caps, flashing red, triple underscored – with the way this is going. I think we are really heading into something that we have no guardrails for at all.”
Synima offers video production services that combine generative artificial intelligence with human talent.
“Personally, I think London is the greatest city in the world,” Quint says. “I’ve lived here for fifty years.”
“It is going to be possible for any production company or any individual to generate content, to produce anything – whether that’s a place or a person – and there will be an agenda behind it.”
Those living outside the capital often draw their own conclusions about London, and online AI content may only amplify them, exaggerating the city’s problems to an audience beyond it.
Nick Ridley believes state intervention is necessary. “I think there needs to be some kind of governance around that… making consumers aware of AI, where it’s needed.”
“It’s not just a London thing – it’s a global thing. I think every country and every city will have the same problem.”