You may have seen me on our @citylondonnews social media, talking about how UK journalists are using AI. It might have looked a lot like me. It wasn’t.
This week, the City London News team decided to conduct an experiment.
We wanted to know the extent to which we could – and should – use AI in the newsroom.
As student journalists, we are entering an industry where AI is increasingly embedded in day-to-day operations, whether it’s finding stories, writing scripts, or editing and creating images.
So, we used a range of different software to help us in our journalistic duties.
In my case, I created an entirely new team member. Meet Joe, our new robo-reporter.
Say hello to my digital doppelganger, who can say whatever I want!
I came in today, and filmed myself for a couple of minutes talking. I then uploaded that video into the software, and it created a whole new me.
Robo-Joe looked eerily similar to the real me, apart from a strange accent (the software did not like my Yorkshire accent, giving me a mix of southern English and Australian) and a slightly lifeless presenting style.
However, our AI reporter did not stutter or stumble over words. There was no need for ‘takes’. I could be anywhere in the world, and he could say whatever I typed into the input box. He always reports from the same setting – the newsroom – in the same clothes and a consistent tone.
I had to film myself saying a passcode in order to access the avatar, meaning no one else could use my likeness to make me say whatever they wanted – which was reassuring.
I then moved onto the topic. I wanted to make a video about how newsrooms are using AI, along with the benefits and ethical challenges.
Usually I would do research, speak to experts, and then spend hours poring over my script. Today, with a few prompts drawn from our editorial guidelines, encouraging it to cite sources and avoid generalisations, it took me minutes to generate ninety seconds’ worth of text.
I fact checked what the software spat out, and when I was happy it was accurate, it took me minutes to get a video of Robo-Joe presenting my story.
Then it was just a case of generating some illustrative videos to overlay, adding some graphics and captions, and it was done.
We learned a lot from the process, and the outcome.
The team using AI just to assist with their workload found that it made journalistic processes faster, and helped their creative and investigative work.
For me, it was both awesome, and awful.
The software allowed me to create this story in significantly less time, with a fraction of the effort. It was accurate and, using my prompts and guidance, it created content which almost passed as something I might have published.
But it also felt wrong – like I was lying to the viewer. It wasn’t real journalism.
I hadn’t done the research, only the fact-checking. They weren’t my words or arguments; I was merely the editor. And watching yourself speak with a posh Australian accent will always be alarming.
I also don’t think AI is currently in a position to replace human journalists. Even without warnings on the video, I think viewers would be able to tell it wasn’t a completely human-made report.
We asked the General Secretary of the National Union of Journalists, Laura Davison, what she thought of our experiment, and what role the NUJ thinks AI should play in the newsroom.
She said: “Artificial intelligence should only be used as an assistive tool with human oversight”.
She advised that “AI-generated articles should be clearly labelled as such and journalists must have full input over the deployment of any new technology in newsrooms.”
The NUJ recognises that “AI can be helpful for certain tasks,” but warns that “it can pose a real threat to journalistic standards and jobs”.
Finally, she summarised what the team and I found from our experiments: “Ultimately, an AI reporter can never replace the nous, knowledge, creativity or skill of a journalist”.
Some parts of our experiment showed inspiring and powerful technology enhancing journalism across the UK and the world. Other aspects demonstrated how AI puts the integrity of the newsroom at risk. What is clear is that there is a je ne sais quoi to human-to-human communication that AI-generated content is far from ready to replace.
After this experiment, I don’t think it ever could.
Featured image is AI generated.
Headline: Our Post About AI in UK Newsrooms? It’s AI.
Short Headline: Our Post About AI? It’s AI.
Standfirst: We let the tech mark its own homework, and the results were both impressive and unnerving.