Hazel Baker, head of UGC newsgathering at Reuters, on deepfakes, misinformation and verification

At Newsrewired next month, we will hear from industry experts about the latest techniques used to produce misleading and false content, and pick up tools, advice and best practice for tackling misinformation and verifying user-generated content online.

Joining us for that panel is Hazel Baker, global head of user-generated content newsgathering at Reuters, who directs a team of reporters dedicated to sourcing, verifying and clearing UGC for distribution to clients as quickly as possible.

As part of her research into the phenomenon of ‘deepfakes’, AI-based technology used to alter video content, Baker and her colleagues created their own deepfake to better understand what goes into producing one and to use it as an example of the red flags that may appear during the verification process.

Baker spoke to Journalism.co.uk about content verification, user-generated content and the threat that deepfakes pose to quality journalism in the years ahead.

Q You are speaking on a Newsrewired panel discussing how quality journalism can thrive in an age of disinformation. For those who are unfamiliar with your work, what is your connection to this topic?

I lead a team at Reuters that verifies third-party material. It is massively important to our storytelling because we have to be extremely careful with anything that is not filmed by a trusted journalist.

We spend a lot of time verifying pieces of video and imagery that come to our desk and through this work, we do encounter material that is not authentic.

Sometimes it is disinformation and sometimes it is misinformation – to be honest, we do not always know when we encounter it whether it is being deliberately shared to mislead or whether it is accidental. Other times we do not have time to look into that aspect, but we are definitely aware of it on a daily basis.

Q What can delegates at the conference look forward to hearing about from you?

The key takeaway I am hoping to deliver at the conference is that the best single way that I have found to arm yourself against disinformation in this environment is to learn everything you can about your enemy.

It is really about understanding the types of misleading visuals. At one end of the scale are deepfakes, which is why Reuters created its own deepfake video.

I will be talking a bit about that experiment and why we did it, and I will also show how, although we have seen a lot of fearmongering headlines about deepfakes, they are actually just the latest iteration of fake video that we have encountered before.

Q What sort of challenges does user-generated content pose for journalists when reporting breaking news events?

Very often we see dramatic pictures circulating quickly on social media, but they have often been separated from their source. They are scraped and copied, and without speaking directly to the source, we cannot establish their authenticity.

So, although we can at times see some really important footage, we do not distribute it on the Reuters wire until we manage to trace it back to the source and ask the questions that we need to, so that we feel happy it is an authentic portrayal of events as they occurred.

Other difficulties are that the volume of material can be quite high, particularly for globally interesting events. Sifting through that to find original sources and original material is challenging and time-consuming. The social networks are not always the easiest to search and going back chronologically can be tricky.

We need to find the first people to share material on social media, which can be easier said than done. Then, once we find the person who filmed it, we have to get hold of them in a timely manner and win their trust, which is not a quick process. That can be a challenge, but one that we think we are well placed to overcome.

Q Are the problems raised by disinformation only going to get worse with the rise of deepfakes?

I think that the challenge of misinformation is only going to get more significant, and that is probably true for every newsroom in the world. The means by which people can share information are increasing, and deepfakes definitely have the potential to generate content that is even harder to detect.

That means we have got to invest properly in this area and make sure that we are well equipped to face that threat.

I started to research deepfakes a year ago and I had very few examples to work from, which is one of the key reasons why we made our own. Fast-forward nearly twelve months and there are certainly way more examples out there. The latest report from DeepTrace suggests there are 14,600 deepfakes in the wild. It is going to be an issue in the mainstream consciousness very soon.