
BBC Blocks Data Scraping But Open To AI-Powered Journalism

The BBC says it intends to use AI responsibly, adopting generative AI best practices and applying the technology to journalism research and production, archiving, and "personalized experiences".

To focus these activities, the UK's largest news organization has created principles for integrating AI into its operations because, as BBC director of nations Rhodri Talfan Davies said, the BBC sees the potential of technology that can bring "more value to our audiences and society."

The BBC will continue to act in the public's best interests, but under three new guiding principles, the news outlet will prioritize the rights, talent, and creativity of artists while remaining transparent about any content made with AI.

Additionally, the BBC announced it will partner with other media organizations, tech companies, and regulators to responsibly foster generative AI while simultaneously sustaining trust in the news industry.

“In the next few months, we will start a number of projects that explore the use of Gen AI in both what we make and how we work – taking a targeted approach in order to better understand both the opportunities and risks,” Davies said.

“These projects will assess how Gen AI could potentially support, complement or even transform BBC activity across a range of fields, including journalism research and production, content discovery and archive, and personalized experiences.”

What precisely these projects will include was not specified.

Like the BBC, The Associated Press has published its own AI guidelines; it is also working with OpenAI, which is using AP content to train its GPT models.

The BBC is still in the discovery phase of working with AI, and until it establishes how best to use generative AI, it has blocked web crawlers from OpenAI and Common Crawl from accessing BBC websites.

Other news organizations, including CNN, The New York Times, and Reuters, have already instituted similar restrictions on web crawlers to protect their copyrighted content.

Much of the best journalism online today already sits behind paywalls, and Davies said the restriction was put in place to "safeguard the interests of license fee payers".

For now, Davies says, the BBC does not believe that training AI models on a major online news provider's data without its consent is in the public interest.
