
Future of TV: We’re putting new personalized features into shows using an ethical version of AI


“Look away now if you don’t want to know the score,” they say on the news before reporting the football results. But imagine if your television knew which teams you follow, which results to hold back—or knew to bypass football altogether and tell you about something else. With media personalization, which we’re working on with the BBC, that sort of thing is becoming possible.

Significant challenges remain in adapting live production, but other aspects of media personalization are closer at hand. Indeed, media personalization already exists to an extent: think of BBC iPlayer or Netflix suggesting content based on what you’ve watched previously, or Spotify curating playlists you might like.

But what we’re talking about is personalization within the program itself. This could include adjusting the program’s duration (you might be offered an abridged or extended version), adding subtitles or graphics, or enhancing the dialog (to make it more intelligible if, say, you’re in a noisy place or your hearing is starting to go). Or it might mean providing extra information related to the program (a bit like what you can access now with the BBC’s red button).

The big difference is that these features wouldn’t be generic. They would see shows re-packaged according to your own tastes, and tailored to your needs, depending on where you are, what devices you have connected and what you’re doing.
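
To make that concrete, here is a minimal sketch of what a per-viewer personalization profile might look like. Every name and field in it is an illustrative assumption, not the actual BBC or AI4ME design:

```python
from dataclasses import dataclass, field

@dataclass
class ViewerContext:
    """Where the viewer is and what they're doing (all fields hypothetical)."""
    location: str = "home"            # e.g. "home", "commuting"
    device: str = "tv"                # e.g. "tv", "phone", "tablet"
    noisy_environment: bool = False

@dataclass
class PersonalizationProfile:
    """Per-viewer rendering preferences for a program (illustrative only)."""
    preferred_duration: str = "standard"   # "abridged" | "standard" | "extended"
    subtitles: bool = False
    dialog_enhancement: bool = False
    extra_info_panels: bool = False
    topics_to_skip: list[str] = field(default_factory=list)

def render_options(profile: PersonalizationProfile, ctx: ViewerContext) -> dict:
    """Derive concrete playback options from preferences plus current context."""
    return {
        "duration": profile.preferred_duration,
        # Turn on subtitles and dialog boost if requested, or automatically
        # when the viewer is somewhere noisy.
        "subtitles": profile.subtitles or ctx.noisy_environment,
        "dialog_enhancement": profile.dialog_enhancement or ctx.noisy_environment,
        "extra_info": profile.extra_info_panels and ctx.device == "tv",
        "skip_topics": profile.topics_to_skip,   # e.g. ["football results"]
    }

if __name__ == "__main__":
    profile = PersonalizationProfile(subtitles=True,
                                     topics_to_skip=["football results"])
    ctx = ViewerContext(location="commuting", device="phone",
                        noisy_environment=True)
    print(render_options(profile, ctx))
```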

To deliver new kinds of media personalization to audiences at scale, these features will be powered by artificial intelligence (AI), and specifically by machine learning, in which an algorithm learns to perform a task by being trained on vast datasets of examples rather than by following hand-written rules.
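
As a rough illustration of that training process (a toy sketch on synthetic data, not the project’s actual models), here is a generic supervised learner being fitted to a dataset of labeled examples with scikit-learn:

```python
# Toy illustration of machine learning: an algorithm "learns" a task from
# labeled examples rather than from hand-written rules. Synthetic data only;
# nothing here reflects the actual AI4ME systems.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A vast dataset, in miniature: 1,000 examples with 20 features each.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # "training": fit the model's parameters to the data

print(f"accuracy on unseen examples: {model.score(X_test, y_test):.2f}")
```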

This is the focus of a partnership between the BBC and the University of Surrey’s Centre for Vision, Speech and Signal Processing. Known as Artificial Intelligence for Personalized Media Experiences, or AI4ME, this partnership is seeking to help the BBC better serve the public, especially new audiences.

Acknowledging AI’s difficulties

The AI principles of the Organization for Economic Cooperation and Development (OECD) require AI to benefit humankind and the planet, incorporating fairness, safety, transparency and accountability.

Yet AI systems are increasingly accused of automating inequality as a consequence of biases in their training, which can reinforce existing prejudices and disadvantage vulnerable groups. This can take the form of gender bias in recruitment, or racial disparities in facial recognition technologies, for example.

Another potential problem with AI systems is what we refer to as generalization. The first recognized pedestrian fatality involving a self-driving car is an example of this. The car’s software, having been trained on road footage that likely captured many cyclists and pedestrians separately, failed to recognize a woman pushing her bike across the road.

We therefore need to keep retraining AI systems as we learn more about their real-world behavior and our desired outcomes. It’s impossible to give a machine instructions for all eventualities, and impossible to predict all potential unintended consequences.

We don’t yet fully know what sort of problems our AI could present in the realm of personalized media; this is what we hope to find out through our project. But it could, for example, be something like dialog enhancement working better for male voices than for female voices.
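
A problem like that would typically surface when evaluation results are broken down by group. This sketch shows the idea with made-up data and a placeholder intelligibility metric; a real evaluation would use a standard speech-intelligibility measure computed over a proper test set:

```python
# Sketch: checking whether dialog enhancement works equally well across groups.
# `clips` and `intelligibility_score` are hypothetical stand-ins for a real
# evaluation set and metric; the numbers below are invented for illustration.
from statistics import mean

def intelligibility_score(clip) -> float:
    """Placeholder metric: higher = clearer enhanced dialog."""
    return clip["score"]  # in practice, computed from the enhanced audio

clips = [
    {"speaker_gender": "male", "score": 0.91},
    {"speaker_gender": "male", "score": 0.88},
    {"speaker_gender": "female", "score": 0.74},
    {"speaker_gender": "female", "score": 0.79},
]

# Group the scores by speaker gender and compare the averages.
by_group: dict[str, list[float]] = {}
for clip in clips:
    by_group.setdefault(clip["speaker_gender"], []).append(
        intelligibility_score(clip))

averages = {group: mean(scores) for group, scores in by_group.items()}
gap = max(averages.values()) - min(averages.values())
print(averages)
print(f"largest group gap: {gap:.2f}")  # a large gap flags a fairness problem
```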

Ethical concerns don’t always cut through to become a priority in a technology-focused enterprise unless government regulation or a media storm demands it. But isn’t it better to anticipate and fix these problems before getting to that point?

The citizen council

Designing our personalization system well calls for public engagement from the outset. This is vital for bringing a broad perspective into technical teams that may suffer from narrowly defined performance metrics, “group think” within their departments, and a lack of diversity.

Surrey and the BBC are working together to test an approach that brings in people (normal people, rather than experts) to oversee AI’s development in media personalization. We’re trialing “citizen councils” to create a dialog, and the insight we gain from the councils will inform the development of the technologies. Our citizen council will have diverse representation and independence from the BBC.

First, we frame the theme for a workshop around a particular technology we’re investigating or a design issue, such as using AI to cut out a presenter in a video so that they can be inserted into a different one. The workshops draw out opinions and facilitate discussion with experts on the theme, such as one of our engineers. The council then consults, deliberates and produces its recommendations.
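
For a flavor of what that cut-out technology involves, here is a minimal sketch using an off-the-shelf segmentation model from torchvision to mask out the person in one frame and composite them onto another. The file names are hypothetical, and this is not the project’s actual pipeline:

```python
# Sketch: "cutting out" a presenter with a generic pretrained segmentation
# model, then compositing them onto another frame. Illustrative only; the
# input files are made up and this is not the AI4ME system.
import torch
from PIL import Image
from torchvision.models.segmentation import (
    deeplabv3_resnet50, DeepLabV3_ResNet50_Weights,
)

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

frame = Image.open("studio_frame.png").convert("RGB")        # hypothetical input
background = Image.open("new_scene.png").convert("RGB")      # hypothetical input

with torch.no_grad():
    out = model(preprocess(frame).unsqueeze(0))["out"][0]    # (classes, H, W)

# Build a binary mask of pixels the model labels as "person".
PERSON = weights.meta["categories"].index("person")
mask = out.argmax(0) == PERSON

# Resize the mask back to frame size and paste the presenter onto the scene.
mask_img = Image.fromarray((mask.numpy() * 255).astype("uint8")).resize(frame.size)
composite = Image.composite(frame, background.resize(frame.size), mask_img)
composite.save("presenter_in_new_scene.png")
```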

The themes give the citizen council a way to review specific technologies against each of the OECD AI principles and to debate the acceptable uses of personal data in media personalization, independent of corporate or political interests.

There are risks: we might fail to adequately reflect diversity, and there might be misunderstandings around proposed technologies or an unwillingness to hear others’ views. What if the council members are unable to reach a consensus, or begin to develop a bias?

We cannot measure what disasters are avoided by going through this process, but new insights that influence the engineering design, or new issues that allow remedies to be considered earlier, will be signs of success.

And one round of councils is not the end of the story. We aim to apply this process throughout this five-year engineering research project. We will share what we learn and encourage other projects to take up this approach to see how it translates.

We believe this approach can bring broad ethical considerations into the purview of engineering developers during the earliest stages of the design of complex AI systems. Our participants are not beholden to the interests of big tech or governments, yet they convey the values and beliefs of society.






This article is republished from The Conversation under a Creative Commons license. Read the original article.

