YouTube’s looking to improve its countermeasures to police AI deepfakes, via new detection processes that will be able to alert creators, and/or their publishers, whenever their face or voice is used in another clip.
Deepfakes have become a major concern in the evolving generative AI era, with various artists and politicians already being depicted by computer-generated replicants.
And now, YouTube’s advancing its detection capabilities on both fronts, in order to combat misrepresentation and misinformation in the app.
First off, YouTube’s developed a new “synthetic-singing identification technology” which will enable creators and publishers to automatically detect and manage AI-generated content on YouTube “that simulates their singing voices”.
The technology will use audio matching to highlight likely fakes and copies, which will enable artists and publishers to better manage any false depictions of their work.
Which music industry folk will definitely welcome. Most music publishers now have full-time departments dedicated to scouring the web to police copyright violations in their various forms, and this new advance will give them another weapon in that fight.
YouTube’s also developing a new tool that will be able to detect and manage AI-generated content depicting real people’s faces.
That’ll give talent agents and celebrities detection tools similar to those available to music publishers, enabling them to crack down on unauthorized use of their clients’ likenesses, while political parties will also be paying attention to this new option.
The two new capabilities will expand YouTube’s existing copyright protection tools, which are already seeing heavy use.
As per YouTube:
“Since 2007, Content ID has provided granular control to rightsholders across their entire catalogs on YouTube, with billions of claims processed every year, while simultaneously generating billions in new revenue for artists and creators through reuse of their work. We’re committed to bringing this same level of protection and empowerment into the AI age.”
As YouTube creators know, copyright strikes have become increasingly restrictive over time, but that gives rights holders more capacity to manage their clients’ likenesses, which also better ingratiates YouTube with the publishing industry.
In addition to this, YouTube’s also looking to give creators more control over how their content may be used by third parties, including AI developers, with advanced permissions for usage. YouTube says that it’s currently working on this new process, and will share more info later in the year.
These are good updates, which are also likely to become the industry norm, with all platforms eventually adopting new processes to detect AI depictions of real people across their apps. That’ll facilitate more control and help to stamp out misuse, and ideally, these tools will be able to halt the spread of deepfakes before they mislead users.