There are artificial intelligence clones of musicians on YouTube


How YouTube, Meta, and TikTok Are Handling AI-Generated Content: Labeling Rules and Requirements

Meta will require advertisers to disclose the use of artificial intelligence in ads about elections, politics, and social issues starting next year. Political advertisers are also barred from using Meta’s own generative AI advertising tools.

TikTok requires realistic AI-generated scenes to be labeled and prohibits deepfakes of young people. AI-generated content depicting public figures is allowed in certain situations, but it can’t be used for political or commercial endorsements on the short-form video app.

YouTube will have two sets of content guidelines for AI-generated deepfakes: a very strict set of rules to protect the platform’s music industry partners, and another, looser set for everyone else.

The company says it will put labels on some videos that touch on sensitive topics, such as elections and public health crises.

From there, it gets more complicated — vastly more complicated. YouTube will allow people to request removal of videos that “simulate an identifiable individual, including their face or voice” using the existing privacy request form. So if you get deepfaked, there’s a process to follow that may result in that video coming down — but the company says it will “evaluate a variety of factors when evaluating these requests,” including whether the content is parody or satire and whether the individual is a public official or “well-known individual.”

The proliferation of artificial intelligence tools that can create realistic images, video, and audio, known as deepfakes, has raised concerns about how the technology could be used to deceive people.

Why is the music industry so special?

At the same time, YouTube parent company Google is pushing ahead with scraping the entire internet to power its own AI ambitions, leaving a company that is writing special rules for the music industry while telling everyone else that their work will be taken for free. The tension is building, and at some point someone will ask Google why the music industry is so special.

This special protection for singing and rapping voices won’t be part of YouTube’s automated Content ID system when it rolls out next year; YouTube spokesperson Jack Malon tells us that “music removal requests will be made via a form” that partner labels will have to fill out manually. And the platform isn’t going to penalize creators who trip over these blurred lines, at least not in these early days: Malon says “content removed for either a privacy request or a synthetic vocals request will not result in penalties for the uploader.”

Making things even more complex, there will be no exceptions for parody and satire when YouTube’s music industry partners request removal of AI-generated content “that mimics an artist’s unique singing or rapping voice,” meaning an AI-generated Frank Sinatra singing The Killers’ “Mr. Brightside” is likely in for an uphill battle if Universal Music Group decides it doesn’t like it.

YouTube hasn’t yet defined what counts as parody and satire, but it says it will provide examples when the policy goes into effect next year.

The labels will appear in video descriptions, and on top of the videos themselves for sensitive material. There is no specific definition yet of what YouTube considers “realistic”; Malon tells us the company will provide more detailed guidance with examples when the disclosure requirement rolls out next year.