Key takeaways
- YouTube has changed its policies to require creators to disclose when content is AI-generated or synthetically altered.
- People will be able to request the removal of certain AI-generated content, including AI-generated music created without the original artist’s permission.
YouTube has changed its terms and policies to combat forms of AI misuse.
If you’ve made it through 2023 without hearing Johnny Cash singing ‘Barbie Girl’ or Elvis singing ‘Baby Got Back,’ then you’ve still got time to catch up before 2024 brings the next wave of strikingly lifelike AI-generated music.
As view counts on AI-generated music increase across platforms like Spotify and YouTube, companies have been forced to take action.
For example, the Recording Academy, the organization behind the Grammys, ruled out awards consideration for AI-created songs. The decision followed the controversy surrounding ‘Heart on My Sleeve,’ a track whose vocals were generated entirely with AI tools to imitate Drake and The Weeknd without their permission.
This also drew criticism from major industry players like Universal Music Group, igniting discussions about AI’s artistic and legal ramifications.
In other words, who gets the money when an AI-generated track cashes in?
Social media platforms like Meta and YouTube are well aware of the hype this content generates, which means views and, in turn, ad revenue. As a result, they’ve been busy building AI features for their platforms, such as YouTube’s Dream Track.
YouTube adjusts policies to combat AI misuse
To stay on the right side of creators, musicians, and record labels, YouTube has introduced strict disclosure requirements surrounding AI.
Creators must now identify any AI-generated or manipulated content, especially concerning sensitive topics like elections or public health.
Specifically, we’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools. When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material.
YouTube
Additionally, YouTube has vowed to clearly label AI-generated content, particularly in the case of sensitive subjects.
Labeling will also help YouTube respond to governments worldwide, which are urging social media platforms to do everything possible to remove ‘deepfake’ content.
We’ll inform viewers that content may be altered or synthetic in two ways. A new label will be added to the description panel indicating that some of the content was altered or synthetic.
And for certain types of content about sensitive topics, we’ll apply a more prominent label to the video player.
YouTube
Moreover, people can request the removal of AI-generated or other synthetic content that simulates an identifiable individual, including their face or voice.
YouTube reserves the right to remove or penalize non-compliant content and ban associated accounts.
This includes AI-generated media that violates community guidelines, such as realistic synthetic depictions of violence.
Further, YouTube’s new music policies enable music partners to request the removal of AI-generated tracks that imitate an artist’s unique voice. The company is also tightening its automatic moderation algorithms to better detect AI-generated content.
Overall, this is a step forward in establishing ethical guidelines for AI-augmented content. It is, however, a tricky line to tread, as YouTube simultaneously pushes AI-integrated features like Dream Track.
We must also consider what new AI-powered features like Dream Track can contribute to the creative ecosystem. Are they genuinely useful, or just an attempt to piggyback on the wider hype surrounding generative AI?
It’s easy to be skeptical and dismiss them as gimmicks, but musicians and creators will almost certainly find a way to produce weird and wonderful results.