No more targeted ads on videos “for kids” — an understandable decision, but creators bear the pain

13 September 2019

Targeted ads will no longer be allowed on YouTube videos “for kids” starting January 1, 2020, YouTube announced on September 4. Comments and notifications will also be disabled on these videos. It’s an understandable decision. But it looks like it will be implemented in a way that, like so many changes since the first Adpocalypse, puts most of the pain and risk on creators.


The decision is part of a settlement reached with the United States Federal Trade Commission (FTC). The FTC found that the tracking technologies used to serve targeted ads violate the US child privacy law known as the Children’s Online Privacy Protection Act (COPPA). YouTube also agreed to pay a $170 million fine in connection with the violations. Although the law only applies in the US, the changes will apply globally.


YouTube creators will be required to indicate which of their videos are “made for kids.” YouTube will also use automated “machine learning” systems or “classifiers” to “identify videos that clearly target” kids, similar to those currently used to assess videos for compliance with YouTube’s Community Guidelines and Advertiser-friendly Content Guidelines.


While non-personalized ads will still be served on videos for kids, YouTube acknowledges that the removal of targeted ads “may result in a decrease in revenue for some creators.”


And: “If a creator attempts to avoid categorizing their content correctly” - presumably to hold on to the revenue associated with targeted ads - “there may be consequences on the YouTube platform for that creator.”


Information about exactly what those consequences will be, or how YouTube will determine that a creator has “attempt[ed] to avoid categorizing their content correctly,” is not yet, to our knowledge, available. Much else is also unclear:

  • Will honest creators be punished for having a different “understanding” of what “for kids” means than YouTube’s automated systems (which YouTube admits “don’t always get it right”), or than the human reviewers who may check the work of these systems?
  • Will the rules the classifier uses to decide whether a video is “for kids” be made available to creators, or will creators simply have to guess where the boundaries of the new category lie?
  • Will creators get a detailed explanation of why a video was classified as “for kids,” or will YouTube simply point to its “high level” guidelines without further explanation, as it often does now with demonetized videos?
  • Will appeal to a human reviewer be possible, and if so, what rules will the human reviewers use?
  • Will YouTube apply the new rules uniformly to all creators, or will big creators be blessed with special, if unofficial, exemptions, as appears to have happened in the past with other guidelines?


Given all this, it is no surprise that YouTube creators are very worried about the coming changes. In her first article on the changes for The Verge, journalist and YouTube expert Julia Alexander wrote:


[The changes] might not seem like a big deal to viewers, but they could be catastrophic for creators. If channels can’t send notifications for certain videos, fewer people will watch those videos within the first crucial hours. This could lead to YouTube recommending fewer videos from that creator because people are less engaged. If videos aren’t recommended as much, it means fewer views, which means less money.

As Alexander’s reporting shows, creators are starting to feel squeezed. Violent and other mature content has long been financially risky on YouTube because of the Advertiser-friendly Content Guidelines. But the demonetization of kids’ content - that is, non-violent, family-friendly content that many creators thought was “safe” - has some creators asking what they can still make that is “monetizable.” In a follow-up article, Alexander reported that some creators are already changing their videos to make it clear that their content is not “for kids.” But for creators whose content is a good fit for other platforms such as Twitch, the best long-term answer may be to go elsewhere.


And unless transparency and communication around the new classification rules are significantly better than they are for the current demonetization systems, the uncertainty will continue to weigh heavily on creators’ well-being. Alexander quotes YouTuber Een, of the channel Nerd City: “There is a serious impact on the mental health of creators from a lack of transparency by YouTube.”


The upcoming changes are an opportunity for YouTube to improve transparency and communication with creators. The FairTube proposals will be just as relevant in 2020, if not more so:


  • Tell us the categories that affect monetization - now including the category “for kids” - and the criteria and processes (automated and human) used to assign videos to these categories.
  • Give clear, detailed explanations for decisions about individual videos, not just reference to “high level” guidelines.
  • Let creators contest these decisions. This doesn’t just mean letting creators click a link to request a human review. It means letting creators communicate their point of view about the decision to a qualified human being authorized to assess the situation, make a decision, and communicate the rationale for the decision to the creator - ideally, so that they can change the video to fit the rules, or, at the very least, avoid the same fate with future videos.
  • Create an independent mediation procedure for seriously disputed cases.
  • Involve creators in discussing and responding to changes like this via a formal structure such as a Creator Advisory Board.

We understand that our first two proposals may pose technical challenges, especially when decisions are made by automated systems. However, at the very least, they can be implemented in the short term for decisions made by human reviewers: YouTube can release the video classification guidelines used by human reviewers and share the reasoning behind individual decisions with creators. (Google already publishes its search quality rater guidelines, and Facebook published its content moderation guidelines last year.)


If YouTube is worried about “bad actors” using the published guidelines to game the rules and monetize harassing or otherwise truly harmful content, it could add a clause to the Partner Program contract saying that such “strategic behavior” will get you kicked out. (Of course, creators judged to be “gaming the system” should be given warnings in advance, and it should be possible to contest such a judgment via independent mediation.)


But really, most creators just want to make good content and get paid, and they want to have a good idea, in advance, of what content they can make that will be monetized. They want to know the rules so they can follow them.


In 2019, a more reasonable proposal is hard to find.