For years, YouTube has said that videos containing hate speech, misinformation and other content that violates its policies account for only a minuscule share of views on its platform.

Now it’s releasing a new metric, dubbed the Violative View Rate, which YouTube says proves that such content makes up just a tiny fraction of overall views, and that the rate has declined.

For the fourth quarter of 2020, YouTube said, the Violative View Rate averaged 0.16%-0.18%, meaning that out of every 10,000 views on the platform, 16 to 18 came from violative content. That’s down by more than 70% from Q4 2017, an improvement YouTube credits largely to its investment in machine learning to automatically identify and pull down content that violates its Community Guidelines.
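For readers who want to check the math, here is a minimal sketch restating those figures in code; the Q4 2017 baseline shown is inferred from the stated decline, not a number YouTube published.

```python
# Restating YouTube's reported Q4 2020 figures as arithmetic.
q4_2020_rate = 0.0017                   # midpoint of the 0.16%-0.18% range
print(q4_2020_rate * 10_000)            # -> 17.0 violative views per 10,000

# "Down by more than 70%" implies the Q4 2017 rate was at least:
implied_q4_2017_floor = q4_2020_rate / (1 - 0.70)
print(f"{implied_q4_2017_floor:.2%}")   # -> ~0.57% (inferred, not disclosed)
```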

“Our ongoing goal is for the YouTube community to thrive as we continue to live up to our responsibility,” the Google-owned video service said in a blog post announcing the VVR metric.

Still, given YouTube’s massive size (it says users watch some 1 billion hours of video every day), even a rate of 0.16%-0.18% works out to roughly 1.6 million to 1.8 million hours of violative viewing per day, which means videos that run afoul of its policies are likely being watched by millions of people on a daily basis before they’re taken down.

YouTube’s announcement of the Violative View Rate comes as the video platform (along with Facebook and Twitter) has come under increased scrutiny from U.S. lawmakers looking to reform Section 230 of the Communications Decency Act, the provision that shields internet platforms from liability for user-shared content. Politicos on both sides of the aisle have called for changes, but for different reasons: Democrats want services like YouTube to be held more accountable for misinformation and other harmful content spread on their platforms, while Republicans have alleged that Silicon Valley companies are biased against conservative views and want platforms to be restricted from “censoring” content.

In part, YouTube’s release of the Violative View Rate and similar metrics appears aimed at assuaging the concerns of Congress by buttressing the argument that it is proactively policing the platform, and that no additional U.S. laws are therefore required.

Since YouTube released its first Community Guidelines Enforcement Report in 2018, covering the third quarter of that year, the service says it has removed more than 83 million videos and 7 billion comments for violating its policies. Today, YouTube claims its automated systems detect 94% of all violative content, and that 75% of those videos are removed before they receive even 10 views.

To calculate the Violative View Rate, YouTube takes a sample of videos on the platform and sends them to content reviewers, who determine which of the sampled videos violate its policies. YouTube plans to include the VVR metric in future transparency reports.
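As a rough illustration of the statistics behind a sampled metric like this, the sketch below estimates a violative view rate from a reviewer-labeled sample. The function name, sample figures and confidence-interval math are assumptions for illustration only, not YouTube’s published methodology.

```python
import math

def estimate_vvr(sample_size: int, violative_count: int):
    """Estimate a violative view rate from a reviewer-labeled sample.

    sample_size: number of views in the random sample
    violative_count: how many of those reviewers flagged as violative
    Returns the point estimate and a 95% confidence interval, using a
    normal approximation to the binomial proportion.
    """
    p = violative_count / sample_size
    se = math.sqrt(p * (1 - p) / sample_size)   # standard error
    return p, (p - 1.96 * se, p + 1.96 * se)    # 1.96 ~ 95% z-score

# Hypothetical sample: reviewers flag 170 of 100,000 sampled views.
rate, (low, high) = estimate_vvr(100_000, 170)
print(f"VVR: {rate:.2%} (95% CI {low:.2%}-{high:.2%})")
# -> VVR: 0.17% (95% CI 0.14%-0.20%)
```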

YouTube noted that the Violative View Rate will fluctuate up and down. Immediately after it updates a policy, the VVR may temporarily increase “as our systems ramp up to catch content that is newly classified as violative,” according to YouTube. One example: prior to the 2020 U.S. election, YouTube moved to prohibit content with “conspiracy theories that have been used to justify real-world violence,” including videos related to the QAnon movement.

“We’re committed to these changes because they are good for our viewers, and good for our business — violative content has no place on YouTube,” the platform said in Tuesday’s blog post. “We invest significantly in keeping it off, and the VVR holds us accountable and helps us better understand the progress we’ve made in protecting people from harmful content on YouTube.”