Moderation vs. Censorship: Diving into Spotify and Joe Rogan


“Irresponsible people are spreading lies that are costing people their lives,” wrote singer Joni Mitchell. This was part of her condemnation of Spotify for the platform’s role in allowing popular podcaster Joe Rogan, host of “The Joe Rogan Experience,” to spread misinformation about COVID-19 and the efficacy of vaccines. Mitchell, following in the footsteps of artist Neil Young, vowed to remove her music from Spotify in an act of revolt against vaccine skepticism and solidarity with the “global scientific and medical communities on this issue.” The controversy came to public attention when Rogan released a podcast episode featuring controversial infectious-disease ‘expert’ Dr. Robert Malone, who had already been banned from Twitter for making several false claims about the COVID-19 vaccine. In early January, 270 professors and public health officials wrote an open letter to Spotify, pleading with the company to “crack down” on creators who spread COVID-19 misinformation. The group asked Spotify to formulate a clear public policy on how it would handle misinformation.

In response to Mitchell and other artists who have removed their content, Spotify’s CEO Daniel Ek has claimed that Spotify is a neutral platform with a critical role in promoting creative expression and avoiding acting as a “content censor.” Amid the public outcry, Spotify also published its Platform Rules for the first time and agreed to add a content advisory notice to any podcast that discusses COVID-19. Among these rules are terms that bar “content that promotes dangerous false or dangerous deceptive medical information that may cause offline harm or poses a direct threat to public health.” The statement includes specific examples, such as claiming that COVID-19 and other serious life-threatening illnesses are a hoax, encouraging the consumption of bleach to cure a disease, and “promoting or suggesting that vaccines approved by local health authorities are designed to cause death.” The Platform Rules also prohibit content creators from “encouraging people to purposefully get infected with COVID-19 in order to build immunity.”


In response to the backlash and complaints leveled at him, Rogan posted a 10-minute video on Instagram in which he promised to bring more experts with differing opinions onto his podcast after he talks with the more controversial figures, in order to “balance things out.” But can blatant misinformation really be classified as a controversial opinion? And how should Spotify enforce its Platform Rules, especially considering that “The Joe Rogan Experience” is the platform’s most popular podcast, with a global reach? The company has continued to assert that it is a distributor of content, not a publisher, and therefore has no editing power. However, can a company really claim to be neutral and have no editorial power when it holds a $100 million exclusive contract with Rogan and profits generously off his work? Earlier this week, Spotify demonstrated that it can make editorial decisions when it quietly removed 113 episodes of Rogan’s podcast in which he used racial slurs, stating that his comments do not align with the company’s values.

Looking past the debate over the polarization of COVID-19 vaccines and the politicization of the pandemic, the issue of misinformation on Spotify is the latest proxy in a much larger, evolving debate over content control. We are forced to ask if and how streaming services in our media ecosystem should regulate content posted to their platforms. Although it may seem that anyone can post content to the web instantaneously, numerous social media companies actively moderate the content posted to their platforms through self-regulating algorithms. Despite popular discourse surrounding concerns over free speech, private companies like Spotify are within their legal rights to remove content published to their platforms, as the First Amendment only protects free speech from government censorship. Moreover, unlike traditional newspapers and publishers, online intermediaries cannot be held liable for content posted on their sites under Section 230 of the Communications Decency Act of 1996. This essentially means that platform companies wield the power to decide which values they seek to protect and whether or not to self-regulate online speech posted to their sites.

Platforms such as Twitter, Facebook, and now Spotify face the cumbersome challenge of striking a balance between facilitating a space that upholds the norms of free speech and open debate and moderating the millions of pieces of content posted daily. In this process, platform companies must define what they view as misinformation and articulate their company values in their community guidelines, a task that is hardly ideologically neutral but often kept opaque from the public.


This task is further complicated by platforms’ business model, which requires them to reach a broad section of society and deploy algorithms that maximize user engagement. In part, this is done by algorithmically filtering for content that users already agree and associate with, creating so-called echo chambers, in order to produce the environment users expect and enjoy participating in. In keeping with this expectation, media companies are designed to let users spread their own ideas and content very quickly, bypassing traditional gatekeepers, such as publishers, who have historically wielded the power to decide which discourse is allowed to be published. Thus, media companies must carefully balance keeping up as much content as possible against taking down harmful content that violates their own community guidelines. That we depend on private, profit-driven companies to decide which values should be promoted and what type of discourse counts as free speech is deeply troubling, and it grows more so as these companies become increasingly powerful.

Our media ecosystem is powerful and influential, as evidenced by the spread of hyper-partisan, right-wing disinformation during the 2016 U.S. presidential election and by the January 6th insurrection at the U.S. Capitol, which was largely organized by individuals in Facebook groups. As the internet and the use of social media continue to evolve, debates over content moderation are at the forefront of public consciousness, and our society is increasingly vulnerable: many people now get their news from opinion-dominated sources, like Rogan’s podcast, that circumvent the traditional journalistic process of filtering and fact-checking. The controversy over Spotify’s inability, or refusal, to moderate Rogan’s spread of misinformation will only grow as more artists pull their content from Spotify and demand that a transparent misinformation policy be applied to the platform. The question of Spotify’s role in moderating its content and enforcing its Platform Rules is only one proxy of this contemporary challenge; it will not be the last.
