Content moderation

Comment moderation on a GitHub discussion, where a user called Mallory has deleted several comments before closing the discussion and locking it

On Internet websites that invite users to post comments, content moderation is the process of detecting contributions that are irrelevant, obscene, illegal, harmful, or insulting, as opposed to contributions that are useful or informative; it is also frequently used for censorship or the suppression of opposing viewpoints. The purpose of content moderation is to remove problematic content, apply a warning label to it, or allow users to block and filter content themselves. [1]


Various types of Internet sites permit user-generated content such as comments, including Internet forums, blogs, and news sites powered by software such as phpBB, wiki engines, or PHP-Nuke. Depending on the site's content and intended audience, the site's administrators decide what kinds of user comments are appropriate, then delegate the responsibility of sifting through comments to lesser moderators. Most often, they attempt to eliminate trolling, spamming, or flaming, although this varies widely from site to site.

Major platforms use a combination of algorithmic tools, user reporting and human review. [1] Social media sites may also employ content moderators to manually inspect or remove material flagged as hate speech or otherwise objectionable. Other content issues include revenge porn, graphic content, child abuse material and propaganda. [1] Some websites must also make their content hospitable to advertisements. [1]
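
The interplay of these layers can be sketched in a few lines of Python (a minimal, hypothetical illustration; the classifier, thresholds, and queue below are invented stand-ins, not any platform's actual system): an automated score flags likely violations, user reports add a second signal, and anything flagged lands in a queue for human review.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical, illustrative pipeline: an automated score, user reports,
# and a human-review queue. All names and thresholds are invented here.

@dataclass
class Post:
    post_id: int
    text: str
    reports: int = 0  # number of user reports received so far

BANNED_TERMS = {"spamlink.example", "buy followers"}  # stand-in for a real classifier

def algorithmic_score(post: Post) -> float:
    """Crude stand-in for an ML classifier: fraction of banned terms present."""
    hits = sum(term in post.text.lower() for term in BANNED_TERMS)
    return hits / len(BANNED_TERMS)

def needs_human_review(post: Post, score_threshold: float = 0.5,
                       report_threshold: int = 3) -> bool:
    """Flag a post if either the automated score or the user reports are high."""
    return algorithmic_score(post) >= score_threshold or post.reports >= report_threshold

review_queue: List[Post] = []

def ingest(post: Post) -> None:
    """Route a new or newly reported post: queue it for human moderators if flagged."""
    if needs_human_review(post):
        review_queue.append(post)

ingest(Post(1, "Check out spamlink.example for cheap deals"))
ingest(Post(2, "Nice photo!", reports=5))
print([p.post_id for p in review_queue])  # -> [1, 2]
```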

In the United States, content moderation is governed by Section 230 of the Communications Decency Act, and several cases concerning the issue have reached the United States Supreme Court, such as Moody v. NetChoice, LLC.

Supervisor moderation

Also known as unilateral moderation, this kind of moderation system is often seen on Internet forums. A group of people are chosen by the site's administrators (usually on a long-term basis) to act as delegates, enforcing the community rules on their behalf. These moderators are given special privileges to delete or edit others' contributions and/or exclude people based on their e-mail address or IP address, and they generally attempt to remove negative contributions throughout the community. Though largely invisible, they act as a backbone that underpins the social web in a crucial but undervalued role. [2]
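
A minimal sketch of how such privileges might be modelled (the data structures and function names below are hypothetical, not taken from any particular forum software): moderators are enumerated explicitly, excluded e-mail addresses and IP addresses are checked before a contribution is accepted, and only moderators may delete others' posts.

```python
# Illustrative sketch of unilateral (supervisor) moderation; all names are hypothetical.

MODERATORS = {"alice", "bob"}            # users granted special privileges
BLOCKED_EMAILS = {"troll@example.com"}   # exclusions by e-mail address
BLOCKED_IPS = {"203.0.113.7"}            # exclusions by IP address

posts = {}  # post_id -> text

def can_post(email: str, ip: str) -> bool:
    """A contribution is accepted only if neither the e-mail nor the IP is excluded."""
    return email not in BLOCKED_EMAILS and ip not in BLOCKED_IPS

def delete_post(actor: str, post_id: int) -> bool:
    """Only moderators may delete other users' contributions."""
    if actor in MODERATORS and post_id in posts:
        del posts[post_id]
        return True
    return False

posts[1] = "An off-topic rant"
print(can_post("troll@example.com", "198.51.100.1"))  # False: excluded address
print(delete_post("carol", 1))                        # False: not a moderator
print(delete_post("alice", 1))                        # True: moderator removes the post
```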

Facebook

In 2017, Facebook increased the number of content moderators from 4,500 to 7,500 due to legal and other controversies. In Germany, Facebook was responsible for removing hate speech within 24 hours of it being posted. [3] As of 2021, according to Frances Haugen, the number of Facebook employees responsible for content moderation was much smaller. [4]

Administrators of new Pages on Facebook can manage a number of comment moderation settings for their Page. [5]

Twitter

Social media site Twitter has a suspension policy. Between August 2015 and December 2017, it suspended over 1.2 million accounts for terrorist content in an effort to reduce the number of followers and amount of content associated with the Islamic State. [6] Following the acquisition of Twitter by Elon Musk in October 2022, content rules have been weakened across the platform in an attempt to prioritize free speech. [7] However, the effects of this campaign have been called into question. [8] [9]

Commercial content moderation

Commercial Content Moderation is a term coined by Sarah T. Roberts to describe the practice of "monitoring and vetting user-generated content (UGC) for social media platforms of all types, in order to ensure that the content complies with legal and regulatory exigencies, site/community guidelines, user agreements, and that it falls within norms of taste and acceptability for that site and its cultural context." [10]

While at one time this work may have been done by volunteers within the online community, for commercial websites it is largely achieved by outsourcing the task to specialized companies, often in low-wage areas such as India and the Philippines. Outsourcing of content moderation jobs grew as a result of the social media boom: with the overwhelming growth of users and UGC, companies needed many more employees to moderate the content. In the late 1980s and early 1990s, tech companies had begun to outsource jobs to foreign countries with educated workforces willing to work for low wages. [11]

As of 2010, these employees worked by viewing, assessing and deleting disturbing content. [12] In 2014, Wired reported that they may suffer psychological damage. [13] [14] [15] [2] [16] In 2017, the Guardian reported that secondary trauma may arise, with symptoms similar to PTSD. [17] Some large companies such as Facebook offer psychological support [17] and increasingly rely on artificial intelligence to sort out the most graphic and inappropriate content, but critics claim that this is insufficient. [18] In 2019, NPR called it a job hazard. [19]

Facebook

Facebook has decided to create an oversight board that will decide what content remains and what content is removed. The idea was proposed in late 2018. This "Supreme Court" at Facebook is intended to replace ad hoc decision-making. [19]

Distributed moderation

User moderation

User moderation allows any user to moderate any other user's contributions. Billions of people are currently making decisions on what to share, forward or give visibility to on a daily basis. [20] On a large site with a sufficiently large active population, this usually works well, since relatively small numbers of troublemakers are screened out by the votes of the rest of the community.
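
A minimal sketch of this vote-based screening (the threshold and names are invented for illustration): each community vote adjusts an item's net score, and items that fall below the threshold are hidden from the default view.

```python
# Illustrative vote-based (distributed) moderation; threshold and names are hypothetical.

HIDE_BELOW = -3  # items whose net score drops below this are hidden by default
scores = {}      # item_id -> net score from community votes

def vote(item_id: int, up: bool) -> None:
    """Each user's vote moves the item's net score up or down by one."""
    scores[item_id] = scores.get(item_id, 0) + (1 if up else -1)

def visible_items() -> list:
    """Troublesome items are screened out once enough users have voted them down."""
    return [item for item, score in scores.items() if score >= HIDE_BELOW]

for _ in range(5):
    vote(101, up=False)  # five users downvote a spam item
vote(202, up=True)       # one user upvotes a helpful item

print(visible_items())   # -> [202]; item 101 has fallen below the visibility threshold
```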

User moderation can also take the form of reactive moderation, which depends on the users of a platform or site to report content that is inappropriate or breaches community standards. In this process, when users encounter an image or video they deem unfit, they can click a report button; the complaint is then filed and queued for moderators to review. [21]
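
The reporting flow described above can be sketched as a simple queue (a hypothetical structure, not any site's actual implementation): a user's report files a complaint, and moderators later work through the queue item by item.

```python
from collections import deque

# Illustrative reactive-moderation flow; the queue and decision step are hypothetical.

complaint_queue = deque()

def report(item_id: int, reporter: str, reason: str) -> None:
    """Clicking the report button files a complaint and queues it for moderators."""
    complaint_queue.append({"item": item_id, "reporter": reporter, "reason": reason})

def review_next() -> str:
    """A moderator takes the next complaint and decides whether the item stays or goes."""
    if not complaint_queue:
        return "queue empty"
    complaint = complaint_queue.popleft()
    # A real review would apply the site's community standards; here we only acknowledge it.
    return f"reviewed item {complaint['item']} (reason: {complaint['reason']})"

report(42, reporter="dana", reason="graphic content")
print(review_next())  # -> reviewed item 42 (reason: graphic content)
```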

Use of algorithms for content moderation

See also

Related Research Articles

Slashdot is a social news website that originally billed itself as "News for Nerds. Stuff that Matters". It features news stories on science, technology, and politics that are submitted and evaluated by site users and editors. Each story has a comments section where users can add online comments.

Internet forum: Online discussion site

An Internet forum, or message board, is an online discussion site where people can hold conversations in the form of posted messages. They differ from chat rooms in that messages are often longer than one line of text, and are at least temporarily archived. Also, depending on the access level of a user or the forum set-up, a posted message might need to be approved by a moderator before it becomes publicly visible.

Reddit: American social news and discussion site

Reddit is an American social news aggregation, content rating, and forum social network. Registered users submit content to the site such as links, text posts, images, and videos, which are then voted up or down by other members. Posts are organized by subject into user-created boards called "communities" or "subreddits". Submissions with more upvotes appear towards the top of their subreddit and, if they receive enough upvotes, ultimately on the site's front page. Reddit administrators moderate the communities. Moderation is also conducted by community-specific moderators, who are not Reddit employees. It is operated by Reddit, Inc., based in San Francisco.

The Center for Countering Digital Hate (CCDH), formerly Brixton Endeavors, is a British not-for-profit NGO company with offices in London and Washington, D.C. with the stated purpose of stopping the spread of online hate speech and disinformation. It campaigns to deplatform people that it believes promote hate or misinformation, and campaigns to restrict media organisations such as The Daily Wire from advertising. CCDH is a member of the Stop Hate For Profit coalition.

Twitter: American social networking service

X, commonly referred to by its former name Twitter, is a social media website based in the United States. With over 500 million users, it is one of the world's largest social networks and the fifth-most visited website in the world. Users can share text messages, images, and videos through short posts. X also includes direct messaging, video and audio calling, bookmarks, lists and communities, and Spaces, a social audio feature. Users can vote on context added by approved users using the Community Notes feature.

Facebook has been the subject of criticism and legal action since it was founded in 2004. Criticisms include the outsize influence Facebook has on the lives and health of its users and employees, as well as Facebook's influence on the way media, specifically news, is reported and distributed. Notable issues include Internet privacy, such as use of a widespread "like" button on third-party websites tracking users, possible indefinite records of user information, automatic facial recognition software, and its role in the workplace, including employer-employee account disclosure. The use of Facebook can have negative psychological and physiological effects that include feelings of sexual jealousy, stress, lack of attention, and social media addiction that in some cases is comparable to drug addiction.

Block (Internet): Restriction on accessing an online resource

On the Internet, a block or ban is a technical measure intended to restrict access to information or resources. Blocking and its inverse, unblocking, may be implemented by the owners of computers using software.

Odnoklassniki: Social networking service

Odnoklassniki, abbreviated as OK or OK.ru, is a social network service used mainly in Russia and former Soviet Republics. The site was launched on March 4, 2006 by Albert Popkov and is currently owned by VK.

Facebook is a social networking service that has been gradually replacing traditional media channels since 2010. Facebook has limited moderation of the content posted to its site. Because the site indiscriminately displays material publicly posted by users, Facebook can, in effect, threaten oppressive governments. Facebook can simultaneously propagate fake news, hate speech, and misinformation, thereby undermining the credibility of online platforms and social media.

Censorship of Twitter refers to Internet censorship by governments that block access to Twitter. Twitter censorship also includes governmental notice and take-down requests to Twitter, which it enforces in accordance with its Terms of Service when a government or authority submits a valid removal request indicating that specific content is illegal in their jurisdiction.

Instagram: Social media platform owned by Meta Platforms

Instagram is a photo and video sharing social networking service owned by Meta Platforms. It allows users to upload media that can be edited with filters, be organized by hashtags, and be associated with a location via geographical tagging. Posts can be shared publicly or with preapproved followers. Users can browse other users' content by tags and locations, view trending content, like photos, and follow other users to add their content to a personal feed. A Meta-operated image-centric social media platform, it is available on iOS, Android, Windows 10, and the web. Users can take photos and edit them using built-in filters and other tools, then share them on other social media platforms like Facebook. It supports 32 languages including English, Spanish, French, Korean, and Japanese.

Google+: Defunct social network by Google

Google+ was a social network that was owned and operated by Google until it ceased operations in 2019. The network was launched on June 28, 2011, in an attempt to challenge other social networks, linking other Google products like Google Drive, Blogger and YouTube. The service, Google's fourth foray into social networking, experienced strong growth in its initial years, although usage statistics varied, depending on how the service was defined. Three Google executives oversaw the service, which underwent substantial changes that led to a redesign in November 2015.

Shadow banning, also called stealth banning, hellbanning, ghost banning, and comment ghosting, is the practice of blocking or partially blocking a user or the user's content from some areas of an online community in such a way that the ban is not readily apparent to the user, regardless of whether the action is taken by an individual or an algorithm. For example, shadow-banned comments posted to a blog or media website would be visible to the sender, but not to other users accessing the site.

The Babylon Bee: Satirical website

The Babylon Bee is a conservative Christian news satire website that publishes satirical articles on topics including religion, politics, current events, and public figures. It has been referred to as a Christian or conservative version of The Onion.

Gab (social network): American alt-tech social media service

Gab is an American alt-tech microblogging and social networking service known for its far-right userbase. Widely described as a haven for neo-Nazis, racists, white supremacists, white nationalists, antisemites, the alt-right, supporters of Donald Trump, conservatives, right-libertarians, and believers in conspiracy theories such as QAnon, Gab has attracted users and groups who have been banned from other social media platforms and users seeking alternatives to mainstream social media platforms. Founded in 2016 and launched publicly in May 2017, Gab claims to promote free speech, individual liberty, the "free flow of information online", and Christian values. Researchers and journalists have characterized these assertions as an obfuscation of its extremist ecosystem. Antisemitism is prominent in the site's content and the company itself has engaged in antisemitic commentary. Gab CEO Andrew Torba has promoted the white genocide conspiracy theory. Gab is based in Pennsylvania.

Mastodon (social network): Self-hosted social network software

Mastodon is free and open-source software for running self-hosted social networking services. It has microblogging features similar to Twitter, which are offered by a large number of independently run nodes, known as instances or servers, each with its own code of conduct, terms of service, privacy policy, privacy options, and content moderation policies.

Deplatforming: Administrative or political action to deny access to a platform to express opinions

Deplatforming, also known as no-platforming, is a boycott on an individual or group by removing the platforms used to share their information or ideas. The term is commonly associated with social media.

Account verification is the process of verifying that a new or existing account is owned and operated by a specified real individual or organization. A number of websites, for example social media websites, offer account verification services. Verified accounts are often visually distinguished by check mark icons or badges next to the names of individuals or organizations.

The Twitter Files are a series of releases of select internal Twitter, Inc. documents published from December 2022 through March 2023 on Twitter. CEO Elon Musk gave the documents to journalists Matt Taibbi, Bari Weiss, Lee Fang, and authors Michael Shellenberger, David Zweig and Alex Berenson shortly after he acquired Twitter on October 27, 2022. Taibbi and Weiss coordinated the publication of the documents with Musk, releasing details of the files as a series of Twitter threads.

Elon Musk completed his acquisition of Twitter in October 2022; Musk acted as CEO of Twitter until June 2023 when he was succeeded by Linda Yaccarino. Twitter was then rebranded to X in July 2023. Initially during Musk's tenure, Twitter introduced a series of reforms and management changes; the company reinstated a number of previously banned accounts, reduced the workforce by approximately 80%, closed one of Twitter's three data centers, and largely eliminated the content moderation team, replacing it with the crowd-sourced fact-checking system Community Notes.

References

  1. Grygiel, Jennifer; Brown, Nina (June 2019). "Are social media companies motivated to be good corporate citizens? Examination of the connection between corporate social responsibility and social media safety". Telecommunications Policy. 43 (5): 2, 3. doi:10.1016/j.telpol.2018.12.003. S2CID 158295433. Retrieved 25 May 2022.
  2. "Invisible Data Janitors Mop Up Top Websites". Al Jazeera America. aljazeera.com.
  3. "Artificial intelligence will create new kinds of work". The Economist. Retrieved 2017-09-02.
  4. Jan Böhmermann, ZDF Magazin Royale. Facebook Whistleblowerin Frances Haugen im Talk über die Facebook Papers [Facebook whistleblower Frances Haugen in conversation about the Facebook Papers].
  5. "About moderation for new Pages on Facebook". Meta Business Help Centre. Retrieved 2023-08-21.
  6. Gartenstein-Ross, Daveed; Koduvayur, Varsha (26 May 2022). "Texas's New Social Media Law Will Create a Haven for Global Extremists". Foreign Policy. Retrieved 27 May 2022.
  7. "Elon Musk on X: "@essagar Suspending the Twitter account of a major news organization for publishing a truthful story was obviously incredibly inappropriate"". Twitter. Retrieved 2023-08-21.
  8. Burel, Grégoire; Alani, Harith; Farrell, Tracie (2022-05-12). "Elon Musk could roll back social media moderation – just as we're learning how it can stop misinformation". The Conversation. Retrieved 2023-08-21.
  9. Fung, Brian (June 2, 2023). "Twitter loses its top content moderation official at a key moment". CNN News.
  10. "Behind the Screen: Commercial Content Moderation (CCM)". Sarah T. Roberts | The Illusion of Volition. 2012-06-20. Retrieved 2017-02-03.
  11. Elliott, Vittoria; Parmar, Tekendra (22 July 2020). ""The darkness and despair of people will get to you"". Rest of World.
  12. Stone, Brad (July 18, 2010). "Concern for Those Who Screen the Web for Barbarity". The New York Times.
  13. Chen, Adrian (23 October 2014). "The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed". WIRED. Archived from the original on 2015-09-13.
  14. "The Internet's Invisible Sin-Eaters". The Awl. Archived from the original on 2015-09-08.
  15. "Professor uncovers the Internet's hidden labour force". Western News. March 19, 2014.
  16. "Should Facebook Block Offensive Videos Before They Post?". WIRED. 26 August 2015.
  17. Solon, Olivia (2017-05-04). "Facebook is hiring moderators. But is the job too gruesome to handle?". The Guardian. Retrieved 2018-09-13.
  18. Solon, Olivia (2017-05-25). "Underpaid and overburdened: the life of a Facebook moderator". The Guardian. Retrieved 2018-09-13.
  19. Gross, Terry. "For Facebook Content Moderators, Traumatizing Material Is A Job Hazard". NPR.org.
  20. Hartmann, Ivar A. (April 2020). "A new framework for online content moderation". Computer Law & Security Review. 36: 3. doi:10.1016/j.clsr.2019.105376. S2CID 209063940. Retrieved 25 May 2022.
  21. Grimes-Viort, Blaise (December 7, 2010). "6 types of content moderation you need to know about". Social Media Today.

Further reading