传媒教育网

Facebook to Remove Misinformation That Leads to Violence

2018-7-20 09:14 | Posted by: 刘海明 | Views: 173 | Comments: 0 | Author: Sheera Frenkel | Source: NYT

Abstract: Facing growing criticism, Facebook said it would begin removing misinformation that could lead to people being physically harmed, expanding its rules on the types of content it deletes.
A Rohingya Muslim woman at a displacement camp in Myanmar. Facebook has been accused of facilitating attacks on the Rohingya in the country by allowing anti-Muslim hate speech on its platform. Credit: Lauren Decicca/Getty Images

SAN FRANCISCO — Facebook, facing growing criticism for posts that have incited violence in some countries, said Wednesday that it would begin removing misinformation that could lead to people being physically harmed.

The policy expands Facebook’s rules about what type of false information it will remove, and is largely a response to episodes in Sri Lanka, Myanmar and India in which rumors that spread on Facebook led to real-world attacks on ethnic minorities.

“We have identified that there is a type of misinformation that is shared in certain countries that can incite underlying tensions and lead to physical harm offline,” said Tessa Lyons, a Facebook product manager. “We have a broader responsibility to not just reduce that type of content but remove it.”

Facebook has been roundly criticized over the way its platform has been used to spread hate speech and false information that prompted violence. The company has struggled to balance its belief in free speech with those concerns, particularly in countries where access to the internet is relatively new and there are limited mainstream news sources to counter social media rumors.



In Myanmar, Facebook has been accused by United Nations investigators and human rights groups of facilitating violence against Rohingya Muslims, a minority ethnic group, by allowing anti-Muslim hate speech and false news.


In Sri Lanka, riots broke out after false news pitted the country’s majority Buddhist community against Muslims. Near-identical social media rumors have also led to attacks in India and Mexico. In many cases, the rumors included no call for violence, but amplified underlying tensions.

The new rules apply to one of Facebook’s other big social media properties, Instagram, but not to WhatsApp, where false news has also circulated. In India, for example, false rumors spread through WhatsApp about child kidnappers have led to mob violence.


In an interview published Wednesday by the technology news site Recode, Mark Zuckerberg, Facebook’s chief executive, tried to explain how the company differentiates between offensive speech — the example he used was people who deny the Holocaust — and posts that promote false information that could lead to physical harm.

“I think that there’s a terrible situation where there’s underlying sectarian violence and intention,” Mr. Zuckerberg told Recode’s Kara Swisher, who will become an opinion contributor with The New York Times later this summer. “It is clearly the responsibility of all of the players who were involved there.”

The social media company already has rules under which direct threats of violence and hate speech are removed, but it has been hesitant to remove rumors that do not directly violate its content policies.

Under the new rules, Facebook said it would create partnerships with local civil society groups to identify misinformation for removal. The new rules are already being put in effect in Sri Lanka, and Ms. Lyons said the company hoped to soon introduce them in Myanmar, then expand elsewhere.

Mr. Zuckerberg’s example of Holocaust denial quickly created an online furor, and on Wednesday afternoon he clarified his comments in an email to Ms. Swisher. “I personally find Holocaust denial deeply offensive, and I absolutely didn’t intend to defend the intent of people who deny that,” he said.

He went on to outline Facebook’s current policies around misinformation. Posts that violate the company’s community standards, which ban hate speech, nudity and direct threats of violence, among other things, are immediately removed.

The company has started identifying posts that are categorized as false by independent fact checkers. Facebook will “downrank” those posts, effectively moving them down in each user’s News Feed so that they are not highly promoted across the platform.


The company has also started adding information boxes under demonstrably false news stories, suggesting other sources of information for people to read.

But expanding the new rules to the United States and other countries where objectionable speech is still legally protected could prove tricky, as long as the company uses free speech laws as the guiding principles for how it polices content. Facebook also faces pressure from conservative groups that argue the company is unfairly targeting users with a conservative viewpoint.

When asked in an interview how Facebook defined misinformation that could lead to harm and should be removed versus that material it would simply downrank because it was objectionable, Ms. Lyons said, “There is not always a really clear line.”

“All of this is challenging — that is why we are iterating,” she said. “That is why we are taking serious feedback.”

Correction: 

An earlier version of this article misstated the Facebook social media platforms that are subject to new rules about misinformation. The rules apply to Instagram, but not WhatsApp.

Follow Sheera Frenkel on Twitter: @sheeraf

 

