How Rumors Spread (Why Online Rumors Run Rampant)
The Truth Can Be Very Hard to Discern
If you get your news from social media, you are exposed to a daily dose of hoaxes, rumors, conspiracy theories and misleading news. When it's all mixed in with reliable information from honest sources, the truth can be very hard to discern.
In fact, my research team's analysis of data from Columbia University's Emergent rumor tracker suggests that this misinformation is just as likely to go viral as reliable information.
As a researcher on the spread of misinformation through social media, I know that limiting news fakers' ability to sell ads, as recently announced by Google and Facebook, is a step in the right direction. But it will not curb abuses driven by political motives.
Exploiting Social Media
About 10 years ago, my colleagues and I ran an experiment in which we learned that 72 percent of college students trusted links that appeared to originate from friends, even to the point of entering personal login information on phishing sites. This widespread vulnerability suggested another form of malicious manipulation: people might also believe misinformation they receive when clicking on a link from a social contact.
To explore that idea, I created a fake web page with random, computer-generated gossip news, things like "Celebrity X caught in bed with Celebrity Y!" Visitors to the site who searched for a name would trigger the script to automatically fabricate a story about the person. I included on the site a disclaimer saying it contained meaningless text and made-up "facts." I also placed ads on the page. At the end of the month, I got a check in the mail with earnings from the ads. That was my proof: fake news could make money by polluting the internet with falsehoods.
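The mechanics of such a page are simple. Here is a minimal sketch of the idea in Python using Flask; the original script was never published, so every template and name below is hypothetical:

```python
# Hypothetical sketch of a gossip-fabricating page, assuming Flask.
# It only illustrates the mechanism described above: any searched name
# triggers an automatically fabricated, template-filled fake story.
import random
from flask import Flask, request

app = Flask(__name__)

TEMPLATES = [
    "{name} caught in bed with {celebrity}!",
    "{name} secretly wired millions to {celebrity}!",
]
CELEBRITIES = ["Celebrity X", "Celebrity Y", "Celebrity Z"]
DISCLAIMER = "This site contains meaningless text and made-up 'facts.'"

@app.route("/search")
def fabricate():
    # Whatever name the visitor searches for becomes the story's subject.
    name = request.args.get("name", "Somebody Famous")
    headline = random.choice(TEMPLATES).format(
        name=name, celebrity=random.choice(CELEBRITIES))
    return f"<h1>{headline}</h1><p><small>{DISCLAIMER}</small></p>"

if __name__ == "__main__":
    app.run()
```

Every search against a page like this returns a freshly fabricated headline, while the ads alongside it earn money on each visit.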
Sadly, I was not the only one with this idea. Ten years later, we have an industry of fake news and digital misinformation. Clickbait sites manufacture hoaxes to make money from ads, while so-called hyperpartisan sites publish and spread rumors and conspiracy theories to influence public opinion.
This industry is bolstered by how easy it is to create social bots: fake accounts controlled by software that look like real people and therefore can have real influence. Research in my lab uncovered many examples of fake grassroots campaigns, also called political astroturfing.
In response, we developed the BotOrNot tool to detect social bots. It's not perfect, but it is accurate enough to uncover persuasion campaigns in the Brexit and antivax movements. Using BotOrNot, our colleagues found that a large portion of online chatter about the 2016 elections was generated by bots.
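BotOrNot's internal model is more elaborate than anything shown here, but the general approach of feature-based, supervised bot detection can be sketched as follows; the features, numbers and training data are all invented for illustration:

```python
# Rough illustration of feature-based bot detection (not BotOrNot's
# actual model or feature set). Accounts are scored by a classifier
# trained on labeled examples of bots and humans.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-account features:
# [tweets_per_day, followers/friends ratio, account_age_days, retweet_fraction]
X_train = [
    [450.0, 0.01,   30, 0.98],  # labeled bot: high volume, mostly retweets
    [  3.0, 1.20, 2100, 0.20],  # labeled human
    [200.0, 0.05,   60, 0.95],  # labeled bot
    [  8.0, 0.90, 1500, 0.35],  # labeled human
]
y_train = [1, 0, 1, 0]  # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

suspect = [[300.0, 0.02, 45, 0.97]]
print("bot probability:", clf.predict_proba(suspect)[0][1])
```

The design insight is that no single feature gives a bot away; a classifier trained on many labeled accounts combines weak signals, such as extreme tweet rates or retweet-heavy timelines, into a usable probability.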
Creating Information Bubbles
We humans are vulnerable to manipulation by digital misinformation thanks to a complex set of social, cognitive, economic and algorithmic biases. Some of these have evolved for good reasons: trusting signals from our social circles and rejecting information that contradicts our experience served us well when our species adapted to evade predators. But in today's world-shrinking online networks, a social network connection with a conspiracy theorist on the other side of the planet does not help inform my opinions.
Copying our friends and unfollowing those with different opinions give us echo chambers so polarized that researchers can tell with high accuracy whether you are liberal or conservative just by looking at your friends. The network structure is so dense that any misinformation spreads almost instantaneously within one group, and so segregated that it does not reach the other.
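That friends-predict-your-politics effect is easy to see in miniature. A toy majority vote over a user's contacts (not the actual method of those studies) already captures the idea:

```python
# Toy illustration: infer a user's leaning from the labels of their
# friends by majority vote. All names and labels are made up.
def infer_leaning(user, friends_of, known_leaning):
    votes = [known_leaning[f] for f in friends_of[user] if f in known_leaning]
    if not votes:
        return None
    return max(set(votes), key=votes.count)

friends_of = {"alice": ["bob", "carol", "dave"]}
known_leaning = {"bob": "liberal", "carol": "liberal", "dave": "conservative"}
print(infer_leaning("alice", friends_of, known_leaning))  # -> liberal
```

The denser and more homogeneous the friend network, the more reliable even this crude vote becomes.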
Inside our bubble, we are selectively exposed to information aligned with our beliefs. That is an ideal scenario to maximize engagement, but a detrimental one for developing healthy skepticism. Confirmation bias leads us to share a headline without even reading the article.
Our lab got a personal lesson in this when our own research project became the subject of a vicious misinformation campaign in the run-up to the 2014 U.S. midterm elections. When we investigated what was happening, we found fake news stories about our research being predominantly shared by Twitter users within one partisan echo chamber, a large and homogeneous community of politically active users. These people were quick to retweet and impervious to debunking information.
Viral Inevitability
Our research shows that, given the structure of our social networks and our limited attention, it is inevitable that some memes will go viral, irrespective of their quality. Even if individuals tend to share information of higher quality, the network as a whole is not effective at discriminating between reliable and fabricated information. This helps explain all the viral hoaxes we observe in the wild.
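One way to see why is a toy agent-based simulation in which memes compete for limited attention. The sketch below is a deliberate simplification, not our published model; all parameters are arbitrary:

```python
# Minimal sketch of memes competing for limited attention. Each agent
# keeps only a few memes in memory; new memes keep arriving, and agents
# reshare from their own short feed with a mild preference for quality.
import random

N_AGENTS, MEMORY, STEPS = 200, 5, 5000
random.seed(1)

quality = {}                      # meme id -> quality in [0, 1]
def new_meme():
    m = len(quality)
    quality[m] = random.random()
    return m

feeds = [[new_meme()] for _ in range(N_AGENTS)]

for _ in range(STEPS):
    agent = random.randrange(N_AGENTS)
    if random.random() < 0.1:     # sometimes a brand-new meme appears
        meme = new_meme()
    else:                         # otherwise reshare from the agent's feed,
        feed = feeds[agent]       # weakly favoring higher quality
        weights = [1 + quality[m] for m in feed]
        meme = random.choices(feed, weights=weights)[0]
    friend = random.randrange(N_AGENTS)   # push to a random contact's feed
    feeds[friend].insert(0, meme)
    del feeds[friend][MEMORY:]    # limited attention: short memory

counts = {}
for feed in feeds:
    for m in feed:
        counts[m] = counts.get(m, 0) + 1
top = sorted(counts, key=counts.get, reverse=True)[:5]
print([(m, round(quality[m], 2), counts[m]) for m in top])
```

Even though agents here slightly prefer higher-quality memes, the most widespread memes at the end are often mediocre: with short memories and a constant flood of new content, luck and timing dominate quality.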
The attention economy takes care of the rest: if we pay attention to a certain topic, more information on that topic will be produced. It's cheaper to fabricate information and pass it off as fact than it is to report the actual truth. And fabrication can be tailored to each group: conservatives read that the pope endorsed Trump, liberals read that he endorsed Clinton. He did neither.
Beholden to Algorithms
Since we cannot pay attention to all the posts in our feeds, algorithms determine what we see and what we don't. The algorithms used by social media platforms today are designed to prioritize engaging posts: ones we're likely to click on, react to and share. But a recent analysis found intentionally misleading pages got at least as much online sharing and reaction as real news.
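In its simplest form, engagement-first ranking is just a sort by a weighted engagement score. The weights and numbers below are invented, not any platform's actual algorithm, but they show how a misleading page can outrank real news on engagement alone:

```python
# Toy engagement-first feed ranking with hypothetical scoring weights.
posts = [
    {"id": "real-news",  "clicks": 120, "reactions": 40,  "shares": 15},
    {"id": "misleading", "clicks": 300, "reactions": 180, "shares": 90},
]

def engagement(post):
    # Shares and reactions weighted more heavily than bare clicks.
    return post["clicks"] + 2 * post["reactions"] + 3 * post["shares"]

feed = sorted(posts, key=engagement, reverse=True)
print([p["id"] for p in feed])  # the misleading page ranks first
```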
This algorithmic bias toward engagement over truth reinforces our social and cognitive biases. As a result, when we follow links shared on social media, we tend to visit a smaller, more homogeneous set of sources than when we conduct a search and visit the top results.
Existing research shows that being in an echo chamber can make people more gullible about accepting unverified rumors. But we need to know a lot more about how different people respond to a single hoax: some share it right away, others fact-check it first.
We are simulating a social network to study this competition between sharing and fact-checking. We are hoping to help untangle conflicting evidence about when fact-checking helps stop hoaxes from spreading and when it doesn't. Our preliminary results suggest that the more segregated the community of hoax believers, the longer the hoax survives. Again, it's not just about the hoax itself, but also about the network.
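A stripped-down version of such a simulation treats the hoax as a contagion: each person is susceptible, a believer or a fact-checker. The sketch below is a toy stand-in for our model, with made-up rates and a simple ring network; swapping the ring for two loosely connected communities would let one probe the segregation effect described above:

```python
# Toy believer/fact-checker contagion model (not our published one).
# States: "S" susceptible, "B" hoax believer, "F" fact-checker.
import random
random.seed(2)

N, STEPS = 500, 60
SPREAD, CHECK = 0.30, 0.10   # hypothetical belief / verification rates
state = ["S"] * N
state[0] = "B"               # one initial hoax believer

def neighbors(i):
    # A ring lattice stands in for a (here, unsegregated) social network.
    return [(i - 1) % N, (i + 1) % N]

for _ in range(STEPS):
    for i in random.sample(range(N), N):
        if state[i] == "S":
            # Exposure to a believing neighbor may spread the hoax.
            if any(state[j] == "B" for j in neighbors(i)) \
                    and random.random() < SPREAD:
                state[i] = "B"
        elif state[i] == "B" and random.random() < CHECK:
            state[i] = "F"   # the believer verifies and stops spreading

print("believers:", state.count("B"), "fact-checkers:", state.count("F"))
```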
Many people are trying to figure out what to do about all this. According to Mark Zuckerberg's latest announcement, Facebook teams are testing potential options. And a group of college students has proposed a way to simply label shared links as "verified" or not.
Some solutions remain out of reach, at least for the moment. For example, we can't yet teach artificial intelligence systems how to discern between truth and falsehood. But we can tell ranking algorithms to give higher priority to more reliable sources.
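Concretely, that could be as simple as multiplying a post's engagement score by a per-source reliability weight. The scores below are hypothetical, and estimating source reliability well is an open problem in itself:

```python
# Sketch of ranking that weights engagement by source reliability
# (hypothetical scores, not a deployed algorithm).
posts = [
    {"source": "reliable-outlet", "engagement": 400, "reliability": 0.9},
    {"source": "clickbait-site",  "engagement": 900, "reliability": 0.2},
]

def score(post):
    # High engagement from an unreliable source is heavily discounted.
    return post["engagement"] * post["reliability"]

for post in sorted(posts, key=score, reverse=True):
    print(post["source"], round(score(post), 1))
# reliable-outlet (360.0) now outranks clickbait-site (180.0)
```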
Studying the Spread of Fake News
We can make our fight against fake news more efficient if we better understand how bad information spreads. If, for example, bots are responsible for many of the falsehoods, we can focus attention on detecting them. If, alternatively, the problem is with echo chambers, perhaps we could design recommendation systems that don't exclude differing views.
To that end, our lab is building a platform called Hoaxy to track and visualize the spread of unverified claims and the corresponding fact-checking on social media. That will give us real-world data with which we can inform our simulated social networks. Then we can test possible approaches to fighting fake news.
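At its core, that kind of tracking boils down to classifying shared links by domain and tallying them over time. A toy version might look like this; the domain lists and posts are illustrative, not Hoaxy's actual pipeline:

```python
# Toy sketch of Hoaxy-style tracking: tally shares of claim links
# versus fact-checking links per hour (all data here is made up).
from collections import Counter
from urllib.parse import urlparse

CLAIM_DOMAINS = {"hoax-news.example"}
FACTCHECK_DOMAINS = {"snopes.com", "politifact.com"}

posts = [
    {"hour": 1, "url": "http://hoax-news.example/pope-endorses"},
    {"hour": 2, "url": "http://hoax-news.example/pope-endorses"},
    {"hour": 3, "url": "https://snopes.com/fact-check/pope-endorsement"},
]

timeline = Counter()
for post in posts:
    domain = urlparse(post["url"]).netloc
    if domain in CLAIM_DOMAINS:
        timeline[(post["hour"], "claim")] += 1
    elif domain in FACTCHECK_DOMAINS:
        timeline[(post["hour"], "fact-check")] += 1

print(dict(timeline))  # hourly counts, ready to plot as two curves
```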
Hoaxy may also be able to show people how easy it is for their opinions to be manipulated by online information, and even how likely some of us are to share falsehoods online. Hoaxy will join a suite of tools in our Observatory on Social Media, which allows anyone to see how memes spread on Twitter. Linking tools like these to human fact-checkers and social media platforms could make it easier to minimize duplication of effort and support each other.
It is imperative that we invest resources in the study of this phenomenon. We need all hands on deck: computer scientists, social scientists, economists, journalists and industry partners must work together to stand firm against the spread of misinformation.