传媒教育网 (Media Education Network)

Thread starter: admin

News Authenticity Case Library (新闻真实案例库)

261#
Posted by the thread starter on 2019-1-19 22:09:25
[Case]
Seeing is no longer believing | Foreign Affairs: photos can be faked, videos can be faked, and so can live streams

[The world shapes how you see] [How you see shapes the world]
Welcome to "Me and Our World" (我与我们的世界). From here, let us together survey the world's shifting winds, take the pulse of society, feel the lives and fortunes of individuals, and uncover nature's small mysteries.
Me and Our World is a world of striving as well as a world of thinking. Striving without thinking leads nowhere; thinking without striving is perilous. This world is as big as you make it, and as small.
You are welcome to browse this account's past articles via the account name above; every issue brings you close to a different world and a different kind of excitement.
This issue's preview: Unlocking a phone with your face, paying by face, face-recognition mini-games with animal stickers — all of this is now everyday life, which says one thing clearly: artificial intelligence (AI) has become very good at faces.

If recognition is the first thing AI does with faces, what is the second? Every sign points to one answer: swapping them. AI will not actually give anyone plastic surgery (at least not yet); what it can do is swap faces in video. Some readers may already have seen the short clip that once flooded social feeds.
The woman in that video (more precisely, the woman's face) is Gal Gadot, who played Wonder Woman. Of course she never appeared in any such embarrassing film; someone used deep learning to graft Gal Gadot's face onto the body of the original actress.
This traces back to December 2017, when a netizen calling himself DeepFakes used deep-learning techniques to swap the face of an adult-film actress for that of Wonder Woman star Gal Gadot. The technique caused a sensation at the time, and a recent foreign research report estimates that DeepFakes-style technology could influence, and even threaten, the next U.S. presidential election.

According to The Next Web, researchers believe that with artificial intelligence plus a large amount of pre-collected voice training data, convincingly fake audio recordings and videos can be produced; within about five years the technology is expected to mature to the point of fooling untrained audiences.
As AI matures, the public finds it ever harder to judge the authenticity of fake news and fake videos online. American actor Jordan Peele and the U.S. digital news site BuzzFeed, for example, used the then-popular AI face-swapping tool FakeApp to jointly produce a public-service-style video of Obama talking about fake news, realistic enough to be hard to tell from the real thing.
Video screenshot
According to The Verge, the clip used the AI face-swapping tool FakeApp, the same one used by the netizen Deepfakes, together with Adobe's visual-effects software After Effects; combining the two, they successfully replaced Jordan Peele's face with Obama's.
AI is blossoming worldwide, and the technology is being applied across industries. According to CNET, the company Naughty America plans to use AI to offer users customized face-swap videos, mainly in adult films, and this AI-driven face-swapping has set off a wave of excitement in the industry.
Reportedly, the AI technology Naughty America is using can do more than replace faces. Users can appear in a scene with their favorite actress or actor, place themselves in sexual scenarios impossible in real life, or even have a couple inserted into the same scene together.
For now the Naughty America team will work with outside AI researchers to produce such videos, though some foreign social networks have already banned face-swapped pornography, and the U.S. Defense Advanced Research Projects Agency (DARPA) is researching ways to detect deepfake videos.
The user going by the ID DeepFakes kept sharing AI-made celebrity face-swap clips on Reddit; just about every A-list Hollywood actress got the treatment. Feeling a little excited? In the future, if you want a clip of a particular star you can simply make it yourself, or even swap in your own face to play opposite them — every fantasy made real.
But what if it were the face of your relative or friend being swapped in? What if the suspect in crime-scene footage were given your face? What if, without your knowledge, criminals sent your family a kidnapping video with your face in it? Once we can no longer trust our own eyes, the weight of the resulting chaos and crime will far exceed that small, illicit "perk".
What makes face-swapping terrifying is that, for AI, it is easy.
Back to DeepFakes, the maker of those face-swapped celebrity clips. Not only is he an old hand, he is also an enthusiastic, do-gooder sharer: he released his results for free, patiently posted tutorials on making face-swap videos, and shared the deep-learning code he wrote along with the relevant datasets. His point, presumably: stop asking me for videos of so-and-so — go make your own.

Nor does he focus only on actresses: the image above is his tutorial on swapping Nicolas Cage's face onto Trump. By his own account, making a celebrity face-swap video is very simple. Take the Gal Gadot video: he first collects videos and images of Gal Gadot from every angle on Google, YouTube and various online galleries, building a source library large enough for a deep-learning face-replacement task.
He then uses machine-vision models available for TensorFlow to learn the facial features, contours, movements and mouth shapes of the actress in the original film, and has the model search the source library for the images and frames it judges suitable for each angle and expression, replacing them in the original video.
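The workflow described above is easier to grasp with a concrete shape in front of you. Below is a minimal sketch, in Python with TensorFlow/Keras, of the shared-encoder autoencoder idea that early face-swap tools of this kind were built on; it is an illustration rather than DeepFakes' actual code, and the array names (faces_a, faces_b), the 64×64 crop size and the layer sizes are assumptions made for this example.

```python
# Minimal sketch (not DeepFakes' real code) of the shared-encoder autoencoder
# idea behind early face-swap tools: two decoders share one encoder, so a
# frame of person B can be decoded "as" person A.
# Assumes faces_a / faces_b are aligned 64x64 RGB face crops scaled to [0, 1];
# these names and sizes are illustrative, not taken from the source.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

def make_encoder():
    inp = layers.Input(shape=(64, 64, 3))
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    z = layers.Dense(256, activation="relu")(x)
    return Model(inp, z, name="shared_encoder")

def make_decoder(name):
    z = layers.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(z)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(z, out, name=name)

encoder = make_encoder()
decoder_a, decoder_b = make_decoder("decoder_a"), make_decoder("decoder_b")

inp = layers.Input(shape=(64, 64, 3))
auto_a = Model(inp, decoder_a(encoder(inp)))   # reconstructs person A
auto_b = Model(inp, decoder_b(encoder(inp)))   # reconstructs person B
auto_a.compile(optimizer="adam", loss="mae")
auto_b.compile(optimizer="adam", loss="mae")

# Toy placeholder data; in practice these are thousands of aligned face crops.
faces_a = np.random.rand(32, 64, 64, 3).astype("float32")
faces_b = np.random.rand(32, 64, 64, 3).astype("float32")
for _ in range(5):                       # real training runs for many epochs
    auto_a.fit(faces_a, faces_a, verbose=0)
    auto_b.fit(faces_b, faces_b, verbose=0)

# The "swap": encode a frame of person B, decode it with person A's decoder.
swapped = decoder_a.predict(encoder.predict(faces_b[:1]))
```

The design point is that one encoder learns a face representation shared by both people, so decoding person B's encoding with person A's decoder yields A's face carrying B's pose and expression; pasting that patch back into the original frames gives the swapped video.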
Admittedly, his videos still show flaws in many details and are not fully natural. But at a glance they pass muster, and the results keep improving. The real problem hiding here is that swapping faces in video with open-source AI frameworks is not too complicated or too cutting-edge — it is far too simple and far too easy!
There is almost no technical barrier: with a working knowledge of TensorFlow's basic features and a graphics card that is not too weak, you can turn one out in a matter of hours. Even someone with no programming background can follow the tutorial step by step, gather enough material, and produce a face-swap video.
Imagine someone who holds a grudge against you: all they need are your photos and selfies to splice you into any criminal or sordid video at will, then spread it all over your social circles. How chilling is that? It is as if guns could be bought and sold freely and cheaply, with no vetting and no oversight.
As the underlying machine-vision tooling matures, video face-swapping is bound to keep spreading on three levels:
1. Near-zero barrier to use. The datasets, source code and frameworks for face-swapping can already be found with a little effort, and as the technology matures this trend will only intensify.
2. Tool-ification. Because the technique is not complicated, it is very likely to be packaged as a tool: a buyer simply supplies a video and images of the person to be swapped in, and a face-swap video is generated automatically — a truly zero-barrier product.
3. Ever greater believability. Some AI practitioners note that the DeepFakes videos went through only an initial learning-and-replacement pass, with no retouching or fine detail work, yet already reached a high degree of polish. Add refinement by generative adversarial networks, and videos that are genuinely hard to tell from the real thing seem within reach.
In short, once we learned that photos could be Photoshopped, video stopped being trustworthy too. And not only video.
A storm is coming: the next stop is live streaming plus face-swapping
Early last year, a team at the University of Erlangen-Nuremberg in Germany released an application, the now-famous Face2Face. What it can do is track a face through a camera so that the person in a video speaks along with you.
Thanks to its precise capture and real-time performance, Face2Face caused an uproar from the day it appeared. Under its demo video, countless commenters worried that the technology would abet online fraud, kidnapping and extortion — how terrifying if the person on the other end of a video call were not the person you know.
For now Face2Face remains closed; users can only try out the characters it provides. But after more than a year of development, face capture and replacement in live streams has advanced considerably. We can already see backgrounds and props replaced in real time on streaming platforms, and AI face replacement during a live stream is close at hand.
In parallel, AI voiceprint recognition and speech synthesis are advancing just as fast. Adobe, for example, has released new voice-synthesis technology over the past two years. For an ordinary person, using AI to pull off the kind of voice swap Detective Conan manages with his bow-tie gadget is no longer difficult.
With AI, face-swapping and voice-swapping in live streams are leaping forward together. What will that bring? A two-headed streamer? Trump joining your stream from the Oval Office? A top young idol kneeling to sing "Conquer" for you on camera? No problem, all of it. Pleased? You and the platform might be; the idol certainly is not.
Look at it from another angle: what if the same technology were used in video calls? Suppose you take a video call from a relative or friend who pries into your private affairs or asks to borrow money, and only later discover it was a stranger's painstaking forgery. If one person can impersonate another completely, will anyone still be pleased?
To open your phone or computer and find that nothing on it is real — that is enough to drive anyone to despair.
AI face-swapping is not hard, and given its many use cases and its enormous entertainment value, it will be difficult to hold back. What should really give us a headache are the legal questions and ethical traps buried inside it.
It is a fairly safe bet that many streaming and video platforms, at home and abroad, are developing live face-swapping, and some solutions are already quite mature. Imagine a popular goddess or heartthrob, face swapped on, streaming all night and saying whatever panders to viewers' curiosity — wouldn't the gifts pile up until they crashed the platform, even if users knew full well it was fake?
Mainstream platforms probably would not dare, and would use such technology with great restraint. But what if a third-party plug-in could do it? Or an unregulated underground or semi-underground streaming platform? Profit and curiosity can drive people to do almost anything; once the technical threshold falls, the flood of legal problems could burst the dam.

The ethical trap hidden here is that portrait rights could become more complicated than ever. Celebrity or ordinary person, nobody wants someone else to go live "wearing" their face.
But how do you prove that the face being worn is yours? How, for that matter, do you prove that you are you? As commonly understood, portrait rights cover images and video actually taken of you. Does an AI model built from your facial data still fall within your portrait rights?
Harder still, you have no way to prove that an AI-built likeness model has any direct connection to you. Deep-learning training happens on an invisible back end; the maker can simply claim the face was imagined, or was built from someone who merely resembles you. And if the model's face has one more mole than yours, does that mean it is no longer you?
There are plenty of thornier ethical cases. Does a person hold portrait rights over a deceased relative? If one family member wants to use AI to recreate a dead relative and hold video calls with them, while another insists this is illegal, who decides?
And these are only the basic ethical and legal conflicts. Beyond them lie all the illegal schemes AI face-swapping enables — fraud, extortion, framing and so on. In short, AI face-swapping today can be summed up in three sentences: first, it is certain to take off; second, things are certain to get messy; third, how to regulate it, nobody really knows.
Oh, and one last thing — how to keep others from making an AI face-swap video of you: don't post too many selfies.
  
Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics
By Robert Chesney and Danielle Citron
Editor's note: the translated passages are for reference only. This issue's shared article is taken from the Foreign Affairs website; it is a bit long and a bit difficult, so we gathered a good deal of related material and compiled the longer introduction above. For more quality articles from this account, see "Past Highlights" at the end.
A picture may be worth a thousand words, but there is nothing that persuades quite like an audio or video recording of an event. At a time when partisans can barely agree on facts, such persuasiveness might seem as if it could bring a welcome clarity. Audio and video recordings allow people to become firsthand witnesses of an event, sparing them the need to decide whether to trust someone else’s account of it. And thanks to smartphones, which make it easy to capture audio and video content, and social media platforms, which allow that content to be shared and consumed, people today can rely on their own eyes and ears to an unprecedented degree.
Compared with text, photographs are more immediate; audio and video are more immediate still. In today's information age, the volume of audio and video that people take in with their own eyes and ears is unprecedented.
Therein lies a great danger. Imagine a video depicting the Israeli prime minister in private conversation with a colleague, seemingly revealing a plan to carry out a series of political assassinations in Tehran. Or an audio clip of Iranian officials planning a covert operation to kill Sunni leaders in a particular province of Iraq. Or a video showing an American general in Afghanistan burning a Koran. In a world already primed for violence, such recordings would have a powerful potential for incitement. Now imagine that these recordings could be faked using tools available to almost anyone with a laptop and access to the Internet—and that the resulting fakes are so convincing that they are impossible to distinguish from the real thing.
Plenty of audio and video can be fake.
Advances in digital technology could soon make this nightmare a reality. Thanks to the rise of “deepfakes”—highly realistic and difficult-to-detect digital manipulations of audio or video—it is becoming easier than ever to portray someone saying or doing something he or she never said or did. Worse, the means to create deepfakes are likely to proliferate quickly, producing an ever-widening circle of actors capable of deploying them for political purposes. Disinformation is an ancient art, of course, and one with a renewed relevance today. But as deepfake technology develops and spreads, the current disinformation wars may soon look like the propaganda equivalent of the era of swords and shields.
Technological progress has also brought a flood of disinformation.
DAWN OF THE DEEPFAKES
Deepfakes are the product of recent advances in a form of artificial intelligence known as “deep learning,” in which sets of algorithms called “neural networks” learn to infer rules and replicate patterns by sifting through large data sets. (Google, for instance, has used this technique to develop powerful image-classification algorithms for its search engine.) Deepfakes emerge from a specific type of deep learning in which pairs of algorithms are pitted against each other in “generative adversarial networks,” or GANS. In a GAN, one algorithm, the “generator,” creates content modeled on source data (for instance, making artificial images of cats from a database of real cat pictures), while a second algorithm, the “discriminator,” tries to spot the artificial content (pick out the fake cat images). Since each algorithm is constantly training against the other, such pairings can lead to rapid improvement, allowing GANS to produce highly realistic yet fake audio and video content.
With generative adversarial networks, one can produce audio and video that are extremely realistic yet entirely fake.
Editor's note: A generative adversarial network (GAN) is a deep-learning model, and one of the most promising recent approaches to unsupervised learning over complex distributions. The framework contains (at least) two modules, a generative model and a discriminative model, which produce remarkably good output by learning through mutual competition. The approach was proposed by Ian Goodfellow and colleagues in 2014. The generator takes random samples from a latent space as input and tries to make its output imitate the real samples in the training set as closely as possible; the discriminator takes either real samples or the generator's output as input and tries to tell the generator's output apart from the real samples, while the generator tries to fool the discriminator. The two networks play against each other and continually adjust their parameters, the end goal being that the discriminator can no longer tell whether the generator's output is real. GANs are commonly used to generate convincingly realistic images, and have also been used to generate video, 3-D object models and more.
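To make the editor's note concrete, here is a minimal sketch of the adversarial training loop it describes, again in TensorFlow/Keras. The 28×28 image shape, the 64-dimensional latent space and the layer sizes are illustrative assumptions, not details taken from any system mentioned in the article.

```python
# Minimal GAN training loop: a generator tries to fool a discriminator,
# the discriminator tries to separate real images from generated ones.
import tensorflow as tf
from tensorflow.keras import layers, Sequential

BATCH, LATENT = 16, 64

generator = Sequential([
    layers.Dense(128, activation="relu", input_shape=(LATENT,)),
    layers.Dense(28 * 28, activation="sigmoid"),
    layers.Reshape((28, 28, 1)),
])
discriminator = Sequential([
    layers.Flatten(input_shape=(28, 28, 1)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1),                       # logit: real vs. generated
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    noise = tf.random.normal([BATCH, LATENT])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        # Discriminator: label real as 1, generated as 0.
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)
        # Generator: push the discriminator toward outputting 1 on fakes.
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss

real_batch = tf.random.uniform([BATCH, 28, 28, 1])  # stand-in for real images
print(train_step(real_batch))
```

Each call to train_step updates the discriminator to separate real from generated images and the generator to fool it, which is exactly the mutual game the note describes.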
This technology has the potential to proliferate widely. Commercial and even free deepfake services have already appeared in the open market, and versions with alarmingly few safeguards are likely to emerge on the black market. The spread of these services will lower the barriers to entry, meaning that soon, the only practical constraint on one’s ability to produce a deepfake will be access to training materials—that is, audio and video of the person to be modeled—to feed the GAN. The capacity to create professional-grade forgeries will come within reach of nearly anyone with sufficient interest and the knowledge of where to go for help.
The technology is easy to obtain; the only practical constraint is access to the "training material".
Deepfakes have a number of worthy applications. Modified audio or video of a historical figure, for example, could be created for the purpose of educating children. One company even claims that it can use the technology to restore speech to individuals who have lost their voice to disease. But deepfakes can and will be used for darker purposes, as well. Users have already employed deepfake technology to insert people’s faces into pornography without their consent or knowledge, and the growing ease of making fake audio and video content will create ample opportunities for blackmail, intimidation, and sabotage. The most frightening applications of deepfake technology, however, may well be in the realms of politics and international affairs. There, deepfakes may be used to create unusually effective lies capable of inciting violence, discrediting leaders and institutions, or even tipping elections.
The technology has great application value in many fields. But technology is a double-edged sword: it can be used for good and it can be used for ill.
Deepfakes have the potential to be especially destructive because they are arriving at a time when it already is becoming harder to separate fact from fiction. For much of the twentieth century, magazines, newspapers, and television broadcasters managed the flow of information to the public. Journalists established rigorous professional standards to control the quality of news, and the relatively small number of mass media outlets meant that only a limited number of individuals and organizations could distribute information widely. Over the last decade, however, more and more people have begun to get their information from social media platforms, such as Facebook and Twitter, which depend on a vast array of users to generate relatively unfiltered content. Users tend to curate their experiences so that they mostly encounter perspectives they already agree with (a tendency heightened by the platforms’ algorithms), turning their social media feeds into echo chambers. These platforms are also susceptible to so-called information cascades, whereby people pass along information shared by others without bothering to check if it is true, making it appear more credible in the process. The end result is that falsehoods can spread faster than ever before.
These dynamics will make social media fertile ground for circulating deepfakes, with potentially explosive implications for politics. Russia’s attempt to influence the 2016 U.S. presidential election—spreading divisive and politically inflammatory messages on Facebook and Twitter—already demonstrated how easily disinformation can be injected into the social media bloodstream. The deepfakes of tomorrow will be more vivid and realistic and thus more shareable than the fake news of 2016. And because people are especially prone to sharing negative and novel information, the more salacious the deepfakes, the better.
DEMOCRATIZING FRAUD
The use of fraud, forgery, and other forms of deception to influence politics is nothing new, of course. When the USS Maine exploded in Havana Harbor in 1898, American tabloids used misleading accounts of the incident to incite the public toward war with Spain. The anti-Semitic tract Protocols of the Elders of Zion, which described a fictional Jewish conspiracy, circulated widely during the first half of the twentieth century. More recently, technologies such as Photoshop have made doctoring images as easy as forging text. What makes deepfakes unprecedented is their combination of quality, applicability to persuasive formats such as audio and video, and resistance to detection. And as deepfake technology spreads, an ever-increasing number of actors will be able to convincingly manipulate audio and video content in a way that once was restricted to Hollywood studios or the most well-funded intelligence agencies.
Deepfakes will be particularly useful to nonstate actors, such as insurgent groups and terrorist organizations, which have historically lacked the resources to make and disseminate fraudulent yet credible audio or video content. These groups will be able to depict their adversaries—including government officials—spouting inflammatory words or engaging in provocative actions, with the specific content carefully chosen to maximize the galvanizing impact on their target audiences. An affiliate of the Islamic State (or ISIS), for instance, could create a video depicting a U.S. soldier shooting civilians or discussing a plan to bomb a mosque, thereby aiding the terrorist group’s recruitment. Such videos will be especially difficult to debunk in cases where the target audience already distrusts the person shown in the deepfake. States can and no doubt will make parallel use of deepfakes to undermine their nonstate opponents.
Deepfakes will also exacerbate the disinformation wars that increasingly disrupt domestic politics in the United States and elsewhere. In 2016, Russia’s state-sponsored disinformation operations were remarkably successful in deepening existing social cleavages in the United States. To cite just one example, fake Russian accounts on social media claiming to be affiliated with the Black Lives Matter movement shared inflammatory content purposely designed to stoke racial tensions. Next time, instead of tweets and Facebook posts, such disinformation could come in the form of a fake video of a white police officer shouting racial slurs or a Black Lives Matter activist calling for violence.
Perhaps the most acute threat associated with deepfakes is the possibility that a well-timed forgery could tip an election. In May 2017, Moscow attempted something along these lines. On the eve of the French election, Russian hackers tried to undermine the presidential campaign of Emmanuel Macron by releasing a cache of stolen documents, many of them doctored. That effort failed for a number of reasons, including the relatively boring nature of the documents and the effects of a French media law that prohibits election coverage in the 44 hours immediately before a vote. But in most countries, most of the time, there is no media blackout, and the nature of deepfakes means that damaging content can be guaranteed to be salacious or worse. A convincing video in which Macron appeared to admit to corruption, released on social media only 24 hours before the election, could have spread like wildfire and proved impossible to debunk in time.
Deepfakes may also erode democracy in other, less direct ways. The problem is not just that deepfakes can be used to stoke social and ideological divisions. They can create a “liar’s dividend”: as people become more aware of the existence of deepfakes, public figures caught in genuine recordings of misbehavior will find it easier to cast doubt on the evidence against them. (If deepfakes were prevalent during the 2016 U.S. presidential election, imagine how much easier it would have been for Donald Trump to have disputed the authenticity of the infamous audio tape in which he brags about groping women.) More broadly, as the public becomes sensitized to the threat of deepfakes, it may become less inclined to trust news in general. And journalists, for their part, may become more wary about relying on, let alone publishing, audio or video of fast-breaking events for fear that the evidence will turn out to have been faked.
DEEP FIX
There is no silver bullet for countering deepfakes. There are several legal and technological approaches—some already existing, others likely to emerge—that can help mitigate the threat. But none will overcome the problem altogether. Instead of full solutions, the rise of deepfakes calls for resilience.
Three technological approaches deserve special attention. The first relates to forensic technology, or the detection of forgeries through technical means. Just as researchers are putting a great deal of time and effort into creating credible fakes, so, too, are they developing methods of enhanced detection. In June 2018, computer scientists at Dartmouth and the University at Albany, SUNY, announced that they had created a program that detects deepfakes by looking for abnormal patterns of eyelid movement when the subject of a video blinks. In the deepfakes arms race, however, such advances serve only to inform the next wave of innovation. In the future, GANS will be fed training videos that include examples of normal blinking. And even if extremely capable detection algorithms emerge, the speed with which deepfakes can circulate on social media will make debunking them an uphill battle. By the time the forensic alarm bell rings, the damage may already be done.
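The blink-based forensic idea is simple enough to illustrate with a toy sketch. The code below is not the Dartmouth/Albany program; it only shows the general logic under stated assumptions: some face-landmark detector (not shown) has already produced a per-frame eye-openness value, and clips whose blink rate is implausibly low get flagged for review.

```python
# Toy illustration of blink-rate screening for suspected deepfakes.
import numpy as np

def blink_rate(eye_openness, fps, closed_thresh=0.2):
    """eye_openness: 1-D array of per-frame eye-aspect-ratio-like values."""
    closed = eye_openness < closed_thresh
    # A blink = a transition from open to closed.
    blinks = np.count_nonzero(closed[1:] & ~closed[:-1])
    minutes = len(eye_openness) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

def looks_suspicious(eye_openness, fps, min_blinks_per_minute=5.0):
    # People normally blink well over ten times a minute; early face-swap
    # models, trained mostly on open-eyed photos, rarely reproduced blinking.
    return blink_rate(eye_openness, fps) < min_blinks_per_minute

# Example: a 30-second clip at 30 fps in which the eyes never close.
openness = np.full(900, 0.35)
print(looks_suspicious(openness, fps=30))  # True -> flag for closer review
```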
A second technological remedy involves authenticating content before it ever spreads—an approach sometimes referred to as a “digital provenance” solution. Companies such as Truepic are developing ways to digitally watermark audio, photo, and video content at the moment of its creation, using meta data that can be logged immutably on a distributed ledger, or blockchain. In other words, one could effectively stamp content with a record of authenticity that could be used later as a reference to compare to suspected fakes.
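A rough sketch may help make the provenance idea concrete. This is not Truepic's system: it simply hashes content at capture time, appends the fingerprint to a hash-chained log (standing in for a distributed ledger), and later checks whether a suspect file still matches a registered original.

```python
# Simplified "digital provenance" sketch: hash content at capture time,
# append the fingerprint to an append-only, hash-chained log, and later
# verify whether a suspect file matches a registered original.
import hashlib, json, time

ledger = []  # each entry is chained to the previous one via prev_hash

def register(content: bytes) -> dict:
    entry = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "timestamp": time.time(),
        "prev_hash": ledger[-1]["entry_hash"] if ledger else None,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)
    return entry

def is_registered(content: bytes) -> bool:
    digest = hashlib.sha256(content).hexdigest()
    return any(e["content_sha256"] == digest for e in ledger)

original = b"raw video bytes captured by the camera"
register(original)
print(is_registered(original))              # True
print(is_registered(original + b"edited"))  # False: content was altered
```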
In theory, digital provenance solutions are an ideal fix. In practice, they face two big obstacles. First, they would need to be ubiquitously deployed in the vast array of devices that capture content, including laptops and smartphones. Second, their use would need to be made a precondition for uploading content to the most popular digital platforms, such as Facebook, Twitter, and YouTube. Neither condition is likely to be met. Device makers, absent some legal or regulatory obligation, will not adopt digital authentication until they know it is affordable, in demand, and unlikely to interfere with the performance of their products. And few social media platforms will want to block people from uploading unauthenticated content, especially when the first one to do so will risk losing market share to less rigorous competitors.
A third, more speculative technological approach involves what has been called “authenticated alibi services,” which might soon begin emerging from the private sector. Consider that deepfakes are especially dangerous to high-profile individuals, such as politicians and celebrities, with valuable but fragile reputations. To protect themselves against deepfakes, some of these individuals may choose to engage in enhanced forms of “lifelogging”—the practice of recording nearly every aspect of one’s life—in order to prove where they were and what they were saying or doing at any given time. Companies might begin offering bundles of alibi services, including wearables to make lifelogging convenient, storage to cope with the vast amount of resulting data, and credible authentication of those data. These bundles could even include partnerships with major news and social media platforms, which would enable rapid confirmation or debunking of content.
Such logging would be deeply invasive, and many people would want nothing to do with it. But in addition to the high-profile individuals who choose to adopt lifelogging to protect themselves, some employers might begin insisting on it for certain categories of employees, much as police departments increasingly require officers to use body cameras. And even if only a relatively small number of people took up intensive lifelogging, they would produce vast repositories of data in which the rest of us would find ourselves inadvertently caught, creating a massive peer-to-peer surveillance network for constantly recording our activities.
LAYING DOWN THE LAW
If these technological fixes have limited upsides, what about legal remedies? Depending on the circumstances, making or sharing a deepfake could constitute defamation, fraud, or misappropriation of a person’s likeness, among other civil and criminal violations. In theory, one could close any remaining gaps by criminalizing (or attaching civil liability to) specific acts—for instance, creating a deepfake of a real person with the intent to deceive a viewer or listener and with the expectation that this deception would cause some specific kind of harm. But it could be hard to make these claims or charges stick in practice. To begin with, it will likely prove very difficult to attribute the creation of a deepfake to a particular person or group. And even if perpetrators are identified, they may be beyond a court’s reach, as in the case of foreign individuals or governments.
Another legal solution could involve incentivizing social media platforms to do more to identify and remove deepfakes or fraudulent content more generally. Under current U.S. law, the companies that own these platforms are largely immune from liability for the content they host, thanks to Section 230 of the Communications Decency Act of 1996. Congress could modify this immunity, perhaps by amending Section 230 to make companies liable for harmful and fraudulent information distributed through their platforms unless they have made reasonable efforts to detect and remove it. Other countries have used a similar approach for a different problem: in 2017, for instance, Germany passed a law imposing stiff fines on social media companies that failed to remove racist or threatening content within 24 hours of it being reported.
Yet this approach would bring challenges of its own. Most notably, it could lead to excessive censorship. Companies anxious to avoid legal liability would likely err on the side of policing content too aggressively, and users themselves might begin to self-censor in order to avoid the risk of having their content suppressed. It is far from obvious that the notional benefits of improved fraud protection would justify these costs to free expression. Such a system would also run the risk of insulating incumbent platforms, which have the resources to police content and pay for legal battles, against competition from smaller firms.
LIVING WITH LIES
But although deepfakes are dangerous, they will not necessarily be disastrous. Detection will improve, prosecutors and plaintiffs will occasionally win legal victories against the creators of harmful fakes, and the major social media platforms will gradually get better at flagging and removing fraudulent content. And digital provenance solutions could, if widely adopted, provide a more durable fix at some point in the future.
In the meantime, democratic societies will have to learn resilience. On the one hand, this will mean accepting that audio and video content cannot be taken at face value; on the other, it will mean fighting the descent into a post-truth world, in which citizens retreat to their private information bubbles and regard as fact only that which flatters their own beliefs. In short, democracies will have to accept an uncomfortable truth: in order to survive the threat of deepfakes, they are going to have to learn how to live with lies.
Source: WeChat official account "Me and Our World" (我与我们的世界)
Editor: 马晓晴


263#
Posted by the thread starter on 2019-1-26 20:33:09
[Case]



Editor: 何林

264#
Posted by the thread starter on 2019-1-28 22:23:08
A plea to Xinhuanet: next time, please spare a thought for the feelings of the research community. Thank you!
Original by 啦啦啦 | 弗雷赛斯
Xinhuanet is the comprehensive news and information portal run by Xinhua, the state news agency — in a word, absolutely official media!

As an editor with a biology background, yesterday, during my daily routine of keeping up with national affairs, I came across a photo news item published by Xinhuanet: "Haimen, Jiangsu: accelerating the construction of high-standard farmland."

At first glance, good news: China is after all a major agricultural country with a huge population, so grain quality and yield matter enormously.

Then I saw the following photo:


The caption reads: "On January 25, a technician at the Jinsheng Agricultural Eco-Park in Haimen, Jiangsu Province, performs cell-division testing on wheat. Xinhua News Agency (photo by 顾华夏)."

Very advanced — cell mitosis, down to the molecular level already; basic research will surely power industrial upgrading, and the young technician at the bench is handsome too!

But, but......
why is THAT what is sitting on the specimen stage????!!!!!!!!!


This is obviously an inverted microscope!!

To observe mitosis, shouldn't what sits on the stage be a culture dish......

Shocked as I was, reason told me this could not be: Xinhuanet is, after all, official media — how could it be so careless? So I cautiously asked several teachers, in particular professors of agronomy, whether this might be some observation method peculiar to agronomy. Their answers left me disappointed... deeply disappointed.

To the leaders of Xinhuanet:

You are an authoritative medium, representing the nation's consciousness and the nation's will....

Granted, the general public will all see it — and if they get misled, well, what can be done (it doesn't really matter).

But many of the nation's young flowers with an interest in science will also see it,

and finally, research workers all across China will see it too.

Next time, please spare just a tiny bit of thought for our feelings. Thank you.

I love my motherland!!!!!





Reader comments:

  • 45

    miaomiao3225 (pinned)
    World-leading technology --- "in-situ observation of plant cell division"!

  • 374

    逯斐
    Have to back you on this one — nothing wrong here.

  • 235

    这里的夜晚静悄悄🔥
    Even I, a grad student not in biology, laughed... surely any science-track student who finished high school wouldn't do it like this.

  • 153

    Winjor
    You the editor aren't up to standard either: you observe cell division with a culture dish? No staining? No slide? The point isn't the inverted microscope — and this is a plant anyway.

  • 132

    HoAg
    Then let me correct you too: even as an honorific there is only 您 and 你们; 您们 is not a valid usage.

    54

    Author
    OK, thank you.


  • 118

    朱刘·新桥
    Observing cell division is still the cellular level — how did that become the "molecular level"...

  • 104

    啦啦啦
    Er... the high-school-level commenter above, please stop showing off. With an inverted microscope you view live cells directly in the culture dish; it's not great to mislead people about something you don't understand.

    21

    Author
    Are you doing this on purpose, 啦啦啦 😁


  • 94

    废柴
    Maybe it's a new type of microscope, showcasing our country's formidable research strength (jk

  • 83

    毛逸伦
    Is he looking at chives???

  • 71

    令狐伯通
    Same flavor as that earlier photo of rainbow trout flesh being tested — animal and plant testing both in bloom!

  • 63

    Halgar、
    Open up — your water meter has stopped turning.

  • 48

    治水医龙
    This is just how official media drop Easter eggs: if you spot it, it shows you're educated; if you don't, it still leaves you feeling that the motherland's agricultural research has a bright future.

  • 48

    在水一方
    Probably just meant as a staged shot; they didn't expect this generation of netizens to be so exacting.

  • 41

    Y2
    Reminds me of the recent parasite testing of Qinghai rainbow trout.

  • 36

    皮纳特
    They swindle state research funds and buy the instruments, but nobody uses them or knows how. Then they still need to make some noise and pull strings to get the media to report on it. News media don't understand everything.

  • 34

    连翘
    One big step forward for science, one step back the moment it gets publicized.

  • 33

    Dr.SHEN
    We must be misjudging them; their equipment is awesome — CT scanner and microscope two-in-one.

  • 28

    云歌
    This kind of showmanship and staged photography — insiders kept from doing the work while outsiders occupy the posts and bungle it — isn't that common? Is it worth making such a fuss? One look and you can tell the author's consciousness is low, completely blind to the state's painstaking intentions. The state does this for science popularization — what do you know?!

  • 23

    Yiniker
    "Science-track students who finished high school" — we humanities students took high-school biology too; we're not taking this blame, thanks.

  • 22

    李龙
    As a biology student let me say a couple of things: he made two mistakes. First ☝, wheat plant cells are not visible to the naked eye; you have to prepare sections before you can see them under a microscope 🔬, yet he simply put whole seedlings on — you can see the seedlings with the naked eye. Second, the objective of his microscope 🔬 hasn't even been lowered into place, so a normal imaging light path can't possibly form... so he certainly couldn't see anything.

  • 19

    eggtart12
    I'm baffled — and the lens hasn't even been lowered into the vertical position.

  • 18

    ANiu🍁
    Wanted to say something, but afraid of getting my "water meter checked"; on second thought, forget it. I'll just say: standard operating procedure.

  • 16

    沉迷学习o日渐消瘦
    Magic... no, immortal arts — it must be immortal arts, observing cell division "in situ".

  • 16

    NQ
    At least the species is right; they didn't end up with chives or scallions or the like.

  • 15


    The editor's answer: if I didn't put a plant in the shot, would any of you even look at my piece? Honestly.....

  • 15

    赵凌霄
    The lens isn't even positioned properly — what is he looking at through the eyepiece?

  • 14

    彩色童话
    Probably all humanities grads who don't know these ins and outs.

  • 14

    H.Jett
    A thumbs-up for raising the issue; official media need to be rigorous.

  • 13

    迷宫中的将军
    It's normal for the reporter not to know. But even if the reporter asked you to pose like that, did you really have to? When people see the news, will they curse the reporter for being an overbearing fool, or curse you, the researcher, for being spineless and unprincipled, doing whatever you're told? If it were me, I'd rather skip the news story than embarrass myself like that.

  • 12

    歪脖子
    您 can refer to one person or to several, so there is no such thing as 您们. If you really must, you can say 您二位 or 您几位, etc. (Not nitpicking; no need to post this publicly — please pass it to the author.)

  • 12

    张鹏
    Saying "high school" is already off: observing cell division is seventh-grade biology. It was even on our high-school entrance exam.

  • 12

    下雨声
    Not bad, not bad — I was dumbfounded when I saw the news. I use SEM and confocal and they're not this advanced.

  • 11

    MAYESHENG
    Reporter: "Let's take a photo that shows your research level!" Lab staff: "Then let's shoot one of observing wheat mitosis..." Reporter: "Where's the wheat? Without wheat, who'll know what you're looking at? Here, take this ashtray away, put the wheat on, and I'll take it again............"

  • 11

    郑志福
    I don't blame Xinhua; the photographer not knowing is forgivable. But the person operating the microscope? No wonder so many testing agencies just stamp the paperwork!

  • 11


    First-hand data obtained directly, completely unprocessed — authentic, reliable, Nobel Prize level.

  • 11

    无为而为
    Meaning we're all chives... harvest this batch and the next batch grows right back...

  • 11

    细胞君
    That's a pretty nice microscope~ better than the one in our lab.

  • 11

    嘿嘿黑薛
    Would someone from an agronomy school please enlighten us.

  • 10

    Alex Shaw
    Congratulations sent in by the astronomers observing celestial changes with a magnifying glass.

  • 10

    Janny
    As a Haimen native, I don't know whether to laugh or cry.

  • 9

    毕-索
    A strong will to survive.

  • 7

    陈鸿伟
    That clump of grass cracked me up.

  • 6

    山里人
    Seeing this picture made me deeply uncomfortable; I'm starting to suspect my doctorate is fake. With an inverted microscope you either observe cells in a culture dish, or fix the sample and prepare sections, or make a smear to observe — at the very least you need a slide. Did some amazing microscope get invented during the one week of winter break when I wasn't following world biology news? That the editor and writer are laymen — fine, whatever; but you put on a white coat and operate it like this and aren't embarrassed? Observing cell division this way might well be Nobel-level technology. If you don't understand something, be modest and find a professional to look over your copy, OK?

  • 6

    KodyPan
    Clearly they're studying the chives.

  • 6

    leon
    That last sentence was essential.

  • 6

    世龙
    An 8x scope — impressive.

  • 5

    Bo
    Er... a reminder to everyone: the component up top that isn't in position is the condenser, not the objective. Mocking it the way you are is just the pot calling the kettle black.

  • 5

    特异张
    🎨 Art rises above life.

  • 5

    刘辰萱😑😑
    And this is quite possibly a fluorescence microscope fitted with a mercury lamp. Hard times for you, microscope (pats shoulder).

  • 5

    团团妈妈
    Where did they find this technician? Don't tell me it's the reporter himself in a white coat.

  • 5

    闲闲
    If it's been transformed with GFP, at least use a fluorescence microscope.

  • 5

    Serena
    I'm really curious what he could possibly see...

  • 5

    Ich mag
    By the way, has this scallion been transfected with GFP or something? I'm not from an agronomy school and don't really know; I was just chatting with a junior schoolmate in agronomy and he raised it.

  • 5

    杜编
    Was the reporter 凯丁?

  • 5

    Qingzhou
    Mm, this microscope must be very advanced.

  • 4

    稳如磊静如水
    This wasn't a Xinhua reporter... it was shot by a Xinhua contributing photographer, like the stringers with great photography skills in various cities who supply copy. Xinhua reporters won't take this blame.

  • 4

    北京王品红
    It's for the leaders to look at anyway; staged shots are normal, and the public doesn't really understand.

  • 4

    TaeseeTun
    Reminds me of that interview video of Qinghai testing salmon flesh for parasites — cringe all over my face!

  • 4

    燕子
    Sure enough, someone came out to roast this staged shot. I spotted it at first glance when reading the news and just thought the staging was way too fake, laughable, but honestly didn't have the energy to complain. Apparently I'm not righteous or rigorous enough.

  • 4

    feiwanclamp
    Calculate the area of the psychological shadow cast over the guy being photographed.

  • 4

    騁同學很聰明。🤤
    Ummm... speaking as a BME major, seeing "CT" in the comments makes me quite uncomfortable...

  • 4

    孙崇
    So many nitpickers in the comments. Any nitpicker who runs into me, a three-stripe nitpicker, is finished today.

  • 4

    阿闵真的很严格
    This kind of thing is really common.

  • 4

    本少爷三岁
    Damn, I laughed out loud when I saw the photo.

  • 4

    Jay
    How did my hometown end up in the spotlight like this...

  • 3

    海洋的洋
    I work in media, though not at Xinhuanet, but I'd like to say a word on their behalf: reporters are not industry insiders and don't know how these instruments are used; on assignment they can only ask the researchers to stay in their normal working state while the reporter shoots. If the researchers on site don't care about these details themselves, the reporter genuinely has no way of telling how each device is actually used, and if an unscrupulous reporter makes an unprofessional request, it can be refused. I usually ask about routine details — do you need a mask? gloves? — but for highly specialized instruments we defer to the operator's professional handling. It isn't that we don't care about your feelings; we truly can't know everything. My humble view — apologies.

  • 3

    Qixuefei
    Nikon TE2000 NT-88V3 — I can't bear to look.

  • 3


    What he means is: set down some sort of scope, set down a handful of wheat, and tell us clueless humanities grads and uneducated ordinary folk — look, a microscope; look, wheat; using a microscope to look at wheat. Job done!

  • 3

    王亚南
    Looks like a fluorescence-equipped microscope, and the lens hasn't been returned to position, so the light path isn't actually complete — a staged shot to fool people!

  • 3

    王洋
    This photo was sent over by our good friend 三胖 of 朝县; it looked cool, so it was run with the news.

  • 2

    凡卜
    This was an impromptu bit of creativity to go along with the photography, nothing whatsoever to do with research! The editor just published it without noticing. Verdict delivered.

  • 2

    王学全
    Is this the run-up to publishing big in CNS? Hard to believe someone with an advanced degree could pull off this "miracle operation" without knowing better.

  • 2

    融融萧玉
    It really will mislead people.

  • 2

    🐘
    Hahahahahaha, dying of laughter.

  • 2

    木泱泱
    I firmly support you; I've already shared it so more people can see.

  • 1


    Thank you for your attention to the Chinese government web portal; if you find errors in the content, please send an email to: content@mail.gov.cn

  • 1

    乔淦
    And judging by the tilt of the lens above the stage, it would be a miracle if he could see anything. He can only be called a performance artist; in that moment he must have been thinking, "damn, there's nothing in here at all."

  • 1

    糊涂
    The lab bench is tidied up nice and clean.

  • 1

    二十一克
    Surely this isn't the legendary chives — freshly cut, too.

  • 1

    陌上花开
    No need for high schoolers; we learned and were tested on this in middle school.

  • 1

    雁过有声
    Everyone stop arguing: with an inverted microscope you can view cells in a culture dish or on a slide with a coverslip. Verdict delivered, thank you!

  • 1

    就爱掰粉笔
    顾华夏 is going to be famous!

  • 1

    Harvey Y.🌝
    Ignoring science, self-righteousness, decisions made on a whim — it's everywhere.

  • 1

    witch
    There are biology classes from middle school, and by high school at the latest you've handled a microscope. Well-rounded education really does matter!

  • 1

    布鲁斯
    They're just seeking truth from facts — nothing wrong with that.

  • 1

    陈嘉瑜-同济大学
    This is a microscope with micromanipulation; the thing on the right-hand side is the manipulator arm.

  • 1

    俞梅
    Editor, we all love our country.

  • 1

    Rocy


  • 1

    kittlexuan
    Should I cry or laugh?

  • 1

    南宫轩竹
    Haha, great progress.

  • 1

    大头
    Haha, I'm genuinely stunned.


  • 大金子
    Not even as professional as the salmon one; at least they cut a slice.


  • 光明
    Seeing the Olympus microscope makes me want an Olympus camera even more — so down to earth.


  • 请叫我何博
    This, this, this... I don't know what to say.


  • 朱花辞树
    Not every teacher at our school can afford this kind of instrument either; why not just give it to us.


  • 田雯
    I was shocked too when I saw it.


  • A姚子豪
    This is too funny.


  • xiaoxiao2⁰1⁸
    Doesn't everyone think putting a scallion, or wheat, or chives on a microscope is very surrealist?



  • To those coming out to nitpick: go back to the construction site instead of embarrassing yourselves here. You can quibble under any topic, can't read sarcasm, derail the point just to show off how "learned" you are, flaunting it everywhere. Go do it under Xinhuanet's post — why act superior here?






https://mp.weixin.qq.com/s/NzoAUpW47c7mnrNlETVY2A
Editor: 邢海波







265#
Posted on 2019-2-14 22:14:16
[Case]


266#
Posted by the thread starter on 2019-2-22 23:28:44
[Case]

Editor: 杨琦钜


267#
Posted by the thread starter on 2019-2-23 20:49:06
[Case]

Editor: 杨琦钜


268#
Posted by the thread starter on 2019-3-9 20:44:47
[Case]

Xinwen Lianbo (the CCTV evening news) stops at nothing in manufacturing fake news





Editor: 陈茗

269#
Posted by the thread starter on 2019-3-17 22:47:28
[Case]


Editor: 何林

270#
Posted by the thread starter on 2019-3-22 16:14:56
[Case]


Editor: 冉玲琳
