Practical AI – Episode #49

Exposing the deception of DeepFakes

Get Fully-Connected with Chris and Daniel


This week we bend reality to expose the deceptions of deepfake videos. We talk about what they are, why they are so dangerous, and what you can do to detect and resist their insidious influence. In a political environment rife with distrust, disinformation, and conspiracy theories, deepfakes are being weaponized and proliferated as the latest form of state-sponsored information warfare. Join us for an episode scarier than your favorite horror movie, because this AI bogeyman is real!


Sponsors

DigitalOcean – Check out DigitalOcean’s dedicated vCPU Droplets with dedicated vCPU threads. Get started for free with a $50 credit. Learn more at do.co/changelog.

DataEngPodcast – A podcast about data engineering and modern data infrastructure.

Brain Science – For the curious! Brain Science is our new podcast exploring the inner-workings of the human brain to understand behavior change, habit formation, mental health, and being human. It’s Brain Science applied — not just how does the brain work, but how do we apply what we know about the brain to transform our lives.

Fastly – Our bandwidth partner. Fastly powers fast, secure, and scalable digital experiences. Move beyond your content delivery network to their powerful edge cloud platform. Learn more at fastly.com.


Transcript



Hey, welcome to the Practical AI podcast. This is gonna be another Fully Connected episode where Daniel and I keep you fully connected with everything that’s happening in the AI community. We’re gonna take some time to discuss the latest AI news and we’re gonna dig into some learning resources to help you level up on your machine learning game.

My name is Chris Benson, I’m one of the co-hosts; I am chief strategist for artificial intelligence, high-performance computing and AI ethics at Lockheed Martin, and with me is my co-host, Daniel Whitenack, who is a data scientist with SIL International. How’s it going today, Daniel?

It’s going great. How about with you, Chris?

It’s going very well. As we are recording this, I just got back from the LiveWorx Tech Conference in Boston. I had a good time there, I gave a talk and it went well, so I’m a happy camper.

Awesome. Well, things are looking up here as well. Over the past month or so it seems like my internet at home has kind of gradually been degrading, and I haven’t been able to figure out why. I’ve updated my computer, done all the restarts of everything, checked all the things… But I had the technician out today, and it turns out that squirrels were eating the cables coming into my house, producing obvious degradation. I’m happy that that’s actually figured out and I have good internet again.

There you go. Humane removal of squirrels to be considered here.

Or reconfiguring the cable positioning to make it a little bit harder…

Or that. Either one works, yeah. Okay, we actually have a really timely topic right now, because we are having conversations about this constantly… So today we wanted to talk about deepfakes, and we’ll give an overview of them. So much is happening in the news right now regarding deepfakes; we’ll tell everyone what they are, and such… But everywhere I go the topic comes up - whether it was in Boston, or this past week at our monthly Atlanta Deep Learning Meetup… It’s just coming up everywhere, and it’s coming up in the news on a daily basis, so we’re gonna delve into this topic and see what we find.

[04:03] Yeah, and actually even just this morning the policy director from OpenAI was testifying before the House Intelligence Committee here in the U.S. The topic of that was the national security challenge of artificial intelligence, manipulated media and deepfakes. This is reaching the highest levels of government, and it’s certainly something that people ask about a lot. I think it’s time that we spend time talking about it on the podcast, so I’m glad you brought up the idea.

Yeah, I actually saw your tweet that that was going on and tuned in; I was too late to catch the beginning of the testimony, but I saw at least the full second half. It was fascinating, and it was interesting to see how startled the members of the Intelligence Committee were receiving this information. I think they already had some insight into it, but… It was one of the reps - whose name I don’t have off the top of my head - who noted that it was very scary stuff, given the potential for how it can be used nefariously… We’ll certainly get into that today. We’re gonna talk about what deepfakes are, and then get into how they can be used, how they have been used, what can be done to prevent bad actors, and such things. I took notes, and we’ll refer to some of the congressional testimony in this episode.

Awesome. I appreciate that, yeah. And I guess this episode will be kind of a downer, so…

Sorry, folks…

Sorry in advance. We’ll try to at least keep it interesting, even if we are talking about “dangerous things.” We’ll try to bring some of our thoughts into it, but try to give you a good overview of the topic. Maybe one good way to start is just by defining what a deepfake is. In my understanding, with a deepfake the “deep” part really refers to deep learning models; we talk a lot about deep learning on this podcast, so if you want to know more about that, there are certainly a lot of links throughout the podcast about deep learning… But now I guess the question is, for deepfakes, if we’re talking about deep learning models faking something, generating fakes of something…

Yeah, what are we talking about - so what are deepfakes faking, Chris?

Deepfakes – I’ll get to a specific example, but that’s where you are using deep learning technology (and we’ll talk about the specifics in a moment) to either create or change videos that may be out there. It could also be audio, it can be any kind of media that people will watch to take in information, whatever that might be, and change those so that what you are seeing and hearing is not actually what really happened with the original, unchanged video. So it opens the door for all types of manipulation. That’s a start. It’s a very broad definition.

Some of our regular listeners might remember that in our last Fully Connected episode we did an overview of various advanced methodologies in the AI world; one of those was GANs. In that episode we talked about how GANs (generative adversarial networks) – one of the things they can do is generate art, or generate images, or generate videos, or change styling, or these other sorts of things. This is one deep learning methodology that’s applied to generate certain things. Like you were saying, Chris, I think the thing that people probably think of right away when they think of deepfakes, if they’ve seen some of these things, is the videos.

[08:02] There’s been some funny or satirical ones as well. I was just watching one before the episode where the Joker’s face was applied to these different videos. There’ve been ones with President Obama’s face where he’s dancing, and other things… The Joker or President Obama didn’t actually act in those videos, but their faces are in those videos, doing things they never did. So I think that that’s probably what comes to mind.

Of course, like you said, this isn’t only video. We talked even in the last Fully Connected episode, about generating fake people, pictures of fake people… But in this case probably the deepfake part of it would be faking someone’s face with an emotion or an expression or a scene that they were never in. Or faking someone’s voice saying something that they never said, or maybe it’s both of those things together - faking someone’s voice, and in a video saying something they never said, so replicating someone’s voice and mouth movements… But then it also goes beyond the video and imagery, into text as well.

Of course, there was a lot of focus on OpenAI’s GPT-2 model recently, this spring, which was capable of producing some really realistic news articles or text on certain subjects… Generating text in a certain style, or based on a certain subject, or something like that - that’s also something that should be considered as we’re talking through this.

Absolutely. It’s been in the media so much recently because there have been several notable things that I imagine most or all of our listeners are already somewhat familiar with. There was recently a video of the speaker of the House of Representatives in the U.S. Congress, Nancy Pelosi, where in the video she was speaking and they made her appear – I think the most common references were drunk, and slurred speech, and such as that. That one thing, where it changed the characterization of her having that conversation… That’s a big part of it; it’s not just changing the words, you can change how people appear to you. And a lot of people believed that video; they were like “Oh, there’s this video of Nancy Pelosi talking drunk on stage”, or whatever. So that was one, and there was a bit of an uproar, and Facebook refused to take it down.

Earlier this week there was a video posted on Facebook and other places of Mark Zuckerberg, the CEO of Facebook, saying things that he never said, where they took – I think it was a 2017 talk of his, and changed what he was saying. I think that was intended somewhat as a “Look, I told you so. Facebook, you should have taken down the Pelosi video. How does it feel?” So those are certainly big…

I know in my day job, working at Lockheed Martin, focusing on national security issues, this is certainly something that we talk about, because there are all sorts of uses here, and we’ll get into some of those potential use cases in the world as we go here.

Some of you are probably thinking “Well, this kind of thing has already been around for a while.” Hollywood and movie studios have been doing CGI and video tricks and movie tricks for quite some time, that might put someone in a scene, standing on a mountain where they weren’t really, or maybe superimposing a face… I’m thinking of the Star Wars example, with Princess Leia… So this is certainly not something that’s new. I think what is new is that traditionally these sorts of techniques were pretty much restricted to experts; it required a lot of effort, it required a lot of time and money to pull off these things convincingly.

[12:18] Now with these deep learning networks that have these large encoding layers and decoding layers, you can have a training dataset where you have a bunch of images with one person’s face, and output with the other face; or with a certain pose or representation of a body, and then output with someone in that pose, or whatever it is. If you have the right training data, now it’s just a matter of applying existing models to that training data… And it really doesn’t require a lot of expertise beyond maybe some hours of compute time on AWS, or something like that. And in some cases you can even get pre-trained models, so…

In terms of generating these new-looking faces, corresponding to a certain facial expression or something like that - in this case maybe the input is a facial expression, and the output is someone in that facial expression… That may only take a few images to fine-tune on if you’ve already got a very good pre-trained model. So it’s not even like you have to have a bunch of images of someone to be able to fake them in a certain position, or with a certain facial expression, or in a certain scene.

So I think the big difference now, or what’s been the development recently is kind of the ease with which you’re able to do these sorts of things.

That’s true, and there’s a whole host of software programs that have come out that are basically dumbing down the process, so that you don’t have to have a lot of deep learning training. In some cases you can just install the applications on your system - there are some for Mac, and Windows, and things like that; some of the well-known ones that people talk about… There was the original, what was called the FakeApp program; an open source version of that followed, called DeepFaceLab, which I’ve seen referenced quite a lot… Although that has recently closed down, because the primary maintainer has moved on; that person is encouraging other people to use the source though, and move on and do things with it if they choose. There’s FaceSwap, there’s MyFakeApp, which is on Bitbucket…

I think you’re gonna see more and more of these coming about. And in addition to that, obviously, you can use the standard tools; you can use Keras and TensorFlow to do the same things with the data as you just alluded to.
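To make that concrete, here is a minimal sketch of the shared-encoder, two-decoder autoencoder idea that many of these face-swap tools build on. It’s written with TensorFlow/Keras, and the image size and layer choices are just assumptions for illustration - this is not any particular app’s actual implementation.

```python
# A minimal sketch of the face-swap autoencoder idea: one shared encoder learns a
# common face representation, and each decoder learns to reconstruct one person.
# Swapping = encode a frame of person A, decode it with person B's decoder.
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG = 64  # assumed input size: 64x64 RGB face crops

def build_encoder():
    inp = layers.Input((IMG, IMG, 3))
    x = inp
    for filters in (64, 128, 256):
        x = layers.Conv2D(filters, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(512, activation="relu")(x)
    return Model(inp, x, name="encoder")

def build_decoder(name):
    inp = layers.Input((512,))
    x = layers.Dense(8 * 8 * 256, activation="relu")(inp)
    x = layers.Reshape((8, 8, 256))(x)
    for filters in (128, 64, 32):
        x = layers.Conv2DTranspose(filters, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(3, 5, padding="same", activation="sigmoid")(x)
    return Model(inp, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_a")  # reconstructs person A
decoder_b = build_decoder("decoder_b")  # reconstructs person B

face = layers.Input((IMG, IMG, 3))
autoencoder_a = Model(face, decoder_a(encoder(face)))  # trained on face crops of person A
autoencoder_b = Model(face, decoder_b(encoder(face)))  # trained on face crops of person B
autoencoder_a.compile(optimizer="adam", loss="mae")
autoencoder_b.compile(optimizer="adam", loss="mae")
# After training, a "swap" is decoder_b(encoder(image_of_person_a)).
```

The trick is that both autoencoders share the same encoder, so once it’s trained on crops of two people, encoding person A and decoding with person B’s decoder gives you B’s face with A’s pose and expression.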

I think the key takeaway there is that when it was a Hollywood thing, it was a highly skilled thing, that required special software, things that you would find in a movie studio, but not everywhere… And that’s changed. It’s now something that any computer-savvy person can handle. I think that’s what’s changed as we hit 2019 - it’s now been democratized.

Alright, Chris, let’s maybe get to the more important, or pressing (or however you wanna put it) part of this story… In some ways it’s kind of cool that this technology exists, in the sense that technology-wise and intellectually it’s kind of interesting that these techniques can do this with such little data, or with such realism… And when you just think about it from that point of view, technology-wise it’s pretty interesting. But what makes this technology – why is there so much hype around it? Why are these sorts of methods that produce these deepfakes so dangerous? What’s your perspective on that?

Well, I think there’s a number of reasons that we can work our way through… You can start off with the fact that the data is so available - you can get videos from so many places now, with people using their smartphones to take video and post it out on social media… Historically, deepfakes have really centered on celebrities; they would put a celebrity’s face on a pornographic video, or take a politician – I saw something on one of the deepfake software sites where they were superimposing Nicolas Cage’s face on Donald Trump, and you could clearly see – it appeared to be Trump talking, but the facial expressions were clearly Nicolas Cage’s…

And those were kind of goofy, so long as that was as far as you were taking it, and they could be a little bit fun, meme-like… But I think that obviously opens the same door for people who are out to cause harm to others, and that could be at a lot of different levels. It could be as personal as a bad actor harassing someone they know. I’m just making something up - if they broke up with their girlfriend, and they had video of their girlfriend talking, and they could take that, and take some other bad footage, whatever you wanna do, and put that out there to humiliate them. I think there have been some instances of that.

I think you had mentioned that there was one that the Washington Post had reported, when we were talking beforehand?

Yeah, one of the first ways I think this had surfaced and people have used it in this sort of harassing way is the pornographic use, like you had said before. Maybe before there were certain people that tried this with celebrities, at least leading up until very recently. Now I think it’s very real that if someone had a video of someone they knew, in their circle of friends or acquaintances, they could harass them in this way by making explicit content with that person’s face… And it looks so real that if they propagate that, then the harm is done before it may – it may come out or it may never come out that that was a fake… So it’s definitely a concern in terms of how this could affect real people’s lives.

[19:45] It’s funny - when you said that, about whether or not people after the fact would learn that… That was addressed in the congressional hearing today. It was noted that when one of these videos goes out and goes viral, even if it’s widely reported that the video was a fake after the fact, it tends not to hit as many people… So you inevitably have changed the landscape by the initial post, and then the “fix” afterwards doesn’t actually completely fix it… And they noted that psychologically even if people know it was a fake that they saw, psychologically they still hold on to some of that bias that was introduced through the fake.

So even if I found out that the video of President Trump and Nicolas Cage was in fact a deepfake, as it was, in theory there’s the potential for that to influence me in some way, depending on what the video author was shooting for. It’s an interesting side effect.

Yeah, and part of the hype that’s been generated recently, and part of the momentum to discuss these things, I think has been a shift in people’s thinking. Before, whether it was satirical, a joke sort of thing, or whether it was actually harassment and humiliation - like the pornography type of stuff - it was maybe restricted to celebrities, and people were thinking, “Oh, well those people put themselves out in public, there’s a lot of video of them, so they’re kind of asking for this sort of thing”, which is kind of sad anyway, because no one should be subjected to that if they don’t want it… But now you think about any video you see, whether it’s a video of someone you know on Facebook, or someone that’s not a high-profile celebrity - now there’s this potential that even those videos are faked in some way or another to influence you, or at least the potential of that happening… Which is kind of a shock when you think about it.

Yeah, it really is. There’s such widespread application, from a very personal level, as we were talking about a moment ago, all the way to large societal concerns… And it is a technology that is fluid enough in its application to where you can scope it however you want… And you’re seeing everything from that level, all the way up to nation states using it to influence others, which we’ll talk about in just a moment.

One of the things that I think is certainly contributing to it being used this way, so successfully, is the fact that when your political environment is what we all know it to be - we are very polarized, we’re very tribal-oriented, and we recognize that there are messages supporting each of those viewpoints - that by itself, before you even get to deepfakes, already introduces in a lot of people’s minds the potential for conspiracy theories, and thinking about others in ways that may not be entirely realistic anyway, regardless of which side you’re on…

So when you throw in the nefarious intent that deepfakes can lend themselves to, that just exacerbates the situation. We really have created - not only here in the United States, but in places around the world - an environment where we’re very susceptible to this technology being used against us now… And it’s certainly something that if we are to navigate safely through this, we’re gonna have to figure out ways of coping. We’ll address some of those; the congressional hearing addressed some of that, we’ll talk about that in a few minutes.

Yeah, it is interesting that just the fact that these deepfake videos exist creates an excuse, an extra kind of excuse, for people that don’t want to face the truth, or want to create conspiracy theories. This has already been seen around the world… There’s a great Washington Post article that I’ll link in our show notes; it talks about a viral video clip in Malaysia of a man confessing to certain things involving a local cabinet minister, and that questionable content being brushed off as “Oh, well, that’s just a deepfake.”

[24:16] And then similar things in Africa - even controversy over videos that has contributed to coup attempts, and other things… I don’t know if it’s known in those cases, but even just the questioning of whether those videos were deepfakes generated enough uncertainty that it actually led to political and military turmoil.

Yeah, and those aren’t the only places. There was actually an example very similar to your African one, in the testimony before Congress today about that… Just to go ahead and get to that, it was a statement that was prepared for the Permanent Select Committee on Intelligence within the U.S. House of Representatives, and the person testifying was Clint Watts of the Foreign Policy Research Institute… And the title of the brief was “The national security challenges of artificial intelligence, manipulated media and deepfakes.”

I read through the document that was submitted initially, that kind of represented their viewpoint, in addition to hearing some of the testimony… And it wasn’t actually at the top, but there was a paragraph that jumped out. It’s very short, and I’m gonna read that really quick, because I think it really gets right down to it. It was:

“‘Deepfake’ proliferation presents two clear dangers. Over the long term, deliberate development of false synthetic media will target U.S. officials, institutions and democratic processes with an enduring goal of subverting democracy and demoralizing the American constituency. In the near and short term, circulation of ‘Deepfakes’ may incite physical mobilizations under false pretenses, initiate public safety crises and spark the outbreak of violence. The recent spate of false conspiracies proliferating via WhatsApp in India offer a relevant example of how bogus messages and media can fuel violence. The spread of ‘Deepfake’ capabilities will likely only increase the frequency and intensity of such violent outbreaks.”

That was one paragraph, and that’s a scary paragraph when you think about it.

That was just one out of the entire thing. He goes on, and – they do make recommendations, which we’ll hit in a few minutes, but if that doesn’t give you pause, when talking about this, I’m not sure what would.

Another one that I saw in that Washington Post article which I thought was really great was a quote from – or I don’t know if it was a quote; it was a paraphrase from Rachel Thomas, who’s one of the co-founders of Fast.AI, which we all love for many reasons, including the educational piece, and the practical packaging and everything, so shout-out to Fast.AI… But she said that this kind of information campaign using deepfake videos would probably catch fire because of the rewards structure of the modern web. I think what she’s getting at there is basically that the shock factor of a deepfake video is really what drives the reach of that video.

These videos are set up to be shocking, in many cases, so just that by itself lends to those going viral, reaching bigger audiences, versus maybe more mundane but true videos. It is a concern, and I know that we’ve already talked about just the idea of these videos existing as dangerous, but also - they are already being used by malicious actors, like you were saying, in various places around the world. I think the Russia piece also fit into the hearings that were this morning, right?

[28:11] It did. I know that in politics - some people may disagree, but the American FBI has stated explicitly that Russia interfered with the election in 2016. Taking that as a basis of fact for this, it was also noted in that same testimony: “Moving forward, I’d estimate Russia, as an enduring purveyor of disinformation, is and will continue to pursue the acquisition of synthetic media capabilities, and employ the outputs against its adversaries around the world.”

I think that’s representative of the fact that it’s a weapon of information warfare at this point, the deepfakes. They can be used in that way. And it’s not necessarily just Russia. It can be many, many nations that can try to influence and sway other countries, other parts of society with that… These are the types of things that it’s not just you and I talking about it, it’s not just the AI community or the general population; it’s certainly something that the defense industry and the military at large are having to consider. It’s still relatively new, and it’s something that really all countries are gonna have to contend with going forward.

Yup. One last thing here as we’re moving on from the dangers - a quick point that I know you posed on our LinkedIn page… If you aren’t aware, our podcast has a couple of ways for you to engage with us; we’d love for you to join our Slack channel, which you can do at Changelog.com/community. We also have a LinkedIn page, and I think the question was posed there whether there are any beneficial uses of deepfakes, or good use cases for using this sort of technology to create fake somethings.

We’d love to hear from you if you have those ideas, but I know a couple came up… What did you see there, Chris?

I know that - and I hope I don’t mess up his name - Constantin Svetnov… And I’m sorry, Constantin, about mispronouncing it… He is a part of the Atlanta Deep Learning Meetup community as well, and he is a senior solution architect with NVIDIA. A really savvy guy about this, and he did point out – he talked about what you said earlier, about how the technology itself is agnostic. It’s not a bad technology. It’s a set of tools that can be used, and we’ve talked about some of the joking things, and obviously it can be used for bad, as we’ve been addressing as well… But he pointed out that we can learn a lot; when we do have bad actors doing this, the forensic evidence that we can then analyze to understand how people are doing it is beneficial.

He’s saying there’s something where you can take something good after something bad has happened, and improve. Then he finishes up, saying “And also deepfakes could create a whole new genre of TV comedy.” There’s that [unintelligible 00:31:16.10] certainly. So maybe there are some pretty fun things that can be done; light-hearted, which I know isn’t the tone we’ve set here, but… It will be interesting to see how people use these technologies going forward. It’s here, everyone’s gonna have access to it at large, and I would certainly love to see that optimism express itself in people’s creativity.

One thing that is useful, in terms of the entertainment industry… There’s obvious use cases where movie studios have people’s permission to create these computer-generated things. Maybe someone can’t dance a certain way or something, but that needs to be in a movie, and they get that person’s permission to make that video of them dancing, or whatever the situation is. I think there are legitimate uses of that within entertainment…

[32:12] Also, in addition to governments weaponizing this sort of technology, or malicious actors using it against their enemies, I think there are probably uses of this technology that could be beneficial in the opposite way, in terms of bringing humanitarian aid or help to people in the developing world, where the political situation is hard. I know that getting educational material to people in certain areas is really tough, to certain language minorities as well, especially if those language minorities are also religious minorities.

In my mind, I’m aware of translators who might translate educational material or something for those people. They probably don’t want to appear on the video themselves, so if that video were created with an avatar or something like that, then a lot of that could be useful as well.

I think that putting all of the weight on the bad uses is not completely fair, although I think putting a lot of the focus there is warranted, because this is a very serious topic.

I agree with you. I’m really hoping to see some great use cases come forward, where it’s not just the doom and gloom thing going forward. I’ll tell you what - if anybody wants to put my face (which is out there) on a dancing video, my seven-year-old daughter would love that… So I’m hoping somebody will post something like that on Slack, or in our LinkedIn community, or something.

Alright, so we’ve talked about what deepfakes are, we’ve talked about the dangers they pose, and also maybe some benefits they can offer… But maybe now let’s move to talking about how people are thinking about protecting themselves, or other people, or societies against deepfakes, or the disinformation that they can spread.

I know one of those approaches that I’ve seen in the community to protect against deepfakes is the strategy that OpenAI took with their GPT-2 model. We have a podcast episode about that model and the technical details of it. If you’re interested, definitely take a listen to that episode… But the thing that they saw with GPT-2 was that it is capable of generating this very realistic text, and very long-form text, which they obviously saw could be used to create fake news articles, or fake content for social media, and that sort of thing.

They saw the danger with this, and the approach that they took to prevent it was to release the code for the model, but not the full pre-trained model itself; they released a limited version of the model… And they didn’t release the full dataset that they used to train that model. Their hope was basically to try to slow down the malicious use of that model, and give researchers time to develop detection methods, or methods that would help fight fake news in some other way, now that they know that GPT-2 is out there…

It’s an interesting approach. I don’t know if you have any thoughts on that, Chris… I guess my question would be “How much good did that approach do?” I know just recently I saw a student who had access to certain compute resources, TPUs… I forget if he used the cloud, or what… But he had access to some compute resources, which are not that uncommon; I think other people could get access to those… And he was a student, and he was able to use that code and reproduce the full GPT-2 model. That was 3-4 months after OpenAI released the code, and the paper, and all of that. So there wasn’t that much time between the partial release and when the full thing was public… I don’t know that 3-4 months really buys us much, but that’s not to say that it wasn’t a good approach, or that OpenAI didn’t try; I just wonder if that’s enough.
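As a side note on how low that barrier is once model weights are public, here is a minimal sketch of generating text from one of the released GPT-2 checkpoints. It assumes the Hugging Face transformers library is installed, and the model name and prompt are just placeholders for illustration.

```python
# A hedged sketch of text generation with a publicly released GPT-2 checkpoint,
# assuming the Hugging Face "transformers" library and PyTorch are installed.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # smallest released checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Officials confirmed today that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; top-k sampling keeps the output fairly coherent.
output_ids = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Nothing in that snippet requires special expertise or hardware, which is exactly the point being discussed here.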

Yeah, it’s a tough thing. I know when we did our GPT-2 episode, shortly after its initial release, we talked about this and kind of debated that approach from OpenAI in terms of the release; we contrasted it against the norm in open source of kind of “Throw it out there and let the world dig in and see what your stuff is.” One of the things that we considered at the time was that maybe this gave us a little bit of a buffer, even though it wasn’t purely in the spirit of open source in that way… And one of the things I said then was - a couple things, really - that it was probably good to give us a little bit of time, just to absorb and realize the new world that we’re in with that kind of release… And I also said that really it would happen anyway.

[40:20] Now that people knew what was possible, it would be recreated sooner, rather than later. And this student has done just that; they came in and did exactly what I was predicting. The reason is there’s a lot of smart people in the world, and just because one team does something and doesn’t release it doesn’t mean – everyone else now knows it’s possible… So you put smart people on the problem, they know there’s a solution, and so they’re gonna get it.

As I’ve had a little bit of time to analyze this, I think OpenAI’s approach was the responsible way to do it at this point. It wasn’t too long - we saw that it’s gotten out there anyway - but it gave us time to absorb what they had achieved, what was possible with that achievement, and how we might think about malicious uses of it, which we started doing immediately… And then this kid (the student) came out and released this. I think for significant, impacting technology releases this might be the way forward, in my opinion.

Yeah, so there may be a delay; hopefully, organizations approach things in a responsible way and give people notice when something like this is coming out, and own up to the implications of it, and expose those… But even then, like you say, the full technology is going to be available and people are gonna reproduce it… So in light of that, there’s definitely people out there that are focusing on detecting fakes; they’re focusing on actual AI methods that would be able to detect or discriminate fake versus non-fake.

I really like the website PapersWithCode.com. If you haven’t seen that, you should definitely check it out. I just searched for “fake detection” and a bunch of papers came up, recent papers on this topic… And I would say that for detecting deepfakes, or fake news, or fake images, fake videos - however they phrase it in their papers - there are definitely some approaches out there that seem promising, but there’s definitely no one-size-fits-all solution. There’s no “This is how we’re going to protect ourselves against deepfakes” thing.

There are some solutions out there… On the video side there are people looking at the way people blink in videos, actually, and apparently that’s hard to replicate in a fake. There’s also a person’s facial fingerprint, and also per-frame inconsistencies in videos… So there are people looking at those sorts of things - artifacts that would give away a deepfake.

On the text side, there’s people that are analyzing the persuasive structure of news articles or arguments, and the stances that those articles take to figure out if they’re fake or not. There’s a whole variety of things that people are trying, and I certainly have only just mentioned a few. There’s a whole bunch out there. We’ll link some of those in our show notes, but I think probably the best thing if you’re interested in those sorts of techniques is to search some site like Papers With Code, and look at what people are doing.
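For a rough idea of what the frame-level side of that detection work can look like, here is a hedged sketch: a small binary image classifier scored over sampled video frames, with the mean score treated as a fake probability. The network here is an untrained placeholder - a real detector would be trained on a labeled corpus of genuine and synthesized faces - and the file name in the usage comment is hypothetical.

```python
# A sketch of frame-level deepfake scoring: run a real/fake image classifier on
# sampled frames of a video and average the scores. Placeholder model only.
import cv2                      # pip install opencv-python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def build_frame_classifier(img_size=128):
    return tf.keras.Sequential([
        layers.Input((img_size, img_size, 3)),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),   # P(frame is fake)
    ])

def score_video(path, model, img_size=128, every_nth=10):
    """Return the mean per-frame 'fake' probability for a video file."""
    cap = cv2.VideoCapture(path)
    scores, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_nth == 0:
            frame = cv2.resize(frame, (img_size, img_size)) / 255.0
            scores.append(float(model.predict(frame[np.newaxis], verbose=0)[0, 0]))
        i += 1
    cap.release()
    return float(np.mean(scores)) if scores else 0.0

# Usage (the model would need to be trained first; untrained weights are meaningless):
# model = build_frame_classifier()
# print(score_video("suspect_clip.mp4", model))
```

Approaches like blink analysis or facial-fingerprint checks follow the same general pattern, just with more carefully engineered features and training data.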

One of the things that I was struck by in reading that Washington Post article was a quote by a computer science professor at Berkeley, Dr. Farid. He said “We are outgunned”, and the reason why he said that is the number of people working on video synthesis, as opposed to the detector side, is 100 to 1. In other words, there’s 100 people working on interesting deepfake technology, and only one corresponding person working on detecting deepfakes. That’s probably also due to, in my opinion, the incentives that are built into the academic system and the AI community… Whereas if you come out with a deepfake technology or some video technology for generating videos, you’re gonna get a lot more attention than if you come out with a really great detection thing, right?

[44:22] That’s true.

So I think the moral of that story is if you’re out there and you want to work on something around this, maybe consider working on the detection problem. We really need people working on that.

Not only having to do with deepfakes, but having to do with things like poisoned data and other safety issues. We actually had an episode I was wanting to call out to our listeners - episode #33, called “Staving off disaster through AI safety research.” That was from when I was attending Applied Machine Learning Days in Switzerland. I know you organized the AI For Good track for that conference… And I met with - I’m gonna try to get his name right - El Mahdi El Mhamdi; I apologize if I just screwed that up, because I know he listens to the podcast… It was a fascinating conversation that we had, that was recorded and put out, where we talked about different approaches to that. He basically made that same point - the number of bad actors, nefarious actors out there far outweighs the number of people that are trying to be pro-safety in this space, him being one of those people.

So there’s the kind of release of models side of things, there’s the detection of deepfakes… There’s probably a third category here, which really is assuming that deepfakes are going to exist, that we’re not going to be able to detect them all; some are going to get through our best detection mechanisms. We should be able to criminally prosecute people that are generating these things in a malicious manner. That’s the stance that some people are taking.

There have been bills introduced in the United States; I’m sure there are other governments wrestling with this, but this is probably what we’re most familiar with here in the U.S. A senator from Nebraska and other lawmakers from Virginia and California are considering legislation around these things, even at a state level. The New York State Assembly has introduced bills to push back against this technology.

I guess that’s another approach that people are taking. I think there’s still a lot of really interesting open questions there, like how is this enforced, how would we enforce this, while still allowing legitimate entertainment companies, or companies that are maybe doing legitimate and helpful work in these areas - how do we allow those people to operate and prevent this malicious usage of the technology?

There’s a lot of interesting questions there… Where do we draw the line of deepfake and not, and malicious and not…? There’s a whole range of things, from jokes, and satire, to the really harmful, “bad” stuff. Where do we draw the line…? There’s a lot of open questions there.

Yeah. When I was watching the Intelligence Committee hearing that we’ve been alluding to in this episode this morning - that was a big issue, because a great deal of this involves First Amendment rights. For those who are not U.S. citizens, the First Amendment of our Bill of Rights, which is part of our Constitution, is the amendment that allows for free speech and for expression in the United States. There was quite a bit of legal-oriented back and forth. I’m not a lawyer, so much of that went right over my head; I won’t speak to that directly, but they really were talking about “How do you handle this without violating First Amendment protections?”

[48:04] Some good news is that this morning in the House Intelligence Committee hearing that we’ve been talking about over the course of this episode, they did in fact make some recommendations on how to contend with deepfake issues. There were six explicit points that were called out, and I thought I’d just cover not all the verbiage on each one, but just kind of the first sentence or so of each one, which gives you a sense of it.

The first was Congress should implement legislation prohibiting U.S. officials, elected representatives and agencies from creating and distributing false and manipulated content. As an addendum to that, they mentioned that the U.S. government - whatever kinds of statements they make, should always be the truth; the official government statements and policies should always be based in truth.

The second thing was that policymakers should work jointly with social media companies to develop standards for content accountability.

The third was that the U.S. government should partner with the private sector to implement digital verification signatures designating the date, time and physical origination of content.

The fourth one was that social media companies should enhance their labeling of synthetic content across platforms, and work as an industry to codify how and when manipulated or fake content should be appropriately marked.

The fifth was that the U.S. government from a national security perspective should maintain intelligence on adversaries capable of deploying deepfake content, or the proxies that they employ to conduct such disinformation.

And the final one they noted was that public awareness of deepfakes and their signatures will greatly assist in tamping down attempts to subvert U.S. democracy and incite violence.

Those were good, and I think the Intelligence Committee heard that. There was some debate about the legal issues regarding First Amendment concerns, but it was good to see them wrapping up with potential ways forward to mitigate the dangers that we’ve been talking about on this episode.

I see a couple things in there… I guess one thing is there’s definitely an emphasis on social media companies, both to contribute to the discussion, but also to utilize their resources to help label synthetic content. There’s definitely a responsibility placed on social media companies. It will definitely be interesting to see how social media companies receive that, and the cooperation that they get. I’m not totally convinced it will be exactly – there are competing interests here always, right?

Of course.

So it’ll be interesting to see how that pans out. The other thing is maintaining intelligence on adversaries capable of deploying deepfake content - I think that includes me. I don’t wanna give myself away, but I’m capable of deploying it… I guess I’m not an adversary of the U.S. government, or at least I don’t consider myself to be so… But it is interesting to me. That’s like everyone who knows how to clone a repo on GitHub and run that code. I don’t know… I guess probably what they’re getting at is government entities, and other things…

That’s what I think, yeah.

But it seems like there’s a huge range of things. I guess I don’t want the government maintaining intelligence on me, since I’m capable of deploying those technologies I guess is what I’m saying.

I think the intent of the recommendation was really about adversaries of the United States. That’s what it was looking at.

It makes sense.

That was certainly the impression I came away with from listening to the testimony.

Yup. I guess maybe one way we could wrap up this discussion is to give you some good learning resources and links on this subject… Not so much on the “learn how to make deepfakes”, because you probably also don’t want to have the U.S. government maintaining intelligence on you, but maybe just learning about deepfakes, the state of the art, and also detection methods and other things.

[52:09] I think one good link is if you just go back one Fully Connected episode, to when we were talking about GANs, reinforcement learning and transfer learning - certainly GANs and transfer learning come into play in this discussion in a big way. There’s also that good overview article from the Washington Post that I mentioned, which we’ll link; it’s not really technical, but it does provide a good number of links as well…

And then there were a couple of workshops that I’ve found. Actually, right now ICML is going on. There is a workshop there that I’m assuming will post papers and maybe some results and discussions about synthetic realities, deep learning for detecting audio-visual fakes… There’s also a workshop that happened at Applied Machine Learning Days this year, about fake news detection. We’ll link that. There’s actually a GitHub repo that goes through a tutorial, and some ideas and slides about fake news detection. I think those would be good starting places.

I appreciated the discussion today, Chris. I think it was good to talk through some of these things, it was difficult in many ways… I like to be generally positive, I hope that comes through in the podcast, but…

I think we both do, actually…

It’s a pretty heavy topic.

Yeah. I think we have a responsibility to our listeners and in general just to be able to fairly represent things. Obviously, a lot of the things that we talk about are just exciting, and fun, and I hope that we come off that way to our listeners… But there’s occasionally some scary stuff in this field, and I think it’s our responsibility to represent that as well. I hope our listeners feel a little bit more attuned to this topic, and sorry for the downer of an episode… I appreciate everybody sticking in with us through this. It was a good talk, Daniel.

Yeah, definitely. We’ll talk to you next time, Chris.

Talk to you next time, thanks a lot.

