Deep Fakes and Social Media: A Q&A With Alex Cohen

Learn about the threat of deep fake technology on social media, and how to prepare your agency
Jun 8, 2021

The questions below were curated from attendees of the Deep Fakes and Social Media Webinar, hosted by the SocialGov Community of Practice and Digital.gov on May 18, 2021. Questions were moderated by Gabrielle Perret, SocialGov Community of Practice Board Chairwoman at the U.S. General Services Administration (GSA), and answered by Alex Cohen, Director of Emerging Technologies for GSA’s Office of Technology Policy.

Q: What are some surprising things you have learned about deepfakes?

A: Deepfakes are really easy to make. There are studios and downloadable tools available that are easy to use and allow people to get started immediately. I think the biggest surprise is that deepfakes have not been used yet for more destructive purposes. Most of them are simply politicians or celebrities saying and doing stupid things – rather than anything with a mission behind it.

Q: Do people need videos in order to create deepfake videos? Or just photos?

A: Just photos, although we are talking about thousands of them. Video helps because it runs at 30 frames per second, so even a short clip yields a large set of stills; a single minute of video provides roughly 1,800 frames. But still images alone are sufficient if you have enough of them.

Q: What political examples have there been?

A: There was one in India. It was the first example we saw used in a political campaign, and it was relatively innocuous rather than malicious. The campaign wanted the candidate to appear to speak multiple languages, so they had different people deliver the lines in each language and then mapped the politician’s face onto the speakers.

Q: What about in the United States?

A: We have not really seen this in the United States, other than for humorous purposes, not for anything purporting to be an official statement.

Q: Do libel or slander laws apply with deepfakes? Have there been any legal cases regarding impersonation?

A: Yes, there have been, although, again, it depends on whether the deepfake is being used for humorous purposes or satire; those uses are generally protected. There have been some lawsuits in the entertainment area.

Q: How do you combat these videos in a more immediate fashion? What are the initial steps, who do you contact, who do you email, who do you call right away?

A: We urge you to pre-plan. Have some pre-approved language ready to roll, and be prepared to make a video in response if it’s a video-based deepfake. There really is no other way, although you can certainly contact a social media platform if a fake is circulating on Facebook or elsewhere; reach out to your agency’s connections to those organizations. Once a fake is out, it is going to be hard to fight. It can bounce around YouTube and be difficult to stop. That’s why it’s important to have something ready to go: as soon as you detect a deepfake, be prepared to issue a statement and get your own content out there as quickly as possible. The good news is that the response can be relatively low-tech. Publish a press release or an official statement, read something into a camera, talk about it, and be ready to go. It’s not a high-tech solution; responding coherently is well within an agency’s capacity.

Q: Do you know if there’s a social media company doing any monitoring or addressing this in any way?

A: This is certainly a hot topic, and it falls under the broader misinformation issue that is going on right now. Unfortunately, these are hard problems, because there are no computer algorithms that can reliably detect whether something is fake. It really does fall to the agencies to flag and report content immediately. If you come across a fake, use the content moderation tools within that platform, and be prepared to reach out to the platform directly if you have any specific contacts there.

Q: Have advertising campaign videos been used for deepfakes?

A: You don’t see it in the U.S., per se, and frankly, video is video. Many of these politicians probably have tons of press releases, conferences, and speaking engagements on record. It’s not hard; the footage doesn’t need to come from the same video, and it doesn’t need to come from the same event. You can take something from here and something from there to assemble enough images to generate a compelling deepfake. Obviously, Hollywood actors are easy because they’re on camera by definition, which is why they’re usually the examples bundled with a deepfake platform. But there’s more than enough video of major politicians to generate these.

Q: Are there any specific technologies being used right now that can detect deepfakes? So they can be debunked?

A: There are a few out there. We did not explicitly call them out because it’s a constant battle between the tools and the technology: I could give you a name today, and tomorrow there may be somebody new. It’s a battle that is actively being fought in the tech space. Our recommendation is not to commit to a specific automated tool, but to put contracts in place so you can acquire one when needed, because the landscape is constantly changing. Be prepared to respond when a deepfake occurs.

Q: Is there anything in HTML that can help you determine if something is a deepfake?

A: No, not really.

Q: Do you partner with any social media companies to counter deepfakes before they become viral?

A: This really isn’t our space. I would contact your Public Affairs office, which typically works with social media companies. Our office makes policy recommendations and provides best practices, but we don’t focus solely on deepfakes.

Q: Are there any free online tools you can use to detect deepfakes?

A: Yes, there are. They change constantly. This is a current area of research for us.

Q: Do you imagine some sort of a verification system in the future that can pre-confirm whether video content is real or what it claims to be?

A: I don’t think so. But imagine someone putting compromising content of a secretary up on YouTube or sharing it on their own Facebook page. A fake could even surface during an emergency: imagine, during a hurricane, a fake video of the FEMA Administrator saying there’s no hurricane, stay in your house, and that video circulating among a whole group of people in the hurricane’s path. So the answer is no, we don’t really see those kinds of verification tools coming. We recommend monitoring agency officials on social media platforms so that deepfakes are flagged immediately. Have a blog post, a press release, and a video script ready that explain what the deepfake is and that your agency did not make it, and point to authentic sources. Debunk it.

Q: Have you seen anything yet where folks flag video content as misinformation? For images and posts, I have seen fact checkers come in and “x” out the screen with a statement that says this information has been proven to be false. But I haven’t seen that for videos.

A: We have seen video content flagged as misleading when the claims made in it are inherently misleading. What makes this difficult is that a compromising situation with an agency official can be hard to assess. An official can appear to be saying something that does not seem right; if it’s at the agency’s own location, it’s easy to say, no, that was not us. But if that official appears to be saying something on the street or at another location, it could pass for a legitimate news story. The agency may even have a difficult time determining whether footage is fake if it was shot in an informal setting. Did the Secretary really say that to this person at a bar, or wherever it is? That is why, again, being prepared to respond is really the key here.

Q: Does the deepfake software allow you to impersonate a person’s voice in addition to their photo and video?

A: Yes. There are tools that, given audio recordings, allow you to impersonate someone’s voice. If you can get enough recordings of the person talking, you can use them to generate a fake phone call that sounds like that agency official. As mentioned at the beginning, geospatial data can also be generated synthetically with this technology.

Q: Do you know if any of the social media companies are digging deep to reveal fake videos? Do you know if they have procedures in place to take them down? In the past I have worked with Twitter when there was some threatening content that was posted, and they were immediately responsive in getting it taken down.

A: I know it’s a hot area for them. When we were doing our research, we found that the social media platforms were trying to figure this out just like everyone else: how do we deal with this, and how do you verify content when it’s hard to prove that it’s not real?

Q: Given the proliferation of conspiracy theories, what if an agency counters a deepfake and people still refuse to believe the facts?

A: This is why you need to put some energy into creating a response. Be prepared to respond coherently and point to authentic sources. That refusal to believe is exactly what you want to prevent, and the longer a fake is out there without a response, the more it will circulate. You want to respond quickly with what this is: we did not do this, here’s the actual source, here’s the correct person speaking. That’s why we’re urging preparation; that is going to be the key. The last thing you want to be doing is scrambling to get sign-off from agency officials. Have some evergreen content that you think isn’t going to change between now and when an incident happens. You can have the sections all ready to go, then point to the specific instance, fill in the blanks, and it’s basically pre-approved.
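As a minimal sketch of that fill-in-the-blanks approach, a pre-approved statement could be stored as a template with placeholders for the incident-specific details. The field names and wording below are illustrative assumptions, not an official template:

```python
from string import Template

# Hypothetical pre-approved statement; only the blanks change per incident.
STATEMENT = Template(
    "The video circulating on $platform that appears to show $official "
    "is fabricated. $agency did not produce or authorize this content. "
    "For authentic statements, visit $source."
)

# Fill in the blanks once a specific deepfake is detected.
print(STATEMENT.substitute(
    platform="YouTube",
    official="the FEMA Administrator",
    agency="FEMA",
    source="https://www.fema.gov",
))
```

Because the surrounding language is pre-approved, only the substituted values would need a final review before publication.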

Q: How do you keep ahead of the innovation curve when it comes to drafting policies and assessing new technologies, especially with deepfakes? The technology is adapting and changing every day as people become more and more sophisticated. How do you stay on top of that?

A: This is separate from the deepfake conversation, but in general, we built out a reconnaissance capability. We have access to computer science and association journals, including those of the Institute of Electrical and Electronics Engineers (IEEE). Everyone on the team has one or two subjects that they monitor. Each week they read the latest journal entries, keep up with the field, and identify new sources. We also follow certain people on Twitter, anyone who is hot in those particular fields. For example, I monitor cloud; someone else on the team monitors cybersecurity and business processes. We built out an entire taxonomy of the IT space, and everyone is assigned one to three categories in that taxonomy.

We have “New Technology Wednesdays,” where we discuss whether there are policy implications for any of the topics we are monitoring. Some of them are so far out that they may not have policy implications yet. For example, one might be printable tattoos that include circuitry, which could raise issues for sensitive compartmented information facilities (SCIFs) and other secure environments, or we might be looking at a medical device. We add policy issues to a basic Trello board and prioritize them every month; so far, we’ve identified 90 or so policy issues.

We have a very agile shop that has built out this reconnaissance capability. In the near future we will be soliciting industry, asking what technology issues we should be paying attention to (that we’re not) and whether they have had any difficulty bringing their technologies into government. Monitoring new technologies is an active effort; we have to keep on top of it. There’s no one source that tells you everything; it’s a matter of monitoring and tracking a lot of different news sources.

Q: If folks in this group want to learn more, are there other resources or related organizations that they can follow or research more into?

A: When we were working on this topic, the Defense Advanced Research Projects Agency (DARPA) was particularly helpful to us. We reached out to a number of folks in the intelligence community, asked for their best advice and feedback, and incorporated that into our recommendations. If anyone has a contact at DARPA, I would strongly recommend reaching out. We also got some feedback from the U.S. Department of Homeland Security.

Q: Are deepfakes really just for people? Or, can objects or other images be manipulated in video and images?

A: As I mentioned, geospatial data was recently deepfaked. Researchers used the same deepfake technology to create a map with added features that looked like authentic satellite imagery. In terms of other objects, I have not seen that in particular, but the technology works for geospatial data, and there’s no reason it wouldn’t work for objects as well.

Q: In your geospatial example, what was that data used to portray?

A: I believe in that case it was demonstrated at a cybersecurity conference or convention. It’s a current hot topic: not only can you use this technology to generate people, you can generate geospatial data. The concern is the same, right? A lot of news reports show troop mobilizations or other incidents, and fake geospatial data of that type looks official. It looks real, but it’s totally synthetic. So that would be a concern. This is a new form of misinformation that we need to learn how to combat: not only people and voices, but also satellite imagery.

Q: How are most entities notified of a deepfake video? As a one-person program, I have had a hard time monitoring everything, especially in this situation where they’re obviously not tagging the official accounts on social media. How do folks find out if there’s a deepfake out there?

A: We recommend alert bots. Set up alerts for any of your officials who are on camera often, have the results dumped into an inbox somewhere, and keep an eye on it. The alert bots will monitor blogs, other news sources, and public posts on social media sites. If you suddenly see a bunch of mentions of that person, or people start chatting about it, something like that might pick it up. Be prepared to have those kinds of tools running in the background. [Gabrielle Perret: I would think, too, that sometimes a good old-fashioned hashtag search or keyword search in social media can bring up content and put it on the radar as well.]
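As a rough sketch of what such an alert bot could look like, the loop below polls a keyword-alert RSS feed (for example, one produced by a Google Alerts subscription) and surfaces new mentions. The feed URL is a placeholder, and a real deployment would forward hits to a monitored inbox rather than print them:

```python
import time

import feedparser  # third-party library: pip install feedparser

# Placeholder URL; point this at a keyword-alert RSS feed for the official.
FEED_URL = "https://example.com/alerts/agency-official.rss"

seen_ids = set()

while True:
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        # Fall back to the link when a feed omits stable entry IDs.
        entry_id = entry.get("id") or entry.get("link")
        if entry_id and entry_id not in seen_ids:
            seen_ids.add(entry_id)
            # In practice, forward this to an inbox or chat channel.
            print(f"New mention: {entry.get('title')} -> {entry.get('link')}")
    time.sleep(15 * 60)  # poll every 15 minutes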

Originally posted by Alex Cohen and Gabrielle Perret, U.S. General Services Administration (GSA), on Jun 8, 2021