KIRO 97.3 host Dave Ross asked Rob about a Stanford study that demonstrated the potential for video technology to completely change a subject’s facial expressions. The technology can use existing footage or even real-time broadcasts. Coupled with audio manipulation, the methods could be a boon to fake news purveyors, or to governments looking to sow chaos in a rival country.
Dave Ross: “The inventors of this claim that they’re worried about this too, and they’ve created a watermark system – [a] sort of invisible, digital watermark – so that if anything ever crossed the line, you could do some sort of electronic analysis on it and conclusively prove that it was fake. And yet again, in the meantime, so many people are getting their news over the internet, the damage could have been done.”
Rob McKenna: “Yeah, that doesn’t solve the fake news problem, you’re absolutely right, and this could be very, very powerful. You know, imagine an American president, his image and voice being manipulated to say something hateful about people in another country, who then see it on their televisions and their computers, and they go nuts.
“A lot of people could be hurt, a lot of property damaged, and other harms created before it’s sorted out, if it ever is sorted out.”