A researcher who has created a fake video of President Obama defended his invention at the last TED conference.
The clip shows a computer-generated version of the former US leader mapped to fit an audio recording. Experts have warned that the technology could trigger a “political crisis”.
Dr. Supasorn Suwajanakorn recognized that there was a “potential for abuse”.
But he told the Vancouver event that the technology could also be a force for good.
The computer engineer is now employed by Google's Brain division. He is also working on a tool to detect fake videos and photos on behalf of the AI Foundation.
Risk of damage
Dr Suwajanakorn, with colleagues Steven Seitz and Ira Kemelmacher-Shlizerman of the University of Washington, published a paper in July 2017 describing how they created the fake Obama.
They developed an algorithm that took audio and transposed it on to a 3D model of the president's face.
That task was carried out by a neural network, trained on 14 hours of Obama speech footage, which mapped the audio data on to basic mouth shapes.
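The paper itself used a neural network to learn the mapping from audio to mouth shapes; as a loose, self-contained illustration of that idea only (not the authors' method), a toy nearest-neighbour lookup over invented feature data might look like this:

```python
import numpy as np

# Toy stand-in for the core idea: pair audio feature vectors with
# mouth-shape parameters extracted from example footage, then look up
# the closest match for new audio. All data here is randomly invented.
rng = np.random.default_rng(0)

train_audio = rng.normal(size=(100, 13))   # e.g. 13 audio coefficients per frame
train_mouth = rng.normal(size=(100, 18))   # e.g. 18 lip-landmark values per frame

def predict_mouth_shape(audio_frame):
    """Nearest-neighbour stand-in for the learned audio-to-mouth mapping."""
    dists = np.linalg.norm(train_audio - audio_frame, axis=1)
    return train_mouth[np.argmin(dists)]

new_audio = rng.normal(size=13)
mouth = predict_mouth_shape(new_audio)
print(mouth.shape)  # (18,)
```

A real system would replace the lookup with a trained model and drive a 3D face rig frame by frame, but the input/output shape of the problem is the same.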
Dr Suwajanakorn acknowledged that "fake videos can do a lot of damage" and said an ethical framework was needed.
"The reaction to our work has been quite mixed. Some people, such as graphic designers, thought it was a great tool. But it was also very scary for other people," he told the BBC.
The technology could give history students the opportunity to meet and interview victims of the Holocaust, he said. Another example would be letting people create avatars of dead relatives.
Political crisis
Experts fear that the technology could create new types of propaganda and false reports.
"Fake news tends to spread faster than real news as it is both novel and confirms existing biases," said Dr Bernie Hogan, a senior research fellow at the Oxford Internet Institute.
"Seeing someone make fake news with real voices and faces, as seen in the recent deepfakes controversy, will likely lead to a political crisis alongside calls to regulate the technology."
Deepfakes refers to a recent controversy over an easy-to-use software tool that scans photographs and uses them to swap one person's face with another's. It has been used to create hundreds of pornographic video clips featuring celebrities' faces.
Dr Suwajanakorn said that although fake videos were a new phenomenon, it remained relatively easy to detect the counterfeits.
"Fake videos are easier to verify than fake photos, because it is difficult to make every frame of a video perfect," he told the BBC.
"Teeth and tongues are hard to model and could take another decade," he added.
The researcher also questioned whether it made sense for fake-news creators to produce complex videos "when they could just write fake stories" instead.