Top AI researchers race to detect deepfake videos: “We are outgunned”

Top AI researchers across the country are racing to defuse an extraordinary political weapon: computer-generated fake videos that could undermine candidates and mislead voters in next year’s presidential campaign.

And they have a message: We’re not ready.

Researchers have developed automatic systems that can analyze videos for the telltale signs of a fake, evaluating light, shadows and blinking patterns and, in one potentially groundbreaking method, even how a candidate’s real-world facial movements, such as the angle of the head when smiling, relate to one another.
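
One telltale sign mentioned above, blink patterns, lends itself to a simple illustration: real speakers blink every few seconds, while some early deepfakes barely blinked at all. Below is a minimal, hypothetical Python sketch, assuming per-frame eye landmarks are already available from an off-the-shelf facial-landmark detector; the 0.2 eye-aspect-ratio threshold and the blink-rate cutoff are illustrative guesses, not a published detector.

```python
# Toy blink-rate heuristic, one of the telltale-sign checks described above.
# Assumes six (x, y) eye landmarks per frame from a landmark detector;
# the 0.2 EAR threshold and the 5-blinks/min cutoff are illustrative.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) landmark array; the ratio collapses toward zero
    when the eyelid closes (the standard eye-aspect-ratio measure)."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical eyelid distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def blinks_per_minute(ear_series: list, fps: float = 30.0) -> float:
    closed = [e < 0.2 for e in ear_series]  # frame counted as eye-shut
    # Each run of consecutive eye-shut frames counts as one blink.
    blinks = sum(1 for i, c in enumerate(closed)
                 if c and (i == 0 or not closed[i - 1]))
    minutes = len(ear_series) / fps / 60.0
    return blinks / minutes if minutes else 0.0

# Humans blink roughly 15 to 20 times a minute; a clip far below that
# would be flagged for closer forensic review.
rate = blinks_per_minute([0.3] * 1800)      # 60 seconds, no blinks
print(f"{rate:.1f} blinks/min -> {'suspicious' if rate < 5 else 'normal'}")
```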

But despite all this progress, researchers say they remain vastly overwhelmed by the technology, which they fear could herald a devastating new wave of disinformation campaigns, much as fake news stories and misleading Facebook groups were used to influence public opinion ahead of the 2016 election.

Powerful new AI software has effectively democratized the creation of convincing “deepfake” videos, making it easier than ever to fabricate footage of someone appearing to say or do something they never actually did. The uses range from harmless satire and film tweaks to targeted harassment and deepfake pornography.

And researchers fear it is only a matter of time before such videos are deployed for maximum damage: to sow confusion, fuel doubt or undermine an opponent, potentially ahead of the White House vote.

“We are outgunned,” said Hany Farid, a computer science professor and digital forensics expert at the University of California at Berkeley. “The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.”

Deepfake videos created by artificial intelligence have yet to spark their own political scandal in America. But even simple edits to existing videos can cause turmoil, as happened with the recent viral spread of a video of House Speaker Nancy Pelosi (D-Calif.), distorted to make her speech sound halting and slurred. That video has been viewed more than 3 million times.

Deepfakes have already made their mark elsewhere. In central Africa in 2019, a video of Gabon’s long-unseen president, Ali Bongo, who was believed to be in poor health or already dead, was denounced as a deepfake by his political opponents and cited, about a week later, as a trigger for a failed military coup in Gabon. And in Malaysia, a viral clip of a man’s apparent confession to sex with a cabinet minister has been dismissed by critics as a deepfake. “He doesn’t look like that. . . . His body is not as built as the man in the video,” a local politician said, according to the Malay Mail newspaper in Kuala Lumpur.

Fears about the technology used to make such videos have gone mainstream on Capitol Hill, where lawmakers believe the videos could threaten national security, the voting process and, potentially, their reputations. On Thursday, the House Intelligence Committee will hold a hearing at which AI experts are expected to discuss how deepfakes can evade detection and leave a “lasting psychological impact.”

Rep. Adam B. Schiff (D-Calif.), the committee’s chairman, said: “I don’t think we’re well prepared at all. And I don’t believe the public has any sense of what’s coming.”

Rachel Thomas, co-founder of fast.ai, a machine learning lab in San Francisco, said disinformation campaigns using deepfake videos would likely be amplified by the public internet’s reward structure, in which shocking material attracts a larger audience and can travel farther and faster than the truth. “The videos don’t even need to be that compelling to still have an impact,” Thomas said. “We are social creatures who take cues from the crowd about what other people are seeing. It would be easy for a bad actor to have that kind of impact on public discussion.”

No law regulates deepfakes, though some legal and technical experts have recommended adapting current laws covering libel, defamation, identity fraud or impersonating a government official. But fears of over-regulation abound: the line between First Amendment-protected parody and deepfake political propaganda may not always be clear-cut.

And some fear that the hype or hysteria over fake videos could even undermine people’s faith in video evidence. Disinformation researcher Aviv Ovadya calls this problem “reality apathy”: it becomes too much effort to figure out what is real and what is not, so people are more inclined to simply fall back on their old allegiances.

Maybe it’s already happening. In a recent Pew Research Center survey, about two-thirds of Americans said altered videos and images have become a major problem for understanding the basic facts of current events. More than a third said “made-up news” had led them to reduce the amount of news they get overall.

There are also concerns that deepfakes will embolden some people to deny legitimate videos, a phenomenon the law professors Robert Chesney and Danielle Citron call the “liar’s dividend.” President Trump, for example, has told people that the “Access Hollywood” video, in which he bragged about groping women, was fabricated. (After the actual audio recording was published by The Washington Post in October 2016, Trump apologized for the remarks.)

Democratic and Republican party officials, as well as the country’s leading presidential campaigns, say there is little they can do to prepare for the damage in advance and are counting on social networks and video sites as the best way to find and remove the worst fakes. But the tech companies have differing takedown policies, and most do not require that uploaded videos be true.

“People can duplicate my voice and have me say anything, and it’s a complete fabrication,” former president Barack Obama told an audience in Canada last month. “The marketplace of ideas that is central to our democratic practice has a hard time working if we don’t have some common baseline of what’s true and what’s not.”

The technology is developing rapidly. Artificial intelligence researchers at the Skolkovo Institute of Science and Technology in Moscow last month unveiled a “few-shot” AI system that can create a convincing fake of a person from just a few photographs of their face. Lead researcher Egor Zakharov said he could not discuss the work, citing an ongoing review, but the group said in a statement that the “net effect” of the wider availability of such video-synthesis technologies has been positive, and that it is “confident that the technology of neural avatars will be no exception.”

Another team of AI researchers, including from Stanford and Princeton universities, has just unveiled a separate system that can edit what someone appears to say on video simply by changing the text, with the AI swapping out the person’s spoken syllables and mouth movements to produce a seamlessly altered “talking head.”

Lead researcher Ohad Fried said the technology could be used to improve low-budget filmmaking and to localize video for international languages and audiences. But he also acknowledged that it could be used to falsify videos or “slander famous people,” and he suggested that videos made with the word-substitution tool be labeled as synthetic. He said regulators, tech companies and journalists should play a leading role in researching how to debunk fakes and preserve a truthful representation of what happened.

Deepfake videos are just one part of how artificial intelligence could revolutionize disinformation. New natural-language AI systems such as GPT-2, developed by the research lab OpenAI, can be fed written text and produce many more paragraphs in the same tone, topic and style, potentially a boon for spam chatbots and “fake news,” even if the output sometimes veers into gibberish.

The technique has already been used to automatically replicate the speaking style of political leaders after training on many hours of their speeches at the United Nations. To counter it, researchers at the University of Washington and the Allen Institute for Artificial Intelligence earlier this month unveiled a fake-text detection system called Grover that could potentially spot what they call machine-generated “neural fake news.”

Convincing audio fakes are also on the horizon, notably from researchers at Facebook who have reproduced human voices with computer-generated speech that sounds deceptively realistic. The MelNet system learned its impersonations by listening to hundreds of hours of TED talks and audiobooks; in sample clips, the system makes Bill Gates, Jane Goodall and others say sentences such as “A cramp is no small danger on a swim.”

In AI circles, identifying fake media has long received less attention, funding and institutional support than creating it: Why sniff out someone else’s fantasy creation when you can design your own? “There’s no money to be made out of detecting these things,” said Nasir Memon, a professor of computer science and engineering at New York University.

One key funder of detection research has been DARPA, the high-tech research arm of the Pentagon, which launched its Media Forensics program in 2016 and sponsors more than a dozen academic and corporate research teams. Matt Turek, the computer vision expert who leads the DARPA program, called the detection of synthetic media a “defensive technology” aimed not only at foreign adversaries but also at domestic political antagonists and internet trolls.

“Nation-states have had the ability to manipulate media almost since the beginning,” Turek said. But a powerful enough fake-detection system, he said, would impose on groups with limited resources “enough computational load that the risk does not justify itself.” The hardest cases arise in what cryptographic circles call “untrusted environments,” where official data about a video’s creator, origin and distribution is impossible to trace. And speed is crucial: every minute an investigator spends debunking a video lets it travel that much farther across the internet.

Forensic experts have identified a range of subtle indicators that can serve as clues: the patterns of light and shadows, the angles and blurring of facial features, or the softness and weight of clothing and hair. But in some cases, a skilled video editor can go back over a fake to iron out possible flaws, making it much harder to assess. To build their detector, Farid and his team fed hours of video of high-profile leaders into the system and taught it to look for hyper-precise “facial action units”: data points of each person’s facial movements, tics and expressions, including how they raise their upper lip and the way their heads turn when they frown.

To test these “soft biometric” models, Farid and his team worked with digital-avatar designers to create some deepfakes of their own, swapping the faces of Sen. Elizabeth Warren (D-Mass.), Hillary Clinton and President Trump onto the performers who impersonate them on “Saturday Night Live.” The system showed high reliability against many different types of fakes: satirical parodies; face-swap fakes, popular in internet memes; lip-sync fakes, in which the real face remains but the mouth is replaced; and “puppet-master” fakes, in which the target’s face is placed atop an actor’s body.
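
For readers curious how such a “soft biometric” check might look in code, here is a minimal, hypothetical sketch. It assumes facial action-unit intensities have already been extracted per frame (for instance, with a facial-behavior toolkit such as OpenFace), turns each clip into correlation features across those signals, and fits a one-class model on genuine footage of one person so that clips whose mannerism correlations deviate get flagged. The feature construction and parameters are illustrative assumptions, not the published system.

```python
# Hypothetical sketch of a "soft biometric" deepfake check.
# Assumes per-frame facial action-unit (AU) intensities are already
# extracted for one person; the random arrays below stand in for them.
import numpy as np
from sklearn.svm import OneClassSVM

def correlation_features(au_matrix: np.ndarray) -> np.ndarray:
    """Flatten the upper triangle of the correlation matrix computed
    across a clip's AU tracks; au_matrix has shape (frames, n_aus)."""
    corr = np.corrcoef(au_matrix.T)          # pairwise AU correlations
    iu = np.triu_indices_from(corr, k=1)     # each AU pair once
    return corr[iu]

rng = np.random.default_rng(0)
real_clips = [rng.normal(size=(300, 16)) for _ in range(40)]  # stand-in data
X_real = np.stack([correlation_features(c) for c in real_clips])

# Train only on genuine footage; anything unlike it is an outlier.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_real)

# predict() returns -1 when a clip's mannerism correlations deviate
# from the person's learned profile, i.e., the clip looks suspect.
suspect = correlation_features(rng.normal(size=(300, 16)) * 2.0)
label = model.predict(suspect.reshape(1, -1))[0]
print("flagged as fake" if label == -1 else "consistent with real footage")
```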

The study, titled “Protecting World Leaders Against Deep Fakes,” was funded in part by Google, Microsoft and DARPA. It will be presented next week alongside other work at the Computer Vision and Pattern Recognition conference, a landmark annual summit backed by the biggest names in American and Chinese AI.

Sam Gregory, program director of WITNESS, an advocacy group that helps train citizen journalists around the world to document abuses, said platforms such as Facebook, Instagram and Twitter should come together around a “shared immune system” designed to identify and suppress viral fakes. Scanning the faces of senior politicians with Farid’s method would protect top leaders, Gregory said, but not local politicians, journalists or others who might be vulnerable to attack.

Farid wants news outlets to have access to a deepfake-detection tool so they can assess a news video when it arises. But distributing the system more widely carries its own threat, as it could allow deepfake creators to examine the code and find workarounds. This cat-and-mouse game has long frustrated forensic scientists, ensuring that even a promising detection method is only temporarily useful.

Siwei Lyu, director of the computer vision lab at the State University of New York at Albany, last year helped lead a pioneering study that found deepfakes shared a telltale flaw: the faces didn’t blink. It was a detection victory, until two weeks later, when Lyu received an email from a deepfake creator saying they had solved the problem in their latest fakes.

Lyu believes manipulated media could have a broader psychological effect, subtly shifting how citizens perceive politics, news and ideas.

“Everyone knows this is a fake video, but they watch it anyway,” Lyu said. “It creates an illusion, and that kind of damage is very hard to undo. And it can come from anywhere. With the internet, the old boundaries are being erased.”

High-definition fake videos are often the easiest to spot, researchers say: the more detail in a video, the more opportunities the fake has to reveal its flaws. But the modern internet works against this advantage, because most social networks and chat apps compress videos into formats that make them quicker and easier to share, stripping away the very details detectors need.
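
The compression effect described above can be demonstrated in a few lines. The sketch below is a toy illustration rather than a forensic tool: it recompresses a stand-in frame at a low JPEG quality of the kind platforms use and measures how much high-frequency detail, where subtle manipulation artifacts tend to live, survives. The quality setting and the energy measure are illustrative choices.

```python
# Toy demonstration of why platform recompression hurts forensics:
# aggressive JPEG encoding strips the high-frequency detail that
# detection systems rely on. Quality level and metric are illustrative.
import io
import numpy as np
from PIL import Image, ImageFilter

def high_freq_energy(img: Image.Image) -> float:
    """Mean absolute residual of the image minus a blurred copy,
    a rough proxy for how much fine detail remains."""
    g = np.asarray(img.convert("L"), dtype=float)
    blur = np.asarray(img.convert("L").filter(ImageFilter.GaussianBlur(2)),
                      dtype=float)
    return float(np.abs(g - blur).mean())

frame = Image.effect_noise((256, 256), 40).convert("RGB")  # stand-in frame
buf = io.BytesIO()
frame.save(buf, format="JPEG", quality=20)   # platform-style recompression
buf.seek(0)
recompressed = Image.open(buf)

print(f"original frame:      {high_freq_energy(frame):.2f}")
print(f"after recompression: {high_freq_energy(recompressed):.2f}")
```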

Some researchers see this problem as insurmountable, which has prompted calls for an authentication system that would instead “fingerprint” videos the moment they are captured. That could make fakes easier to detect, but it would require buy-in from smartphone makers, camera manufacturers and publishers, a far-off proposal that could take years.
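
A capture-time “fingerprinting” scheme of the kind just described could, in its simplest form, amount to a camera signing a hash of each recording so that any later edit breaks verification. The sketch below is a hypothetical toy version using an Ed25519 signature; real proposals involve secure hardware key storage and per-frame or streaming hashes, which are omitted here.

```python
# Hypothetical capture-time authentication: the camera signs each
# recording's hash at capture, so any subsequent edit fails to verify.
# Key handling is drastically simplified for illustration.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()    # would live in camera hardware
public_key = device_key.public_key()         # published by the manufacturer

def sign_capture(video_bytes: bytes) -> bytes:
    return device_key.sign(hashlib.sha256(video_bytes).digest())

def verify_capture(video_bytes: bytes, signature: bytes) -> bool:
    try:
        public_key.verify(signature, hashlib.sha256(video_bytes).digest())
        return True
    except InvalidSignature:
        return False

original = b"...raw sensor frames..."
sig = sign_capture(original)
print(verify_capture(original, sig))            # True: untouched footage
print(verify_capture(original + b"x", sig))     # False: tampered footage
```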

“I have been working on detection for 10 years. It doesn’t work,” Memon said. “Videos on Facebook? Things floating around on WhatsApp? . . . It may never work. And the adversary, meanwhile, has stepped up a notch or two.”

Political campaigns, which until recently had only to guard against crude video gaffes, say they do not know how to prepare for this new weapon of mass deception. Several campaign officials said they are pinning their hopes on tech companies becoming more aggressive about policing fakes.

A Democratic National Committee official said the committee is helping educate campaigns on how to respond to misinformation and seek removals from social networks. A Republican National Committee official said it urges staff to be on the lookout for suspicious videos and that its digital team works with the tech giants to flag malicious content and accounts.

But the tech giants’ policies differ on whether to remove fakes or to label, demote and preserve them. YouTube, for example, quickly removed the distorted Pelosi video, saying it violated its “deceptive practices” policy. But Facebook kept the video online, saying it does not have a policy stipulating “that the information you post on Facebook must be true.”

Facebook funds some university research into manipulated media and said in a statement to The Post that “fighting disinformation is one of the most important things we can do.” This week, the company became the target of its own hoax when an altered video of chief executive Mark Zuckerberg surfaced, appearing to show him boasting of “total control” over the world’s data. (The fake remains online.)

Twitter said it systematically reviews more than 8 million accounts that seek to spread videos through “manipulative tactics.” The company added that it does not believe it should set a precedent of intervening to determine what is true and what is not on the web.

The company also said that false information is often debunked within minutes, thanks to ordinary people checking in real time, and that, as a rule, factually inaccurate material gets very little traction on Twitter before it is debunked. The firm was unable to provide statistics to support that assertion.

Perhaps the most common flaw in today’s visual misinformation, researchers say, is not sophisticated fakery but the misattribution of real footage: video of a real protest march or violent skirmish, for example, captioned as if it had happened somewhere else.

Detection systems have taken on a new urgency because of the coming elections, but there is also growing interest from corporate America in protecting against viral frauds. Shamir Allibhai, founder of Amber, a small fake-detection startup, said his firm is now working with a test group of corporate clients seeking protection from deepfakes that could show, for example, a chief executive uttering racist or misogynistic slurs.

In a society where video has played a significant role in shaping modern history, researchers say it is critically important to find ways of recognizing fakes, and they fear what could happen if the authority of video slips away.

“The result is that people won’t even believe the truth,” Memon said. “The man in front of the tank at Tiananmen Square moved the world. Nixon on the phone cost him the presidency. Images of horror from the concentration camps finally spurred us into action. If the notion of believing what you see comes under attack, that is a serious matter. You have to restore truth in seeing again.”
