New AI tools make it easy to create fake video, audio and text: NPR


Ethan Mollick, a business professor at the Wharton School in Pennsylvania, created a deepfake video using his own photo (left) on an artificial intelligence platform (right).

Ethan Mollick


At first glance, the video Ethan Mollick posted on LinkedIn last month looks and sounds like what you'd expect from a business professor at Pennsylvania's Wharton School. Dressed in a checked shirt, he gives a talk on a subject he knows deeply: entrepreneurship.

Of course, his delivery is stiff and his mouth moves strangely. But if you don’t know him well, you probably won’t think twice.

But the video is not Ethan Mollick. It is a deepfake. Mollick created it himself, generating the words, voice and animation with artificial intelligence.

“It was mostly to see if I could do it, and realizing it was a lot easier than I thought,” Mollick said in an interview with NPR.

Like many who closely follow the rapid acceleration in AI technology, Mollick is excited about the potential of these tools to change the way we work and help us become more creative.

But there is growing concern that the proliferation of this so-called "generative AI" could supercharge the propaganda and influence campaigns of bad actors.

Mollick teaches entrepreneurship to students and executives. More recently, he has tapped into the new crop of AI-powered tools that anyone can use to create strikingly convincing images, text, audio and video, from chatbots like OpenAI's ChatGPT and Microsoft's Bing to image generators like DALL-E and Midjourney.

"I stumbled upon being an AI whisperer," Mollick said with a laugh. Now he wants his students to use AI, and he documents his own experiments on his social media feeds and in his newsletter.

Fast, easy and cheap

Mollick started with ChatGPT, the chatbot from OpenAI that exploded in popularity after launching in November and set off a race among tech companies to release generative AI tools of their own.

"I told it to write a script about entrepreneurship, and it did a great job," he said. Next, he turned to a tool that can clone a voice from a short audio clip.

"I gave it a minute of me talking about some unrelated topic, like cheese, then I uploaded the script, and it generated the audio file," he said.

Finally, he fed that audio and his own photo into another AI application.

“You put in a script and it actually moves the mouth and moves the eyes and makes you shrug. And that’s all I needed,” he said.

It was quick, easy and cheap: Mollick spent $11 and only about eight minutes of work.

"In the end, I had a fake me giving a fake lecture I've never given in my life, but it feels like me, in my fake voice," he said.

Mollick posted the experiment online as a demonstration, and as a warning: such AI fakery is not a looming threat. It's already here.

"I think people are not concerned enough about this," he said. "I'm a fan of this technology in many ways. But I don't think we're ready for the social implications of convincing fakes at scale. … The idea that anybody can do this is a new phenomenon."

Fake video and images of Biden and Trump already exist.

Concerns about deepfakes have been around for years. Now the underlying technology has improved and become accessible to anyone with a smartphone or computer.

People are using synthetic audio to impersonate presidents Donald Trump, Barack Obama and Joe Biden for jokes and memes, like a viral TikTok trend that imagines the three playing video games together.

But deepfakes are also being used for political purposes.

Right-wing activist Jack Posobiec, known for promoting the Pizzagate conspiracy theory, recently posted a fake video of President Biden announcing a draft to send U.S. troops to Ukraine.

While Posobiec clarified that the video was an AI-generated fake, he also described it as a "sneak preview" of things to come.

Many people shared the video without any disclaimer that it isn't real.

This week, AI-generated fake images showing what it might look like if former President Trump were arrested were viewed by millions of Twitter users, amid speculation that a New York grand jury may soon indict the former president. An apparently fake photo of Chinese leader Xi Jinping meeting Russian President Vladimir Putin has also circulated widely online.

AI-generated propaganda and fraud are already emerging.

Late last year, the research firm Graphika published the first known case of a state-aligned influence operation using deepfakes. The researchers found pro-China bots sharing fake news videos featuring AI-generated anchors on Facebook and Twitter.

Meanwhile, scammers are using fake voices to steal money by impersonating family members in trouble.

“The information ecosphere is going to be polluted,” says Gary Marcus, a cognitive scientist at New York University who studies AI.

He says we're unprepared for what it means to live in a world saturated with AI-generated content, and worries that widespread access to this technology will undermine our ability to trust anything we see online.

"A bad actor will take one of these tools and use it to make an unimaginable amount of really convincing and scary disinformation," Marcus said.

"That can be complete with data, false references to studies that don't exist. And not just one story like that, which a human could write, but thousands or millions or billions, because you can automate these things."

AI-generated text can be harder to detect than fake pictures and videos.

Marcus and others watching AI's rapid push into public use are particularly concerned about a new set of text-generating tools: ChatGPT, the technology powering Bing, and Bard, Google's new chatbot, which was released this week.

These tools are trained to recognize language patterns by ingesting huge amounts of text from the internet. They can generate news articles, essays, tweets and dialogue that look as if they were written by real people.

"Language models are a natural tool for propaganda," said Josh Goldstein, a researcher at Georgetown University's Center for Security and Emerging Technology. He co-wrote a recent paper investigating how these AI-powered tools could be misused for influence operations.

"Using a language model, propagandists can create lots of original articles, and they can do it faster and at lower cost," he said.

That means a troll farm could operate with fewer workers, and large-scale propaganda campaigns could become feasible for a wider variety of bad actors.

What's more, researchers have found that AI-generated content can indeed be persuasive.

“You can generate persuasive propaganda even if you’re not completely fluent in English or don’t know the idiom of your target community,” says Goldstein.

Generated text can also be harder to spot than fake video or audio. And online campaigns that use AI to write posts can look more organic than the copy-and-paste messages often associated with bots.

And although AI-written content isn't always persuasive, for propagandists its sheer abundance is a feature, not a bug. The fear is that this flood of text supercharges the so-called "firehose of falsehood," a propaganda tactic that indiscriminately sprays out false and often contradictory messages.

Former Trump adviser Steve Bannon had another phrase for it: "flood the zone with s***."

“If you want to flood the zone with s***, there’s no better tool than this,” Marcus said.

To be clear, researchers have not yet identified propaganda or influence operations that used AI-generated text in the wild.

Companies are scrambling to build defenses.

Tech companies launching AI tools are scrambling to put safeguards in place to prevent abuse, as well as to rein in the technology's tendency to make things up (known in the field as "hallucination") and behave strangely.

But there are open-source versions these companies don't control. And at least one powerful AI language model, developed by Facebook parent Meta, has already leaked online; it was quickly posted to 4chan, an anonymous message board.

Meanwhile, tech companies are racing to build AI into more and more products, from search to productivity tools to operating systems.

Aza Raskin, co-founder of the Center for Humane Technology, describes the situation as an arms race.

Raskin and co-founder Tristan Harris warned about the dangers of social media in the documentary film "The Social Dilemma." They have now turned their attention to warning about the active harms of AI being released irresponsibly.

Raskin says he sees huge potential benefits from AI and believes we all need to learn to live with and use these tools.

"But that's very different," he says, from baking these technologies into consumer software, social apps and underlying infrastructure before we know how to make them safe.

Mollick, the self-described AI whisperer at Wharton, is unfazed by Silicon Valley's AI rush.

"The cat is out of the bag, and we all have to deal with cats everywhere," he said.
