Chatbots, deepfakes, and voice clones: AI deception for sale

You may have heard of simulation theory, the notion that nothing is real and we're all part of a giant computer program. Let's assume, at least for the length of this blog post, that this notion isn't true. Nonetheless, we may be heading toward a future in which much of what we see, hear, and read is a computer-generated simulation. We always keep it real here at the FTC, but what happens when none of us can tell real from fake?

In a recent blog post, we discussed how the term "AI" can be used as a deceptive selling point for new products and services. Let's call that the fake AI problem. Today's topic is the use of AI behind the screen to create or spread deception. Let's call this the AI fake problem. The latter is a deeper, emerging threat that companies across the digital ecosystem need to address. Right now.

Most of us spend a lot of time staring at things on a device. Thanks to AI tools that create "synthetic media" or otherwise generate content, a growing percentage of what we're looking at is not authentic, and it's getting harder to tell the difference. And just as these AI tools are becoming more advanced, they're also becoming easier to access and use. Some of these tools may have beneficial uses, but scammers can use them to cause widespread harm.

Generative AI and synthetic media are colloquial terms used to refer to chatbots built on large language models and to technology that simulates human activity, such as software that creates deepfake videos and voice clones. Evidence already exists that fraudsters can use these tools to generate realistic but fake content quickly and cheaply, disseminating it to large groups or targeting certain communities or specific individuals. They can use chatbots to generate spear-phishing emails, fake websites, fake posts, fake profiles, and fake consumer reviews, or to help create malware, ransomware, and prompt injection attacks. They can use deepfakes and voice clones to facilitate imposter scams, extortion, and financial fraud. And that's very much a non-exhaustive list.

The FTC Act's prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive, even if that's not its intended or sole purpose. So consider:

Should you even be making or selling it? If you develop or offer a synthetic media or generative AI product, consider at the design stage and thereafter the reasonably foreseeable, and often obvious, ways it could be misused for fraud or cause other harm. Then ask yourself whether such risks are high enough that you shouldn't offer the product at all. It's become a meme, but here we'll paraphrase Dr. Ian Malcolm, Jeff Goldblum's character in "Jurassic Park," who admonished executives for being so preoccupied with whether they could build something that they didn't stop to think whether they should.

Are you effectively mitigating the risks? If you decide to make or offer a product like that, take all reasonable precautions before it hits the market. The FTC has sued businesses that disseminated potentially harmful technologies without taking reasonable measures to prevent consumer injury. Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors. Your deterrence measures should be durable, built-in features, not bug fixes or optional features that third parties can undermine via modification or removal. If your tool is intended to help people, also ask yourself whether it really needs to emulate humans or can be just as effective looking, talking, and acting like a bot.

Are you over-relying on post-release detection? Researchers continue to improve detection methods for AI-generated videos, images, and audio. Detecting AI-generated text is more difficult. But these researchers are in an arms race with the companies building generative AI tools, and the fraudsters using those tools have often moved on by the time someone detects their fake content. In any event, the burden shouldn't be on consumers to figure out whether a generative AI tool is being used to scam them.

Are you misleading people about what they're seeing, hearing, or reading? If you're an advertiser, you might be tempted to employ some of these tools to sell, well, just about anything. Celebrity deepfakes are already common, for example, and have been popping up in ads. We've previously warned companies that misleading consumers via doppelgängers, such as fake dating profiles, phony followers, deepfakes, or chatbots, could result, and in fact has resulted, in FTC enforcement actions.

While the focus of this post is on fraud and deception, these new AI tools carry with them a host of other serious concerns, such as potential harms to children, teens, and other at-risk populations when they interact with, or are subject to, these tools. Commission staff is tracking those concerns closely as companies continue to rush these products to market and as human-computer interactions keep taking new and possibly dangerous turns.