The U.S. Federal Trade Commission moved to put new impersonation rules in place, citing the rising threat of scams enabled by generative artificial intelligence.
The agency is seeking public comment on a proposed rule that would make companies liable if they “know or have reason to know” their technology, including tools used to make AI-generated content, “is being used to harm consumers through impersonation,” according to an FTC statement.
The FTC also said it finalized a rule on impersonations of businesses and the government, such as using business logos in scam emails or sending messages from what appears to be a government address. Under the rule, the commission can file court cases seeking to make scammers return money obtained through such schemes.
The FTC said complaints about impersonation fraud are surging, and that it is concerned AI “threatens to turbocharge this scourge.”
“Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale,” FTC Chair Lina Khan said in a statement, adding that voice cloning and other AI tools have made scams more feasible.
The rapid development of generative AI technology, which can produce voice, video or text in a variety of styles, has dazzled Silicon Valley. At the same time, the technology has raised privacy and security concerns because of its ability to impersonate individuals, as in a robocall that imitated President Joe Biden.
With assistance from Leah Nylen.