AI and US Election Rules

If an AI breaks the rules for you, does that count as breaking the rules? This is the essential question being taken up by the Federal Election Commission this month, and public input is needed to curtail the potential for AI to take US campaigns (even more) off the rails.

At issue is whether candidates' use of AI to create deepfaked media for political advertisements should be considered fraud or legitimate electioneering. That is, is it allowable to use AI image generators to create photorealistic images depicting Trump hugging Anthony Fauci? And is it allowable to use …

Continue reading AI and US Election Rules

Deepfake Election Interference in Slovakia

A well-designed and well-timed deepfake of two Slovak politicians discussing how to rig the election:

Šimečka and Denník N immediately denounced the audio as fake. The fact-checking department of news agency AFP said the audio showed signs of being manipulated using AI. But the recording was posted during a 48-hour moratorium ahead of the polls opening, during which media outlets and politicians are supposed to stay silent. That meant, under Slovakia’s election rules, the post was difficult to widely debunk. And, because the post was audio, it exploited a loophole in Meta’s manipulated-media policy, which …

Continue reading Deepfake Election Interference in Slovakia

Large Language Models and Elections

Earlier this week, the Republican National Committee released a video that it claims was “built entirely with AI imagery.” The content of the ad isn’t especially novel—a dystopian vision of America under a second term with President Joe Biden—but the deliberate emphasis on the technology used to create it stands out: It’s a “Daisy” moment for the 2020s.

We should expect more of this kind of thing. The applications of AI to political advertising have not escaped campaigners, who are already “pressure testing” possible uses for the technology. In the 2024 presidential election campaign, you can bank on the appearance of AI-generated personalized fundraising emails, text messages from chatbots urging you to vote, and maybe even some deepfaked campaign …

Continue reading Large Language Models and Elections

Detecting Deepfake Audio by Modeling the Human Acoustic Tract

This is interesting research:

In this paper, we develop a new mechanism for detecting audio deepfakes using techniques from the field of articulatory phonetics. Specifically, we apply fluid dynamics to estimate the arrangement of the human vocal tract during speech generation and show that deepfakes often model impossible or highly-unlikely anatomical arrangements. When parameterized to achieve 99.9% precision, our detection mechanism achieves a recall of 99.5%, correctly identifying all but one deepfake sample in our dataset.
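As a back-of-the-envelope illustration (my sketch, not the paper's code), here is what "parameterized to achieve 99.9% precision" means in practice: sweep the detector's decision threshold until precision meets the target, then report the recall at that threshold. The scores and labels below are invented; in the paper they would come from the vocal-tract plausibility model.

```python
# Toy illustration: pick a score threshold that meets a target precision,
# then report the recall achieved at that threshold.

def precision_recall_at(scores, labels, threshold):
    """Treat score >= threshold as a 'deepfake' (positive) prediction."""
    tp = sum(s >= threshold and y == 1 for s, y in zip(scores, labels))
    fp = sum(s >= threshold and y == 0 for s, y in zip(scores, labels))
    fn = sum(s < threshold and y == 1 for s, y in zip(scores, labels))
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def threshold_for_precision(scores, labels, target):
    """Lowest threshold whose precision meets the target (maximizes recall)."""
    for t in sorted(set(scores)):
        p, r = precision_recall_at(scores, labels, t)
        if p >= target:
            return t, p, r
    return None  # no threshold reaches the target precision
```

Raising the precision target generally trades away recall; the striking claim in the paper is how little recall (0.5%) is given up at 99.9% precision.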

From an article…

Continue reading Detecting Deepfake Audio by Modeling the Human Acoustic Tract

We’re Entering the Age of Unethical Voice Tech

In 2019, Google released a synthetic speech database with a very specific goal: stopping audio deepfakes. “Malicious actors may synthesize speech to try to fool voice authentication systems,” the Google News Initiative blog reported at the time. “Perhaps equally concerning, public awareness of ‘deep fakes’ (audio or video clips generated by deep learning models) can […]

The post We’re Entering the Age of Unethical Voice Tech appeared first on Security Intelligence.

Continue reading We’re Entering the Age of Unethical Voice Tech

How to Protect Against Deepfake Attacks and Extortion

Cybersecurity professionals are already losing sleep over data breaches and how to best protect their employers from attacks. Now they have another nightmare to stress over — how to spot a deepfake. Deepfakes are different because attackers can easily use data and images as a weapon. And those using deepfake technology can be someone from […]

The post How to Protect Against Deepfake Attacks and Extortion appeared first on Security Intelligence.

Continue reading How to Protect Against Deepfake Attacks and Extortion

Identifying Computer-Generated Faces

It’s the eyes:

The researchers note that in many cases, users can simply zoom in on the eyes of a person they suspect may not be real to spot the pupil irregularities. They also note that it would not be difficult to write software to spot such errors and for social media sites to use it to remove such content. Unfortunately, they also note that now that such irregularities have been identified, the people creating the fake pictures can simply add a feature to ensure the roundness of pupils.
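The "software to spot such errors" the researchers mention could be as simple as a roundness test on a segmented pupil outline. A minimal sketch (my illustration, not the paper's method), assuming you already have the pupil boundary as a list of (x, y) points and using an assumed roundness threshold:

```python
import math

def circularity(points):
    """4*pi*area / perimeter^2: 1.0 for a perfect circle, lower for
    irregular shapes. `points` is a closed boundary of (x, y) tuples."""
    n = len(points)
    area = 0.0
    perim = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1          # shoelace formula (doubled area)
        perim += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2.0
    return 4.0 * math.pi * area / (perim * perim)

def pupil_looks_synthetic(pupil_boundary, threshold=0.9):
    """Flag a pupil whose outline is far from circular.
    The 0.9 cutoff is an assumption for illustration, not from the paper."""
    return circularity(pupil_boundary) < threshold
```

A real pipeline would first need face and pupil segmentation (e.g. with a computer-vision library) to produce the boundary, and would have to account for off-axis gaze, which makes even real pupils appear elliptical.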

And the arms race continues….

Research paper.

Continue reading Identifying Computer-Generated Faces

Propaganda as a Social Engineering Tool

Remember WYSIWYG? What you see is what you get. That was a simpler time in technology; you knew what the end result would be during the development stage. There were no surprises. Technology moved on, though. Now, the mantra should be, “don’t automati…

Continue reading Propaganda as a Social Engineering Tool