News Express(English Edition)

AI deepfakes blur reality in 2026 US midterm campaigns

As the video opens, Democratic Texas State Representative James Talarico appears to stand in front of a Texas flag, beaming.

"Radicalized white men are the greatest domestic terrorist threat in our country," the U.S. Senate candidate seems to say into the camera. As a voice whispers "white men," Talarico continues: "So true. So true."

But Talarico never filmed that video. Instead, the clip is an AI-generated ad from the National Republican Senatorial Committee (NRSC), the party's Senate campaign arm, featuring a computer-altered Talarico reciting social media posts he wrote years ago.

The words "AI generated" appear in an easy-to-miss font in the lower right-hand corner.

The realistic video is among a vanguard of "deepfake" advertisements that some campaigns are already deploying ahead of November's midterm elections, taking advantage of AI tools that are improving at a breakneck pace.

The ads are being introduced into a media landscape with few guardrails.

There is no federal regulation governing the use of AI in political messaging, leaving only a patchwork of largely untested state laws.

And while social media companies like Meta and X label certain AI-generated content, they have scrapped professional fact-checking systems in favor of user-generated notes.