With help from Mohar Chatterjee and Derek Robertson
The 2024 presidential campaign may barely have started, but we’re already getting a preview of just how online, free-wheeling and disorienting it’s likely to be.
That’s thanks to AI, Elon Musk and digital-forward presidential campaigns.
Over the weekend, an AI deepfake video featuring the face and voice of Ron DeSantis superimposed on “The Office” character Michael Scott went viral, aided by a tweet from Donald Trump Jr.
The double-take-inducing video — first posted by C3PMeme, a pro-Trump video account with about 50,000 Twitter followers — is notable for how convincing the deepfake of DeSantis appears.
C3PMeme’s account illustrates the startling pace at which the technical capabilities of anonymous online political influencers are advancing.
Compare the obviously photoshopped faces of Democratic politicians in a video posted in November to the uncanny DeSantis deepfake posted this morning (then think back, in the recesses of your mind, to the static memes Trump supporters generated during Donald Trump’s first presidential campaign).
To make matters more confusing, Elon Musk’s anti-gatekeeping ethos means videos like these are set to proliferate in an environment with less oversight of potentially misleading material.
Before Musk’s takeover, a blue Twitter check mark was meant to denote a verified account belonging to a notable person. Under Musk, a blue check mark is available to anyone who pays a monthly fee. So C3PMeme’s AI deepfakes — and those of anyone else willing to shell out $8 a month — come paired with what looks like a Twitter seal of approval.
Then there are the campaigns themselves. Much has already been made of DeSantis’s very online choice to announce his run on a Twitter Spaces last week.
The response from other campaigns to DeSantis’s glitchy rollout, seeking to pounce on the campaign’s first meme-able moment, tells us just as much about the free-wheeling digital environment heading into 2024.
First there’s President Joe Biden. In recent memory, the norm for incumbent presidents (not named Trump) seeking reelection has been to remain above the fray — which means refraining from commenting on would-be challengers in the other party’s primary.
But as technical failures delayed DeSantis’s Twitter Spaces announcement, Biden waded straight into the social media slugfest, tweeting, “This link works” and directing followers to a page for donating to his reelection effort.
The pugilistic incentives of social media outweighed the benefits of presidential gravitas: Even Musk, whose relationship with Democrats has soured dramatically in recent months, agreed the tweet was a “solid shitpost.”
Biden’s tweet was also notable for its speed. Coming 16 minutes into the delayed kickoff, it essentially amounted to live commentary on a rival campaign event, something that was considered a shocking development when Trump first live-tweeted through a Democratic primary event in 2015.
The Trump campaign’s response, too, was notable for the speed and sophistication with which it deployed custom-made videos to try to crystallize the moment as a disaster for DeSantis and Musk.
Within hours, Trump’s Instagram account had published three videos to its 23 million followers lampooning the rollout. One video contrasted screenshots of his rival’s glitching Twitter announcement with soaring footage of Trump being cheered by supporters. Another video showed a rocket from Musk’s SpaceX, labeled “Ron! 2024,” crashing on liftoff. And then there was a bizarre deepfake parody of the event featuring Musk and DeSantis in dialogue with George Soros, Adolf Hitler, and the devil.
Keep in mind that a video in which Hitler briefly questions Satan’s sexual orientation is just the starting point for the 2024 social media scrum. Get ready for a mind-bending 18 months.
As the development of artificial intelligence becomes a more urgent issue across the global economy, conversations about its risks are becoming much more specific. In the power sector, for instance, AI could solve complex problems about real-time resource usage — but integrating AI could also present new kinds of risk to critical bits of infrastructure. How are we supposed to think about those?
A panel of experts sat down Tuesday to discuss whether AI can safely be used to provide critical utilities like electricity to Americans.
The conclusion? Maybe, sorta, kinda — and always with a human in the loop. The experts in the discussion included Nvidia’s Marc Spieler, Brown University professor emeritus John Savage, Electric Power Research Institute senior tech executive Jeremy Renshaw, and Landis+Gyr senior director Daniel Robertson. They were speaking at a virtual media briefing hosted by the United States Energy Association — a nonprofit, non-lobbying coalition of think tanks and agencies.
Renshaw noted that EPRI has researched how to use AI to help human operators better distribute power across the grid. In each case, though, any solutions offered by AI must go through a human before any action can be taken.
Meanwhile, Savage said critical infrastructure cannot be fully entrusted to automated systems, and that regulation is needed in areas where incidents carry a high risk of harming society. He said the risk of using AI to support the distribution of critical utilities should be measured as “likelihood weighted by impact” — i.e., while the likelihood of a severe cyberattack might be low, the impact of even a one-off attack on critical infrastructure, like the electric grid, would be “catastrophic.” — Mohar Chatterjee
State governments are proposing their own experiments with AI policy, with New Jersey considering a new office to regulate its use in government.
POLITICO’s Daniel Han reports this morning in New Jersey Playbook on a bill proposed by state Sen. Troy Singleton that would delegate responsibility for regulating AI to an “artificial intelligence officer,” along with an advisory board that would provide feedback on the officer’s proposals.
Singleton told Daniel he doesn’t “think it’s in our best interest for me as a state legislator to try to overprescribe what that public policy [around artificial intelligence] looks like,” and that the bill would “allow individuals with deep experience in this area to utilize that experience to frame out what that public policy should look like.”
Daniel reports that the bill is unlikely to see much action amid New Jersey’s hectic budget season, but if nothing else its introduction signals that, much as with crypto and blockchain, states could quickly become laboratories for experiments with AI policy. A report from law firm Bryan Cave Leighton Paisner shows 22 states plus the District of Columbia are considering or have enacted AI-related policies. — Derek Robertson
- Including race in medical algorithms takes serious care to avoid causing harm.
- Another open letter from tech leaders argues that AI is an existential risk.
- The run on chips to power AI systems is “like toilet paper during the pandemic.”
- Uber Eats is bringing delivery robots to American streets.
- Qualcomm execs argue that AI computing will need more than just the cloud.
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); and Benton Ives ([email protected]). Follow us @DigitalFuture on Twitter.
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.