AI Action Summit (10 & 11 February 2025) – The Economist – 10.02.25
- Michael Julien
- Feb 11
An AI power struggle in Paris
The French government’s Artificial Intelligence Action Summit begins in Paris on Monday. Tech grandees, politicians and NGOs from around the world will wrangle over how to handle the technology’s development. Those fearful that AI’s rapid progress could be catastrophic for humanity will push for heavy regulation. The “AI-ethics” camp will seek limits on things like deepfakes, algorithmic discrimination, intellectual-property theft and the burning of fossil fuels to power data centres. Others will argue for unleashing innovation in AI to boost productivity.
Advocates of strict, global rules on AI should expect resistance. China seems disinclined to heed such entreaties; President Donald Trump has torn up his predecessor’s sweeping regulations on AI. The gabfest’s French hosts want to limit EU constraints on AI in order to compete with the biggest global players. On Sunday Emmanuel Macron, France’s president, announced investments worth €109bn ($113bn) for AI projects and called on other European countries to “accelerate” such spending.
I am writing this from the Eurostar taking me from London to Paris. Over the next few days France will host what is becoming an annual summit on artificial intelligence (AI). I am headed for the École Militaire, an 18th-century building complex that hosts the country’s military academy. This afternoon I’m giving a speech on the topic of AI on the battlefield and talking to companies building military AI.
Where does one start?
AI is no longer something niche, associated exclusively with killer robots. It is everywhere. Over the years I have written about AI pilots, AI spies, AI drone guidance, AI command and control and AI logistics. Like the space race before it, AI is also becoming a litmus test for national power, and thus for national security—a fact that became apparent when China shocked America with its DeepSeek large language model.
AI is also woven throughout our coverage. I thought this week it would be useful to pick out some examples of that from across the paper. Start with “Scam Inc”, a new podcast on the sinister world of cybercrime and online fraud, which is nearly as big as the illegal drug trade. I was listening to Sue-Lin Wong, who hosts the “Scam Inc” series, on “The Intelligence” over the weekend. I was struck by her alarm at the advances in AI voice cloning.
As we explain in our leader on the topic, just 15 seconds of someone’s voice is sufficient to clone it. “By combining voice-changing and face-changing AI with translation services and torrents of stolen data sold on underground markets,” we write, “scammers will be able to target more victims in more places.” This has obvious implications for defence.
Remember that deepfake video of Volodymyr Zelensky in March 2022 apparently ordering his troops to lay down their arms? Imagine how much more convincing those videos would be today, three years on.
Then there is our piece on China’s tools of economic warfare. In December, we explain, China banned shipments of several critical materials—gallium, germanium and antimony—to America. Why did that matter? The materials are used to make weapons, munitions and the chips used for training AI models. America has not produced gallium since 1987.
Continue on to Ukraine. In the Europe section, we describe how drones and other sensors have made the battlefield virtually transparent, and thus increasingly lethal for soldiers caught in the open.
“AI is being used to analyse surveillance data and cross-reference it with signals intelligence and open-source information,” we write, “like Russian soldiers’ social-media posts, which can reveal their positions.” Last year we published a detailed account of how Ukraine is using AI for this sort of clever data fusion, including whizzy models designed to identify which parts of the Russian front were suffering from low morale.
We also wrote this week on electronic warfare (EW). In the old days soldiers used a metal box that churned out radio waves on a particular frequency to overwhelm the enemy’s own signals. But radios, like everything else in life and on the battlefield these days, are increasingly “software-defined” rather than constrained by their hardware. Such radios are still in the minority in Ukraine, but they can do clever things, like quickly changing their waveform to match whatever drone threat is incoming.
The trend, as a recent report on EW by the Royal United Services Institute puts it, is towards “the mass generation of bespoke EW payloads”, a phenomenon the report describes as “algorithmic warfare”.
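To make the “software-defined” point concrete, here is a minimal, purely illustrative Python sketch of the idea: instead of blasting one fixed frequency, the radio inspects a spectrum capture, locates the strongest emitter (standing in here for a drone’s control link) and synthesises a payload matched to it. Every name, frequency and noise model below is invented for illustration; real EW systems are classified and vastly more sophisticated.

```python
# Illustrative only: a "software-defined" jammer that finds the strongest
# emitter in a spectrum capture and synthesises a matched payload. All
# parameters, names and signals are invented; no real system is depicted.
import numpy as np

SAMPLE_RATE = 1_000_000  # 1 MHz of simulated bandwidth


def strongest_emitter_hz(samples: np.ndarray) -> float:
    """Return the frequency (in Hz) of the loudest signal in a capture."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1 / SAMPLE_RATE)
    return float(freqs[np.argmax(spectrum)])


def matched_payload(target_hz: float, duration_s: float = 0.01) -> np.ndarray:
    """Synthesise a noise-modulated tone centred on the target frequency."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    carrier = np.sin(2 * np.pi * target_hz * t)
    return carrier * np.random.normal(1.0, 0.3, t.size)


# Simulate a capture: a "drone link" at 240 kHz buried in background noise.
t = np.arange(10_000) / SAMPLE_RATE
capture = np.sin(2 * np.pi * 240_000 * t) + 0.1 * np.random.randn(t.size)

target = strongest_emitter_hz(capture)
payload = matched_payload(target)
print(f"Emitter detected near {target / 1000:.0f} kHz; payload of {payload.size} samples ready")
```

The point of the sketch is the loop, not the signal processing: because detection and waveform generation live in software, the radio can retune as the threat changes, which is what makes generating “bespoke payloads” at scale conceivable.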
All this has limits, of course. One Ukrainian volunteer tells us that AI can sometimes generate “false signals”, muddying the picture and decreasing transparency. Context is important. Last year I reported that the algorithms used by America’s satellite-intelligence agency struggled to detect destroyed equipment at the start of the Ukraine war because they had not been trained on mangled hardware. Explainability is another challenge. In our science section this week we describe how investigators try to hunt down criminal cryptocurrency. AI is useful for that. But because the inner workings of AI models can be opaque, their output cannot be used as evidence in court.
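Since the article only gestures at how that hunt works, here is a toy, purely illustrative Python sketch of the simplest underlying technique: “taint” propagation, a breadth-first walk of the transaction graph outward from a wallet known to hold criminal proceeds. The addresses and transfers are invented; real blockchain-forensics tools layer clustering heuristics and machine-learning models (the opaque part that troubles the courts) on top of transparent graph walks like this one.

```python
# Toy illustration of "taint" tracing: starting from a wallet known to hold
# criminal proceeds, flag every address its funds could have flowed to.
# The addresses and transfers below are invented for illustration.
from collections import deque

# Transaction graph: sender -> list of receivers.
transfers = {
    "illicit_wallet": ["mixer_1", "exchange_A"],
    "mixer_1": ["fresh_wallet_1", "fresh_wallet_2"],
    "fresh_wallet_2": ["exchange_B"],
    "unrelated_wallet": ["exchange_A"],
}


def tainted_addresses(graph: dict, source: str) -> set:
    """Breadth-first walk from the source, collecting reachable addresses."""
    seen = {source}
    queue = deque([source])
    while queue:
        sender = queue.popleft()
        for receiver in graph.get(sender, []):
            if receiver not in seen:
                seen.add(receiver)
                queue.append(receiver)
    return seen


print(tainted_addresses(transfers, "illicit_wallet"))
# -> {'illicit_wallet', 'mixer_1', 'exchange_A',
#     'fresh_wallet_1', 'fresh_wallet_2', 'exchange_B'}
```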
You might have noticed I have not discussed Elon Musk’s promise—or threat, if you prefer—to turn his government-shredding agency DOGE on to the Pentagon. Donald Trump has enthusiastically embraced this idea: “Let’s check the military. We’re going to find billions, hundreds of billions of dollars of fraud and abuse.” That is because we have a big package of stories on this topic coming up later this week.
Thank you as always for your letters. Ian asks if we have another Ukraine webinar coming up. I’m glad you asked, Ian. Zanny, Ed, Arkady and I will be reappearing on February 24th, the third anniversary of Russia’s invasion.
Roger, in Canada, asks how far Israel has succeeded in demolishing Hamas’s tunnel network in Gaza. While some of these tunnels have been destroyed, it has been difficult to find them, to fight inside them and to demolish them. I suspect that hundreds of kilometres of tunnels will be in use for years to come.
Janice asks whether Mr Trump’s plan for a space-based missile shield, which I discussed last week, might cause other problems. Well, consider that satellites in SpaceX’s Starlink constellation were forced to swerve 25,000 times to avoid collisions in a single six-month period in 2022-23.
Then imagine adding another several thousand satellites to the mix. Moreover, because space-based sensors and interceptors would be in very low orbits, they would “de-orbit”, or re-enter the atmosphere, more often than those higher up. This requires careful tracking by radars, such as those of America’s Space Force—and also provides good practice for missile-tracking.
Mike, in Boston, wants to know why we tend to focus on missiles as nuclear delivery systems; why not, say, container ships entering American ports? This was a big concern two decades ago, immediately after the 9/11 attacks, as this piece from our archive demonstrates. But the risk from terrorist groups is very different from that of states. States want to use nukes as instruments of statecraft, using calibrated threats of retaliation. Shipping containers, which travel slowly, can enter only at particular ports and are subject to inspection, are hardly reliable tools for that.
Thanks as always for reading. You can write to me at thewarroom@economist.com. I’ll be away next week, but see you in a couple of weeks.
Shashank Joshi at The Economist newsletters@e.economist.com