Interpol’s New Division Explained
By Adeline Atlas
Jun 28, 2025
One of the most disturbing and little-understood developments in the age of artificial intelligence is the rise of AI trafficking. This issue is neither science fiction nor hypothetical; it is very real and increasingly common. In fact, Interpol has officially recognized AI trafficking as a global threat, creating an entire division dedicated to tracking and shutting down these operations.
But what does “AI trafficking” actually mean? What’s being trafficked—and who’s responsible?
To answer that, we have to start by redefining what we mean by “AI” in this context. When we talk about AI trafficking, we’re not referring to the physical smuggling of hardware. We’re talking about the unauthorized creation, distribution, modification, or deployment of artificial intelligence systems—often across jurisdictions, beyond regulation, and for illicit purposes.
These systems can be designed to manipulate public discourse, conduct cyberattacks, execute financial fraud, impersonate individuals, and in some cases, even automate the exploitation of vulnerable human populations. It is the weaponization and commodification of intelligence—sold, stolen, or traded through underground networks, with no oversight and no accountability.
Interpol now defines AI trafficking as the black-market transfer or illicit use of AI systems across borders, especially those that are unregistered, highly autonomous, or embedded with capabilities intended for deception, coercion, or criminal augmentation. This isn’t just hacking. This is organized crime, supercharged by neural networks and machine learning.
Let’s walk through a real example.
In 2023, a criminal syndicate operating out of Myanmar was found using an AI chatbot trained on emotional manipulation patterns to run romance scams targeting elderly citizens in the UK and Australia. The system was trained on thousands of conversations scraped from dating apps, forums, and social media. The AI could simulate affection, emotional distress, and even jealousy. Victims believed they were speaking to real people. They sent money. In some cases, they gave up their entire pensions. The AI didn't just assist the fraud—it replaced the human scammer entirely.
When authorities traced the operation, they found that the AI model had been built using open-source frameworks, modified through stolen GPU server access, and hosted across fragmented jurisdictions. The operators didn’t need advanced hardware. They just needed access. And once the system was set up, it ran 24/7, targeting thousands of victims simultaneously.
That’s AI trafficking.
Another case involved a decentralized darknet marketplace where black-hat developers were selling “custom political bots”—AI agents preloaded with ideological biases, fake news generators, and social manipulation scripts. These were deployed en masse across social platforms during election cycles, posing as local voters, activists, or journalists. They didn't just mimic human speech—they mimicked community dynamics. Their goal wasn’t to argue. It was to subtly shift the Overton window—what people consider politically acceptable—by injecting calculated commentary at scale.
Interpol flagged this as a priority. Why? Because these aren’t just annoying bots. They are psychological operations weaponized through synthetic speech. And the people deploying them are no longer lone hackers. They are full-scale syndicates with marketing arms, customer service desks, and bulk licensing discounts for state-aligned clients.
The structure of modern AI trafficking mirrors that of arms dealing. You have developers creating the “product,” brokers who distribute the models to interested parties, and deployment agents who run the software on behalf of clients—sometimes even offering “AI-as-a-service” platforms on the dark web. In one case, an intercepted sales pitch advertised: “Hire 1,000 synthetic influencers. Spread any message. No HR required.”
Interpol’s response has been slow, but it is underway. In 2025, they formally launched a new task force: AI Trafficking & Autonomous Systems (ATAS). This division is tasked with mapping the black market for autonomous tools, identifying source nodes of illicit AI generation, and working with cyberforensics teams to track AI system fingerprints across crimes.
One of their early focuses is what’s known as “ghost systems”: AI models trained and deployed without any traceable ownership. These models are often trained in low-surveillance nations using pirated datasets, then quietly introduced into marketplaces or embedded in otherwise legitimate apps. Once discovered, they’re nearly impossible to trace back to a source, because they’ve been deliberately stripped of metadata. Their output, whether deepfakes, phishing attacks, or propaganda posts, is indistinguishable from that of legitimate models unless the operation is caught early.
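To make the stripped-metadata point concrete, here is a minimal sketch of the first check an investigator might run: reading the optional metadata header of a model file in the widely used safetensors format. The file layout assumed here (an 8-byte little-endian length, then a JSON header, with provenance strings under an optional “__metadata__” key) follows the published safetensors specification; everything else is illustrative. A ghost system typically returns nothing at all from a check like this.

```python
import json
import struct
import sys

def read_safetensors_metadata(path: str) -> dict:
    """Return the optional __metadata__ block of a .safetensors file.

    Per the safetensors spec, the file starts with an 8-byte
    little-endian unsigned integer giving the length of a JSON
    header; provenance strings, if any, live under "__metadata__".
    """
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})

if __name__ == "__main__":
    meta = read_safetensors_metadata(sys.argv[1])
    if not meta:
        # Exactly what a deliberately stripped "ghost system" looks like:
        # valid weights, zero provenance.
        print("No embedded metadata: untraceable by inspection alone.")
    else:
        for key, value in sorted(meta.items()):
            print(f"{key}: {value}")
```

The asymmetry is the problem: stripping this metadata is trivial for an operator, while reconstructing provenance without it means painstaking behavioral forensics across the model’s deployments.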
This raises a much deeper problem: unlike traditional contraband, AI is not a physical good. You can’t inspect a shipping container for it. It can be cloned, distributed, obfuscated, and deployed from anywhere with an internet connection. You can’t confiscate an AI model once it’s in the wild. At best, you can shut down its hosting infrastructure. But that just moves it somewhere else.
This is the new frontier of law enforcement: fighting intelligence, not weapons.
The AI itself isn’t always illegal. It’s the intent, the usage, and the deployment that cross into criminality. But because AI systems can be trained to adapt, they can shift from legal behavior to illegal behavior in seconds. One day, a model is generating fantasy art. The next, it’s synthesizing child exploitation material using unethical training data. One day, it’s analyzing financial markets. The next, it’s spoofing identities to steal credentials. And the developers often claim plausible deniability. “We just built the tool,” they say. “We can’t control how it’s used.”
Interpol is working to change that.
They’re developing a global AI tracking protocol, similar in spirit to firearm serial numbers. It would require AI systems above a certain capability threshold to be registered with digital fingerprints, hosting IDs, and usage declarations. But enforcement remains difficult. Countries with lax tech regulation or authoritarian oversight have no incentive to cooperate, and most criminals know how to anonymize their training sources. A model can now be trained on nothing but compressed datasets and local hardware. It doesn’t take a server farm anymore.
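What would such a fingerprint look like? One plausible building block is a content hash over the serialized weights, so the identifier survives renaming and metadata stripping. Here is a minimal sketch, assuming the registry keys on a SHA-256 digest of the model file; the record fields (operator, declared use) are illustrative stand-ins for the hosting IDs and usage declarations mentioned above, not any real Interpol schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def model_fingerprint(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 over the raw weight file, streamed in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def registration_record(path: str, operator: str, declared_use: str) -> str:
    """Build a hypothetical registry entry: fingerprint plus declarations."""
    record = {
        "fingerprint": model_fingerprint(path),
        "operator": operator,          # "hosting ID" in the article's terms
        "declared_use": declared_use,  # "usage declaration"
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)
```

Note the catch, which is exactly why enforcement is hard: a single round of fine-tuning alters the weight bytes, so every retrained copy gets a fresh fingerprint and silently exits the registry. Behavioral fingerprints that survive retraining might close that gap, but those are open research problems, not deployed tools.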
There’s also the human trafficking crossover. In Cambodia and Thailand, thousands of people—often victims of labor trafficking—are forced to work in scam centers where they “train” chatbots to run romance scams, identity theft schemes, and political disinformation campaigns. These workers feed emotional patterns to the AI and teach it to manipulate. They are unpaid, unfree, and unseen. And the AI they shape goes on to victimize others. This is trafficking layered on trafficking: human labor feeding machine crime.
The ethical questions are massive. Should a country be allowed to host AI farms that train deception bots? Should hosting platforms be held liable for distributing models that have been linked to crimes? What counts as due diligence? What about open-source models—should anyone be allowed to fine-tune a model to simulate a CEO’s voice for impersonation? Right now, most laws have no answer.
Interpol is calling for global standards, but few countries are prepared. Some treat AI trafficking as a cybercrime issue. Others as an intelligence matter. Still others don’t acknowledge it at all. The danger is that while bureaucracies debate semantics, black market AI systems are already doing damage—impersonating, stealing, coordinating, and disrupting at scales never before possible.
Here’s what makes AI trafficking uniquely dangerous: it creates new forms of harm that don’t require a human to act. Once deployed, a rogue AI agent can operate for months, targeting victims, adapting to new defenses, and refining its own tactics through reinforcement learning. It doesn’t sleep. It doesn’t forget. And it doesn’t negotiate.
The solution can’t just be more surveillance or more censorship. It has to be architectural. AI models need embedded ethical boundaries. Developers must be accountable for downstream harm. International agreements must define what constitutes criminal training, distribution, and use. And law enforcement needs the digital tools—not just laws—to trace synthetic intelligence across servers and borders.
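“Architectural” is abstract, so here is one minimal sketch of what an embedded boundary can mean in practice: a policy gate built into the serving path, checking both the request and the model’s reply before anything leaves the system. Everything here is hypothetical (the GatedModel wrapper, the classify function, the blocked categories); it illustrates the pattern, not any real deployment.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative categories; a real policy taxonomy would be far larger.
BLOCKED = {"impersonation", "credential_theft", "exploitation"}

@dataclass
class GatedModel:
    """Wrap a text model so a policy check runs on input AND output."""
    generate: Callable[[str], str]  # the underlying model
    classify: Callable[[str], set]  # hypothetical policy classifier

    def __call__(self, prompt: str) -> str:
        # Gate the request before the model ever runs.
        if self.classify(prompt) & BLOCKED:
            return "[refused: request matched a blocked category]"
        reply = self.generate(prompt)
        # Gate the output too: models drift, prompts get adversarial.
        if self.classify(reply) & BLOCKED:
            return "[withheld: output matched a blocked category]"
        return reply

# Toy usage with stand-in components:
model = GatedModel(
    generate=lambda p: p.upper(),
    classify=lambda text: {"impersonation"} if "clone this voice" in text else set(),
)
print(model("clone this voice and call my bank"))  # -> [refused: ...]
```

The design point is that the gate and the weights ship as one artifact, so removing the gate changes the file, which is one way a scheme like this could connect back to the fingerprint registry sketched earlier.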
Because if we don’t act now, we’ll be living in a world where anyone, anywhere, can deploy an army of invisible minds—and no one will know where they came from.
This is the world AI trafficking is building.
And Interpol is just beginning to chase it.