
AI vs Fraud: The new invisible arms race in payments
24 July 2025 in Blog, Case Study
by Ludovic Plisson
Fraud is no longer a side threat in payments. It is now a structural challenge: shifting, global, and increasingly powered by artificial intelligence.
To defend their networks, payment players are deploying AI-based systems at unprecedented scale. But the arms race is invisible. And it goes both ways.
Fraudsters also use AI: to mimic humans, bypass verification, and overwhelm traditional defenses. This silent war plays out in milliseconds, across every payment, every country, every device.
We’re entering a new era in which AI no longer just supports fraud detection; it defines it.
A Battle of Algorithms, Not Rules
Legacy systems used static rules: block large transactions, limit IP changes, flag blacklisted accounts.
Today, that’s not enough. Modern fraud tactics evolve faster than rules can adapt. AI, in contrast, learns in real time. It detects anomalies, links hidden patterns, and scores risk in under a second.
Companies like Stripe, Adyen, and PayPal now rely on AI to monitor billions of signals. They use metadata like device fingerprint, session velocity, and behavioral history to assess trust. At scale, that means fighting fraud with probability, not just policies.
Stripe Radar, for instance, blocks many attacks using models trained across its entire merchant base.
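To make "fighting fraud with probability" concrete, here is a minimal sketch of metadata-based risk scoring: a handful of signals (new device, session velocity, amount deviation) combined into a logistic probability. The feature names and weights are invented for illustration; production models like Radar learn far richer representations from real transaction data.

```python
import math

# Hypothetical feature weights -- illustrative only, not any provider's actual model.
WEIGHTS = {
    "new_device": 1.4,      # device fingerprint never seen on this account
    "high_velocity": 1.1,   # burst of sessions/transactions in a short window
    "amount_zscore": 0.8,   # deviation of the amount from the user's history
}
BIAS = -3.0  # keeps the baseline score low for ordinary transactions

def risk_score(features: dict) -> float:
    """Return a fraud probability in [0, 1] from weighted metadata signals."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash

# An ordinary transaction: known device, normal pace, typical amount.
normal = risk_score({"new_device": 0, "high_velocity": 0, "amount_zscore": 0.2})
# A suspicious one: fresh device, burst of activity, unusually large amount.
risky = risk_score({"new_device": 1, "high_velocity": 1, "amount_zscore": 3.5})
print(f"normal={normal:.3f} risky={risky:.3f}")
```

A rules engine would need a separate hand-written threshold for each signal; the probabilistic score lets weak signals reinforce each other, which is why the combination of three mildly suspicious signals scores high even though none alone would trigger a block.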
But AI is not just a shield. It’s also becoming the fraudster’s best tool.
When AI Becomes the Attacker
Generative AI has lowered the barrier for fraud. Anyone can now create deepfake videos, clone voices, or write phishing scripts in seconds.
Fake ID documents pass automated checks. Synthetic identities slip through onboarding flows. And bots simulate “human-like” user behavior with uncanny accuracy.
A report by Arkose Labs in 2025 shows a 67% increase in AI-powered bot attacks in fintech alone.
What once took organized crime networks now fits inside a Python script.
The fraud landscape has shifted. It’s no longer a question of “if AI is used” but “who uses it better.”
The Global Arms Race: Local Challenges, Shared Tensions
Fraud doesn’t respect borders. But its form changes by region.
- In Brazil, the real-time payment system Pix led to a spike in fraud attempts. Banks like Itaú and Bradesco now use AI models to monitor transfer timing, behavioral context, and geolocation (Banco Central do Brasil).
- In Nigeria, SIM-swap scams are a major threat. To combat them, financial institutions deploy behavioral biometrics monitoring how users type, scroll, and even hold their phones (NIBSS Fraud Report 2023).
- In Germany, AI-generated voices are used in phishing calls. Banks are starting to analyze emotional tone and urgency in speech to spot these scams (BSI Cyber Report 2024).
- In India, where UPI handles over 13 billion transactions monthly, banks use AI to flag anomalies based on location, device, and timing (NPCI).
Despite these regional differences, the core issue remains the same: speed, scale, and subtlety of fraud.
Fighting Back: How AI Defends the Front Line
Modern fraud prevention blends multiple AI techniques. No single method works alone.
- Behavioral biometrics detect anomalies in typing rhythm, device motion, or scroll behavior.
- Graph-based machine learning maps relationships between users, cards, and IPs to uncover fraud rings.
- Reinforcement learning adapts fraud thresholds dynamically, learning from every accepted or blocked transaction.
- Synthetic identity detection flags inconsistencies in new user registrations, even when documents appear valid.
- Natural language processing (NLP) scans messages, emails, and chats for manipulative language in real time.
These systems don’t replace humans. But they vastly accelerate detection and prevent losses before they occur.
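To illustrate the graph-based idea above, the sketch below links accounts that share a card or IP address and flags large connected components as potential fraud rings, using a simple union-find. The account, card, and IP values are invented; real systems build vastly larger graphs and layer learned models on top.

```python
from collections import defaultdict

# Toy transactions: (account, card, ip). All values are illustrative assumptions.
transactions = [
    ("acct_a", "card_1", "ip_x"),
    ("acct_b", "card_1", "ip_y"),  # shares card_1 with acct_a
    ("acct_c", "card_2", "ip_y"),  # shares ip_y with acct_b
    ("acct_d", "card_3", "ip_z"),  # unconnected -- looks like a legitimate user
]

# Union-find over accounts: accounts sharing a card or IP end up in one set.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving for near-constant lookups
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

seen = {}  # (kind, value) -> first account that used this card/IP
for acct, card, ip in transactions:
    for key in (("card", card), ("ip", ip)):
        if key in seen:
            union(acct, seen[key])
        else:
            seen[key] = acct
    find(acct)  # make sure isolated accounts are registered too

# Group accounts into connected components.
rings = defaultdict(set)
for acct in {t[0] for t in transactions}:
    rings[find(acct)].add(acct)

# Components where several accounts share infrastructure look like fraud rings.
suspicious = [sorted(group) for group in rings.values() if len(group) >= 3]
print(suspicious)  # [['acct_a', 'acct_b', 'acct_c']]
```

Note that acct_a and acct_c never transact together; they are linked only transitively through acct_b, which is exactly the kind of hidden relationship that rule-based systems miss and graph methods surface.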
Proving You’re Human Is the Next Frontier
Fraud detection is no longer just about verifying identity. It’s about proving you’re really human. In June 2025, Sam Altman, CEO of OpenAI, said it clearly:
“We need a new way to prove you’re a human in the age of AI.”
This challenge goes beyond payments. Bots now pass CAPTCHA tests. Deepfakes beat selfie checks. AI-generated emails fool even trained professionals. Some propose radical solutions, like biometric “proof of personhood.” The controversial project Worldcoin, for example, scans users’ irises to issue a unique World ID. That approach raises serious privacy concerns. But the idea itself is becoming harder to ignore.
As Linas Beliūnas aptly summarized in a recent post, the core problem isn’t authentication; it’s existence in a digital world flooded with synthetic activity. In the coming years, “human detection” may matter as much as “fraud detection.”
Collaboration Will Define the Winners
No company, bank, or acquirer can fight AI-powered fraud alone. That’s why players like Mastercard and Visa are shifting toward collaborative AI models. Their platforms analyze anonymized data from billions of global transactions, generating smarter risk scores.
- Mastercard Decision Intelligence uses over 75 billion annual transactions to detect patterns at scale.
- Visa’s Risk AI scores transactions in under 300 milliseconds, offering APIs for real-time fraud decisioning.
The future is clear: those who share intelligence across providers, sectors, and borders will outpace fraud.
What Comes Next?
The arms race isn’t ending. But it’s becoming more visible. Fraudsters will keep exploiting weak links. AI will keep evolving. Regulation will struggle to keep up. But there’s hope. By combining smart infrastructure, layered defenses, and ethical AI, the payments industry can stay ahead. Not forever, but long enough to build trust, preserve value, and protect people. We must move fast. Because fraud already has.
🙏 Acknowledgment
Special thanks to Linas Beliūnas for raising vital questions on identity and AI in payments. His insights helped shape this reflection.