It’s a statistic that sends a shiver down the spines of SME owners, managers and employees.
According to the FBI’s 2025 Internet Crime Report, business email compromise (BEC) cost US businesses more than $3 billion last year.
This makes it one of the most financially damaging cybercrimes on record.
AI has made these attacks harder to detect. The question for AP teams is no longer whether they can identify suspicious requests. It is whether the processes around payments make fraud difficult to execute, regardless of how convincing a request looks.
Why AP Teams Are in the Crosshairs
Accounts payable sits at the intersection of trust and timing. AP teams process invoices, manage supplier details, and execute payments, often under pressure to keep operations running smoothly.
For attackers, that combination is ideal.
Most successful fraud does not involve breaking into systems.
The FBI’s Internet Crime Complaint Center (IC3) has consistently found that BEC attacks rely on impersonation. This involves posing as a trusted executive, supplier, or internal colleague to redirect payments or update bank details before anyone notices.
AI has made that impersonation dramatically more scalable.
Where it once required skill and time to craft a convincing request, tools are now widely available that automate the research, writing, and contextual tailoring that make fraud blend into normal AP workflows.
By mid-2024, an estimated 40% of BEC phishing emails were already AI-generated, with that share expected to grow significantly.
What AI-Enhanced Fraud Looks Like in Practice
Emails that blend into normal workflow
Traditional phishing relied on volume and imperfection. AI has changed that.
Modern BEC emails are grammatically correct and written in the specific tone of the executive or supplier being impersonated. They reference active projects, current invoice numbers, and upcoming payment runs.
For AP teams processing high volumes of routine communications, that level of familiarity is exactly what lowers the guard.
Invoice and payment redirection
One of the most common AP fraud patterns involves payment redirection.
Attackers may intercept a legitimate invoice exchange and quietly alter the destination account. They then send a short message claiming a supplier has updated its banking details, or re-issue a real invoice with minor modifications.
The surrounding content looks entirely legitimate because, in many cases, it is drawn from real correspondence.
Voice cloning and executive impersonation
Email isn’t the only channel being exploited.
AI voice-cloning tools can replicate a person’s voice from a short audio sample. That makes it possible to leave convincing voicemails or place calls that sound like a known executive.
For AP teams accustomed to verbal approvals on high-value or urgent payments, voice cloning undermines one of the few verification channels that email security tools cannot protect.
Why Traditional Checks No Longer Work
Security awareness training still matters, and investing in it remains worthwhile. But AI has changed what AP teams are up against.
Attacks no longer contain the signals that training programs once focused on: awkward phrasing, mismatched logos, odd sender addresses, or generic greetings.
Modern fraud emails can reference the recipient’s organization, active suppliers, and current invoice values drawn from publicly available or previously intercepted sources.
When a fraudulent request is indistinguishable from a legitimate one, expecting the AP team to spot it places the burden of detection in the wrong place.
The organizations that reduce risk are not asking staff to be more suspicious. They are building verification processes that work independently of how a message looks.
Building Process Around the Risk
The most effective defense is not sharper instincts. It is removing ambiguity from high-risk actions.
Out-of-band verification as standard
Any request to change supplier bank details or approve an urgent payment outside the normal cycle should require secondary confirmation through a known, independent channel, not a reply to the same email thread.

Calling a supplier on a number already on file, or confirming with a colleague directly, breaks the impersonation chain regardless of how convincing the original request appeared.

This step does not require technology. It requires a written procedure and the team’s habit of following it.
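The rule above can even be expressed as a system check. The records, names, and numbers below are entirely hypothetical; this is a minimal sketch of the one principle that matters, which is that verification must use contact details already on file, never details supplied in the request itself:

```python
from dataclasses import dataclass

# Hypothetical vendor master file: contact details captured at onboarding,
# held independently of any incoming email.
VENDOR_FILE = {
    "supplier-001": {"phone": "+1-555-0100", "account": "111222"},
}

@dataclass
class ChangeRequest:
    supplier_id: str
    new_account: str
    callback_number: str  # number quoted in the email -- attacker-controllable

def approve_change(req: ChangeRequest, verified_via: str) -> bool:
    """Approve only if confirmation came through the number on file."""
    on_file = VENDOR_FILE[req.supplier_id]["phone"]
    # The callback number in the request is deliberately ignored:
    # everything inside the message is assumed compromised.
    return verified_via == on_file

def apply_change(req: ChangeRequest, verified_via: str) -> None:
    if not approve_change(req, verified_via):
        raise PermissionError("out-of-band verification failed")
    VENDOR_FILE[req.supplier_id]["account"] = req.new_account
```

The key design choice is that `approve_change` never consults the callback number carried in the request. An attacker controls everything in the message; they do not control the record your team captured at onboarding.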
Layered access and authentication controls
Restricting access to financial systems and enforcing multi-factor authentication limits the damage a compromised account can cause. If an attacker gains access to a vendor’s email, MFA requirements on the receiving end create friction that can slow or stop a fraudulent change before any money moves.
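For context on where that friction comes from, here is a stdlib-only sketch of how a time-based one-time password (TOTP, per RFC 6238), the second factor most financial systems use, is computed. This is an illustration of the mechanism, not a suggestion to build your own authentication; the point is that the code changes every 30 seconds and depends on a secret an email attacker never sees:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password, SHA-1 variant."""
    key = base64.b32decode(secret_b32)
    # Counter = number of completed time steps since the Unix epoch.
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset
    # taken from the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the value rotates with the clock, a phished password alone is not enough to push a fraudulent bank-detail change through a system that demands this second factor.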
A culture that supports slowing down
Fraud prevention improves when staff feel safe questioning requests, including from senior leadership.
A team member who pauses a payment to verify it is not being obstructive. They are doing exactly what good process requires.
Building that culture starts with leadership modeling the behavior and making clear that slowing down on high-risk actions is always the right call.
The FBI’s 2025 Internet Crime Report included a dedicated AI section for the first time, logging more than $893 million in AI-enabled scam losses across more than 22,000 complaints.
When verification is standard and questioning is encouraged, AI-enhanced fraud loses much of its advantage.
The technology attackers use is advancing quickly, but the process controls that contain the damage do not have to be complicated. They have to be consistent.
Shift the Burden From People to Process
Concerned about AI-enhanced fraud targeting your finance teams or clients?
Contact us or schedule a consultation to review your current controls and identify where the most important gaps are.
