The Rise of AI Phishing
Ever notice how some scam messages don’t look as obvious as they used to? In recent years, the standard “red flags” for detecting phishing scams have become less reliable.
The reason? The advancement and accessibility of AI. In 2024 alone, phishing was reported more than 193,000 times in the U.S. Attackers now use AI-generated messages and even deepfake audio to make impersonation feel far more convincing.
So what does “AI phishing” actually look like in the real world? And what can you change today to protect yourself from AI-assisted scams?
Key Takeaways
- Scammers can now generate polished, personalized messages in minutes, which makes it far easier to scale phishing schemes.
- Impersonation scams have expanded beyond emails and SMS. Scammers now use AI voice cloning and deepfake videos to create more authentic interactions.
- Traditional security and scam detectors struggle to keep up with AI because they depend too much on patterns and obvious tells.
- AI phishing is accelerating quickly, with phishing now the most commonly reported cybercrime, and AI-generated content becoming a common feature of modern scam campaigns.
AI-Driven Phishing Attacks Are on the Rise
As with just about everything else, AI has been an accelerator for scams, letting criminals push out effective campaigns significantly faster.
In 2024, phishing was the most reported cybercrime to the FBI’s IC3, with 193,407 complaints. And across all IC3 complaints, reported losses hit $16.6 billion that same year.
AI is fueling this rapid increase in phishing. The FBI has warned that criminals are using AI to run phishing and social engineering attacks with more speed, scale, and personalization.
Artificial intelligence helps scammers smooth out the rough edges that used to make phishing easier to spot: AI-written messages have fewer grammatical errors and less awkward wording.
These advanced scam tactics also make targeted messages feel more believable, especially when they’re built around real personal details pulled from your digital footprint.
And we’re starting to see that show up in the data. Verizon’s 2025 DBIR notes that AI-generated text in malicious emails has doubled over the past two years alone.
How AI Is Changing the Phishing Landscape
If it feels like phishing is everywhere lately, you’re not imagining it. AI is pushing both the volume of attempts and the losses higher. The real question is: what’s different now? How is AI changing the way scammers approach phishing?
Smarter Impersonation
Phishing has always relied on impersonation. But AI takes things to a new level and makes these schemes even more believable.
Scammers are using AI tools for sophisticated phishing and social engineering. These tools allow them to clone voices and even create realistic deepfake videos that can mimic people you trust, like co-workers, business partners, and even family members.
Now you’re not just judging what a message says; you’re judging whether the person you’re talking to is really who they claim to be, and that’s much harder to spot in the moment.
Personalization at Scale
Spear phishing is a highly targeted, personalized type of scam. In the past, that personalization took a lot of time and resources: scammers needed to research the target, pick a hook, and write a believable message from the information they’d compiled.
AI shortcuts most of that work. It can automatically pull details from data breaches, public profiles, and anything you’ve left online, then turn those scraps into a message that feels weirdly specific. The FBI notes that generative AI helps criminals send messages faster and reach a wider audience with believable content.
Lower Skill Barriers for Scammers
One major change that isn’t often discussed is that AI lowers the “minimum skill” required to run a convincing phishing operation.
Generative AI reduces the time and effort criminals need to deceive targets, and it can even correct the kinds of mistakes that used to give them away. 
That means more people can run scams that used to require solid writing, good pretexts, or fluent English. AI tools have widened the pool of scammers and boosted the quality of what they send.
Improved Language with Fewer Easy Tells
For years, one of the easiest ways to detect phishing was bad writing. These emails had strange grammar, clunky phrasing, and obvious translation issues.
However, with the introduction of AI tools like ChatGPT, Grammarly, and Gemini, this has completely changed.
First of all, scammers can paste in a real email (like a shipping update, password reset, or invoice notice) and tell a chatbot to rewrite it in the same tone, but customize it to fit their specific goal.
AI also acts as quality control. It fixes spelling and grammar, tightens the messaging, and can even translate cleanly. With a simple prompt, cybercriminals can ask a chatbot to generate 20 variations of a phishing email to see which one is the most convincing.
AI Tactics Advanced Scammers Use Today
AI tools are now showing up in real phishing campaigns, increasing both the believability of these schemes and the damage they can cause.
Here are a few of the most common tactics, what they look like, and how they get used in practice.
Voice Cloning
Voice cloning is a common AI scam tactic used in vishing attacks, in which someone generates speech that sounds like a real person. They usually start with a short audio sample pulled from social media videos, voicemail greetings, or recorded calls, then use a voice model to produce new lines on demand.
The FTC warns that this tactic is often used in “family emergency” scams. Scammers clone the voice of a loved one and claim they have been in a car accident or have been robbed and need money.
The FBI has also warned that criminals are using AI-powered voice cloning to impersonate co-workers or business partners to push victims into sharing sensitive info or approving payments. In some major targeted campaigns, AI-generated voice messages have even been used to impersonate senior U.S. officials.
Deepfakes
AI now goes even further than simply cloning your voice. With modern AI technology, scammers can create fake videos that look like a real person is speaking on camera.
Deepfakes take real video (and sometimes real audio) and use AI to make it look like someone said or did something they never did. They can pull footage from places like social media posts, YouTube interviews, LinkedIn videos, news clips, or even public Zoom recordings.
In scams, that usually shows up in two ways:
- Pre-recorded clips: A short video sent over text/email/DM that looks like a real person confirming a payment, asking for help, or “verifying” their identity.
- Live video calls: The scammer shows up on Zoom/Teams/FaceTime using real-time deepfake tools that map their facial movements onto someone else’s face. If they have enough footage of the target person, the result can look convincing at a glance.
In a widely reported case, an employee of a Hong Kong company transferred about $25 million after joining a video meeting where the “CFO” and several other attendees were deepfakes.
AI-Assisted Website Spoofing
Phishing often succeeds because the website a link leads to looks legit at first glance.
Spoofing is when a scammer disguises a fake website to look like a trusted company’s. AI makes this much easier to pull off: with AI website builders and chatbots, scammers can generate a polished landing page that matches a real brand’s tone, layout, and wording in minutes.
Some fake sites also embed an AI-powered “support” chat to make the page feel even more legitimate. Instead of leaving you alone with a login form, the bot answers questions, gives step-by-step instructions, and keeps you engaged while nudging you toward the scammer’s end goal (clicking a link, entering your credentials, making a payment).
Multi-Channel Attacks
AI makes it easier for scammers to create sophisticated phishing that hops between multiple channels. Instead of one suspicious email, you get a sequence that feels connected.
For example, you might receive an initial smishing text, then a call or voicemail using a cloned AI voice, which then pushes the conversation to an encrypted third-party messaging app like WhatsApp, Signal, or Telegram.
Why Traditional Defenses Are Struggling
A lot of legacy security software was built for classic phishing. But phishing has evolved, and many defenses still rely on rules and signals that are easier to fake now.
Spam Filters Can’t Keep Up
Spam filters look for known red flags and common signals, but AI-powered scams are less pattern-based and easier to customize, which makes them much harder for these filters to catch.
Criminals use AI-generated text to make phishing and social engineering messages seem more persuasive. AI makes it easy to spin up endless variations of the same phishing lure, so pattern-matching tools have less to grab onto.
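To see why, here’s a deliberately tiny sketch of a rule-based filter (the lure phrases and messages are invented for illustration). The classic wording trips a known pattern, while a lightly reworded version of the same lure sails through:

```typescript
// Toy rule-based filter: flags messages containing known phishing phrases.
// Real spam filters combine far more signals; this is deliberately minimal
// to show why fixed-pattern matching struggles against reworded AI text.
const knownLures: RegExp[] = [
  /verify your account/i,
  /your account has been suspended/i,
  /click here immediately/i,
];

function looksLikePhishing(message: string): boolean {
  return knownLures.some((pattern) => pattern.test(message));
}

// The classic lure matches a known pattern, so it gets caught.
console.log(looksLikePhishing(
  "Your account has been suspended. Click here immediately to restore access."
)); // true

// The same lure, lightly reworded (something AI tools can do endlessly):
// none of the fixed patterns match, so it slips through.
console.log(looksLikePhishing(
  "We noticed unusual activity and paused your profile. Please confirm your details at the link below."
)); // false
```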
Traditional Trust Signals Are Easier to Fake
Most people rely on quick checks to decide if something’s legit. You might glance at the number, the sender name, and the website, and if nothing looks immediately suspicious, you assume the message is safe.
With AI tools, scammers can fake those signals more convincingly than ever. For example, AI-powered caller ID spoofing can make a call appear like it’s coming from a local number or a legitimate business, even when it isn’t. On the web side, they can generate a near-identical login page in a few minutes that passes a quick inspection.
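To make that concrete, here’s a toy sketch (with made-up domains) of the kind of closer inspection that catches lookalikes: it flags hostnames sitting one or two character edits away from a domain you trust, the classic “paypa1.com” vs. “paypal.com” trick:

```typescript
// Toy lookalike-domain check: compares a hostname against trusted domains
// using edit distance. A distance of 1 or 2 from a trusted domain (but not
// an exact match) is a classic spoofing tell, e.g. "paypa1.com".
const trustedDomains = ["paypal.com", "amazon.com", "chase.com"];

// Standard dynamic-programming edit (Levenshtein) distance.
function editDistance(a: string, b: string): number {
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Returns the trusted domain being imitated, or null if nothing is close.
function lookalikeOf(hostname: string): string | null {
  for (const trusted of trustedDomains) {
    const distance = editDistance(hostname, trusted);
    if (distance > 0 && distance <= 2) return trusted; // close, but not equal
  }
  return null;
}

console.log(lookalikeOf("paypa1.com"));  // "paypal.com" -> suspicious
console.log(lookalikeOf("paypal.com"));  // null -> exact match, not a lookalike
console.log(lookalikeOf("example.com")); // null -> unrelated
```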
People Have Less Time to Think
Phishing usually succeeds when it catches people off guard.
Attackers lean on urgency and time-sensitive situations, so when AI makes the message look more normal and less obviously scammy, people are more likely to act first and verify later.
Microsoft’s 2025 Digital Defense Report found that AI-powered phishing emails have a 54% click-through rate, compared with just 12% for standard phishing attempts. And scammers are actively engineering chaos to rush decisions, like triggering a wave of password reset emails to create panic, then calling or texting while pretending to be support that can “help fix the problem.”
How to Protect Yourself Against AI-Driven Phishing
AI phishing is harder to spot because the “usual tells” are fading. The good news is you don’t need a totally new system. A few upgrades and habits will significantly cut down your risk of falling victim to a scam.
Use Hardware-Backed Authentication for Important Accounts
If you can, switch high-value accounts (email, banking, cloud storage, admin logins) to passkeys or a security key. These are harder to phish because they’re tied to your device and the real website, not a code you can be tricked into typing into a fake page.
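If you’re curious why that holds up, here’s a minimal sketch of the standard WebAuthn browser API that passkeys are built on. The domain, user details, and challenge below are illustrative placeholders (real values come from the site’s server); the point is that the credential is scoped to the domain it was created for:

```typescript
// Minimal passkey registration sketch using the standard WebAuthn API.
// All values here are illustrative placeholders; in practice the challenge
// and user handle come from the website's server.
async function registerPasskey(): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
      rp: { id: "example-bank.com", name: "Example Bank" },  // credential is scoped to this domain
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)),      // stable user handle in practice
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
    },
  });
}
// A phishing page at a lookalike domain can't use this credential:
// the browser refuses because its origin doesn't match the rp.id above.
```

That origin check happens in the browser itself, which is why a pixel-perfect fake login page leaves the scammer with nothing to steal.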
Treat Context as the Main Red Flag, Not Content
One of the biggest changes you can make to your security approach is to stop judging messages by how polished they sound. Instead, look at what the message is asking for. If you get an unexpected request for MFA codes, payment details, or other personal information, you should verify it through a second channel, no matter how “real” it looks.
Use Lifeguard for Advanced Threat Detection
AI allows scammers to fly under the radar, and even if you are a security-savvy person, it can be pretty difficult to identify an advanced scheme. So it helps to have a tool doing the first pass for you. Lifeguard scans emails, texts, and links for common phishing signals like spoofed domains, lookalike login pages, and other high-risk patterns. If something looks suspicious, it flags it early with a clear warning so you can verify before you click or share information.
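As a toy illustration of one such signal (this is not Lifeguard’s actual implementation), here’s a check for a classic phishing tell: a link whose visible text shows one domain while the underlying URL points somewhere else entirely:

```typescript
// Toy phishing-signal check: flag links whose visible text looks like a URL
// on one domain while the real href points to a different domain. This is a
// simplified illustration, not how any particular product works internally.
function hostnameOf(url: string): string | null {
  try {
    return new URL(url).hostname.replace(/^www\./, "");
  } catch {
    return null; // not a parseable URL
  }
}

function linkTextMismatch(visibleText: string, href: string): boolean {
  const shownHost = hostnameOf(visibleText.trim());
  const realHost = hostnameOf(href);
  // Only meaningful when the visible text itself looks like a URL.
  return shownHost !== null && realHost !== null && shownHost !== realHost;
}

// Looks like a bank link, actually goes to an unrelated domain -> flag it.
console.log(linkTextMismatch("https://www.chase.com/login", "https://chase-login.example.net")); // true
console.log(linkTextMismatch("https://www.chase.com/login", "https://chase.com/login"));         // false
```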
Assume Your Public Data Is Being Used Against You
AI makes it easy to turn (seemingly small) personal details into an effective scam. Your best defense? Tighten up what information is public. Limit who can see your profiles, remove unnecessary personal details (workplace, city, travel dates), and be careful with posts that make impersonation easier.
Add Guardrails to High-Risk Workflows
For anything involving money or sensitive accounts, set a simple personal rule that adds a second step before you act.
For example, don’t change bank details from an email request, don’t approve invoices without calling a number you already have on file, and don’t handle password resets from links inside SMS messages.
AI Scams Are Here to Stay, but Your Defenses Can Catch Up
AI didn’t reinvent phishing, but it certainly improved it. Scammers can now pump out convincing and personalized scams at a faster rate than ever before.
The good news is you’re not powerless here. A few practical upgrades, such as tightening up your verification habits and cleaning up your digital footprint, can go a long way, even against modern AI-assisted scams.
Looking for scam protection that keeps up with AI? Lifeguard blocks scam texts before they reach you, flags risky emails and links, and helps cut down the personal data scammers use to personalize attacks. When something looks suspicious, Lifeguard will actually explain what it found and provide you with some clear next steps.