Scammers have always adapted. They moved from mailed letters to phone calls, from phone calls to emails, from emails to text messages. Each shift let them reach more people, faster, with less effort.
Artificial intelligence is the biggest shift yet. Tools that were once available only to well-funded technology labs can now be accessed by anyone with a laptop and an internet connection. The result is a new generation of scams that are harder to detect, more personalized, and in some cases nearly impossible to distinguish from a genuine call or message.
Understanding how these tools are being weaponized is the first step toward not falling for them. For a look at the classic, non-AI scams that continue to target older Americans, see our companion article: Common Scams Targeting Seniors — and How to Stop Them.
Voice Cloning: When the Voice on the Phone Is Not Who You Think
Voice cloning is the most alarming development in fraud today. Using just 10–30 seconds of audio — pulled from a voicemail, a YouTube video, a social media clip — AI software can generate a convincing replica of a person's voice, one that can say anything the scammer types in real time.
The result is a supercharged version of the grandparent scam: the caller doesn't just claim to be your grandchild — they actually sound like your grandchild. Not vaguely similar. Nearly identical. Victims who have fallen for this say they had no doubt they were speaking to their family member.
The FTC issued a consumer alert on AI voice cloning scams in 2023 after reports surged. The technology has continued to improve since then.
What to do: Establish a family code word — a simple phrase only your close family knows — and make it the required verification for any emergency call. "What's our family word?" A scammer won't know it, even with a perfect voice clone. Share the code word in person, never by text or email.
Deepfake Video Calls: Faking a Face-to-Face
If a cloned voice isn't convincing enough, some scammers now use real-time AI video manipulation to fake a live video call. The technology overlays a different face and voice onto a live feed, making it appear that a family member, doctor, or government official is speaking with you face to face over video.
This requires more technical sophistication than voice cloning and is currently less common — but reported cases have already emerged, and security researchers warn it will become more prevalent as the tools become cheaper and easier to use.
What to do: During any video call where a financial request is made, ask the caller to do something specific and unpredictable — touch their ear with their right hand, hold up four fingers, quickly look to the right. Real-time deepfakes often glitch or lag on sudden, unscripted physical requests. Then hang up and call back on a trusted number you already have, not the number that called you.
AI-Generated Phishing Emails: The Typos Are Gone
For decades, phishing emails were easy to spot — poor grammar, broken English, obvious formatting errors. AI writing tools have eliminated those tells entirely. Scammers now generate hyper-personalized, grammatically flawless emails that reference your name, your bank, your city, and details scraped from social media or data broker databases.
These emails impersonate banks, Medicare, the IRS, Amazon, or utility companies and create false urgency: your account will be suspended, a suspicious charge is pending, a package cannot be delivered. The link they ask you to click leads to a convincing fake login page designed to steal your credentials.
What to do: Never click a link in an unsolicited email, no matter how official it looks. Open a new browser tab and go directly to the company's website by typing the address yourself, or call the number on the back of your card. Check the sender's actual email address by hovering over or clicking the display name — it often reveals a non-official domain hidden behind a legitimate-looking sender name.
AI Chatbot Romance Scams: Fake Relationships, at Scale
Traditional romance scams required a human scammer to invest hours each day maintaining a fake relationship. AI chatbots now allow scammers to run dozens — or hundreds — of fake relationships simultaneously, around the clock, with emotionally engaging responses tailored to everything the target shares about themselves.
These bots are not obviously robotic. They remember details from prior conversations, ask thoughtful follow-up questions, express concern and affection at the right moments. The only difference is that there is no person on the other end — just software designed to build emotional attachment until it's time to introduce a financial crisis.
The FTC reports romance scam losses exceeded $1.3 billion in 2022 — and that was before AI chatbots became widely accessible to bad actors.
What to do: Be skeptical of any online relationship where the person consistently avoids video chat, always has an excuse not to meet in person, and eventually asks for financial help. Use the family code word approach — ask a question only the real person would know. Tell a trusted friend or family member about the relationship before it deepens.
AI-Personalized Impersonation: They Already Know Your Name
Data brokers sell enormous databases of personal information — address history, relatives' names, financial indicators, shopping habits, even political affiliations. AI tools can cross-reference this data instantly to craft a scam call that opens with your full name, mentions your street, references a recent purchase, and names a family member — all before you've said a word.
This illusion of legitimacy is intentional. The scammer wants you to think, "They know so much about me, they must be who they say they are." But that information was purchased, not earned. Anyone who paid for it can sound like an insider.
What to do: The amount of personal information a caller knows about you is not evidence they are who they claim to be. Treat any unsolicited call that immediately demonstrates personal knowledge as more suspicious, not less. Hang up and verify independently through a trusted number.
"The old tells are gone. Bad grammar, awkward phrasing, an unfamiliar voice — AI has removed all of them. What's left is skepticism, a code word, and the habit of calling back."
The Defenses That Still Work
AI makes scams harder to detect on the surface, but the underlying mechanics are the same: create urgency, exploit trust, prevent verification. The defenses that counter those mechanics still work:
- Slow down. Urgency is always manufactured. Real emergencies allow time to verify.
- Use the family code word. No AI can guess a word only your family knows.
- Hang up and call back on a number you already have — never redial the incoming number.
- Talk to someone before sending money, no matter what the caller says. The instruction not to tell anyone is always the giveaway.
For the full list of universal rules and guidance on talking to your family about scams, see: Common Scams Targeting Seniors — and How to Stop Them.
Where to Report AI-Powered Fraud
AI scams are still crimes. Reporting them — even if you weren't fooled — helps authorities track what's being deployed against Americans:
- FTC: reportfraud.ftc.gov
- FBI Internet Crime Complaint Center: ic3.gov
- Elder fraud hotline (DOJ): 1-833-FRAUD-11