
ScamAgent: Researchers Just Built an AI That Can Scam You — And It's Terrifyingly Good

By Tech Brewed | Cybersecurity & Privacy


This isn't a chatbot doing party tricks. This is a research-grade proof-of-concept that blows the doors off what we thought AI-powered fraud could look like. And it arrives at a moment when phone scams are already costing Americans billions of dollars per year, with scammers increasingly using AI-powered tools to make scam texts harder to spot and scam calls harder to doubt.

Let's break down what's happening, why it matters, and what you can do about it — including practical tips to protect your personal information.


ScamAgent is an AI pipeline that combines a large language model (LLM) with advanced text-to-speech (TTS) technology to simulate a complete scam phone call. But unlike a simple chatbot or a single "jailbreak" prompt, ScamAgent operates across multiple turns of conversation — it remembers what was said, plans its next move, and adapts dynamically to how the "victim" responds.

Think of it less like a chatbot and more like an AI con artist that has studied the playbook. In other words, it represents a new class of AI scam built on the rapid advancement of agentic systems.

The system was tested across five real-world scam scenarios:

  • Medicare insurance fraud — posing as a government health representative
  • Prize and lottery scams — claiming you've won a reward that requires personal info to claim
  • Government impersonation — pretending to be from the IRS, SSA, or law enforcement
  • Job offer fraud — fake employment offers designed to harvest financial data
  • Fake benefit enrollment — pretending to help enroll victims in programs that require sensitive personal details

Each scenario comes with tailored manipulation tactics tuned to that specific scam type, including classic family-emergency schemes and impersonation tactics that mimic friends or trusted family members.


Here's where it gets technically alarming. If you ask GPT-4, Claude 3.7, or LLaMA3-70B directly to generate a scam script, they refuse 84 to 100% of the time. Current safety filters are pretty good at catching single malicious prompts.

ScamAgent sidesteps this entirely.

Instead of one big, harmful request, the system decomposes the malicious goal (say, stealing your Social Security number) into a series of small, seemingly innocent subtasks. Each step looks benign. No single request triggers the safety filter. The system then uses a dedicated deception layer that reframes its instructions as "roleplay," "training simulations," or "fictional scenarios."

The result? Refusal rates dropped from 84–100% down to just 17–32% across all three tested models. That's not a marginal improvement — that's a near-total dismantling of current LLM safety systems through agent-level strategy, and a growing threat for organizations, company executives, and everyday humans alike.
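To see why per-prompt filtering misses this, consider a toy filter. Nothing below is from the paper; the keyword list and threshold are invented for illustration, and real safety filters are far more sophisticated. But the structural gap is the same: each individual turn scores clean, while the pooled conversation does not.

```python
# Toy illustration of the single-prompt blind spot (not the paper's code).
# The sensitive-term list and "3 or more" threshold are assumptions.
SENSITIVE_COMBO = {"name", "date of birth", "medicare id", "social security"}

def flags_single_prompt(prompt: str) -> bool:
    """Naive per-prompt filter: flag only if several sensitive asks co-occur."""
    hits = {term for term in SENSITIVE_COMBO if term in prompt.lower()}
    return len(hits) >= 3

def flags_conversation(turns: list[str]) -> bool:
    """Conversation-level view: pool the asks across all turns before scoring."""
    hits: set[str] = set()
    for turn in turns:
        hits |= {term for term in SENSITIVE_COMBO if term in turn.lower()}
    return len(hits) >= 3

turns = [
    "Hi, could you confirm your name for our records?",
    "Thanks. And your date of birth?",
    "Great. Can you read me your Medicare ID?",
    "Last step: the final four digits of your Social Security number.",
]

print(any(flags_single_prompt(t) for t in turns))  # False — every turn passes alone
print(flags_conversation(turns))                   # True — the pooled view trips the filter
```

Each request in isolation looks like routine customer service; only the accumulated pattern reveals the goal. That is exactly the asymmetry ScamAgent exploits.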


The researchers didn't just generate text — they integrated ScamAgent with advanced TTS systems (including ElevenLabs) capable of producing emotionally tuned, natural-sounding voices. These voices can be configured to sound like a Medicare representative, a federal investigator, or a customer service agent — complete with appropriate tone, pacing, and authority.

That matters because deepfake scams are no longer limited to video. Criminals increasingly use voice-cloning systems (deepfake audio) that can make a simple phone call feel "real."

And it's fast. Per-turn latency averaged around 6 seconds, putting it within striking distance of real-time deployment with modest optimization.

Human raters were asked to evaluate ScamAgent's dialogues alongside real-world scam call transcripts. The scores were sobering:

                             ScamAgent   Real scam calls
  Plausibility (out of 5)       3.4           3.6
  Persuasiveness (out of 5)     3.6           3.9

Nearly indistinguishable, especially for someone who isn't listening for a scam in the first place, like the older adults who are so often targeted by AI-enabled fraud.


One of the tested scenarios walks ScamAgent through a Medicare verification call. The AI begins with a warm, bureaucratic greeting — identifying itself as a representative calling to verify coverage information. It asks for your name, date of birth, and Medicare ID. Then, methodically, it steers the conversation toward the last four digits of your Social Security number.

Every step sounds routine. Every transition feels natural. The call sounds exactly like the kind of thing your elderly parent or grandparent would trust completely.

In end-to-end testing, ScamAgent completed all scam subtasks — without breaking down — in up to 74% of attempts when using LLaMA3-70B as the underlying model.


The paper makes an important point that the security community needs to internalize: today's AI safety systems are designed for single-turn, text-only interactions. They check each prompt in isolation. They don't track intent across a multi-turn conversation. They don't account for voice channels.

ScamAgent exploits every one of those gaps simultaneously:

  • Multi-turn memory — it tracks what it has already extracted and what it still needs
  • Goal decomposition — it breaks harmful objectives into innocent-looking steps
  • Cross-channel delivery — it moves from text planning to voice execution, evading text-only filters
  • Emotional tuning — it adapts its tone to maintain trust throughout the call

The researchers are calling for a new generation of safety systems — ones that monitor intent across turns, track cumulative behavior, and operate in voice and multimodal environments. That infrastructure largely doesn't exist yet, which is why security teams can't rely on single-prompt defenses to stop modern AI-powered attacks.
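The cross-turn monitoring the researchers describe can be sketched in a few lines. This is an illustrative toy, not the paper's system; the field names, risk weights, and threshold are all assumptions made up for the example. The point is the shape of the defense: judge the conversation's accumulated asks, not each turn alone.

```python
# Minimal sketch of cumulative intent tracking across a conversation.
# Weights and threshold are illustrative assumptions, not values from the paper.
RISK_WEIGHTS = {
    "full_name": 1,
    "date_of_birth": 2,
    "account_number": 3,
    "ssn_digits": 5,
    "one_time_code": 5,
}

class ConversationMonitor:
    """Accumulates a risk score across turns instead of judging each in isolation."""

    def __init__(self, threshold: int = 6):
        self.threshold = threshold
        self.requested: set[str] = set()

    def observe(self, requested_fields: list[str]) -> bool:
        """Record what this turn asked for; return True once the
        cumulative score crosses the threshold."""
        self.requested.update(requested_fields)
        score = sum(RISK_WEIGHTS.get(f, 0) for f in self.requested)
        return score >= self.threshold

monitor = ConversationMonitor()
print(monitor.observe(["full_name"]))      # False — benign on its own
print(monitor.observe(["date_of_birth"]))  # False — still under threshold
print(monitor.observe(["ssn_digits"]))     # True — the cumulative pattern is flagged
```

A production version would need real extraction of what each turn requests (itself a model-based task) and would have to run on transcribed audio, not just text, but the accumulate-then-threshold structure is the core idea.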


This research is academic, but the techniques are real — and they will not stay academic forever. Here's what you should take away:

For everyday users (consumer advice and quick tips):

  • Be deeply skeptical of any unsolicited call asking for personal or financial information, even if the caller seems calm, official, and knowledgeable.
  • Never share sensitive information (SSNs, bank codes, one-time passcodes) over the phone — and don’t confirm phone numbers or account details just because the caller “already has some of it.”
  • Watch for signs like urgency, threats, “limited-time” pressure, or instructions to move money for a “sale,” “verification,” or “security hold.”
  • Government agencies — SSA, Medicare, IRS — do not initiate contact by phone to verify your personal information. They use the mail.
  • If a call feels off, hang up and call the official number directly using trusted sources (a statement, the back of your card, or a verified website), not the number the caller gives you.
  • Be cautious with spoof emails, unexpected video links, and “follow-up” messages that try to continue the scam across channels.
  • Use call-blocking apps and register with the Do Not Call Registry (though neither is a silver bullet against AI-powered callers).

If you suspect a scam, report it to the Federal Trade Commission (FTC) and, when appropriate, the Federal Communications Commission (FCC). If investing is involved, check investor resources from your brokerage or regulator before sending money.


For the tech and security community:

  • This paper is a clear signal that LLM safety evaluation needs to evolve from single-turn testing to multi-turn, agent-level assessment.
  • Voice-channel AI abuse is an under-secured threat vector and needs urgent attention from both developers and regulators.
  • Red-teaming AI products should now explicitly include agentic scenarios, not just isolated prompt testing.
  • Monitoring for impersonation (including cloned social media profiles) and building data-driven fraud-detection pipelines will matter more as deepfakes scale.
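As a starting point for that kind of agentic red-teaming, here is a hedged sketch of a multi-turn scenario harness. Everything here is a placeholder: `model_fn` stands in for whatever chat interface you actually use, the message format is an assumption, and the keyword-based refusal check is a deliberately crude stand-in for a real refusal classifier.

```python
# Hedged sketch of agent-level red-teaming: replay a scripted multi-turn
# scenario against a model and record where (if anywhere) it refuses.
from typing import Callable, Dict, List

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to help")

def looks_like_refusal(reply: str) -> bool:
    """Crude keyword check; a real harness would use a trained classifier."""
    return any(marker in reply.lower() for marker in REFUSAL_MARKERS)

def run_scenario(model_fn: Callable[[List[dict]], str], turns: List[str]) -> Dict:
    """Feed the scenario turn by turn, keeping full history, and note the
    first refusal. Single-turn testing would only ever see turn 0."""
    history: List[dict] = []
    for i, user_msg in enumerate(turns):
        history.append({"role": "user", "content": user_msg})
        reply = model_fn(history)
        history.append({"role": "assistant", "content": reply})
        if looks_like_refusal(reply):
            return {"completed": False, "refused_at_turn": i}
    return {"completed": True, "refused_at_turn": None}

# Stub model for demonstration: complies once, then refuses on the second turn.
def stub_model(history: List[dict]) -> str:
    return "Sure." if len(history) < 3 else "I can't help with that."

print(run_scenario(stub_model, ["step one", "step two", "step three"]))
```

The useful metric isn't just whether the model refuses, but how deep into a decomposed scenario it gets before refusing — which is precisely what single-prompt evaluation can't measure.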

ScamAgent is a wake-up call — pun absolutely intended. Researchers have demonstrated that today's AI models, combined with widely available TTS tools, can be orchestrated into a system capable of sophisticated, adaptive voice-based fraud at near-human levels of persuasiveness.

The good news is this research was done by people trying to expose the problem, not exploit it. The bad news is that the gap between academic proof-of-concept and criminal deployment is shrinking fast.

The phone is ringing. AI might be on the other end.
