Criminal AI: What Every Business Needs to Know in 2025
- Rosemary Sweig
- 4 days ago
- 8 min read
Updated: 2 days ago
Criminals are ahead of most CEOs. While businesses debate AI integration, bad actors are already using it to steal trust, money, and reputation. Here’s what you need to know.

Image generated using AI
I recently shared on LinkedIn the tragic story of a family friend who was robbed of his business and family wealth by a voice-cloned call impersonating his bank. He was a small business owner, intense and focused, with 15 staff who depended on the business for their livelihoods.
He took the call during a busy moment in his store, with customers around him. The voice sounded official. In seconds, his bank accounts were drained. He and his family were in financial ruin.
It doesn’t matter that he ran a small business; the devastation was no less than if he had led a billion-dollar enterprise.
This is happening at devastating speed and scale around the globe. Small companies, individuals, and massive corporations - none are immune to the threats posed by criminal AI.
In 2019, a UK-based energy firm became the first known victim of an AI-generated voice deepfake scam. The CEO believed he was speaking to his boss at the German parent company, who instructed him to urgently wire €220,000 (approximately $243,000) to a Hungarian supplier. The voice, complete with accent and vocal "melody," was a fake, generated by AI.
The fraudster called three times: first to request the transfer, then to claim it had been reimbursed, and finally to ask for another payment. It wasn’t until the third call, made from an Austrian number, that the CEO grew suspicious and halted the process.
This was the first reported instance of AI-driven voice cloning used in a corporate scam, signalling just how real the threat had become.
This isn’t an isolated case. It’s a warning.
AI is no longer just a tool; it has become a transformative force. It is also a growing threat to brand credibility and public trust. In the wrong hands, it is being used to generate misinformation, impersonate executives, and launch highly targeted phishing attacks. This isn’t theoretical. It is happening now, and corporate communications teams must be prepared for this.
As AI accelerates, so do its abuses. From deepfake videos to fake press releases, criminal AI is becoming a real reputational and operational risk. For communications leaders, the question is no longer "Will this affect us?" but "When will we be affected, and are we prepared?"

What Is Criminal AI?
Criminal AI refers to bad actors using artificial intelligence for harmful or illegal purposes. These activities mostly involve deception, manipulation, or unauthorized access. And increasingly, the targets are businesses and their communication channels, not just government systems or personal devices.
This isn’t just a cybersecurity concern. It’s a communications crisis waiting to happen. I’ve had conversations with clients who assume that if their systems are secure, they’re safe. But what happens when it’s your CEO’s voice being faked? Or a press release you never wrote? Or your social channels used to push misinformation?
Take the recent voice-cloning and deepfake attack on Canada's new prime minister, Mark Carney.
A video, styled as a France 24 English broadcast, showed the prime minister announcing that, effective immediately, cars manufactured before 2000 would be removed from Canadian roads and that unauthorized window tinting would be prohibited.
This flew around the internet in minutes, allowing extreme right-wing trolls to suggest that the prime minister was behaving like Cuba's Fidel Castro, echoing accusations they had made against former Prime Minister Justin Trudeau during the COVID-19 pandemic. Back then, however, no cloned video was involved.
The video of Prime Minister Carney was convincing. However, closer inspection revealed that his lip movements didn’t match the audio, and his colouring was off.
Communications leaders need to understand not only what criminal AI is, but also how it manifests in our specific sphere.
Standard Terms You Should Know:
Deepfake: AI-generated video or audio that makes people appear to say or do things they never did.
Voice Cloning: AI replication of someone's voice to create audio that sounds identical to the real person.
Synthetic Media: Any visual, audio, or text content created by AI to imitate or replace human-generated content.
Phishing: Fraudulent communication meant to trick people into sharing sensitive information.
Spear Phishing: Highly targeted phishing using specific information about the victim or organization.
Disinformation Bots: AI-driven accounts that amplify false narratives online, often across multiple platforms, with emotional, polarizing language.
Here are the most pressing forms of criminal AI communications that leaders need to understand:
Deepfakes and Voice Cloning
Use Case: According to 2024 research by Starling Bank, 28% of people reported being targeted by an AI voice-cloning scam in the past year. Scammers use audio lifted from social media to create convincing imitations of friends or relatives, then call or message their loved ones to request urgent money transfers.
Shockingly, 46% of people were unaware that such scams even exist, and 8% admitted they might send money even if the request felt suspicious. In one corporate case under police investigation in Hong Kong, an employee was tricked into transferring £20 million during a deepfake video conference impersonating senior leadership.
Why It Matters: These scams exploit personal trust and familiarity, whether targeting consumers or corporate teams. A single convincing call or video can trigger irreversible damage, both financially and reputationally.
Phishing, Spear-Phishing, and Impersonation
Use Case: IBM’s X-Force team found that AI-generated phishing emails were more convincing than traditional ones, with 40% more people clicking on them.
Although fewer people are falling for phishing scams than in 2022, the weekly volume of phishing emails carrying hidden software designed to steal passwords and private information has risen sharply. In 2024, phishing was involved in 30% of all security incidents that IBM X-Force investigated.
Why It Matters: These emails often look completely legitimate. In times of crisis or urgency, even cautious employees may be susceptible to them. Communications teams risk having their channels used against them. When executive identities are spoofed or internal messages are weaponized, the damage is not just technical; it’s reputational.
AI-Generated Financial Misinformation
Use Case: In 2024, the CEO of WPP, the world's largest advertising firm, was targeted by a sophisticated deepfake scam. Fraudsters created a WhatsApp account using a publicly available image of the CEO and set up a Microsoft Teams meeting that appeared to include him and another senior executive.
During the meeting, the impostors used AI-generated voice clones and video footage to impersonate the executives, attempting to persuade an agency leader to establish a new business venture and obtain financial information and personal details. The scam was thwarted due to the vigilance of the targeted executive and WPP staff.
Why It Matters: AI can mimic not just individual voices or faces, but institutional messaging. A single convincing video or statement can shake investor confidence, confuse employees, or damage relationships with regulators and stakeholders.
Bot-Driven Disinformation
Use Case: In the lead-up to Romania's 2025 presidential runoff election, the country faced a surge of online disinformation. Reports indicated that 24% of Romanian-language Telegram channels were promoting Kremlin-backed narratives.
These campaigns were amplified by coordinated bot activity on platforms like TikTok and Telegram, aiming to influence public opinion and sow discord. This is not dissimilar to the alleged Russian bot farms that flooded U.S. social media with misinformation to persuade voters in Donald Trump’s favour in 2016.
Why It Matters: AI-powered bots can rapidly disseminate false information, masquerading as real people and using emotional messaging to influence public opinion. These campaigns can damage reputations, mislead voters, and even destabilize entire institutions. Communications leaders must closely monitor digital spaces and be prepared to respond to coordinated misinformation attacks.
Identity and Brand Hijacking
Use Case: In June 2024, Quebec police arrested three individuals in connection with a massive data breach and fraud operation involving Desjardins, one of Canada's largest financial institutions. The suspects allegedly used stolen personal data to commit nearly $9 million in fraud between 2018 and 2019.
The breach affected 9.7 million Desjardins members and was enabled in part by tactics now being enhanced by AI, including data scraping and impersonation techniques.
Why It Matters: These attacks show how personal data and brand trust can be weaponized. AI now allows scammers to create more realistic spoofing and impersonation schemes. Communications leaders must ensure that brand assets, customer messages, and crisis protocols are closely monitored and clearly distinguishable from fake content.

Image generated using AI
5 Strategic Steps Communications Leaders Should Take
1. Audit Your Vulnerabilities
Review your website, press materials, logos, and social content for spoofing risk.
Examine your current protocols for verifying executive quotes or urgent messages.
Set up Google Alerts or media monitoring tools to detect unauthorized mentions or impersonations (a minimal monitoring sketch follows below).
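To make that last point concrete, here is a minimal monitoring sketch in Python. It assumes you have created a Google Alert for your brand name with "RSS feed" as the delivery method; the feed URL and watch terms below are placeholders to adapt to your own brand risks.

```python
# Minimal brand-monitoring sketch. Assumes a Google Alert delivered
# as an RSS feed; the URL below is a placeholder to replace.
import feedparser  # pip install feedparser

ALERT_FEED_URL = "https://www.google.com/alerts/feeds/YOUR_ALERT_ID"  # placeholder
WATCH_TERMS = ["fake", "scam", "deepfake", "impersonat"]  # tune to your brand risks

def check_alerts(feed_url: str) -> list[dict]:
    """Return alert entries whose titles contain any watch term."""
    feed = feedparser.parse(feed_url)
    flagged = []
    for entry in feed.entries:
        title = entry.get("title", "").lower()
        if any(term in title for term in WATCH_TERMS):
            flagged.append({"title": entry.get("title"), "link": entry.get("link")})
    return flagged

if __name__ == "__main__":
    for hit in check_alerts(ALERT_FEED_URL):
        print(f"Review: {hit['title']} -> {hit['link']}")
```

Run on a schedule (a daily cron job, for example) so flagged mentions reach your team before they spread.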
2. Establish AI-Specific Crisis Protocols
Add AI threats like deepfakes and spoofed media to your crisis playbooks.
Draft media holding statements for scenarios in which fake content surfaces.
Align with legal and IT on chain-of-command decisions for synthetic threat response.
3. Partner with Legal, IT, and Enterprise Risk
Schedule quarterly syncs across departments so threat intelligence and public messaging stay aligned.
Advocate for communications presence in all AI governance efforts.
Share updates on public AI misuse cases to drive internal urgency.
4. Train Employees and Public-Facing Leaders
Host quarterly lunch-and-learns or mandatory microlearning sessions to provide ongoing training.
Include real-world examples of AI scams in all training.
Encourage external spokespeople to build trust on platforms like LinkedIn, where fakes can emerge.
5. Model Responsible AI Use Internally
Disclose AI-assisted content where applicable (blog footers, disclaimers).
Document editorial review practices if using generative AI.
Lead discussions on ethical technology.
The Flip Side: AI as a Compliance Ally
Criminals use AI, but compliance and communications teams can use it too, catching fraud, missteps, and misinformation early. According to the Journal of Economic Criminology (Vol. 5, 2024), AI is becoming a powerful tool for preventing corporate misconduct. With predictive analytics, anomaly detection, and automated data surveillance, AI can:
Detect Abnormal Internal Communication Patterns
Use Natural Language Processing (NLP) to identify sudden tone shifts or suspicious phrasing (see the sketch after this list).
Monitor for early signs of insider threats or policy violations before they escalate.
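Below is an illustrative Python sketch of tone-shift detection, not a production tool. The tiny urgency lexicon stands in for a real NLP model, and the z-score threshold is an arbitrary example; the point is the pattern of comparing a new message against a sender's historical baseline.

```python
# Illustrative tone-shift detector. The lexicon is a stand-in for a
# real NLP sentiment/urgency model.
from statistics import mean, stdev

PRESSURE_WORDS = {"urgent", "immediately", "confidential", "wire", "penalty", "now"}

def tone_score(message: str) -> float:
    """Crude score: fraction of words matching the pressure/urgency lexicon."""
    words = message.lower().split()
    return sum(w.strip(".,!?:") in PRESSURE_WORDS for w in words) / max(len(words), 1)

def flags_tone_shift(history: list[str], new_message: str, z_threshold: float = 2.0) -> bool:
    """Flag a message whose tone is an outlier against the sender's history."""
    scores = [tone_score(m) for m in history]
    baseline = mean(scores)
    spread = stdev(scores) if len(scores) > 1 else 0.0
    if spread == 0.0:  # flat history: flag anything above the baseline
        return tone_score(new_message) > baseline
    return (tone_score(new_message) - baseline) / spread > z_threshold

history = ["Attached is the Q2 media plan.", "Thanks, see you at Thursday's standup."]
print(flags_tone_shift(history, "URGENT: wire the funds immediately and keep this confidential."))  # True
```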
Flag Risky Language Before It Goes Public
Use AI-powered content review tools to catch red flags in press releases and social posts (a simple example follows this list).
Build workflows that include legal review for sensitive topics.
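Here is a hedged sketch of what that first-pass review might look like. The patterns are invented examples only; a real workflow would pair this automated pass with legal review and a trained classifier.

```python
# Pre-publication "red flag" pass. Patterns are illustrative only.
import re

RED_FLAGS = {
    "unverified superlative": r"\b(guaranteed|risk-free|best in the world)\b",
    "overconfident claim": r"\b(will definitely|certain to|cannot fail)\b",
    "sensitive topic": r"\b(lawsuit|investigation|layoffs?)\b",
}

def review_draft(text: str) -> list[str]:
    """Return a reviewer note for each red-flag pattern found in the draft."""
    notes = []
    for label, pattern in RED_FLAGS.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            notes.append(f"{label}: '{match.group(0)}' -- route to legal review")
    return notes

draft = "Our new product is guaranteed to succeed despite the ongoing investigation."
for note in review_draft(draft):
    print(note)
```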
Identify Misinformation Loops Early
Employ sentiment analysis to catch bot-driven distortions or sudden drops in brand trust, as in the sketch below.
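A minimal sketch of that idea, assuming you already collect a daily brand-sentiment score (from a social listening tool, for example) on a -1 to +1 scale. It flags days where sentiment falls sharply below the trailing average, one possible early sign of a coordinated pile-on.

```python
# Flag sudden sentiment drops against a trailing average.
# Assumes daily_scores come from your social listening tool (-1 to +1).

def flag_sentiment_drops(daily_scores: list[float], window: int = 7, drop: float = 0.3) -> list[int]:
    """Return day indexes where sentiment falls `drop` below the trailing mean."""
    flagged = []
    for i in range(window, len(daily_scores)):
        trailing = sum(daily_scores[i - window:i]) / window
        if trailing - daily_scores[i] >= drop:
            flagged.append(i)
    return flagged

scores = [0.4, 0.5, 0.45, 0.5, 0.42, 0.48, 0.5, 0.47, -0.1, -0.3]
print(flag_sentiment_drops(scores))  # [8, 9] -- the sudden negative swing
```

A day flagged here is a cue to investigate, not proof of a bot campaign; pair it with account-level analysis before responding publicly.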
Used responsibly, AI gives communications leaders not just defence, but foresight.
Final Thoughts
AI isn’t criminal. But criminals are using it, and communicators are now on the front line.
I'd like to tell you that our family friend and his business have bounced back, but unfortunately, that is not the case.
Family and friends have rallied to help, but the outlook is grim for a small business owner in his 60s who must now rebuild from scratch. He lost his savings, investments, and retirement, all in a matter of seconds, because of a momentary distraction.
This is why communications leaders, in tandem with IT and security teams, must take the lead in informing and educating their organizations. It is not enough to implement tools. It is not enough to follow trends. We must shape the systems, ethics, and understanding around AI because the harm is real and the damage can be permanent.
You’re not just protecting messages. You’re protecting perception, public trust, and your organization’s future.
The best defence is awareness. The best strategy is readiness. And the best opportunity is to lead.
Is your organization prepared for a deepfake attack or a synthetic news event? Now is the time to build a response strategy - before you need it. Let CommsPro help you audit your vulnerabilities and develop a tailored AI communications crisis plan.
Contact us or view our services at commspro.ca
About Rosemary Sweig
Rosemary Sweig is the founder of CommsPro and the blog, CommsPro HQ. She is a trusted advisor to senior communications executives navigating the future of AI-driven communications. With decades of experience as both a corporate executive and a strategic consultant, Rosemary helps teams integrate AI responsibly, without sacrificing the human voice that fosters trust. www.commspro.ca
Sources
Forbes: "A Voice Deepfake Was Used To Scam A CEO Out Of $243,000"
The Guardian: "Warning: Social media videos exploited by scammers to clone voices"
Reuters: “Romania Braces for Wave of Disinformation Ahead of 2025 Election”
France 24 English: deepfake video impersonating a France 24 broadcast of Prime Minister Carney, 2025
© 2025 CommsPro. Some rights reserved. This content may be shared with attribution and a link to www.commspro.ca.