
Top 10 Phishing Threats Happening Right Now
Phishing attacks are showing no signs of slowing down; they are getting smarter, more convincing and harder to spot. Cybercriminals are combining advanced social engineering tactics with generative AI and deepfake voices to catch even the most security-aware people off guard. At Echo Secure AI, we have been monitoring these threats closely, and what we are seeing is not just a minor annoyance - it is a major security crisis.
Here are the top 10 phishing and vishing threats making waves right now and why they matter.
1. IT Support Call
The Threat: Employees receive a call from ‘IT Support’, claiming their email has been compromised. The caller guides them through ‘fixing’ the issue but is in fact tricking them into handing over their account credentials.
Why it’s Dangerous: Attackers now use AI-driven interactive bots in their targeted campaigns. The easily identifiable robotic voices of the past are no longer prevalent; these bots are increasingly sophisticated, making the calls appear legitimate. Without the right precautions, unsuspecting individuals can be deceived into taking actions that undermine their security, resulting in unauthorised access to sensitive company data.
How Echo Secure AI can Help: We craft realistic AI-driven vishing scenarios to train teams to better detect and respond to fraudulent calls, such as those impersonating IT support. Our AI models are updated frequently so that our training reflects the most current attack vectors.
2. CEO Request
The Threat: A finance manager receives a WhatsApp voice note from their CEO asking them to urgently transfer £250,000 to a new supplier. The voice sounds exactly like the CEO - but it is actually a highly realistic AI-generated deepfake.
Why it’s Dangerous: This kind of vishing threat can go undetected by traditional security measures and relies on the targeted individual to identify the attempt as suspicious. People instinctively trust voices they recognise, and without training they will not know how to spot the deception, inadvertently transferring funds to a cybercriminal.
How Echo Secure AI can Help: We offer organisations simulated vishing attacks using personalised deepfake voices, exposing vulnerabilities before real attackers do. This is part of a unique learning experience that prepares your staff for real-world attacks.
3. HMRC Refund Link
The Threat: Individuals receive a convincing email or text message, complete with an official-looking HMRC logo, saying: “You’re eligible for a £700 tax rebate: click this link to claim it.” Clicking the link, however, leads to malware installation or identity theft.
Why it’s Dangerous: People often rush to respond when they believe financial compensation is at stake, and in their haste may divulge sensitive personal and banking information. Cybercriminals exploit this urgency, knowing that individuals are more likely to make mistakes under perceived time pressure.
How Echo Secure AI can Help: We replicate these threats in sophisticated phishing campaign simulations to help individuals identify fraudulent communications, thereby preparing them for attacks.
4. Supplier Payment Change Request
The Threat: A finance team receives a notification from an existing supplier asking them to update the supplier’s bank details for future payments. The email looks legitimate, and even a follow-up call confirms it. However, the notification originates from a threat actor, and the funds are redirected to criminals.
Why it’s Dangerous: Threat actors use AI to mimic voices, making verification calls appear authentic. With deepfake technology, cybercriminals can replicate the voices of known contacts and deceive their targets, resulting in significant financial loss and disruption to business operations.
How Echo Secure AI can Help: Our phishing simulations train personnel to verify supplier changes through secure channels, beyond email and a single phone call. This turns your employees from potential targets into vigilant defenders, empowering them to recognise and avoid real-world phishing attempts.
5. LinkedIn Job Offer
The Threat: An individual receives an exciting job offer from a ‘recruiter’ on LinkedIn. They are invited to a Zoom meeting to discuss the role; however, the provided link leads to a fraudulent login page that steals the individual’s credentials. The ‘recruiter’ is not even a real person. This can result in the compromise of their LinkedIn account and potentially other linked accounts.
Why it’s Dangerous: With AI, threat actors can create entirely fake recruiter personas, even using deepfake technology to hold real-time conversations. This advanced level of deception can easily trick unsuspecting individuals, leading to significant data breaches.
How Echo Secure AI can Help: We can provide custom LinkedIn phishing simulations to help organisations protect their teams from social engineering threats, offering realistic scenarios that prepare employees to identify and resist these attacks.
6. ‘Mum, I’ve Lost my Phone’
The Threat: A parent receives the following message from a cybercriminal: ‘Mum, it’s me. My phone’s broken - can you send money to my new account?’ The message may even include their child’s name. A panicked parent then sends hundreds of pounds before realising their child’s phone was never broken at all.
Why it’s Dangerous: This preys on the immediate emotional response of parents. Attackers may mine targets’ social media posts for personal details and use AI-generated messages to craft highly believable communications, deceiving the recipient and leading to financial loss.
How Echo Secure AI can Help: We offer personalised text-based phishing attack simulations and educate individuals on best-practice behaviours to stay safe.
7. Bank Fraud Call
The Threat: Individuals receive a call from ‘their bank’, claiming fraudulent activity has been detected on their account. The caller asks the individual to confirm their details, deceiving them into divulging personal data that grants access to their bank account.
Why it’s Dangerous: Cybercriminals can use these calls to gain direct access to an individual’s bank account, but with AI they can also record the individual and replicate their voice through deepfake technology. This can lead to unauthorised transactions and other forms of identity theft.
How Echo Secure AI can Help: We provide organisations with awareness training on deepfake technology, educating individuals on best practices during unsolicited calls, such as verifying caller identities and avoiding giving away voice samples, thereby strengthening their defences.
8. Delivery Notification
The Threat: Individuals receive a highly realistic-looking email, apparently from a well-known delivery company, saying: “Your parcel is undelivered due to unpaid fees. Click here to resolve.” Clicking the link instead leads to a phishing site that steals their payment information.
Why it’s Dangerous: This tactic exploits the rise of online shopping and delivery anxiety. By creating a sense of urgency and concern about an undelivered package, cybercriminals can manipulate individuals into acting impulsively.
How Echo Secure AI can Help: Our advanced phishing campaign simulations include sophisticated fake delivery notifications, training employees to identify suspicious emails and thus strengthening their ability to recognise real-world attempts.
9. Crypto Investment Offer
The Threat: A target receives a call with an enticing offer: “Join our exclusive Bitcoin investment group with guaranteed high returns, simply transfer £500 to begin!” They then receive a follow-up email with a payment link intended to induce a transfer of funds.
Why it’s Dangerous: This exploits the growing trust in cryptocurrency and the fear of missing out on potential financial gains. Placing a call before the email creates a deceptive sense of legitimacy that can manipulate targets, leading to significant financial loss, as these investment groups are typically fraudulent.
How Echo Secure AI can Help: Vishing calls are an integral part of our simulated campaigns, teaching employees to recognise this tactic, decreasing their vulnerability to such threats and bolstering their overall security posture.
10. Charity Donation Appeal
The Threat: In times of conflict or following a major disaster, emails flood in asking for donations. In this instance, however, the transferred funds are directed to a threat actor, not to a legitimate charity.
Why it’s Dangerous: These operations exploit the heightened emotional state of individuals during crises. Driven by a desire to help, individuals are vulnerable to emotionally manipulative phishing emails, such as those impersonating aid organisations and requesting urgent donations.
How Echo Secure AI can Help: We continually update our phishing simulation training to stay ahead of evolving threat trends, incorporating current world events into our social engineering scenarios in real time. This ensures that our clients receive the most relevant and effective training available.
Why Echo Secure AI Matters Now More Than Ever
As we can see, phishing attacks are becoming nearly indistinguishable from legitimate interactions. Cybercriminals are leveraging AI and social engineering tactics in terrifyingly effective ways, outpacing traditional security tools.
This isn’t a ‘nice-to-have’ anymore. Echo Secure AI is built to expose and defend against these next-gen threats before they cause real damage. If your organisation isn’t preparing for adversarial threats, it’s already behind.
Want to see how your team would handle these attacks? Let’s talk. We’ll show you exactly where your weak spots are - before the real attackers do.