AI is making phishing scams more dangerous
AI chatbots have taken the world by storm in recent months. We’ve been having fun asking ChatGPT questions, trying to find out how much of our jobs it can do, and even getting it to tell us jokes. But while most of us have been experimenting, cybercriminals have been powering ahead and finding ways to use AI for more sinister purposes.
They’ve worked out that AI can make their phishing scams harder to detect – and that makes them more successful. Our advice has always been to be cautious with emails. Read them carefully. Look out for spelling mistakes and grammatical errors. Make sure it’s the real deal before clicking any links.
And that’s still excellent advice.
But ironically, the phishing emails generated by a chatbot feel more human than ever before – which puts you and your people at greater risk of falling for a scam. So we all need to be even more careful. Crooks are using AI to generate unique variations of the same phishing lure. They’re using it to eradicate spelling and grammar mistakes, and even to create entire email threads to make the scam more plausible.
Security tools that detect messages written by AI are in development, but they’re still some way off.
That means you need to be extra cautious when opening emails – especially ones you’re not expecting. Always check the address the message is sent from, and double-check with the sender (not by replying to the email!) if you have even the smallest doubt.
If you need further advice or team training about phishing scams, get in touch today or book a discovery session.