AI chatbots have taken the world by storm in recent months. We’ve been having fun asking ChatGPT questions, trying to find out how much of our jobs it can do, and even getting it to tell us jokes.
However, while lots of people have been having fun, cyber criminals have been powering ahead and finding ways to use AI for more sinister purposes. They’ve worked out that AI can make their phishing scams harder to detect – and that makes them more successful.
Our advice has always been to be cautious with emails. Read them carefully and look out for spelling mistakes and grammatical errors, and make sure a message is the real deal before clicking any links. That is still excellent advice.
Ironically, the phishing emails generated by a chatbot feel more human than ever before – which puts you and your people at greater risk of falling for a scam. So we all need to be even more careful.
Crooks are using AI to generate unique variations of the same phishing lure. They are using it to eliminate spelling and grammar mistakes, and even to create entire fake email threads to make the scam more plausible.
Security tools to detect messages written by AI are in development, but they are still some way off. That means you need to be extra cautious when opening emails, especially ones you are not expecting. Always check the address the message was sent from, and double-check with the sender through a separate channel (not by replying to the email!) if you have even the smallest doubt.
If you need further advice or team training about phishing scams, get in touch. We are always happy to help.
Published with permission from Your Tech Updates.