A wave of sophisticated scams on Instagram is impersonating legitimate banks to trick users into surrendering their credentials and cash. Using AI-generated deepfake videos and phoney ads, cybercriminals have convincingly mimicked institutions such as the Bank of Montreal (BMO) and EQ Bank, prompting victims to divulge sensitive data or make immediate payments into fraudulent accounts.
The scam typically begins with users encountering ads that closely mirror authentic bank branding. One such example mimics EQ Bank under the guise of “Eq Marketing,” advertising an attractive 4.5 per cent interest rate. Tapping the ad redirects users to a domain nearly indistinguishable from the real bank’s, where they are prompted to enter login details that scammers swiftly harvest to access genuine accounts.
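The lookalike-domain trick relies on names that differ from the genuine one by only a character or two. A minimal sketch of how such near-duplicates can be flagged programmatically, using only Python's standard library (the domain list, threshold, and function names here are illustrative, not real scam domains or any bank's actual detection logic):

```python
from difflib import SequenceMatcher

# Illustrative known-good domains (assumption: a curated allow-list exists).
LEGIT_DOMAINS = ["eqbank.ca", "bmo.com"]

def lookalike_score(candidate: str, legit: str) -> float:
    """Return a 0..1 similarity ratio between two domain names."""
    return SequenceMatcher(None, candidate.lower(), legit.lower()).ratio()

def flag_lookalike(candidate: str, threshold: float = 0.8) -> bool:
    """Flag a domain that closely resembles, but does not match, a known-good one."""
    return any(
        candidate.lower() != legit and lookalike_score(candidate, legit) >= threshold
        for legit in LEGIT_DOMAINS
    )

print(flag_lookalike("eqbank.ca"))   # exact match: not flagged
print(flag_lookalike("eqbankk.ca"))  # near-duplicate: flagged
```

Real phishing detection also checks homoglyphs (e.g. Cyrillic characters standing in for Latin ones), which simple edit-distance ratios can miss.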
Another tactic involves AI-generated video clips purporting to feature Brian Belski, BMO’s chief investment strategist. Viewers are led through a series of preliminary questions, supposedly unlocking exclusive investment guidance, and are ultimately asked to join a “private WhatsApp investment group,” where they are pressured into handing over financial information or making transfers.
Deepfake technology is at the core of these scams. Scammers increasingly exploit AI-generated imagery and audio to create realistic videos that erode viewers’ ability to identify fraudulent content. According to Deloitte, losses from AI-enabled fraud soared to $12.3 billion in 2023, with projections reaching $40 billion by 2027.
Instagram’s vulnerability stems from its advertising interface: scammers often run ads through Facebook-based advertiser accounts without maintaining active Instagram profiles. Because these accounts carry long histories and sizeable follower counts, they resemble established businesses, which shields them from immediate removal.
Meta, Instagram’s parent company, has deployed tools to detect manipulated media. However, critics argue the current moderation protocols are insufficient, allowing these ads to remain live for days, even after being flagged.
Financial watchdogs and cybersecurity experts are urging a heightened level of user vigilance. They recommend that individuals always verify the legitimacy of advertisements by navigating to the main bank website or contacting customer support independently. Red flags include unrealistic yields, demands for immediate action, and prompts to provide login credentials through social media channels.
Technical safeguards are equally essential. Enabling multi-factor authentication on both banking and social media accounts can significantly reduce the impact of compromised credentials. Users are also advised to report suspicious advertisements and to utilise web protection tools—such as browser extensions and scam-detection software—that block known malicious websites.
Financial press coverage highlights a broader trend: deepfake scams are becoming mainstream, especially on platforms like Instagram and WhatsApp. Media figures such as Martin Lewis and Martin Wolf have reportedly had their likenesses used in fraudulent investment ads, fuelled by rapidly advancing generative AI technology.
The impact on victims can be devastating. Data from Malwarebytes reveals that 36 per cent of social engineering targets suffered financial loss, with 17 per cent encountering forms of deepfake-based extortion such as doctored videos or voice calls.
Banks have begun strengthening protocols around customer alerts and fraud detection. In jurisdictions like Canada and the UK, regulatory bodies are urging stricter vetting of digital advertisements. In the UK, the Online Safety Act requires urgent action on digital fraud materials, while Canada is intensifying scrutiny of financial marketing on social media platforms.
Until technology and policy catch up, cybersecurity specialists emphasise that consumer awareness remains the most potent defence. As deepfake technology becomes more accessible, every social media interaction—especially those promising high yields—demands scrutiny and scepticism.