Passwords are in for a rough year...
A researcher just released BruteForceAI: a tool that uses AI + an automated browser to:
- automatically detect login pages,
- identify the username and password fields,
- launch brute-force or password-spray attacks in a "human" way (timing, random user-agents, etc.),
- and report successes directly to Discord / Slack / Teams.
On paper, it is presented as a pentest tool (and it MUST remain within that scope, otherwise it is illegal). In reality, it mostly confirms one thing: credential attacks are scaling up with AI.
Concretely, what does this mean for a "normal" company?
If your security still mainly relies on
- login + password
- no MFA
- little to no lockout after repeated failed attempts

...you become an ideal target for these kinds of "intelligent" tools.
A few reflexes to implement (yesterday)
- MFA everywhere (at minimum on VPN, webmail, CRM, and exposed internal tools)
- Lockout and throttling after X failed attempts (CAPTCHA, delays, temporary lockout)
- Strong passwords + a password manager for everyone
- Monitoring of abnormal logins (spikes in attempts, suspicious IPs, failure patterns)

Simple rule: "Anything that exposes a login page = a critical asset to monitor and harden."
Attackers are industrializing credential testing with AI. It is up to us to industrialize defense and authentication hygiene.
A North Korean IT worker hired by an American cybersecurity company. First day. In less than 25 minutes, he attempts to deploy malware on the internal network. This is not a joke.
It happened at KnowBe4, a specialist in cybersecurity awareness training. The irony sometimes writes itself.
The candidate Kyle checks every box
- 4 video interviews
- Solid technical skills
- Verified references
- 100% remote position
- Laptop shipped to an address in Washington State
Except Kyle does not exist.
When the FBI goes to the address, they do find the laptop.
But behind the screen: an American intermediary.
The "employee" is actually working from China.
A North Korean IT worker, connected remotely via VPN and remote desktop, working night shifts to match US hours. Dedicated guy.
His LinkedIn photo?
An AI-generated image from stock photos. Realistic enough to pass 4 video interviews. Altered enough to not be detected in a reverse image search. And Kyle is just one among hundreds.
Mandiant (Google) researchers followed the trail
- 300+ US and UK companies infiltrated
- Fortune 500 companies among them
- Defense, media, and industrial players
A single American intermediary used dozens of stolen identities to get North Koreans hired at more than 300 companies.
Revenue generated: several million dollars.
She was just sentenced to several years in prison.
The scheme is industrialized.
On the resumes:
- Degrees obtained abroad, difficult to verify
- Clean US addresses
- Requests to ship the laptop to a different address
- Identical phrases from one resume to another
In interviews:
- Personas tested and optimized (yes, A/B testing)
- Profiles that land the job, then hand the work off to someone else
These IT workers are linked to the cluster Mandiant tracks as UNC5267, attached to Bureau 121, North Korea's cyber warfare unit.
Same ecosystem as the Lazarus Group (Sony Pictures, WannaCry, billions stolen in crypto).
The objectives are crystal clear
- Fund the regime
- Maintain dormant access in critical systems
- Industrial espionage
What this story reveals is uncomfortable.
Our hiring processes rely on trust
- Self-reported resumes
- Foreign degrees rarely verified
- References skimmed over
- Photos never authenticated
- Video interviews accepted as "good enough" verification
Against a hostile state that industrializes fraud with AI and years of optimization, our defenses are built for a different world.
The next time you hire a remote tech profile, with a foreign degree and a request to ship the laptop to a different address...