ChatGPT creates phisher's paradise by serving the wrong URLs for major companies
"It's actually quite similar to some of the supply chain attacks we've seen before [...] you're trying to trick somebody who's doing some vibe coding into using the wrong API."
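One mitigation for the attack the quote describes, sketched minimally: never trust a model-suggested login or API URL directly, and instead check its host against a vetted allowlist before using it. The `KNOWN_DOMAINS` set and `is_trusted_login_url` helper below are hypothetical illustrations, not part of any real tool mentioned in the article.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of known-good login hosts; in practice this
# would come from a vetted source, never from the model's own output.
KNOWN_DOMAINS = {
    "accounts.google.com",
    "login.microsoftonline.com",
}

def is_trusted_login_url(url: str) -> bool:
    """Reject any suggested login URL whose host isn't allowlisted."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in KNOWN_DOMAINS

# A lookalike host of the kind a model might hallucinate fails the check.
print(is_trusted_login_url("https://accounts.google.com/signin"))  # True
print(is_trusted_login_url("https://accounts.googie.com/signin"))  # False
```

Exact-match allowlisting is deliberately strict here: substring or "ends with" checks are a classic phishing bypass (e.g. `accounts.google.com.evil.example`).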
I have renewed faith in the universe. It totally makes sense that vibe coding would be poisoned into useless oblivion so early in this game.
In 2015 it was copying and pasting code from Stack Overflow, in 2020 it was npm install left-pad, in 2025 it's vibe coding.
I refuse to join them, and I patiently await the day of rapture.
So a lot of people are quietly watching lots of ships slowly taking on water.
After questions like "Is your router plugged in", "What lights are on", etc., they eventually asked for my account email address and password (I was already getting suspicious by that point and of course did not provide it). I could hear other agents in the background having similar conversations with other callers.
Based on some probing questions I asked, I think the agents themselves didn't even realize they weren't really working for Starlink.
I was in a hurry and should have double-checked the number before calling, rather than blindly trusting the AI-shoveled slop.
I tried but wasn't able to recreate the search query that produced the result, and a Google search for the number itself came up blank. A few weeks later, the number just rings with no answer.
“Crims have cottoned on to a new way to lead you astray”
Was - was the article written by AI?
It seems they are selling some product to protect against this. So, I could believe the headline here, but I'm less confident that this is an unbiased look at the problem.
Note this wording: "Netcraft ran an experiment using a GPT-4.1 family of models."
Why GPT-4.1? Nobody uses that. Also 'family'? That just seems like a way to hide that they used GPT-4.1-mini rather than the full model.