Dear "Security Researchers"
My first foray into "beg bounties" was with Chalk. We received a report that inputs containing malicious terminal escape sequences would be emitted to the terminal if passed through Chalk.
But... yeah, of course they would. It's just a glorified string formatter, we don't care what you pass to us. They would have been emitted anyway if you just used console.log. There was literally nothing actionable for us to do. It wasn't our responsibility.
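To make the non-issue concrete, here is a toy chalk-style formatter (illustrative only, not Chalk's actual code): a color library just wraps its input in ANSI codes, so any escape sequence in the input passes straight through - exactly as it would with a bare console.log.

```python
# Toy chalk-style formatter (a hypothetical sketch, not Chalk's real code):
# it wraps input in ANSI color codes and does nothing else.
def red(s: str) -> str:
    return "\x1b[31m" + s + "\x1b[0m"

# "Malicious" input containing a terminal escape sequence
# (here, an OSC sequence that retitles the terminal window).
payload = "\x1b]0;pwned\x07hello"

colored = red(payload)

# The escape sequence survives untouched - the formatter never
# inspects its input, so this is no different from print(payload).
print(repr(colored))
```

The "vulnerability" is simply that the formatter is transparent, which is its entire job.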
It didn't end there. The "researcher" persisted, threatening to file a CVE (which wreaks havoc on an OSS dependency such as Chalk that has millions of downloads a day), kept swinging their proverbial member around about how they worked for so-and-so esteemed research company (it wasn't), and ultimately demanded we compensate them for their time, citing it as a responsibility and obligation.
I would have ignored it, but the threat of CVE (and the fact we'd have literally zero recourse against it) kept me on the hook.
Ever since then it has really soured my view of the CVE/CVSS systems and turned me a bit bitter toward "security researchers" in general, which isn't where I'd like to be.
With the rise of automatic ReDoS detection the problem has only compounded over the last 4-5 years: things that might technically fall under the "vulnerability" umbrella, but only if the code is intentionally and egregiously misused, and in a manner that is dangerous with any library, let alone ours, still earn a plea for monetary compensation.
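For readers unfamiliar with what these detectors actually flag, the classic shape is catastrophic backtracking from nested quantifiers. A minimal sketch (pattern and inputs are illustrative, and the input is kept tiny so the demo finishes quickly):

```python
import re
import time

# Classic ReDoS shape: nested quantifiers force the backtracking
# engine to try exponentially many ways to split the input.
pattern = re.compile(r"^(a+)+$")

def time_match(s: str) -> float:
    start = time.perf_counter()
    pattern.match(s)
    return time.perf_counter() - start

benign = time_match("a" * 20)         # matches on the first attempt
hostile = time_match("a" * 20 + "b")  # fails only after ~2^20 backtracking attempts
print(f"benign: {benign:.6f}s, hostile: {hostile:.6f}s")
```

The scanner sees only the pattern's shape; whether an attacker can ever actually feed it unbounded input is a separate question, and that is exactly the gap the complaint above is about.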
It's silly, saddening, and honestly only discourages people from working on OSS at any scale larger than a hobby.
(Thank you for coming to my TED talk)
EDIT: I should mention that I have received legitimate reports from well-meaning researchers (no quotes) that are detailed and professional in nature. I'm always proud to service them and see them through. They're increasingly rare, though. The downside is that doing OSS for free means I still cannot compensate them for their time, even though I would love to.
threatening to file a CVE (which wreaks havoc on an OSS dependency such as Chalk that has millions of downloads a day)
Why is that? If you aren't on payroll what leverage does anyone have over you here? Can't you quickly close all newly opened issues as WONTFIX with a copy-pasted link to the first one explaining the situation?
Maybe I'm missing something but if I'm not being paid I really don't understand why I should care about something like this. If the people raising the issues are doing it as part of their job then this approach shifts the problem back to them - they can either coordinate among themselves to correct the CVE system or they can spend the time to explain the systemic failure to their manager.
I would have ignored it, but the threat of CVE (and the fact we'd have literally zero recourse against it) kept me on the hook.
Why? If the person is truly making unreasonable demands then why would you expect anything you do to change the eventual outcome? They will do whatever they will do. Tell them their scam bit has failed on you, block them, and forget they exist.
Idk. Guess it's just not how I'd rather do GitHub interactions. I'd rather explain why, so that people who have hot heads are forced to be mad at the decision rather than me. It stems back to when I was on GitHub almost 15 years ago, and the culture it used to have, and not wanting to give that up. Could be a lost cause now, though.
I'd rather explain why
I might have given you the wrong idea. I think it's important to explain why - exactly once. After that a link to that original explanation of what happened and why things are the way they are should suffice.
The thing is, if someone isn't willing to follow that link and take the time to understand what's going on then presumably they aren't a reasonable individual. If that's the case then I estimate it's unlikely that engaging with them will lead anywhere useful.
It's one thing if it's your manager or a teammate being unreasonable. But when it's a pseudonymous avatar with no financial or professional relationship to you that seems like all downsides and no upsides to me.
And yes, people who get scared file reports asking us to fix it, oftentimes we can't (because it's not a real vuln), and 99% of the time they're not even remotely affected. The requests are generally not very nice, either.
Most of the time, the request to fix a CVE is the first we've ever heard of it being filed. "Security research agencies" oftentimes have their own databases, and rarely do these "researchers" follow any amount of responsible disclosure, let alone even telling us about their findings or that they've filed. Many of them don't even cite a reporter, email, or any such contact information.
Especially when the CVE is nonsense, improperly scored, etc., there is almost no way to get it taken down or its severity reduced. Apparently, in the eyes of these agencies, the people writing the code cannot adequately gauge the severity of vulnerabilities. We're nearing the point of zero collaboration with maintainers before things are filed.
I can't remember the last time I even had the opportunity to release a proper fix before a CVE went out. I can count on one hand the number of times I was asked before a CVE was filed. I don't think a single CVE of any code I maintain has had a score that properly reflected the severity of the actual vulnerability (assuming there even was one).
Remember, most of us do this in free time, for free. So all of the extra bureaucracy and synthesized urgency can be extremely detrimental.
I believe there must be some "Murphy's law" or named principle for this situation. When there is an established system and process, there will be people genuinely using it, and it will work 100% for them and bring value to all parties. And then there will be people "gaming the system" (for instance, "security" "researchers" using CVEs they created as a personal or corporate KPI) and abusing the other, legitimate players. It's like patent trolls, though fortunately for us not quite as bad.
Fortunately for all people of reason, the system itself gets patched. Maybe not in the best way possible but still. Two examples:
- OpenVEX - this is my way, as a project owner/developer/maintainer, to declare that CVE-XXXX-YYYYY is a false positive and why.
- context-aware vulnerability scanning (or exploitability scanning) - govulncheck is a great example of this. You run it and it says: module X has vulnerabilities Y and Z, but you can keep calm because no symbols in your codebase actually call the vulnerable code.
I wish more scanners would adopt the second approach; ecosystems like npm would greatly benefit from it. As you rightly note, pick a random project and half its findings will be ReDoS or something similar, sitting in some nested dependency of eslint's dep tree that will never see the light of a production environment's CPU.
That being said, reporting a vulnerability and obtaining a CVE ID while bypassing the project's security policies and responsible disclosure procedures should be prohibited. As a maintainer, I want to know about a newly discovered vulnerability in my code so I can fix it; then the kind person who reported it can proceed to MITRE with the report in one hand and the fixed version in the other.
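For concreteness, an OpenVEX document declaring a CVE a false positive looks roughly like this (field values are illustrative placeholders; consult the OpenVEX specification for the exact schema):

```json
{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://example.com/vex/example-2024-0001",
  "author": "Example Project Maintainers",
  "timestamp": "2024-01-01T00:00:00Z",
  "version": 1,
  "statements": [
    {
      "vulnerability": { "name": "CVE-XXXX-YYYYY" },
      "products": [
        { "@id": "pkg:npm/example-lib@1.2.3" }
      ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path"
    }
  ]
}
```

A scanner that consumes VEX data can then suppress that finding automatically instead of asking every downstream user to triage it independently.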
Also, these are interesting recommendations - I hadn't heard of them either. One problem, having not tried any of them, is that if the "loud" mechanism (e.g. npm's audit tool) doesn't also check and reconcile against them, it really doesn't do much good.
The people who open these issues generally don't know what CVSS is and aren't checking these databases first (oftentimes they don't even check for duplicate issues to begin with). Unless I can revoke a CVE or at least submit a correction, the system will remain broken. Simple as that.
The people who open these issues generally don't know what CVSS is and aren't checking these databases first
Add a notice to the issue template checklist to check the database. Maybe link to a wiki page that illustrates how. Mercilessly issue temporary bans for violations.
This is a spam issue plain and simple even if the perpetrator didn't intend it that way.
In an ideal world, people who rely on hand-picked dependencies they know well would provide fixes or workarounds instead of screaming at maintainers.
But yeah we live in a world where people just add dependencies or install software they don’t need or understand. So we have to deal with that.
The thing security people refuse to accept though, is that security isn’t a paramount business concern, even if management understands the real risks. Stolen customer data is often followed by an apology and password reset request. Nobody cares, especially in a world where personal and private data no longer exists. Restore from backup, and move on.
Should it be that way? No. But it is. It’s not a security or awareness problem, it’s a business/culture problem. You can’t fix a broken engine with a better taillight.
But even in 2025, I have come across companies who do not at all care about rewarding good security researchers who report issues. Hell, I have even been ghosted after reporting the bug which they promptly fixed and did not even write back to say a "thank you". Has anyone else also encountered this behavior from tech companies? (not talking about a non profit, hospital or gov agency here)
I'm a security researcher - no quotes. I write detailed, highly technical write-ups for all of the issues I discover, including reproduction steps, root cause analysis and suggestions for fixes. I follow all responsible disclosure guidelines + any guidelines that the company or entity might have for security disclosures.
It's disheartening when you put this amount of effort into it, it gets silently patched, and you get no recognition or even a "thank you". But I don't let it bother me too much. I'm doing this research mostly for myself and because I find it interesting. The fact that I'm disclosing the issues is me being a good citizen, but I shouldn't expect a pat on the head for every issue I disclose.
Being ignored always sucks. But it's still infinitely better than doing all of the above and being threatened with a lawsuit (which has, unfortunately, happened as well).
I made multiple attempts to report it to their security team/mailbox over several months and never got any response or acknowledgement back from them. Then, a few months later, they quietly fixed the issue.
As you note, the field has been damaged by bounty hunters. When the SNR drops low enough there's no point even reading the damn things and high-quality reports will be discarded along with the dross.
Without feedback you don't know that the bug was fixed in reaction to your bug report.
In this particular case, they did say they would consider a reward for a severe bug (and it was severe - a DNS hijack). But once I shared the details, they had fixed it by the next day and never wrote back.
I did not know bug bounty had such a bad rep. Is this for reporting bugs outside of the bug bounty platforms?
Is this for reporting bugs outside of the bug bounty platforms?
Nah, in this case they simply had no official bug bounty program/platform.
I would guess that a big factor is mindset and tech culture across different companies or having a bad head of something who doesn't get the point of bug bounty / promoting responsible disclosure.
Beg bounty hunters have damaged the field so much.
Sure, the grifters themselves are guilty too. But hear me out: maybe the corporate geniuses who decided to crowdsource security using non-contractual if-we-feel-like-it bounty payments could have contributed to the grifting culture.
Hell, I have even been ghosted after reporting the bug which they promptly fixed and did not even write back to say a "thank you".
Just curious, why perform labor without a contract? If it’s just for personal interest, I wouldn’t even bother to report unless the company has something to offer first.
I'm sorry that the security industry is a cesspool. We all know it's a cesspool. We can't pump it out.
However, please do not let the absolute state of things cause you to give up on security. Don't stop patching, don't go back to writing your passwords on post-it notes, don't just expose everything to the open internet and don't let an LLM perform your only code security review. Keep doing the boring, basic things, and you'll have the best chance at keeping the attackers out.
Ultimately security is a chore, like showering or visiting the dentist. And there are always going to be people telling you that you absolutely must apply deodorant to your groin or that you can avoid the dentist by rinsing with apple cider vinegar. Ignore them, and just keep doing the basics as well as you can.
However, please do not let the absolute state of things cause you to give up on security.
I'm the security guy on our team and I'm pretty much over it. Once you get a project's security up to baseline status quo, then the security community only produces tiny scraps of actionable input for improving something they're vocally unhappy with.
How did specialized QA from people who like breaking security controls for fun turn into acting like the business owner for security requirements? And also somehow getting away with a level of haranguing that we wouldn't accept from our actual stakeholders?
Please send me $12,000 dollars.
Please send money.
1. Log in with admin credentials
2. Copy the session cookie
3. Run curl command using above session cookie
4. You have bypassed authentication!!
When I was growing up I worked at a record-and-tape store. One of my co-employees was a fake-cop-car guy. It was really awkward. Sometimes he would show up “in uniform” just to “check on things”. It was cringeworthy at a sphincter-clenching level.
To: abuse@yourdomain.com
Subject: Bug bounty, PII data made available port 22. Please provide bug bounty for critical software flaw.
Issue description
This is critical, exploitation of the ftp server provides source code to a popular debian server allowing attacker to sidestep usual reverse engineering procedures required to attack a system. (Authentication Bypass).
I will release this bug in thirty (30) days if no bug bounty has been granted and attackers will be able to take full advantage of this problem.
Reproducibility
This issue is trivial to reproduce, with popular hacking tools such as ftp and internet explorer.
Bounty value
Please be mindful and understand that this research takes up many hours and bugs like this can fetch up to $25,000 on popular bug bounty programs ( https://www.hackerone.com/ ).
An attacker COULD if the stars align right EXPLOIT ...
I'm too tired of the current scareware industry to write more.
The sad part is real security issues can get lost in the noise...
Nowadays I tend to more rely on tech news to hear when there's an actual serious vuln I need to address.
(Note I'm not advocating everyone do this. Do your own risk assessment).
The only CVEs it had in 2 years applied only if you allowed random users to sign up.
There is a firewall plugin, and basically the only thing it does is check whether you have outdated plugins and log all the times a bot tried to log in by POSTing user:admin password:admin to /wp-login.php. It's rare, but a few of them tried my domain name as the username instead. It sends me e-mails about newly found vulnerabilities, and it's always some plugin. Sure, some of them are "installed" on thousands or millions of websites, but it's never anything in the WordPress core itself.
If you hide /wp-login.php and avoid dependencies, it's practically impenetrable, since it has to be the most battle-tested CMS out in the wild, and yet people swear it's a Swiss cheese of security holes.
Wrong place, did not read. Here go the "security researchers" begging/threatening for money.
~ $ whois -h whois.abuse.net ftp.bit.nl
abuse@bit.nl (for bit.nl)
I did not know of this service until now, so any correct result it has for any of my domains is a matter of coincidence.
In addition anyone who has listed their domains there probably knows what they're doing, and won't demand a CAPTCHA, an essay or an account to report abuse.
A common challenge is assessing whether [security firm] did actually do their job, or whether there just weren't any tigers around here in the first place. Hence, SOC2.
So not only is it often difficult to measure the actual impact of a security mitigation, it is often possible (or even easy) to measure the friction caused by a security mitigation. You really need everybody to believe in the necessity of a mitigation or else it becomes incredibly easy to cut.
Hi Team,
We are following up regarding the critical vulnerabilities we had previously reported — we are still awaiting your acknowledgment and decision on appropriate compensation.
Clear communication is vital in responsible vulnerability disclosure programs, and it’s important for both sides to remain engaged to ensure vulnerabilities are properly handled and rewarded fairly. We have also discovered additional high-risk issues that could impact your user security and overall platform integrity.
However, we are waiting for closure on the earlier reports before moving forward with new disclosures. Please let us know the status update at your earliest convenience so we can proceed accordingly. Thank you for your attention to this matter.
There is NO SENSITIVE INFORMATION on this server.
So if hypothetically I would find a .csv file with emails, names, dates of births and addresses on this website, I should not send an email because it can't possibly be a data leak.
Want to be a bounty beggar? It's dead simple, you just use tools like Qualys' SSL Labs, dmarcian or Scott Helme's Security Headers, among others. Easy point and shoot magic and you don't need to have any idea whatsoever what you're doing!
I'm at work and a little afraid of clicking on the 'pr0n' folder :)
Critical vulnerability on port 80: an attacker could exfiltrate all comments posted therein. Please provide a bug bounty for this critical vulnerability.