Bug bounty programs have proliferated in recent years, popularized by open platforms like BugCrowd and HackerOne and by big companies like Microsoft. The principle is fairly simple - hackers join an organized framework within which to look for security flaws in the software of participating companies. When a hacker finds a serious vulnerability, they are encouraged (with a $$$ bounty) to disclose it to the affected company. The company, in turn, is given a reasonable amount of time to fix the vulnerability before it is publicized. Hackers get cash for legally identifying software vulnerabilities, and companies fix serious problems at a reasonable cost.
That is the theory. Unfortunately, the practice may be very different. In the improbable words of Yogi Berra:
In theory, there is no difference between theory and practice. In practice, there is.
Incentives
As with most things human, a proper understanding of a model's failures can be gleaned from studying its associated incentives.
Hackers
In the bug bounty structure, hackers have, in principle, two main motivations:
Warm fuzzy feelings - Hackers discover vulnerabilities in important software and are given some assurance that those vulnerabilities will either be fixed or else made public so that other users can protect themselves.
Dinero - Serious, verifiable vulnerabilities earn cash rewards that give hackers bragging rights and some pizza money.
The darker side of these motivations is access. Bug bounties give hackers a free pass to probe company defenses in an overt and conspicuous manner without significant risk of detection or prosecution. It is something like leaving a 16-year-old with a fresh driver's license alone overnight in a Ferrari dealership.
For the less-than-scrupulous hackers out there, the bugs discovered as part of a bug bounty program may well be far more valuable when sold illegally on the black market.
Companies
Companies that participate in bug bounties also benefit through:
Identifying vulnerabilities - Doing a comprehensive security analysis of code is time-consuming and difficult (i.e., costly). The more review that code gets, the more likely it is that dangerous vulnerabilities will be discovered.
Responsible patching - Companies are given time to patch serious vulnerabilities before they are publicly disclosed. This protects company data and computer systems from attack and leakage.
The problematic aspects are also clear:
Judge, jury and executioner - Participating companies (i) define what elements of their software are "in-scope" for bug bounties, (ii) judge submitted vulnerabilities for severity and impact, and (iii) decide whether or not to award bounties. There is an obvious conflict of interest, as recognized vulnerabilities impose liability, effort, and cost on their respective companies.
Too many vulnerabilities - Even professional, reviewed code written by experienced experts has many vulnerabilities. Fixing all of them can significantly limit a company's ability to innovate (you know, to make money).
Patching is hard - Even the smallest vulnerabilities, if embedded deeply enough in the code, could require significant effort and code-rewriting to patch. As we have seen over and over again with data leakage, it is often much cheaper to apologize for an exploit than it is to patch it up front.
Indeed, bug bounties can be abused to delay (or altogether avoid) dealing with problem code while simultaneously preventing the issue from being publicized.
Practice
My sense of how bug bounties work in practice is entirely anecdotal, based on comments from participating students and colleagues whom I have met over the years, in person and at conferences. Though some companies appear to run their programs virtuously (e.g., larger companies like Google and smaller companies like Mozilla and Brave), a number of companies appear to cut corners (no, I will not name them :-) ). Examples include:
Downplaying the severity of bugs. This includes classifying vulnerabilities so that they do not require public patching, or so that they fall out of scope for the bug bounty program, regardless of the potential damage to users.
Minimizing bounties. Companies like to tout the large sums that they have paid in bug bounties. For example, Microsoft has recently boasted of paying $4.4 million to hackers in the last year, a whopping 0.004% of its $110.4 billion 2018 revenue; this payout is the equivalent of roughly one coffee per month for each of Microsoft's 144,106 employees (the arithmetic is sketched after this list). In practice, companies seem to pay very small bounties unless a vulnerability reaps significant publicity.
Avoidance. Some companies delay addressing, much less patching, disclosed vulnerabilities for long periods of time. In some cases, companies may ignore problems altogether or issue legal threats, banking on the hacker declining to disclose the vulnerability rather than expose users to attack.
Narrowing focus. This involves restricting engagement terms to a small subset of the software code-base, thereby deriving the public relations benefits of participating in a bug bounty without the corresponding responsibility of addressing serious concerns. In some cases, the companies are aware of the unpatched vulnerabilities in parts of their code, and restrict the bug bounty scope precisely around them.
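To make the Microsoft example concrete, here is the back-of-the-envelope arithmetic behind the figures quoted above (using only the numbers already cited):

$4.4 million / $110.4 billion ≈ 0.004% of annual revenue
$4.4 million / 144,106 employees ≈ $30.50 per employee per year ≈ $2.50 per month - about the price of one cup of coffee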
Platforms like HackerOne do formally provide arbitration when vendors and hackers disagree about vulnerabilities. However, by their very nature, these proceedings are likely to remain private, and, indeed, I have not heard of a successful arbitration. As a result, hackers may simply give up on working with a vendor, even when major bugs are evident.
My Recommendation
I strongly believe in responsible disclosure: respectable bug hunters must give companies appropriate time to patch their vulnerabilities (and even help them do so). It is no more moral to take advantage of an unpatched vulnerability than it is to walk through an unlocked door in someone else's house.
At the same time, I do not have confidence in the general efficacy of bug bounty programs to actually fix bugs (rather than obscure them from public scrutiny), and I advise hackers to find other independent ways to responsibly disclose vulnerabilities that they discover (e.g., Carnegie Mellon's CERT Coordination Center).