Right now, cybercriminals and foreign nations are probing state systems for vulnerabilities. Malicious actors exploit errors in software code and other design flaws—some widely known, some unknown—to steal data and disrupt services. Defenders guard against attackers by trying to avoid, find, and fix these vulnerabilities before adversaries can use them to launch an attack.
These adversaries have competition. A global community of engineers, professors, entrepreneurs, high school students, and others regularly uncovers security problems. Some do so accidentally, stumbling on vulnerabilities while tinkering with software for fun. Others publish research to gain recognition and respect. Many cybersecurity professionals are paid to test computer systems for flaws. Still others are altruists, hunting down vulnerabilities so they can help potential victims close a security gap before it is too late. Yet while many of these “white hat” hackers disclose the vulnerabilities they find, many others do not.
Despite good intentions, white hats can face legal risks. Unclear language in computer crime laws may inadvertently apply to good faith research conducted by white hats. This legal ambiguity can dissuade white hats from reporting vulnerabilities to governments and businesses.
Legal risks aside, disclosing a vulnerability is always a delicate process. A white hat who finds a software vulnerability faces a practical dilemma: Who should they tell, and when? Total secrecy means the problem cannot be fixed, leaving defenders (including government) exposed. Publicizing a vulnerability widely would let cybercriminals exploit it before stakeholders can repair the problem.