Ask an Ethicist: Grey-hat Hacking

Question: Is it ethical (or even legal) for a company to prosecute an individual for discovering a vulnerability when that individual purposely broke in, grey-hat style, but caused no harm?

First, we cannot comment on legal issues. They vary by jurisdiction, but even more fundamentally, we don’t have any legal experience. We are merely ethicists. Alas.

Beyond that, Oy! This is a complicated question. For starters, it very much depends on how the vulnerability was discovered. Based on the framing here, there was an intention to access a computing system beyond what was authorized, which is not ethical regardless of the (apparent) lack of harm (contravening Principle 2.8 of the Code). It should be emphasized, though, that one should not think of ethical/not ethical as binary. There are many, many shades of not ethical. As an analogy, jaywalking and murder are both illegal, but one is a whole lot worse than the other. So the intentionally unauthorized access is not ethical, but the severity of the violation depends on many factors.

One reason this action is not ethical is that it is impossible for the individual to know, in advance, that no harm will result. Case in point: the Morris worm. In 1988, Robert Tappan Morris wrote code that exploited vulnerabilities in Unix utilities (finger and sendmail) to allow his code to replicate. His intention was to use this unauthorized access to spread across the world and count the number of machines on the Internet. As a result of a bug in the worm’s logic, systems crashed worldwide, causing millions of dollars in losses. It would be a mistake to suggest that releasing the worm was morally acceptable right up until the damage became apparent.

Returning to the original case, arguing the intentional break-in was okay because there happened to be no harm is just a utilitarian appeal to moral luck; it is a very flawed premise for ethical reasoning. The unauthorized access was not ethical because of the intention to bypass others’ security mechanisms (which they have a right to employ) and because it posed the possibility of causing harm (contravening Principle 1.2).

Furthermore, the phrasing of the question suggests that more reflection on what counts as “harm” is needed. Security folks classify problems as breaches of confidentiality (unauthorized reads), integrity (unauthorized changes or deletions), or availability (denial of service). Under this model, known as the “CIA” triad, there was harm: unauthorized information leakage, which is a breach of confidentiality.

Granting, then, that the individual committed an ethical violation by breaching another’s system, does the company have an ethical argument should it choose to retaliate, possibly by pressing charges? In the past, some companies or organizations have reacted with draconian fury at the slightest breach. Others have taken a more measured response, commensurate with the intruder’s intention; in fact, many companies run bug bounty programs for this very reason and welcome such disclosures. The reaction one is likely to receive thus varies quite a bit, depending on the particular organization’s views on the subject.

In general, there is little ethical support for poking around random systems in an attempt to discover vulnerabilities. It is difficult to make the case, as Principle 2.8 requires, that there is a compelling public need for such attempts at discovery. Students wishing to move into this area of information security should seek official mentors and academic advisors who can oversee and guide their work in a way that is consistent with Principles 2.1, 2.2, 2.3, 2.4, 2.5, and 2.6. One should always seek permission to attempt to breach a system’s security BEFORE the attempt (Principles 1.3 and 2.2). If the company or organization says no, respect that decision and move on. If you do not seek permission ahead of time, there is no way for the organization to determine whether your actions are malicious or benign, particularly if harm does occur.