5 Minute Read

3 Dangers of Learning by Pentest Report Alone

Penetration testing is an important part of the Software Development Lifecycle, but learning from it takes more than reading the report.

Penetration tests are a critical part of any organization’s security program. There’s no doubt that you should be evaluating the state of your organization’s security on a regular basis. After all, if you (or a contracted third party) can find security flaws in your infrastructure, others can too. It’s important both to evaluate yourself (“Hack thyself”) and to have third parties regularly do it as well. Penetration testing is in reality the offensive counterpart of the Software Development Lifecycle (SDLC) immune system, discussed here. Who performs the test will largely determine the results: a junior person with little exposure to the types of systems and technology involved will produce wildly different results from somebody with lots of experience in similar systems. Sometimes for the better, sometimes for the worse.


These assessments are important, as are other avenues for discovering where your weaknesses are: vulnerability disclosure processes, bug bounty programs, red team exercises, you name it. A “clean” report will tell you where you have done a good job and, if the reporting is done well, where the blind spots are. Similarly, a “bad” report will tell you where you have not done a good job and will hopefully also point out blind spots.

These reports are generally expected to provide value well beyond the specific actions that have to be taken as a result of them. With concrete data on what types of issues occur within your organization, you can educate the organization and ensure similar mistakes don’t happen again. You train your internal T-cells so that the same pathogen won’t infect you twice.

However, there’s a catch: how the results of a penetration test are handled creates a lot of perverse incentives. When penetration testers write the report, they generally include as much information as possible so that members of the organization can understand how something happened and how to prevent it. But that opens up a can of worms, on top of the problems inherent in how the report itself is handled.

Problem 1 - The cobra effect


(Photo by David Clode on Unsplash)

First of all, because of the aggressive deadlines many organizations enforce in the hope of being “agile”, it’s far too common to see project schedules slip. By the time you’re supposed to release something, you’ve barely got a working version. Because functionality is what matters to most people, things such as security get pushed to the end, leaving little time for testing, or leaving testing to happen while the code is still changing by the hour.

This creates a situation where developers realize that somebody (usually a person with “security” in their title) gets added to the project at the end to find any issues that may exist. That’s great, right? Now the developer can just focus on making the system work and somebody else takes care of the security part!

This creates a culture of “not my problem”. Project managers will transfer the pressure from their leadership onto the security organization and make it the scapegoat. And they will argue that it doesn’t matter when the issues are found, as long as it happens within the release cycle: any bug found before release is not a bug, because it never hit production, so there was no risk.

This perpetuates a culture of not doing the right thing. The team does not think of security as a first-class citizen and a part of their job description. This results in learned helplessness, where the pressure of delivering causes the buck to simply be passed down the line. This breeds insecurity. It’s a textbook case of a moral hazard.

Problem 2 - The report drawer

When a report lands in the inbox of the manager who ordered the report, it’s disturbingly common to see an email like this:
“Everyone, please go over this, and keep it to yourself as much as possible. I will communicate this upstairs, to ensure there’s no fallout, and we can control messaging. Please work with the relevant programmers to fix the issues and ensure word doesn’t spread too much so that nobody starts pointing fingers. We can go over the report with the full team in the next sprint.”

All seemingly well-intentioned, right? A focus on dealing with the immediate risks, and then dealing with the larger organization afterwards. And yes, most of the time the issue gets (mostly) fixed. But afterwards, the report lands in the drawer, never to be spoken of again. Because nobody wants to embarrass their colleagues or bruise any egos, right?

This logic seems somewhat “right” at first. But the idea that there’s even a chance that finger-pointing could occur indicates a broken culture, one where security mistakes are seen as the failure of an individual or of the team. In reality, security mistakes are neither: they are a failure of the organization. The second the report lands in the drawer, or is kept under wraps, the opportunity for the organization to learn is killed.

Problem 3 - You can’t learn by skimming

Let us assume for a second that we have an organization that does not fall into the traps outlined above. They distribute the report to everybody in the development organization, nobody points fingers, and instead they thank the developer for having made the mistakes, because now everybody can learn from them.

They go over the report and think, “Oh, that’s interesting! I would probably have made the same mistake.” Then they go on with their day. Did they learn anything? Statistically speaking, they didn’t.

(Graphic: learn, practice, repeat)

The reality is that very few will remember the issue, let alone recognize it three months down the line if they run into a similar mistake, unless they were already aware of it before the report.

What to do instead? 

We’ve identified three pitfalls which commonly occur, but what do we take away from all of this? Let's summarize:

  1. Create a culture and way of working, where security is a first-class citizen.

  2. Create a culture in which mistakes are embraced and people are not only comfortable with mistakes, but want to share their learning with the organization.

  3. Take the mistakes that occur and give everybody in the organization the ability to learn, in a hands-on way, how they occurred and how to prevent them.

It doesn’t work to simply take the findings and do a one-off presentation about them, even if you include some hands-on material. The material has to be repeated over time, and everybody who joins the organization afterwards has to be exposed to this learning as well. Otherwise, the knowledge simply dies over time and past mistakes will soon become future mistakes as well. Mistakes are unavoidable and we can learn from them, but repeat mistakes are unacceptable.

Charlie Eriksen