The Application Security Spending Conundrum

Recently I needed to purchase automobile insurance. To obtain a quote, the online insurer asked my age, where I lived, how much I drive and where, the year, make, and model of my cars, about my driving record, and how much coverage I wanted. Behind the scenes, they likely took these data points, applied them to some vehicle claim actuarial data, and presented me with a rate based upon MY effective overall risk score. The process made sense, the price was fair, and I ended up buying.

This got me thinking. What if instead the insurer had said, “We’ll give you the same coverage as everyone else who applied, add some protection for a new, obscure, scary-sounding road hazard, and bill you 15% over last year.” Without taking anything at all about ME into account, there would be no real risk management involved in their decision-making. As a consumer, I would reject this offer. Clearly this makes zero sense. Yet ridiculous as this scenario sounds, isn’t it fairly similar to the process of creating information security budgets?

Gunnar Peterson explains it best, “Security budgets are often based on a combination of last year’s spending, this year’s threat(s) du jour, and “best” practices, i.e. what everyone else is doing. None of these help to address the main goal of information security which is to protect the assets of the business. The normal security budgeting process results in overspending (as a percentage) on network security, because that’s how the budget grew organically starting from the 90s.”

I agree, and I think this is precisely why we see so many organizations spending a larger percentage of their budgets protecting their networks and infrastructure, as opposed to their applications, where the largest chunk of IT dollars is invested. In Gunnar’s words, “…they are spending $10 to protect something worth $5, and in other cases they are spending a nickel to protect something worth $1,000. If you look at the numbers objectively, you see why it is out of control…” Worse still, this budget misallocation persists despite real-world data revealing where the real threats are (at the application layer, per Verizon’s DBIR) and in stark contrast to infosec pros’ own stated priorities.

A survey conducted by FishNet Security of IT pros and C-level executives from 450 Fortune 1000 companies found that: “45% say firewalls are their priority security purchase, followed by antivirus (39%), authentication (31%), and anti-malware tools (31%).” The report goes on to say, “Nearly 70% [of those surveyed] say mobile computing is the biggest threat to security today, closely followed by social networks (68%), and cloud computing platforms (35%). Around 65% rank mobile computing the top threat in the next two years, and 62% say cloud computing will be the biggest threat, bumping social networks.” This is pretty funny, because Mobile, Social Networking, and Cloud attacks specifically bypass those firewall investments.

To resolve this spending conundrum, and begin closing the application security gap, I see two options:

1) Information security professionals must align their investments with business priorities, which is what Gunnar wisely advocates. He says, “the biggest line item in [non-security] spending should match the biggest line item in security.” In almost every enterprise, this would mean redirecting network security dollars to application security. Even if this approach makes perfect sense, there is no question budget re-allocation would meet fierce opposition. Nothing less than a paradigm shift in thinking, culture and regulatory design would allow this to come to pass. Unfortunately, I think it is nearly impossible for the masses.
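Gunnar’s alignment principle can be sketched in a few lines. The dollar figures below are hypothetical, chosen only to illustrate the point that the security allocation should track where the IT dollars actually go:

```python
# Toy sketch of the alignment principle: security spend per layer made
# proportional to the IT investment in that layer. All figures are
# hypothetical, for illustration only.

def align_security_budget(it_spend_by_layer, security_budget):
    """Split a security budget across layers in proportion to IT spend."""
    total_it = sum(it_spend_by_layer.values())
    return {layer: security_budget * spend / total_it
            for layer, spend in it_spend_by_layer.items()}

# Hypothetical enterprise: most IT dollars go to applications, yet the
# traditional budget puts most security dollars into the network.
it_spend = {"applications": 700, "network": 200, "endpoints": 100}  # $K
aligned = align_security_budget(it_spend, security_budget=100)      # $K
print(aligned)  # applications now carry the biggest security line item
```

Under this (admittedly simplistic) model, the biggest line item in security automatically matches the biggest line item in IT spending, which is exactly the outcome Gunnar advocates.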

2) Information security professionals would need to convince management to approve new, additional budget dollars specifically for application security, without reducing other budgets. Ideally, these application security investments could be justified by a direct or indirect link to increased revenue or reduced costs. Ask yourself: how might application security investments contribute to new customer acquisition? Can the business increase its differentiation? Obviously this won’t solve the spending inefficiency conundrum, but we might be able to gain ground and close the gap using this approach. To do so we need more case studies and benchmarks demonstrating how other organizations are investing.

Fortunately, from an industry perspective, these choices are NOT mutually exclusive. Each organization will of course have to find its own path. In a future post I’ll list out ways I’ve seen organizations justify application security budgets. In the meantime, if you have ways that you’ve found successful, comment below!



Final Fifteen – Web Hacking Techniques

Open community voting completed last week. From the ~67 Web hacking techniques, we’ve gotten down to the final fifteen (see below). Congratulations to all the researchers whose work made it. Also, thank you very much to all those who took the time to complete the survey. There were a total of 74 respondents, 63% of which were “Breakers” and the other 37% “Builders.” Good representation.

Now it’s time for the final phase where our panel of security experts vote on the list (same position point system) to determine the Top Ten Web Hacking Techniques of 2010. All those on the panel have substantial industry technical experience, domain knowledge in application security, and do not have entries on the list.

This year we’re very pleased to have:
Ed Skoudis (InGuardians Founder & Senior Security Consultant)
Giorgio Maone (Author of NoScript)
Caleb Sima (CEO, Armorize)
Chris Wysopal (Veracode Co-Founder & CTO)
Jeff Williams (OWASP Chairman & CEO, Aspect Security)
Charlie Miller (Consultant, Independent Security Evaluators)
Dan Kaminsky (Director of Pen-Testing, IOActive)
Steven Christey (Mitre)
Arian Evans (VP of Operations, WhiteHat Security)

Final Fifteen


WhiteHat Security is a leading provider of website security services.



Open letter to OWASP

The OWASP Summit 2011 in Portugal is coming up soon! This is an opportunity for the community’s leaders and influencers to discuss the future of the organization and that of the application security industry. The working sessions are creative, diverse, and forward-thinking, designed to direct standards, establish roadmaps, and improve organizational governance. Unfortunately I have a conflict in my schedule and am unable to attend, but I am excited to be presenting at IT-Defense in Germany. Fortunately for me, Jeff Williams (OWASP Chairman) put out a call for feedback on the Summit. Since I can’t be physically present, I’ve taken this as an opportunity to share my thoughts for organizers and attendees to consider.

Before getting to the list, I’d like to remind everyone that I was personally present many years ago at the beginnings of OWASP. Since then I’ve contributed to many different projects where I prefer to spend my time. I’ve visited over a dozen local OWASP chapters and presented at several international OWASP conferences, where I met new people and shared ideas. I’ve written blog posts and articles directing people to OWASP materials. Through sponsorship dollars from WhiteHat Security, we’ve financially supported the good work the organization does. So with this in mind, please take the following as purely constructive, with a desire for OWASP and the industry at large to flourish.

1) Hold a Board of Directors Vote
To my knowledge, and I’m open to correction, OWASP has never had an official Board of Directors vote. At least not one where membership could participate. Is this covered in the by-laws? It should be. Update: Indeed I have been corrected. See Dan Cornell’s comment below, which nicely details a 2009 membership vote that resulted in the addition of two new BoD seats. Embarrassing that I missed this. I’m told (via Twitter) that after the summit a plan will be laid out in which half the current seats will go up for a vote. Progress!

OWASP is a community of volunteers, and like any community it should be managed openly and democratically. I love the fact that the budget itself has been made transparent. Holding a BoD vote would increase confidence in the organization and establish personal ownership and accountability in OWASP’s future. A future where someone’s individual contribution, commitment, and merit may be rewarded with a position of greater influence and responsibility.

I do not make this recommendation lightly, as I know most of the current board members personally. I respect them; they have given so much of themselves over the years and deserve our appreciation. They’ve done a remarkable job, and this in no way should be considered an indictment. I’m saying that for OWASP to continue to thrive, room must be made at the topmost levels for new participants with fresh ideas.

2) It is time for an OWASP Chief Executive Officer
OWASP would be well-served by the creation of a President / CEO position, just like Mozilla and other highly successful non-profits. A full-time person responsible for the day-to-day operational affairs and for growing the organization. A go-to person for global committee members, project leaders, members, sponsors, press, etc. who has the authority to make decisions and get stuff done expeditiously. OWASP generates enough revenue, with sufficient growth, and has enough going on to easily justify such a position. No doubt others besides myself have experienced the internal confusion and disorganization that stifles and frustrates those seeking to contribute. The right person could help clean all that up and make things much more efficient and productive.

Second, this person must also serve as an industry cheerleader. It is vital that someone representing OWASP is constantly out there raising awareness and sharing why it’s a good idea for every developer, security professional, and software-generating organization to be involved. Someone who can meet personally with CEOs, CIOs, CTOs, and CSOs of organizations large and small to gain their support. Obviously this can’t happen on a part-time basis with people employed by for-profit “vendors.”

3) Less preaching to the choir, engage more with the outsiders
Everyone in the community recognizes the echo chamber issue. The vast majority of those we need to reach do not voluntarily come to us, the application security industry. So of course they have no way of knowing why the work we do is important, how it affects the safety and privacy of their lives, or the viability of online business. Without addressing this issue, the summit runs the risk of perpetuating the problem. I’ve been as guilty as anyone. Therefore, over the last several years, instead of continuing to expect people to come to us, I’ve been transitioning to going where they are, and with much success! OWASP should do the same to spread the word and take itself to the next level.

For example, OWASP representatives could attend, sponsor, and present at every possible non-security conference such as JavaOne, F8, Google I/O, any O’Reilly event, Star East/West, and so on, where thousands of developers gather. In my experience at these events, when in their own element, developers are eager to learn about the state of the art in application security, especially when it is presented in a way they can derive value from immediately when they get back to work. These attendees also represent the segment of developers who really care about their software. OWASP should proactively reach out to conference organizers with a menu of official, up-to-date topics and facilitate the CFP process on behalf of qualified representatives. Or, better still, offer to establish and manage an entire security track! Done right, with a call to action, this alone would drive much-needed membership.

4) Investment justification
Mountains of documentation on what organizations “should be doing” are already available. What information security professionals are desperate for are resources on how to justify to the business why an investment in application security is crucial. Effective application security programs aren’t easy or cheap to build. They require real organizational change and budget dollars involving people, process, technology, and services. The justification cannot be “it’s the right thing to do,” “PCI-DSS said so,” or “the APTs will get us!” That’s unconvincing and mind-numbingly old. OWASP can help everyone do better.

One way is by capturing success stories from the OWASP corporate and individual membership. Real people and real companies who are named, documented, and publicly highlighted. Ask them to share how much OWASP materials helped them. What exactly they did and how it positively impacted the organization. Ask them to quantify some metrics on how much they are investing and how they are budgeting, all of which creates a watermark for others. These stories are key proof points their peers can use to follow the paths paved by early adopters.

5) Directly get involved with the PCI-DSS
PCI-DSS, whatever you think of it, does drive people to OWASP, but often under negative circumstances. Adoption of the OWASP Top Ten is not something e-commerce merchants necessarily want to do; they are forced to, and no one likes to be forced to do “security.” As has been said privately to me, “What is OWASP except a bunch of crap I have to deal with for PCI?” This is the unfortunate net effect on attitudes. Merchants are incentivized to do the least application security they can get away with and NOT apply the Top Ten in the spirit of its intent. Either way, this makes OWASP look bad because the outcomes are indeed bad. Of course PCI-DSS’s usage of the Top Ten in this manner was not something OWASP ever asked for, but here we are just the same.

Perhaps I’m not the first to say it, but this misuse has gone on long enough. If the PCI Council insists on using OWASP materials as an application security standard, which could be mutually beneficial, a good one must be made available. Something clear, concise, and specifically designed for the risk tolerance of their credit card merchants. I believe this is what the OWASP PCI Project was meant to accomplish, but its status appears inactive. Fortunately there’s time to rekindle the effort, as my understanding is the next revision to PCI-DSS is at least a year or two off. Done right, this could have a profound impact on a large segment of the Internet who currently get hacked all the time — compliant or otherwise.

There you have it, my thoughts. I have more ideas, but I think that’s enough to chew on for now. 🙂



Vote Now! Top Ten Web Hacking Techniques of 2010

Update: Open voting is now closed. Thank you to all who participated!

The selection process for Top Ten Web Hacking Techniques of 2010 is a little different this time around. Last year the winners were selected by a panel of distinguished security experts. This year we’d like you, the Web security community, to have an opportunity to vote for your favorite research!

Here’s how it’ll work:

Phase 1: Open community voting
From the field of 67 total entries received, each voter (open to everyone) ranks their fifteen favorite Web Hacking Techniques using a survey. Each entry (listed alphabetically) gets a certain number of points depending on how highly it is ranked on each ballot. For example, an entry in position #1 is given 15 points, position #2 gets 14 points, position #3 gets 13 points, and so on down to 1 point. At the end, all points from all ballots will be tabulated to ascertain the top fifteen overall. And NO selecting the same attack multiple times! 🙂 (such ballots will be deleted)
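For the curious, the positional point system described above can be sketched in a few lines of Python. The entry names in the sample ballots are placeholders, not actual survey data:

```python
# Positional (Borda-style) tally: on a 15-slot ballot, position #1 earns
# 15 points, #2 earns 14, down to 1 point for position #15. Ballot
# contents below are made up for illustration.
from collections import Counter

def tally(ballots, slots=15):
    """Sum positional points for each entry across all ballots."""
    scores = Counter()
    for ballot in ballots:
        for position, entry in enumerate(ballot):  # position 0 => 15 pts
            scores[entry] += slots - position
    return scores

ballots = [
    ["Technique A", "Technique B", "Technique C"],
    ["Technique A", "Technique C", "Technique B"],
]
scores = tally(ballots)
print(scores["Technique A"])  # 30  (15 + 15)
print(scores["Technique C"])  # 27  (13 + 14)
```

Ranking the entries by total score then yields the final fifteen.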

Voting will close at the end of the day this Friday, January 7.

The more people who vote, the better the results! Vote Now!

Phase 2: Panel of Security Experts

From the results of the open community voting, the top fifteen Web Hacking Techniques will be voted upon by a panel of security experts (to be announced soon). Using the exact same voting process as phase 1, the judges will rank the final fifteen based on novelty, impact, and overall pervasiveness. Once tabulation is complete, we’ll have the Top Ten Web Hacking Techniques of 2010!

Voting will close at the end of the day on Friday, January 14.

Winners will be announced January 17!

Good luck everyone.



Which mountain would you rather climb?

Some Web application vulnerability scanners, dynamic and static analysis alike, are designed for comprehensiveness over accuracy. For others, the exact opposite is true. The tradeoff is that as the number of “checks” a scanner attempts increases, the number of findings, false-positives, scan times, site impact, and required man-hour investment grows exponentially. To allow users to choose their preferred spot between those two poles, comprehensiveness and accuracy, most scanners offer a configuration dial typically referred to as a “policy.” Policies essentially ask, “What do you want to check for?” Whichever direction the comprehensiveness dial is turned will have a profound effect on the workload required to analyze the results. Only this subject isn’t discussed much.

Before going further we need to define a few terms. A “finding” is something reported that’s of particular interest. It may be a vulnerability, the lack of a “best-practice” control, or perhaps just something weird warranting further investigation. Within those findings are sure to be “false-positives” (FP) and “duplicates” (DUP). A false-positive is a vulnerability that’s reported but really isn’t one, for any variety of potential reasons. Duplicates are when the same real vulnerability is reported multiple times. “False-negatives” (FN), which reside outside the findings pool, are real vulnerabilities, carrying true organizational risk, that for whatever reason the scanner failed to identify.

Let’s say the website owner wants a “comprehensive” scan: one that will attempt to identify just about everything modern-day automation is capable of checking for. In this use-case it is not uncommon for scanners to generate literally thousands, often tens or hundreds of thousands, of findings that need to be validated to isolate the ~10% that’s real (yes, a 90% FP/DUP rate). For some, spending many hours vetting is acceptable. For others, not so much. That’s why the larger product vendors all have substantial consulting divisions to handle deployment and integration post-purchase. Website owners can also opt for a more accurate (point-and-shoot) style of scan, where comprehensiveness may be cut down by say half, but thousands of findings become dozens or hundreds of highly accurate ones, thereby decreasing the validation workload to something manageable.
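The workload trade-off can be illustrated with some back-of-the-envelope arithmetic. The per-finding vetting time and the finding counts below are invented for illustration, not measured:

```python
# Rough model of the validation workload trade-off between a
# "comprehensive" scan policy and an "accurate" one. All numbers
# (finding counts, real rates, minutes per finding) are hypothetical.

def validation_workload(findings, real_rate, minutes_per_finding=5):
    """Return (hours to vet all findings, real vulns recovered)."""
    hours = findings * minutes_per_finding / 60
    return round(hours), int(findings * real_rate)

# Comprehensive policy: tens of thousands of findings, ~10% real.
print(validation_workload(50_000, real_rate=0.10))  # (4167, 5000)
# Accurate policy: far fewer findings, most of them real.
print(validation_workload(300, real_rate=0.90))     # (25, 270)
```

Even in this crude sketch, the comprehensive policy surfaces more real vulnerabilities but demands thousands of vetting hours, which is exactly the trade-off the policy dial exposes.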

At this point it is important to note, as illustrated in the diagram, even today’s top-of-the-line Web application vulnerability scanners can only reliably test for roughly half of the known Web application classes of attack. These are the technical vulnerability (aka syntax related) classes including SQL Injection, Cross-Site Scripting, Content-Spoofing, and so on. This holds true even when the scanner is well-configured (logged-in and forms filled out). Covering the other half, the business logic flaws (aka semantic related) such as Insufficient Authentication, Insufficient Authorization, Cross-Site Request Forgery, etc. require some level of human analysis.

With respect to scanner output, an organization’s tolerance for false-negatives and false-positives, and its available personnel resources, should dictate the type of product or scan configuration selected. The choice becomes a delicate balancing act. Dial scanner comprehensiveness up too high and you get buried in a tsunami of findings. What good is comprehensiveness if you can’t find the things that are truly important? On the other hand, dial the noise down too far and the number of vulnerabilities identified (and hopefully fixed) drops to the point of marginal risk reduction, because the bad guys could easily find one that was missed. The answer is somewhere in the middle, and it is one of risk management.

About 20 km west of Mount Everest (29,029 ft. ASL) is a peak called Cho Oyu (26,906 ft. ASL), the 6th highest mountain in the world. The difference between the two is only about 2,000 ft. For some mountain climbers, the physical difficulty, risk of incident, and monetary expense of the last 2,000 ft necessary to summit Everest is just not worth it. For others, it makes all the difference in the world. So, just like scanner selection, an individual decision must be made. Of course the vendor in me says just use WhiteHat Sentinel and we’ll give you a lift to the top of whichever mountain you’d like. 🙂

Vendors take note: Historically, whenever I’ve discussed scanners and scanner performance, the comments have typically been superficial marketing BS with no willingness to supply evidence to back up the claims. As always I encourage open discourse, but respectfully, if you make claims about your product’s performance, and I sincerely hope you do, please be ready to do so with data. Without data, as Jack Daniel has concisely stated, we’ll assume you are bluffing, guessing, or lying.



Bug Bounty Programs come to Website Security: What do they mean?

Recently I tweeted a passing thought, “I wonder if the final stage of maturity for website vulnerability management is offering a bug bounty program.” This was stimulated by the news that Mozilla became the second company, following Google, to provide monetary rewards for security researchers who find and privately report website vulnerabilities. Only last year this idea would have been considered crazy. Sure, other organizations including Microsoft, Facebook, and PayPal already gladly accept third-party vulnerability disclosures without threatening legal action, but it’s the financial compensation part that sets Google and Mozilla apart.

I’m sure others in the community are, like myself, asking whether website vulnerability bug bounty programs are a good idea to begin with, and whether such programs are an anomaly or the start of a 2011 trend.

If we posed the first question to bug-hunting masters Charlie Miller, Alex Sotirov, and Dino Dai Zovi, there is no question how they’d answer: “No More Free Bugs.” Not all researchers must subscribe to this philosophy, it’s a personal choice, but there certainly shouldn’t be a stigma attached to those who do. The thing is, the bugs these gentlemen generally focus on reside in desktop-based software developed by large ISVs. Software that can be readily tested in the safe confines of one’s own computer, where permission is not strictly required. Website vulnerabilities are, in a word, different.

Website vulnerabilities reside in the midst of a live online business, on someone else’s network, where penetration-testing without permission is illegal and may cause degraded performance and downtime. Not that legalities ever really got in the way of a free pen-test; see the thousands of public cross-site scripting disclosures on XSSed.com. Still, while bug bounty programs can indeed be a good idea for a certain class of website owner, I think everyone would recommend thoughtful consideration before opening up the hack-me-for-cash floodgates.

What’s most interesting to me is understanding why Google and Mozilla believe they need a bug bounty program in the first place. It’s not as if they don’t invest in application security or would depend on such an initiative. In fact, from my personal interactions, their level of application security awareness is top notch and their practices are among the most mature on the Web. They invest in source code reviews, security QA testing, penetration tests and scans conducted by insiders and third parties, developer training, standardized development constructs, threat modeling, and a collection of other Software Security Assurance (SSA) related activities. Activities most organizations are still coming up to speed on.

So Google and Mozilla have already done essentially everything our industry “recommends.” Yet, as the multitude of negative headlines and unpaid vulnerability disclosures historically show, issues are still found by outsiders with annoying regularity. Personally, I think that’s where the motivation for a bug bounty program comes from.

Google and Mozilla probably view their bounty programs as a way to remove additional missed bugs from the vulnerability pool, remediate them in a manageable way, and foster community goodwill, all for the low, low price of a few hundred to a few thousand bucks per bug. Check it out: in the first two months of Google’s program, it looks like they’ve paid out a few tens of thousands of dollars to three dozen or so researchers. Said another way, the PR benefit is perhaps three dozen user-confidence-shaking news stories that DIDN’T get published. At that price, suddenly the idea of paying “the hackers” doesn’t sound so crazy.

It should be made crystal-clear that bug bounty programs are in no way a replacement for any part of an SSA or an SDL program, rather they are complementary and an opportunity to facilitate improvement. Also, bug bounty programs are not for everybody, and probably not even for most. Only those organizations that truly have their application security act together should even consider offering such a program.

For example, the organization should already have reasonably bug-free websites, or they won’t be offering attractively priced bounties for long. Budgets would run out fast and they’d be forced to suspend the program, which would be quite embarrassing. The organization must also have a strong process in place to receive, validate, respond to, act upon, and pay out for submissions. Next, as Mike Bailey, a self-proclaimed Narcissistic Vulnerability Pimp, elegantly stated, a “bounty program also involves an implicit commitment to fix bugs quickly.” That’s right: no sitting on bugs for a “reasonable” amount of time, like months to a year or more. Finally, the organization will require a super-stable infrastructure capable of enduring sustained attack by hundreds or perhaps thousands of entities.

In my humble opinion, if an organization has all of this in place, then I’m confident in saying there is a correlation between bug bounty programs and website vulnerability management / SSA maturity. (Thanks to Gunnar Peterson for the graphic.)

Jeff Moss, the man behind Black Hat and Defcon, recently encouraged Microsoft, a firm long opposed to paying for bugs, to offer a bounty program: “I think it is time Microsoft seriously consider a bug bounty program. They advanced the SDL, it is time for them to advance bounties.” I’ve suggested the very same to Microsoft in person on more than one occasion, and Veracode lists it as a 2011 infosec prediction. Everyone I know of has received a response similar to the following:

“We do not believe that offering compensation for vulnerability information is the best way we can help protect our customers.” – Dave Forstrom, group manager of Microsoft Trustworthy Computing.

And there you have it. Is the website vulnerability bounty program phenomenon the start of a trend? Who can really say? Only time will tell.



Sandboxing: Welcome to the Dawn of the Two-Exploit Era

Exploitation of just ONE software vulnerability is typically all that separates the bad guys from compromising an entire machine. The more complicated the code, the larger the attack surface, and the more popular the product, the greater the likelihood of that outcome. Operating systems, document readers, Web browsers and their plug-ins are on today’s front lines. Visit a single infected Web page, open a malicious PDF or Word document, and bang — game over. Too close for comfort if you ask me. Firewalls, IDS, anti-malware, and other products aren’t much help. Fortunately, after two decades, I think the answer is finally upon us.

First, let’s have a look at the visionary of software security practicality that is Michael Howard as he characterizes the goal of Microsoft’s SDL: “Reduce the number of vulnerabilities and reduce the severity of the bugs you miss.” Therein lies the rub. Perfectly secure code is a fantasy. We all know this, but we also know that what is missed is the problem we deal with most often: unpatched vulnerabilities and zero-days. Even welcome innovations such as Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP) only seem to slow the inevitable, making exploitation somewhat harder but not stopping it entirely. Unless the battlefield itself is changed, no matter what is tried, getting hacked will always come down to just one application vulnerability. ONE. That’s where sandboxes come in.

A sandbox is an isolated zone designed to run applications in a confined execution area where sensitive functions can be tightly controlled, if not outright prohibited. Any installation, modification, or deletion of files and/or system information is restricted. The Unix crowd will be familiar with chroot jails. This is the same basic concept. From a software security standpoint, sandboxes provide a much smaller code base to get right. Better yet, realizing the security benefits of sandboxes requires no decision-making on the user’s behalf. The protections are invisible.
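The brokered-operation idea at the heart of a sandbox can be sketched in a few lines. To be clear, this is in no way a real security boundary (real sandboxes such as Chrome’s enforce restrictions at the OS level, via mechanisms like chroot, seccomp, or Windows job objects); the operation names and policy below are invented purely to illustrate the concept:

```python
# Toy illustration of the sandbox concept: sensitive operations are
# brokered through a policy that permits a small allowlist and denies
# everything else. NOT a real security boundary; for illustration only.

class SandboxViolation(Exception):
    """Raised when a confined component requests a forbidden operation."""

class Sandbox:
    def __init__(self, allowed):
        self.allowed = set(allowed)

    def request(self, operation):
        """Broker a sensitive operation against the policy."""
        if operation not in self.allowed:
            raise SandboxViolation(f"denied: {operation}")
        return f"ok: {operation}"

# A hypothetical renderer process may read fonts and allocate memory,
# but any attempt to touch the filesystem is refused.
renderer = Sandbox(allowed={"read_font", "allocate_memory"})
print(renderer.request("read_font"))      # permitted by policy
try:
    renderer.request("write_file")        # outside the policy
except SandboxViolation as e:
    print(e)                              # denied: write_file
```

The security win is that the broker is a small, auditable code base, which is exactly the “much smaller code base to get right” point above.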

Suppose you are tasked with securing a long-established and widely-used application with millions of lines of insanely complicated code that’s deployed in a hostile environment. You know, like an operating system, document reader, Web browser or a plug-in. Any of these applications contain a complex supply chain of software, cross-pollinated code, and legacy components created long before security was a business requirement or anyone knew of today’s class of attacks. Explicitly or intuitively you know vulnerabilities exist and the development team is doing its best to eliminate them, but time and resources are scarce. In the meantime, the product must ship. What then do you do? Place the application in a sandbox to protect it when and if it comes under attack.

That’s precisely what Google did with Chrome, and recently again with the Flash plugin, and what Adobe did with their PDF Reader. The idea is the attacker would first need to exploit the application itself, bypass whatever anti-exploitation defenses are in place, and then escape the sandbox. That’s at least two bugs to exploit rather than just one, with the second, the sandbox escape, obviously being much harder than the first. In the case of Chrome, you must pop the WebKit HTML renderer or some other core browser component and then escape the encapsulating sandbox. The same with Adobe’s PDF Reader: pop the parser, then escape the sandbox. Again, two bugs, not just one. To reiterate, this is not to say breaking out of a sandbox environment is impossible, as elegantly illustrated by Immunity’s Cloudburst video demo.

I can easily see Microsoft and Mozilla following suit with their respective browsers and other desktop software. It would be very nice to see the sandboxing trend continue throughout 2011. Unfortunately though, sandboxing doesn’t do much to defend against SQL Injection, Cross-Site Scripting, Cross-Site Request Forgery, Clickjacking, and so on. But maybe if we get the desktop exploitation attacks off the table, perhaps then we can start to focus attention on the in-the-browser-walls attacks.


WhiteHat Security is a leading provider of website security services.



Why Speed & Frequency of Software Security Testing Matter, A LOT

The length of time between when a developer writes a vulnerable piece of code and when the issue is reported by a software security testing process is vitally important. The more time in between, the more effort the development group must expend to fix the code. Therefore the speed and frequency of the testing process, whether dynamic scanning, binary analysis, pen-testing, static analysis, or line-by-line source code review, matters a great deal.

WhiteHat Sentinel is frequently deployed in the Software Development Life-cycle, mostly during QA or User Acceptance Testing phases. From that experience we’ve noticed three distinct time intervals (1 week, 1 month, and 1 year) between when code is written and when a vulnerability is identified, each with a markedly different fix effort. Below is what we are seeing.

The following focuses solely on syntax vulnerabilities such as SQL Injection, Cross-Site Scripting, HTTP Response Splitting, and so on. Semantic issues, also known as Business Logic Flaws, behave differently in this environment.
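To make the "syntax vulnerability" category concrete, here is a minimal sketch of the most familiar example, SQL Injection, using an in-memory SQLite database. The table and function names are hypothetical; the point is that the flaw lives in how a single statement is assembled, which is why the fix is usually local to one spot in the code.

```python
import sqlite3

def find_user_vulnerable(conn, name):
    # Syntax vulnerability: string concatenation lets attacker-supplied
    # input rewrite the structure of the query itself.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + name + "'").fetchall()

def find_user_fixed(conn, name):
    # Parameterized form: the input is bound as data and never parsed
    # as SQL, regardless of what characters it contains.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"
print(len(find_user_vulnerable(conn, payload)))  # 2 -- injection returns every row
print(len(find_user_fixed(conn, payload)))       # 0 -- payload treated as a literal name
```

A business logic flaw, by contrast, can be spread across several components and has no one-line fix, which is why the remediation dynamics discussed below apply to the syntax class.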

When vulnerability details are communicated within ______ of the code being written:

1 Week (Less than 1 hour fix)
The developer who introduced the vulnerability is the same developer who fixes it. Typically the effort required ranges from just minutes to an hour because the code is still fresh in the developer’s mind and they are probably still working on that particular project. The code change’s impact on QA and regression testing is minimal given how new the code is to the overall system.

1 Month (1 – 3 hour fix)
The original developer who introduced the vulnerability may have moved on to another project, so peeling them off their current task carries an opportunity cost. While the remediation itself might take only 1 – 3 hours of development time, usually an entire day of productivity is lost as they reset their environment, re-familiarize themselves with the code, find the location of the issue, and fix the flaw. The same effort would be necessary if another developer were tasked with the patch. If the vulnerability is serious, a production hot-fix might be necessary, requiring additional QA and regression resources.

1 Year (More than 10 hour fix)
The original developer who introduced the vulnerability is at least several projects removed by now, or completely unavailable. The codebase may have been transferred to a software maintenance group, which has fewer skills and less time to dedicate to “security.” Being unfamiliar with the code, another developer will have to spend a lot of time hunting for the exact location of the flaw and figuring out the preferred way to fix it, if one even exists. Ten or more developer hours is common, followed by a significant amount of QA and regression testing. Then, depending on the release cycle, deployment of the fix might have to wait until the next scheduled release, whenever that may be.

What’s interesting is that the time and effort required to fix a vulnerability depends not only on the class of attack itself, but on how long ago the piece of code was introduced. It seems logical that it would, it’s just a subject not usually discussed. Another observation is that the longer a vulnerability lies undiscovered, the more helpful it becomes to pinpoint the problematic line of code for the developer. This is especially true in the 1-year zone. Again, terribly logical.

Clearly, then, during the SDL it’s preferable to get software security test results back into developers’ hands as fast as possible. So much so that testing comprehensiveness will be happily sacrificed if necessary to increase the speed and frequency of testing. Comprehensiveness is less attractive within the SDL when results only become available once per year, as in the annual consultant assessment model. Of course it’d be nice to have it all (speed, frequency, and comprehensiveness), but it’ll cost you (Good, Fast, or Cheap: pick two). Accuracy is the real wild card, though. Without it, the entire point of saving developers time is lost.

I also wanted to briefly touch on the differences between the act of “writing secure code” and “testing the security of code.” I don’t recall when or where, but Dinis Cruz, OWASP Board Member and visionary behind the O2 Platform, said something a while back that stuck with me. Dinis said developers need to be provided exactly the right security knowledge at exactly the time they need it. Asking developers to read and recall veritable mountains of defensive programming do’s and don’ts as they carry out their day job isn’t effective or scalable.

For example, it would be much better if, when a developer is interacting with a database, they are automatically reminded to use parameterized SQL statements. When handling user-supplied input, pop-ups immediately point to the proper data validation routines. Or, how about printing to the screen? Warn the developer about the mandatory use of the context-aware output encoding method. This type of just-in-time guidance needs to be baked into their IDE, which is one of the OWASP O2 Platform’s design objectives. “Writing secure code” using this approach would seem to be the future.
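The "context aware" part of that last reminder can be sketched in a few lines. This is a hypothetical illustration (the function name and context labels are mine, not from any particular framework): the same untrusted value needs a different encoder depending on where it is written, which is exactly the kind of decision an IDE could surface at the moment the developer prints to the screen.

```python
import html
from urllib.parse import quote

def encode_for_context(value: str, context: str) -> str:
    """Encode an untrusted value for a specific output context.
    Contexts here are illustrative; real frameworks define many more
    (JavaScript strings, CSS values, HTML attributes, etc.)."""
    if context == "html_body":
        # HTML body context: neutralize <, >, &, and quotes.
        return html.escape(value)
    if context == "url_param":
        # URL query parameter context: percent-encode everything
        # outside the unreserved character set.
        return quote(value, safe="")
    raise ValueError(f"no encoder registered for context {context!r}")

payload = "<script>alert(1)</script>"
print(encode_for_context(payload, "html_body"))  # &lt;script&gt;alert(1)&lt;/script&gt;
print(encode_for_context(payload, "url_param"))  # %3Cscript%3Ealert%281%29%3C%2Fscript%3E
```

The design point matches Dinis’s observation: the developer doesn’t memorize the encoding tables, they only need to be prompted, in context, to pick the right one.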

When it comes to testing, as you might imagine, WhiteHat constantly strives to improve the speed of our testing processes. You can see the trade-offs we make between speed, comprehensiveness, and cost in the different flavors of Sentinel offered. The edge the SaaS model gives us over the competition is that we know precisely which of our tests are the most effective or likely to hit on certain types of systems. Efficiency = speed. We’ve been privately testing new service line prototypes with some customers to better meet their needs. Exciting announcements are on the horizon.

