When you are attacking a web application, automation is your friend. Not only because it saves you effort, but because it can make your attacks faster, more reliable, and more effective. This is the first in a series of posts exploring ways of using automation in web application testing, and the limitations that exist on its effective use.
Web application vulnerability scanners seek to automate many of the tasks involved in attacking an application, from initial mapping through to probing for common vulnerabilities. I've used several of the available products, and they do a decent job of carrying out these tasks. But even the best current scanners do not detect all or even a majority of the vulnerabilities in a typical application.
Scanners are effective at detecting vulnerabilities which have a standard signature. The scanner works by sending a crafted request designed to trigger that signature if the vulnerability is present. It then reviews the response to determine whether it contains the signature; if so, the scanner reports the vulnerability.
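The mechanism can be sketched in a few lines. This is a minimal illustration, not any real scanner's implementation; the payloads and signature patterns below are illustrative assumptions, and canned strings stand in for the HTTP responses a scanner would actually receive.

```python
import re

# Illustrative signatures a scanner might look for in responses to its
# crafted requests. Real products maintain far larger signature databases.
SIGNATURES = {
    "sql_injection": re.compile(
        r"(SQL syntax|ODBC|ORA-\d{5}|unterminated quoted string)", re.I),
    "path_traversal": re.compile(r"root:.*:0:0:"),  # /etc/passwd contents
}

def match_signatures(response_body: str) -> list[str]:
    """Return the names of any vulnerability signatures found in a response."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(response_body)]

# Simulated response to a crafted request containing a single quote:
error_page = "You have an error in your SQL syntax near ''' at line 1"
print(match_signatures(error_page))  # ['sql_injection']
```

If the signature matches, the scanner reports the issue; if the application handles the input without producing a recognizable signature, the scanner stays silent, whether or not a flaw exists.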
Plenty of important bugs can be detected in this way with a degree of reliability, for example:
In some SQL injection flaws, sending a standard attack string will result in a database error message.
In some reflected XSS vulnerabilities, a submitted string containing HTML mark-up will be copied unmodified into the application's response.
In some command injection vulnerabilities, sending crafted input will result in a time delay before the application responds.
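The reflected XSS case, for example, reduces to checking whether a submitted marker survives into the response unencoded. This is a hedged sketch with simulated responses; the probe string is an arbitrary assumption, and a real scanner would issue live HTTP requests rather than inspect canned pages.

```python
import html

# A unique marker plus characters that safe output encoding must transform.
PROBE = '<xsstest>"\'<>'

def reflected_unencoded(response_body: str, probe: str = PROBE) -> bool:
    """True if the probe appears verbatim (unencoded) in the response."""
    return probe in response_body

# Simulated responses: one copies input as-is, one HTML-encodes it.
vulnerable = f"<p>Search results for {PROBE}</p>"
safe = f"<p>Search results for {html.escape(PROBE)}</p>"
print(reflected_unencoded(vulnerable), reflected_unencoded(safe))  # True False
```

The check is purely syntactic, which is exactly why it works here and fails against anything less literal.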
However, not every vulnerability in the above categories will be detected using standard signature-based checks. Further, there are many categories of vulnerability which cannot be probed for in this manner, and which today's scanners are not able to detect in an automated way. These limitations arise from various inherent barriers to automation that affect computers in general:
Computers only process syntax. Scanners are effective at recognizing syntactic items like HTTP status codes and standard error messages. However, they do not have any semantic understanding of the content they process, nor are they able to make any normative judgments about it. For example, a function to update a shopping cart may involve submitting various parameters. A scanner is not able to understand that one of these parameters represents a quantity and that another parameter represents a price. Nor, therefore, is it able to assess that being able to modify the quantity is insignificant while being able to modify the price indicates a security flaw.
Computers do not improvise. Many web applications implement rudimentary defences against common attacks, which can be circumvented by an attacker. For example, an anti-XSS filter may strip the expression <script> from user input; however, the filter can be bypassed by using the expression <scr<script>ipt>. A human attacker will quickly understand what validation is being performed and identify the bypass. However, a scanner which simply submits standard attack strings and monitors responses for signatures will miss the vulnerability.
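The bypass is easy to demonstrate. Here is a deliberately naive filter of the kind described, written only to illustrate the flaw; no real application's validation code is being reproduced.

```python
# A naive anti-XSS filter that strips the literal string "<script>" once.
# Illustrative only: this is the textbook broken filter, not real code.
def naive_filter(value: str) -> str:
    return value.replace("<script>", "")

# The standard attack string is defeated:
print(naive_filter("<script>alert(1)</script>"))
# alert(1)</script>

# But stripping the inner "<script>" reassembles the outer tag:
print(naive_filter("<scr<script>ipt>alert(1)</script>"))
# <script>alert(1)</script>
```

A human sees the transformation the filter performs and crafts input whose post-filter form is the attack; a scanner replaying fixed strings never does.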
Computers are not intuitive. When a human being is attacking a web application, they often have a sense that something doesn't "feel right" in a particular function, leading them to probe carefully how it handles all kinds of unexpected input, including modifying several parameters at once, removing individual parameters, accessing the function's steps out of sequence, etc. Many significant bugs can only be detected through these kinds of actions; for an automated scanner to detect them, it would need to perform these checks against every function of the application, and every sequence of requests. Taking this approach would exponentially increase the number of requests the scanner needs to issue, making it practically infeasible.
The barriers to automation described above will only really be addressed through the incorporation of full artificial intelligence capabilities into vulnerability scanners. In the meantime, these barriers entail that many important categories of vulnerability cannot be reliably detected by today's automated scanners, for example:
Logic flaws, for instance where an attacker can bypass one step of a multi-stage login process by proceeding directly to the next step and manually setting the necessary request parameters. Even if a scanner performs the requests necessary to do this, it cannot interpret the non-standard navigation path as a security flaw, because it does not understand the significance of the content returned at each stage.
Broken access controls, in which an attacker can access other users' data by modifying the value of an identifier in a request parameter. Because a scanner does not understand the role played by the parameter, or the meaning of the different content which is received when this is modified, it cannot diagnose the vulnerability.
Information leakage, in which poorly designed functionality discloses listings of session tokens or other sensitive items. A scanner cannot distinguish between these listings and any other normal content.
Design weaknesses in specific functions, such as weak password quality rules, or easily guessable forgotten password challenges.
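The access control case in particular comes down to semantics. In this sketch, two simulated responses to tampered identifier values are both well-formed success pages; the URLs, field names, and data are hypothetical, and the point is only that nothing syntactic distinguishes the legitimate response from the unauthorized one.

```python
# Simulated responses to requests with a modified docId parameter. The
# second belongs to a different user, but only semantic knowledge of the
# application (whose invoice is whose) reveals that.
responses = {
    "docId=1001": {"status": 200, "body": "Invoice for alice@example.com"},
    "docId=1002": {"status": 200, "body": "Invoice for bob@example.com"},
}

def scanner_view(resp: dict) -> str:
    """All a signature-based scanner perceives: error or not."""
    return "error" if resp["status"] >= 400 else "ok"

print([scanner_view(r) for r in responses.values()])  # ['ok', 'ok']
```

Both responses look identical to the scanner; a human tester logged in as Alice knows immediately that Bob's invoice should not have been returned.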
Further, even amongst the types of vulnerability that scanners are able to detect, such as SQL injection and XSS, there are many instances of these flaws which scanners do not identify, because they can only be exploited by modifying several parameters at once, or by using crafted input to beat defective input validation, or by exploiting several different pieces of observed behaviour which together make the application vulnerable to attack.
Current scanners offer partial workarounds to help identify some of the vulnerabilities that are inherently problematic for them. For example, some scanners can be configured with multiple sets of credentials for accounts with different types of access. They will attempt to access all of the discovered functionality within each user context, to identify what segregation of access is actually implemented. However, this still requires an intelligent user to review the results, and determine whether the actual segregation of access is in line with the application's requirements for access control.
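The multi-credential approach amounts to building an access matrix. The sketch below assumes hypothetical URLs and user roles, and collapses session handling and crawling into a pre-populated dictionary; the point it illustrates is that the tool can tabulate who reaches what, but cannot judge whether the result is correct.

```python
# Hypothetical results: HTTP status observed per URL in each user context.
observed = {
    "admin":   {"/admin/users": 200, "/account": 200},
    "regular": {"/admin/users": 200, "/account": 200},  # should this be 403?
}

def access_matrix(results: dict) -> list[tuple[str, str, bool]]:
    """Flatten observations into (user, url, reachable) rows."""
    return [(user, url, status == 200)
            for user, urls in results.items()
            for url, status in urls.items()]

for row in access_matrix(observed):
    print(row)
```

The matrix shows that the regular user can reach /admin/users; whether that is a critical flaw or intended behaviour is a question about the application's requirements, which only the reviewer can answer.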
Automated scanners are often useful as a means of discovering some of an application's vulnerabilities quickly, and of obtaining an overview of its content and functionality. However, no serious security tester should be willing to rely solely upon the results of a scanner. Many of the defects which scanners are inherently unable to detect can be classified as "low-hanging fruit" - that is, capable of being discovered and exploited by a human attacker with modest skills. Receiving a clean bill of health from today's scanners provides no assurance that an application does not contain many important vulnerabilities in this category.