
PortSwigger Web Security Blog

Tuesday, 22 May 2007

Barriers to automation 1 - vulnerability scanners

When you are attacking a web application, automation is your friend. Not only if you are lazy, but also because automation can make your attacks faster, more reliable and more effective. This is the first in a series of posts exploring ways of using automation in web application testing, and the limitations that exist on its effective use.

Web application vulnerability scanners seek to automate many of the tasks involved in attacking an application, from initial mapping through to probing for common vulnerabilities. I've used several of the available products, and they do a decent job of carrying out these tasks. But even the best current scanners do not detect all or even a majority of the vulnerabilities in a typical application.

Scanners are effective at detecting vulnerabilities which have a standard signature. The scanner works by sending a crafted request designed to trigger that signature if the vulnerability is present. It then reviews the response to determine whether it contains the signature; if so, the scanner reports the vulnerability.

Plenty of important bugs can be detected in this way with a degree of reliability, for example:

  • In some SQL injection flaws, sending a standard attack string will result in a database error message.

  • In some reflected XSS vulnerabilities, a submitted string containing HTML mark-up will be copied unmodified into the application's response.

  • In some command injection vulnerabilities, sending crafted input will result in a time delay before the application responds.
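As a minimal sketch of how such signature-based checks work (the target URL, parameter name, payloads, and signature strings below are illustrative assumptions, not any particular scanner's implementation), each check pairs a crafted request with a response signature:

import time
import requests  # widely used third-party HTTP library

# Hypothetical target URL and parameter name - purely illustrative.
TARGET = "http://example.com/search"
PARAM = "q"

# Signature-based checks: each pairs a crafted payload with a response
# signature whose presence suggests the corresponding vulnerability.
signature_checks = [
    ("'", "You have an error in your SQL syntax", "possible SQL injection"),
    ("<xsstest>", "<xsstest>", "possible reflected XSS"),
]

for payload, signature, finding in signature_checks:
    response = requests.get(TARGET, params={PARAM: payload})
    if signature in response.text:
        print(f"{finding}: parameter '{PARAM}', payload {payload!r}")

# Time-based check: a marked delay in the response hints at command injection.
start = time.time()
requests.get(TARGET, params={PARAM: "; sleep 10"}, timeout=30)
if time.time() - start > 10:
    print("possible command injection: response delayed by injected sleep")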

However, not every vulnerability in the above categories will be detected using standard signature-based checks. Further, there are many categories of vulnerability which cannot be probed for in this manner, and which today's scanners are not able to detect in an automated way. These limitations arise from various inherent barriers to automation that affect computers in general:

  • Computers only process syntax. Scanners are effective at recognizing syntactic items like HTTP status codes and standard error messages. However, they do not have any semantic understanding of the content they process, nor are they able to make any normative judgments about it. For example, a function to update a shopping cart may involve submitting various parameters. A scanner is not able to understand that one of these parameters represents a quantity and that another parameter represents a price. Nor, therefore, is it able to assess that being able to modify the quantity is insignificant while being able to modify the price indicates a security flaw.

  • Computers do not improvise. Many web applications implement rudimentary defences against common attacks, which can be circumvented by an attacker. For example, an anti-XSS filter may strip the expression <script> from user input; however, the filter can be bypassed by using the expression <scr<script>ipt>. A human attacker will quickly understand what validation is being performed and (presumably) identify the bypass. However, a scanner which simply submits standard attack strings and monitors responses for signatures will miss the vulnerability (this bypass is sketched in code after this list).

  • Computers are not intuitive. When a human being is attacking a web application, they often have a sense that something doesn't "feel right" in a particular function, leading them to probe carefully how it handles all kinds of unexpected input, including modifying several parameters at once, removing individual parameters, accessing the function's steps out of sequence, etc. Many significant bugs can only be detected through these kinds of actions; however, for an automated scanner to detect them, it would need to perform these checks against every function of the application, and against every sequence of requests. Taking this approach would exponentially increase the number of requests the scanner needs to issue, making it practically infeasible.
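To make the filter-bypass example in the second bullet concrete, here is a minimal sketch (the filter logic is invented for illustration and is not taken from any real application) showing why a fixed attack string fails while an improvised one succeeds:

def naive_xss_filter(value):
    # Rudimentary defence: strip the literal expression "<script>" from the input.
    return value.replace("<script>", "")

# The standard attack string a scanner submits is neutralised:
print(naive_xss_filter("<script>alert(1)</script>"))
# -> alert(1)</script>

# An improvised payload survives: once the inner "<script>" is stripped,
# the remaining characters recombine into a working script tag.
print(naive_xss_filter("<scr<script>ipt>alert(1)</script>"))
# -> <script>alert(1)</script>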

The barriers to automation described above will only really be addressed through the incorporation of full artificial intelligence capabilities into vulnerability scanners. In the meantime, these barriers entail that many important categories of vulnerability cannot be reliably detected by today's automated scanners, for example:

  • Logic flaws, for instance where an attacker can bypass one step of a multi-stage login process by proceeding directly to the next step and manually setting the necessary request parameters (a hypothetical example of such a request follows this list). Even if a scanner performs the requests necessary to do this, it cannot interpret the non-standard navigation path as a security flaw, because it does not understand the significance of the content returned at each stage.

  • Broken access controls, in which an attacker can access other users' data by modifying the value of an identifier in a request parameter. Because a scanner does not understand the role played by the parameter, or the meaning of the different content which is received when this is modified, it cannot diagnose the vulnerability.

  • Information leakage, in which poorly designed functionality discloses listings of session tokens or other sensitive items. A scanner cannot distinguish between these listings and any other normal content.

  • Design weaknesses in specific functions, such as weak password quality rules, or easily guessable forgotten password challenges.
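To illustrate the first of these logic flaws, the following request is purely hypothetical (the URL, parameter names and values are invented): an attacker skips the second stage of a three-stage login by requesting the final stage directly and supplying the parameters that stage expects to receive from its predecessor:

POST /login/stage3.php
Content-Length: 32

username=daf&stage2complete=true

A scanner may well issue similar requests while fuzzing, but it has no way of recognising that being served the post-login page at this point constitutes a vulnerability.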

Further, even amongst the types of vulnerability that scanners are able to detect, such as SQL injection and XSS, there are many instances of these flaws which scanners do not identify, because they can only be exploited by modifying several parameters at once, or by using crafted input to beat defective input validation, or by exploiting several different pieces of observed behaviour which together make the application vulnerable to attack.

Current scanners offer partly manual workarounds to help them identify some of the vulnerabilities that are inherently problematic for them. For example, some scanners can be configured with multiple sets of credentials for accounts with different types of access. They will attempt to access all of the discovered functionality within each user context, to identify what segregation of access is actually implemented. However, this still requires an intelligent user to review the results, and determine whether the actual segregation of access is in line with the application's requirements for access control.
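As a rough sketch of this kind of workaround (the URLs, cookie names, and session tokens below are assumptions for illustration, and the comparison is deliberately crude rather than a description of any particular scanner), the idea is to replay each discovered URL in each user context and flag cases where a lower-privileged session receives the same content as a higher-privileged one:

import requests  # widely used third-party HTTP library

# Hypothetical session cookies for two accounts with different access levels.
sessions = {
    "admin": {"PHPSESSID": "admin-session-token"},
    "user": {"PHPSESSID": "user-session-token"},
}

# Hypothetical URLs discovered while mapping the application as the admin user.
discovered_urls = [
    "http://example.com/admin/listUsers.php",
    "http://example.com/account/orders.php",
]

for url in discovered_urls:
    responses = {
        role: requests.get(url, cookies=cookies).text
        for role, cookies in sessions.items()
    }
    # Crude comparison: if the low-privileged session receives the same page
    # as the admin session, flag the URL. A human must still judge whether
    # that access is consistent with the application's access control
    # requirements - the scanner cannot make that assessment.
    if responses["user"] == responses["admin"]:
        print(f"Review access controls for: {url}")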

Automated scanners are often useful as a means of discovering some of an application's vulnerabilities quickly, and of obtaining an overview of its content and functionality. However, no serious security tester should be willing to rely solely upon the results of a scanner. Many of the defects which scanners are inherently unable to detect can be classified as "low hanging fruit" - that is, capable of being discovered and exploited by a human attacker with modest skills. Receiving a clean bill of health from today's scanners provides no assurance that an application does not contain many important vulnerabilities in this category.

Thursday, 3 May 2007

On-site request forgery

Request forgery is a familiar attack payload for exploiting stored XSS vulnerabilities. In the MySpace worm, Samy placed a script within his profile which caused any user viewing the profile to perform various unwitting actions, including adding Samy as a friend and copying his script into their own profile. In many XSS scenarios, when you simply wish to perform a particular action with different privileges, on-site request forgery is easier and more reliable than attempting to hijack a victim’s session.

What is less well appreciated is that stored on-site request forgery bugs can exist when XSS is not possible. Consider a message board application which lets users submit items that are viewed by other users. Messages are submitted using a request like the following:

POST /submit.php
Content-Length: 34

type=question&name=daf&message=foo

which results in the following being added to the messages page:

<tr>
  <td><img src="/images/question.gif"></td>
  <td>daf</td>
  <td>foo</td>
</tr>


In this situation, you would of course test for XSS. However, it turns out that the application is properly HTML-encoding any " < and > characters which it inserts into the page. Having satisfied yourself that this defence cannot be bypassed in any way, you might move on to the next test.

But look again. We control part of the target of the <img> tag. Although we can’t break out of the quoted string, we can modify the URL to cause any user who views our message to make an arbitrary on-site GET request. For example, submitting the following value in the type parameter will cause anyone viewing our message to make a request which attempts to add a new administrative user:

../admin/newUser.php?username=daf2&password=0wned&role=admin#
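
Assuming the application simply concatenates /images/, the submitted type value, and .gif, as the earlier markup suggests, the message row served to other users would then look something like this:

<tr>
  <td><img src="/images/../admin/newUser.php?username=daf2&password=0wned&role=admin#.gif"></td>
  <td>daf</td>
  <td>foo</td>
</tr>

When a browser renders this row, it resolves the image URL and issues a GET request for /admin/newUser.php in the session of whoever is viewing the page.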

When an ordinary user is induced to issue our crafted request, it will of course fail. But when an administrator views our message, our backdoor account gets created. We have performed a successful on-site request forgery attack even though XSS is not possible. And of course, the attack will succeed even if administrators take the precaution of disabling JavaScript.

(In the above attack string, note the # character which effectively terminates the URL before the .gif suffix. We could just as easily use & to incorporate the suffix as a further request parameter.)

