Thursday, November 10, 2011

Burp is voted #1 web scanner

Every couple of years, SecTools.org carries out a survey of the most popular security tools, as voted for by thousands of users. The latest results are out and Burp has done pretty well:

Needless to say, I'm pretty happy with this result, especially given the survey's overall focus on network security tools.

Burp has come a long way since it started off as a hobby project which soaked up my spare time. Burp wouldn't be where it is today without the huge support I get from users - through useful feature requests and bug reports, and by telling other people about the software. So I owe a massive thank you to everyone who uses Burp and has helped out over the years.

Despite being number one, I will of course be trying harder. I'm working on a lot of cool new features for Burp, which will be released over the coming year. Please do keep the feature requests coming!

Wednesday, October 12, 2011

Breaking encrypted data using Burp

A while ago, Burp Intruder added a bit flipping payload type, suitable for automatic testing for vulnerable CBC-encrypted session tokens and other data. If you aren't familiar with this vulnerability, take a look at The Web App Hacker's Handbook, 2nd Edition, pages 227-233, and also check out this exercise (login required) in the MDSec online training labs.

Burp Intruder now has a further payload type, suitable for automatic testing for vulnerable ECB-encrypted data. The theory behind these vulnerabilities is described on pages 224-226 of WAHH2e. Here, I'll briefly describe an example of the vulnerability, and show how it can be exploited using Burp.

ECB ciphers divide plaintext into equal-sized blocks, and encrypt each block separately using a secret key. As a result, identical blocks of plaintext always encrypt into identical blocks of ciphertext, regardless of their position within the structure. This means that it is possible to meaningfully modify the plaintext in a structure by duplicating and shuffling the blocks of ciphertext. Depending on the contents of the structure, and the application's handling of the modified data, it may be possible to interfere with application logic - for example, switching the user ID field in a structured session token, changing an encrypted price, etc.
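
To make this property concrete, here is a minimal Python sketch (assuming the third-party pycryptodome package; the key and plaintext are purely illustrative) showing that two identical 8-byte plaintext blocks encrypt to identical ciphertext blocks under DES in ECB mode:

# A minimal sketch of ECB determinism, assuming the pycryptodome package
# (pip install pycryptodome). The key and plaintext are illustrative only.
from Crypto.Cipher import DES

key = b"secret!!"                        # DES keys are exactly 8 bytes
cipher = DES.new(key, DES.MODE_ECB)

plaintext = b"uid=0218" + b"uid=0218"    # two identical 8-byte blocks
ciphertext = cipher.encrypt(plaintext)

print(ciphertext[:8].hex(), ciphertext[8:].hex())
print(ciphertext[:8] == ciphertext[8:])  # True: identical plaintext blocks give
                                         # identical ciphertext blocks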

Let's look at an example from the MDSec online training labs. Here, the application uses session tokens containing several meaningful components, including a numeric user identifier:

rnd=2458992;app=iTradeEUR_1;uid=218;username=dafydd;time=634430423694715000;

When encrypted, using an ECB cipher, the token becomes:

68BAC980742B9EF80A27CBBBC0618E3876FF3D6C6E6A7B9CB8FCA486F9E11922776F0307329140AA
BD223F003A8309DDB6B970C47BA2E249A0670592D74BCD07D51A3E150EFC2E69885A5C8131E4210F

The individual blocks of plaintext correspond to blocks of ciphertext as follows:

rnd=2458 68BAC980742B9EF8
992;app= 0A27CBBBC0618E38
iTradeEU 76FF3D6C6E6A7B9C
R_1;uid= B8FCA486F9E11922
218;user 776F0307329140AA
name=daf BD223F003A8309DD
ydd;time B6B970C47BA2E249
=6344304 A0670592D74BCD07
23694715 D51A3E150EFC2E69
000;     885A5C8131E4210F

Now, because each block of ciphertext will always decrypt into the same block of plaintext, it is possible to manipulate the sequence of ciphertext blocks, and meaningfully modify the corresponding plaintext. Depending on how the application handles the modified data, this may allow you to switch to a different user or escalate privileges.

For example, if the second block is copied following the fourth block, the resulting sequence of blocks will be:

rnd=2458 68BAC980742B9EF8
992;app= 0A27CBBBC0618E38
iTradeEU 76FF3D6C6E6A7B9C
R_1;uid= B8FCA486F9E11922
992;app= 0A27CBBBC0618E38
218;user 776F0307329140AA
name=daf BD223F003A8309DD
ydd;time B6B970C47BA2E249
=6344304 A0670592D74BCD07
23694715 D51A3E150EFC2E69
000;     885A5C8131E4210F

The decrypted token now contains a modified "uid" value, and also a duplicated "app" value. What happens will depend on how the application processes the decrypted token. If you are lucky, the application will retrieve the "uid" value, ignore the duplicated "app" value, and not check the overall integrity of the whole structure. If the application behaves like this, then it will process the request in the context of user 992, rather than user 218.
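
To see how the manipulation works in practice, here is a short Python sketch (just an illustration, not part of Burp) that splits the ASCII-hex token into 8-byte ciphertext blocks and inserts a copy of the second block after the fourth:

# Split the ASCII-hex token into 8-byte (16 hex character) ciphertext blocks,
# then insert a copy of the second block ("992;app=") after the fourth ("R_1;uid=").
token = (
    "68BAC980742B9EF80A27CBBBC0618E3876FF3D6C6E6A7B9CB8FCA486F9E11922776F0307329140AA"
    "BD223F003A8309DDB6B970C47BA2E249A0670592D74BCD07D51A3E150EFC2E69885A5C8131E4210F"
)

blocks = [token[i:i + 16] for i in range(0, len(token), 16)]
modified = blocks[:4] + [blocks[1]] + blocks[4:]
print("".join(modified))   # the manipulated token shown in the table above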

Now, it is possible to perform this attack manually, but this involves a lot of effort if you are working blind, without knowledge of the actual contents of the plaintext blocks. The new Burp Intruder payload type helps you to automate the process of finding and exploiting these vulnerabilities, in a much more effective way.

The configuration for the new payload type looks like this:

In the present case, we're going to tell Burp that the token contains 8-byte blocks and is encoded as ASCII hex. If you didn't know or couldn't guess this information, you could try different configurations, as this attack is normally fairly fast to run.

When the attack runs, Burp will split the base value into blocks, and will systematically duplicate and shuffle them, inserting a copy of each block at each block boundary. In some situations, this method alone will be sufficient to find a vulnerability. However, to make your attack more effective, you should also, if possible, use the further configuration option "obtain additional blocks from these encrypted strings", as described below.

There is often a large element of luck involved when blindly shuffling blocks in ECB-encrypted data structures, and success often depends upon happening to find a block of ciphertext whose decrypted plaintext contains the right meaningful data when inserted into the structure (such as a number that could be a valid user ID within a session token). You can dramatically increase the likelihood of success by providing Burp with a large sample of other data that is encrypted using the same cipher and key. In the present example, you can harvest a large number of valid tokens using Burp Intruder or Sequencer, and configure the ECB block shuffler payload to use these tokens to derive additional blocks. Burp will then take all of the unique ciphertext blocks that you have provided, and use these when inserting blocks into the original data. In practice, if you can find suitable additional encrypted data, this method proves highly effective when targeting vulnerable applications.
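
The following rough Python sketch illustrates the idea (it is only an approximation of what Burp actually does): take every unique ciphertext block from the base token and from the harvested sample, and insert each one at each block boundary of the base token.

# A rough sketch of the payload generation described above. Burp's actual
# algorithm is not published; this just illustrates the concept.
def blocks_of(hex_token, block_size=16):
    return [hex_token[i:i + block_size] for i in range(0, len(hex_token), block_size)]

def ecb_shuffle_payloads(base_token, sample_tokens=(), block_size=16):
    base_blocks = blocks_of(base_token, block_size)
    candidates = list(dict.fromkeys(                  # unique blocks, order preserved
        b for t in (base_token, *sample_tokens) for b in blocks_of(t, block_size)
    ))
    for candidate in candidates:
        for position in range(len(base_blocks) + 1):  # every block boundary
            shuffled = base_blocks[:position] + [candidate] + base_blocks[position:]
            yield "".join(shuffled)

# Example usage (token and harvested_tokens are placeholders for your own data):
# payloads = list(ecb_shuffle_payloads(token, harvested_tokens))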

Having used Burp Sequencer to obtain a large sample of tokens via the main application login, our configuration of the ECB block shuffler now looks like this:

To deliver the attack, we're going to target the user's account page within the application, which displays information about the logged in user (based, of course, on the supplied session token):

In order to identify the user context associated with each attack request, we're going to use the Extract Grep feature to highlight the name of the current user within the account page response:

We then start the attack and review the results. Unsurprisingly, a lot of the requests cause a redirection back to the login, because we have corrupted the format of the token. Many of the requests result in an apparently valid session, but showing "unknown user" - here, we have modified the UID to a value that does not correspond to any actually registered user. But in other requests, we are apparently logged in as a different user, including, as luck would have it, an application administrator:

A successful attack like this still requires a lot of luck, including whether and how the application tolerates the modified data, the positioning of block boundaries in relation to interesting data, and the ability to find a block containing suitable data for substitution. But the new Intruder payload type takes a lot of the pain out of testing blind for this vulnerability.

Have fun!

Wednesday, September 21, 2011

It's a biggie

Kindle schmindle. Nothing quite beats a kilo and a half of dead tree landing on your desk. Should make an ample paperweight / doorstop if nothing else.

Tuesday, September 20, 2011

MDSec online training labs

Now that the second edition of The Web Application Hacker's Handbook is being shipped, it's time to start talking about the online training labs that accompany the new book. These labs are:

  • Written by the authors of WAHH.

  • Available online and on-demand, for you to use as you work through the book.

  • Very extensive, containing over 300 individual examples demonstrating almost every kind of web application vulnerability.

  • Cheap, costing only $7 per hour to use.

  • Hosted, fashionably, in the cloud, so you get your own server to play on, without worrying about interference from other lab users.

We hope that these labs will make the new edition of the book even more effective as a learning resource, letting you try for yourself any particular vulnerability types or variations that you have not encountered in the wild.

Try out the MDSec labs now!

Wednesday, August 17, 2011

The fame of Peter Wiener

Here's an extract from The Basics of Hacking and Penetration Testing by Patrick Engebretson, which was published earlier this month:


I guess it's a testament to Burp's popularity that Peter is getting around so much.
Remember, if you do want to change the default Burp Spider settings to submit a different name in your forms, the configuration details are here.

Update:
It seems Peter's fame continues to grow. Someone sent me the following photo showing the result of testing a network device:

If anyone has any further evidence of Peter's travels, email me and I'll post them here.

Friday, June 3, 2011

Burp Suite Free Edition v1.4 released

Burp Suite Free Edition v1.4 is now available for download.

This is a major upgrade with numerous new features, including:

Have fun!

Wednesday, May 11, 2011

Web App Hacker's Handbook 2nd Edition - Preview

The first draft of the new edition of WAHH is now completed, and the lengthy editing and production process is underway. Just to whet everyone's appetite, I'm posting below an exclusive extract from the Introduction, describing what has changed in the second edition.

(And in a vain attempt to quell the tidal wave of questions: the book will be published in October; there won't be any more extracts; we don't need any proofreaders, thanks.)

What’s Changed in the Second Edition?

In the four years since the first edition of this book was published, much has changed and much has stayed the same. The march of new technology has, of course, continued apace, and this has given rise to specific new vulnerabilities and attacks. The ingenuity of hackers has also led to the development of new attack techniques, and new ways of exploiting old bugs. But neither of these factors, technological or human, has created a revolution. The technologies used in today’s applications have their roots in those that are many years old. And the fundamental concepts involved in today’s cutting-edge exploitation techniques are older than many of the researchers who are applying them so effectively. Web application security is a dynamic and exciting area to work in, but the bulk of what constitutes our accumulated wisdom has evolved slowly over many years, and would have been distinctly recognizable to practitioners working a decade or more ago.

This second edition is by no means a “complete rewrite” of the first edition. Most of the material in the first edition remains valid and current today. Approximately 30% of the content in the second edition is either completely new or extensively revised. The remaining 70% has had minor modifications or none at all. For readers who have upgraded from the first edition and may feel disappointed by these numbers, you should take heart. If you have mastered all of the techniques described in the first edition, then you already have the majority of the skills and knowledge that you need. You can focus your reading on what is new in this second edition, and quickly learn about the areas of web application security that have changed in recent years.

One significant new feature of the second edition is the inclusion throughout the book of real examples of nearly all of the vulnerabilities that are covered. Any place you see a Try it! link, you can go online and work interactively with the example being discussed, to confirm that you can find and exploit the vulnerability it contains. There are several hundred of these labs, which you can work through at your own pace as you read the book. The online labs are available on a subscription basis for a modest fee, to cover the costs of hosting and maintaining the infrastructure involved.

For readers wishing to focus their attention on what is new in the second edition, there follows a summary of the key areas where material has been added or rewritten.

Chapter 1, “Web Application (In)security”, has been partly updated to reflect new uses of web applications, some broad trends in technologies, and the ways in which a typical organization’s security perimeter has continued to change.

Chapter 2, “Core Defense Mechanisms”, has received minor changes, with a few examples added of generic techniques for bypassing input validation defenses.

Chapter 3, “Web Application Technologies”, has been expanded with some new sections describing technologies that are either new or were described more briefly elsewhere within the first edition. The areas added include REST, Ruby on Rails, SQL, XML, web services, CSS, VBScript, the document object model, Ajax, JSON, the same-origin policy, and HTML5.

Chapter 4, “Mapping the Application”, has received various minor updates to reflect developments in techniques for mapping content and functionality.

Chapter 5, “Bypassing Client-Side Controls”, has been updated more extensively. In particular, the section on browser extension technologies has been largely rewritten to include more detailed guidance on generic approaches to bytecode decompilation and debugging, how to handle serialized data in common formats, and how to deal with common obstacles to your work, including non-proxy-aware clients and problems with SSL. The chapter also now covers Silverlight technology.

Chapter 6, “Attacking Authentication”, remains current and has received only minor updates.

Chapter 7, “Attacking Session Management”, has been updated to cover new tools for automatically testing the quality of randomness in tokens. It also contains new material on attacking encrypted tokens, including practical techniques for token tampering without knowing either the cryptographic algorithm or the encryption key being used.

Chapter 8, “Attacking Access Controls”, now covers access control vulnerabilities arising from direct access to server-side methods, and from platform misconfiguration where rules based on HTTP methods are used to control access. It also describes some new tools and techniques that you can use to partially automate the frequently onerous task of testing access controls.

The material in Chapters 9 and 10 has been reorganized to create more manageable chapters and a more logical arrangement of topics. Chapter 9, “Attacking Data Stores” focuses on SQL injection and similar attacks against other data store technologies. As SQL injection vulnerabilities have become more widely understood and addressed, this material now focuses more on the practical situations where SQL injection is still to be found. There are also minor updates throughout to reflect current technologies and attack methods, and there is a new section on using automated tools for exploiting SQL injection vulnerabilities. The material on LDAP injection has been largely rewritten to include more detailed coverage of specific technologies (Microsoft Active Directory and OpenLDAP), as well as new techniques for exploiting common vulnerabilities. This chapter also now covers attacks against NoSQL.

Chapter 10, “Attacking Back-End Components”, covers the other types of server-side injection vulnerabilities that were previously included in Chapter 9. There are new sections covering XML external entity injection and injection into back-end HTTP requests, including HTTP parameter injection/pollution and injection into URL rewriting schemes.

Chapter 11, “Attacking Application Logic”, includes more real-world examples of common logic flaws in input validation functions. With the increased usage of encryption to protect application data at rest, we also include an example of how to identify and exploit encryption oracles to decrypt encrypted data.

The topic of attacks against other application users, previously covered by Chapter 12, has now been split into two separate chapters, as this material was becoming unmanageably large as a single chapter. Chapter 12, “Attacking Users: Cross-Site Scripting” focuses solely on XSS, and this material has been extensively updated in various areas. The sections on bypassing defensive filters to introduce script code have been completely rewritten to cover new techniques and technologies, including various little-known methods for executing script code on current browsers. There is also much more detailed coverage of methods for obfuscating script code to bypass common input filters. There are several new examples of real-world XSS attacks. There is a new section on delivering working XSS exploits in challenging conditions, which covers escalating an attack across application pages, exploiting XSS via cookies and the Referer header, and exploiting XSS in non-standard request and response content such as XML. There is a detailed examination of browsers’ built-in XSS filters, and how these can be circumvented to deliver exploits. There are new sections on specific techniques for exploiting XSS in webmail applications and in uploaded files. Finally, there are various updates to the defensive measures that can be used to prevent XSS attacks.

The new Chapter 13, “Attacking Users: Other Techniques”, draws together the remainder of this huge area. The topic of cross-site request forgery has been updated to include CSRF attacks against the login function, common defects in anti-CSRF defenses, UI redress attacks, and common defects in framebusting defenses. A new section on cross-domain data capture includes techniques for stealing data by injecting text containing non-scripting HTML and CSS, and various techniques for cross-domain data capture using JavaScript and E4X. A new section examines the same-origin policy in more detail, including its implementation in different browser extension technologies, the changes brought by HTML5, and ways of crossing domains via proxy service applications. There are new sections on client-side cookie injection, SQL injection and HTTP parameter pollution. The section on client-side privacy attacks has been expanded to include storage mechanisms provided by browser extension technologies and HTML5. Finally, a new section has been added drawing together general attacks against web users that do not depend upon vulnerabilities in any particular application. These attacks can be delivered by any malicious or compromised web site, or by an attacker who is suitably positioned on the network.

Chapter 14, “Automating Customized Attacks”, has been expanded to cover common barriers to automation, and how to circumvent these. Many applications employ defensive session-handling mechanisms that terminate sessions, use ephemeral anti-CSRF tokens, or use multi-stage processes to update application state. Some new tools are described for handling these mechanisms, which let you continue using automated testing techniques. A new section examines CAPTCHA controls, and some common vulnerabilities that can often be exploited to circumvent them.

Chapter 15, “Exploiting Information Disclosure”, contains new sections about XSS in error messages and exploiting decryption oracles.

Chapter 16, “Attacking Compiled Applications”, has not been updated.

Chapter 17, “Attacking Application Architecture”, contains a new section about vulnerabilities that arise in cloud-based architectures, and updated examples on exploiting architecture weaknesses.

Chapter 18, “Attacking the Application Server”, contains several new examples of interesting vulnerabilities in application servers and platforms, including Jetty, the JMX management console, ASP.NET, Apple iDisk server, Ruby WEBrick web server, and Java web server. It also has a new section looking at practical approaches to circumventing web application firewalls.

Chapter 19, “Finding Vulnerabilities in Source Code”, has not been updated.

Chapter 20, “A Web Application Hacker’s Toolkit”, has been updated with details of the latest features in proxy-based tool suites. It contains new sections about how to proxy the traffic of non-proxy-aware clients, and how to eliminate SSL errors in browsers and other clients, caused by the use of an intercepting proxy. There is a detailed description of the workflow that is typically employed when you are testing using a proxy-based tool suite. There is a new discussion about current web vulnerability scanners, and the optimal approaches to using these in different situations.

Friday, March 25, 2011

Burp v1.4 beta now available

A beta version of the new release of Burp is now available to Professional users. Although this is a beta release it is highly stable and suitable for normal day-to-day use.

There are probably a few bugs to flush out, and I'll hopefully be adding a few more things during the beta period. Any feedback and bug reports would be much appreciated. Please email these directly so I can get back to you for more details if required.

Due to the phenomenal time sink that is the WAHH rewrite, this release will be in beta for at least a month. For users of the free edition who cannot wait that long, there is an easy solution.

Have fun!

Burp v1.4 preview - Session handling: putting it all together

The functionality needed to let Burp automatically handle a wide variety of session handling challenges is necessarily complex, and often requires a lot of careful configuration. The best way to illustrate the power of the new features, and show how the configuration works in practice, is via an example.

Let's look at an application function which can only be accessed within an authenticated session, and employs a further token to defend against CSRF attacks. You want to test this function for various input-based vulnerabilities like XSS and SQL injection. Previously, performing automated (and some manual) testing of this function would have faced two challenges: (a) ensuring that the session being used remained valid; and (b) obtaining a valid token to use in each request. The new functionality can take care of both these challenges.

To do this, we're going to define some session handling rules. These rules will be applied to each request that is made to the function we are testing by the Intruder, Scanner and Repeater tools:

  • Check whether the current session is valid, by requesting the user's landing page in the application, and inspecting the response to confirm that the user is still logged in.

  • If the user is not logged in, log them back in to obtain a valid session.

  • Request the page containing the form whose submission we are going to test. This form contains the anti-CSRF token that we need, within a hidden field.

  • Update the request to the function we are testing with the value of the anti-CSRF token.

In most situations, we need to make use of Burp's own session handling cookie jar, so the first rule we define tells Burp to add cookies from the cookie jar to every request. This is, in fact, the default rule for the Scanner and Spider tools, so we'll just modify the default rule to apply to the Intruder and Repeater tools as well. This rule performs a single action, shown below:

The rule's scope is defined to include the relevant tools, and apply to all URLs:

Next, we need to check that the user's current session on the target application is valid. Assuming we want to apply this rule to all requests within the target application, we can define it to be in-scope for the whole of the application's domain:

We then add a suitable description and add an action of the type "check session is valid":

This opens the editor for this type of action, which contains a lot of configuration options:

The first set of options determines which request Burp uses to validate the current session. The options are:

  • Issue the actual request that is currently being processed. This option is ideal if the application always responds to out-of-session requests with a common response signature, such as a redirection to the login.

  • Run a macro, to make one or more other requests. This option is ideal if, to identify whether the session is valid, you need to request a standard item, such as the user's home page. It is also the best option if you need to apply further rules to modify the request currently being processed - for example (as in the present case) to update an anti-CSRF token in the request. If the option to run a macro is selected, you have a further option whether to do this for every request, or only once every N requests. If the application is aggressive in terminating sessions in response to unexpected input, it is recommended that you validate the session every time; otherwise, you can speed things up by only validating the session periodically.

For the current example, we are going to run a macro to fetch the user's landing page in the application, to check that their session is valid. To do this, we need to define our macro, by clicking on the "new" button in the previous screenshot. This opens the macro recorder, enabling us to select the request(s) that we wish to include in the macro. In the present case, we only need to select the GET request for the user's landing page:

The second set of options in the "check session is valid" action controls how Burp inspects the (final) response from the macro to determine whether the session is valid. Various options are available, and the configuration we need in the present case is shown below:

The final set of options for this action determines how Burp will behave depending on whether the current session is valid:

  • You can tell Burp not to perform any further actions for this request if the session is valid. Using this option lets you define subsequent, separate actions to recover a valid session. This option is mandatory if the request itself has already been issued in order to determine whether the session is valid.

  • You can tell Burp to perform a sub-action if the session is invalid, and then continue to process subsequent actions. This is useful when you need to define subsequent actions in any case, following the session validity check, for example to run a macro to obtain a request token or modify the application's state.

In the present example, we need to use the second option. If the session is invalid, we will run a macro to log the user back in. We need to record a further macro, to perform the actual login, and tell Burp to run this macro and update the session handling cookie jar based on the results:
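
The overall logic we have just configured can be summarised in a few lines. The Python sketch below uses the third-party requests library and hypothetical URLs and indicator strings, purely to make the control flow explicit; in Burp itself this is all driven by the "check session is valid" action and its macros:

# A sketch of the session check configured above, using the requests library
# (an assumption, for illustration only). The landing page URL, login URL and
# "Log out" indicator are hypothetical.
import requests

LANDING_PAGE = "https://mdsec.net/auth/Home.ashx"    # hypothetical landing page
LOGIN_PAGE = "https://mdsec.net/auth/Login.ashx"     # hypothetical login handler

def ensure_valid_session(session):
    response = session.get(LANDING_PAGE)             # macro: fetch the landing page
    if "Log out" in response.text:                    # hypothetical logged-in indicator
        return session                                # session is still valid
    session.post(LOGIN_PAGE,                          # macro: log the user back in; the
                 data={"username": "testuser",        # Session object plays the role of
                       "password": "letmein1"})       # Burp's cookie jar here
    return session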

At this point, we have configured Burp to update requests with cookies from its cookie jar, and to log the user back in to the target application when their session is invalid. To complete the required configuration, we need to define a further rule to deal with the anti-CSRF token used in the function we want to test. The request we are testing looks like this:

POST /auth/4/NewUserStep2.ashx HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Host: mdsec.net
Content-Length: 137
Cookie: SessionId=39DD9F0CB979BFB431005524A4010244

realname=testuser&username=testuser&userrole=user&password=letmein1
&confirmpassword=letmein1&nonce=938549246127349541173

To ensure that our requests to this function are properly handled, we need to ensure that a valid nonce is supplied with each request. The value of this nonce is supplied by the application in a hidden field within the form that generates the above request. So our rule needs to run a macro to fetch the page containing the form, and update the current request with the value of the nonce parameter. We add a further rule with an action of the type "run macro" and configure it as follows:

In the above configuration, we have specified that Burp should run a new macro, which fetches the form containing the anti-CSRF token, and then obtain the nonce parameter from the (final) macro response, and update this in the request. Alternatively, we could select the "update all parameters" option, and Burp would automatically attempt to match parameters in the request with those specified in the macro response.
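
Conceptually, the rule's job is just to pull the fresh nonce out of the fetched form and splice it into the request being tested. The following Python sketch (with an illustrative regular expression and helper name, not Burp internals) shows that step:

# Extract the nonce from the hidden form field in the macro response, and
# substitute it into the body of the request being tested.
import re

def update_nonce(request_body, form_page_html):
    match = re.search(r'name="nonce"\s+value="([^"]+)"', form_page_html)
    if not match:
        return request_body                          # no nonce found: leave the request alone
    fresh_nonce = match.group(1)
    return re.sub(r"(nonce=)[^&]*", r"\g<1>" + fresh_nonce, request_body)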

In terms of the scope for this rule, this obviously needs to be defined more narrowly than the whole application domain. For example, we could define the rule to apply only to the exact URL in the above request. This is the best option if the application only employs anti-CSRF tokens in a few locations. However, in some applications, tokens are used for a large number of functions, and a token obtained within one function can be submitted within a different function. In this situation, we could define a rule that applies to the whole domain, but only to requests containing a specified parameter name. In this way, any time a request is made to the application that contains an anti-CSRF token, the rule will execute and Burp will fetch a new valid token to use in the request.

The full configuration, with its three session handling rules and three macros, looks like this within the main Burp UI:

You can test that the configuration is working by logging out of the application, sending the authenticated, token-protected request to Burp Repeater, and verifying that it performs the required action. The request will probably take longer to return than normal, because behind the scenes Burp is making several other requests, to validate your session, log in again if necessary, and obtain a token to use in the request. Once you are happy that all of this is working correctly, you can send the request to Burp Intruder or Scanner, to perform your automated testing in the normal way.

Thursday, March 24, 2011

Burp v1.4 preview - Macros

A key part of Burp's new session handling functionality is the ability to run macros, as defined in session handling rules. A macro is a predefined sequence of one or more requests. Typical use cases for macros include:

  • Fetching a page of the application (such as the user's home page) to check that the current session is still valid.

  • Performing a login to obtain a new valid session.

  • Obtaining a token or nonce to use as a parameter in another request.

  • When scanning or fuzzing a request in a multi-step process, performing the necessary preceding requests, to get the application into a state where the targeted request will be accepted.

Macros are recorded using your browser. When defining a macro, Burp displays a view of the Proxy history, from which you can select the requests to be used for the macro. You can select from previously made requests, or record the macro afresh and select the new items from the history.

When you have recorded the macro, the macro editor shows the details of the items in the macro, which you can review and configure as required:

As well as the basic sequence of requests, each macro includes some important configuration about how items in the sequence should be handled, and any interdependencies between items:

For each item in the macro, the following settings can be configured:

  • Whether cookies from the session handling cookie jar should be added to the request.

  • Whether cookies received in the response should be added to the session handling cookie jar.

  • For each parameter in the request, whether it should use a preset value, or a value derived from a previous response in the macro.

The ability to derive a parameter's value from a previous response in the macro is particularly useful in some multi-stage processes, and in situations where applications make aggressive use of anti-CSRF tokens. When a new macro is defined, Burp tries to automatically find any relationships of this kind, by identifying parameters whose values can be determined from the preceding response (form field values, redirection targets, query strings in links, etc.). You can easily review and edit the default macro configuration applied by Burp before the macro is used. Further, the configured macro can be tested in isolation, and the full request/response sequence reviewed, to check that it is functioning in the way you require.
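
As a rough illustration of this kind of analysis (only an approximation of what Burp does automatically), the following Python sketch flags parameters in the next request whose values can be found in a hidden form field or in a link or redirect query string within the preceding response:

# Given a preceding response and the parameters of the next request, flag any
# parameter whose value appears to be derived from that response.
import re

def derived_parameters(previous_response_text, next_request_params):
    derived = {}
    for name, value in next_request_params.items():
        hidden = re.search(
            r'name="%s"\s+value="([^"]*)"' % re.escape(name), previous_response_text)
        if hidden and hidden.group(1) == value:
            derived[name] = "hidden form field"
        elif "%s=%s" % (name, value) in previous_response_text:
            derived[name] = "query string in a link or redirect target"
    return derived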

Of course, the full power of using macros is only realised once they are incorporated into suitable session handling rules, to control the way that different requests are processed by Burp's tools, and work with the session handling mechanism and related functionality being used by the target application. As is perhaps already apparent, the configuration required in many real-world situations is complex, and mistakes are easily made. Ideally, there would be a full in-tool debugger for troubleshooting the session handling configuration. In the meantime, an effective workaround is to chain a second instance of Burp as an upstream proxy from the instance being configured. The proxy history in the upstream instance will show you the full sequence of requests and responses that occur when your session handling rules are executed, helping you to find and fix any problems in your configuration.

Wednesday, March 23, 2011

Burp v1.4 preview - Session handling

Some problems commonly encountered when performing any kind of fuzzing or scanning of web applications are:

  • The application terminates the session being used for testing, either defensively or for other reasons, and the remainder of the testing exercise is ineffective.

  • Some functions use changing tokens that must be supplied with each request (for example, to prevent request forgery attacks).

  • Some functions require a series of other requests to be made before the request being tested, to get the application into a suitable state for it to accept the request being tested.

All of these problems can also arise when you are testing manually, and resolving them manually is often tedious, reducing your appetite for further testing.

The second broad area of new functionality in Burp v1.4 is a range of features to help in all of these situations, letting you continue your manual and automated testing while Burp takes care of the problems for you in the background. This functionality is quite complex, with a lot of configuration to explain. We'll start by looking at the core session handling features.

Firstly, Burp's cookie jar, which was previously part of the Spider tool, is now more sophisticated and is shared between all tools. Cookies set in responses are stored in the cookie jar, and can be automatically added to outgoing requests. All of this is configurable so, for example, you can update the cookie jar for cookies received by the Proxy and Spider, and have Burp automatically add cookies to requests sent by the Scanner and Repeater. The cookie jar configuration is shown in the new "sessions" tab within the main "options" tab:

As shown, by default the cookie jar is updated based on traffic from the Proxy and Spider tools. You can view the contents of the cookie jar and edit cookies manually if you wish:

For all tools other than the Proxy, HTTP responses are examined to identify new cookies. In the case of the Proxy, incoming requests from the browser are also inspected. This is useful where an application has previously set a persistent cookie which is present in your browser, and which is required for proper handling of your session. Having Burp update its cookie jar based on requests through the Proxy means that all the necessary cookies will be added to the cookie jar even if the application does not update the value of this cookie during your current visit.

Burp's cookie jar honours the domain scope of cookies, in a way that mimics Internet Explorer's interpretation of cookie handling specifications. Path scope is not honoured.
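
Domain-only matching of this kind is simple to express; the Python sketch below (a simplification that ignores the many edge cases in real cookie handling) shows the basic check:

# Decide whether a cookie's domain attribute applies to a request host,
# ignoring path scope entirely, in the spirit described above.
def cookie_matches(cookie_domain, request_host):
    cookie_domain = cookie_domain.lstrip(".").lower()
    request_host = request_host.lower()
    return request_host == cookie_domain or request_host.endswith("." + cookie_domain)

# cookie_matches("example.org", "app.example.org")  -> True
# cookie_matches("example.org", "notexample.org")   -> False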

In addition to the basic cookie jar, Burp also lets you define a list of session handling rules, which give you very fine-grained control over how Burp deals with an application's session handling mechanism and related functionality. These rules are configured in the new "sessions" tab:

Each rule comprises a scope (what the rule applies to) and actions (what the rule does). For every outgoing request that Burp makes, it determines which of the defined rules are in-scope for the request, and performs all of those rules' actions in order (unless a condition-checking action determines that no further actions should be applied to the request).

The scope for each rule can be defined based on any or all of the following features of the request being processed:

  • The Burp tool that is making the request.

  • The URL of the request.

  • The names of parameters within the request.

Each rule can perform one or more actions. The following actions are currently implemented:

  • Add cookies from the session handling cookie jar.

  • Set a specific cookie or parameter value.

  • Check whether the current session is valid, and perform sub-actions conditionally on the result.

  • Run a macro.

  • Prompt the user for in-browser session recovery.

All of these actions are highly configurable, and can be combined in arbitrary ways to handle virtually any session handling mechanism. Being able to run arbitrary macros (defined request sequences), and update specified cookie and parameter values based on the result, allows you to automatically log back in to an application part way through an automated scan or Intruder attack. Being able to prompt for in-browser session recovery enables you to work with login mechanisms that involve keying a number from a physical token, or solving a CAPTCHA-style puzzle.

By creating multiple rules with different scopes and actions, you can define a hierarchy of behaviour that Burp will apply to different applications and functions. For example, on a particular test you could define the following rules:

  • For all requests, add cookies from Burp's cookie jar.

  • For requests to a specific domain, validate that the current session with that application is still active, and if not, run a macro to log back in to the application, and update the cookie jar with the resulting session token.

  • For requests to a specific URL containing the __csrftoken parameter, first run a macro to obtain a valid __csrftoken value, and use this when making the request.
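
The evaluation model behind a rule set like this is straightforward: for every outgoing request, each in-scope rule's actions run in order, and a condition-checking action can cut processing short. The Python sketch below models that loop (it is an illustration of the concept, not Burp's implementation):

# Each rule is modelled as (in_scope, actions), where in_scope is a predicate
# over the request and each action returns False to stop further processing.
def apply_session_rules(request, rules):
    for in_scope, actions in rules:
        if not in_scope(request):
            continue
        for action in actions:
            keep_going = action(request)   # e.g. add cookies, run a macro,
            if keep_going is False:        # check that the session is valid, ...
                return request             # stop applying further actions
    return request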

We'll be examining some more details of how this functionality works shortly. In the meantime, it is worth noting a few points about how the new session handling features affect some of Burp's existing functionality:

  • There is a default session handling rule which updates all requests made by the Scanner and Spider with cookies from Burp's cookie jar. This changes the session handling of requests made by the Scanner. Previously, these requests were always made within the same session as the relevant base request - in other words, cookies appearing in the request that was sent to be scanned were used for all scan requests for that item. Now, different cookies may be used if Burp's cookie jar has been updated since the request was sent to be scanned. This change is normally desirable, as it ensures that all scan requests are made in-session, provided you maintain a valid session using your browser. It also means that items in the active scan queue that are loaded from a state file will be scanned within your current session, not the session that was active when the state file was saved. If this is not the behaviour you require, you should disable the default session handling rule before performing any scanning.

  • In previous versions of Burp, the Spider configuration included some basic settings for using cookies from the cookie jar. These settings have now been removed, and the Suite-wide session handling functionality should be used instead. The previously default behaviour for the Spider's handling of cookies is replicated by the new default session handling rule, as described above.

  • Similarly, Intruder previously included an option to run a specified request before each attack request, to obtain a new cookie to use in the attack request. This option has now been removed, and instead you should define a session handling rule to run a macro before each request (or, more likely, use some more elegant configuration to achieve the same objective).

  • In cases where session handling rules modify a request before it is made (for example, to update a cookie or other parameter), some of Burp's tools will show the final, updated request, for purposes of clarity. This applies to the Intruder, Repeater and Spider tools. Requests that are shown within reported Scanner issues continue to show the original request, to facilitate clear comparison with the base request, where necessary. To observe the final request for a scan issue, as modified by the session handler, you can send the request to Burp Repeater and issue it there.

  • When the Scanner or Intruder makes a request that manipulates a cookie or parameter that is affected by a session handling action, the action is not applied to that request, to avoid interfering with the test that is being performed. For example, if you are using Intruder to fuzz all the parameters in a request, and you have configured a session handling rule to update the "sessid" cookie in that request, then the "sessid" cookie will be updated when Intruder is fuzzing other parameters. When Intruder is fuzzing the "sessid" cookie itself, Burp will send the Intruder payload string as the "sessid" value, and will not update it in the normal way.

As I said, the new session handling is powerful and complex. In the next couple of posts, we'll look at how it works in more detail.

Tuesday, March 22, 2011

Burp v1.4 preview - Testing access controls using your browser

In the previous post, we described how the "compare site maps" feature can be used to automate much of the laborious work involved in testing access controls. In some situations, however, performing a wholesale comparison like this may not meet your needs. It may be that you prefer to work in a more piecemeal way, individually testing the controls over a small number of key requests. Further, where a sensitive action is performed using a multi-stage process, simply re-requesting an entire site map in a different user context may be ineffective. In this situation, to perform an action, the user must typically make several requests in the correct sequence, with the application building up some state about the user’s actions as they do so. Simply re-requesting each of the items in a site map may fail to replicate the process correctly, and so the attempted action may fail for reasons other than the use of access controls.

For example, consider an administrative function to add a new application user. This may involve several steps, including loading the form to add a user, submitting the form with details of the new user, reviewing these details, and confirming the action. In some cases, the application may enforce access controls over some of these steps but not others. For example, the application may protect access to the initial form, but fail to protect the page that handles the form submission, or the confirmation page. The overall process may involve numerous requests, including redirections, with parameters submitted at earlier stages being retransmitted later via the client side. Each step of this process needs to be tested individually, to confirm whether access controls are being correctly applied.

The new version of Burp contains several features to facilitate this kind of testing. Firstly, when you are testing multi-stage functions in different user contexts, it is often helpful to review side-by-side the sequences of requests that are made by different users, in order to identify subtle differences that may merit further investigation. Burp now lets you:

  • Filter the proxy history based on the local proxy listener.

  • Open additional proxy history windows, each with its own filter.

To use these features to help test access controls, you need to use a separate browser for each user context you are testing, and create a separate proxy listener in Burp for use by each browser (you will need to update your proxy configuration in each browser to point to the relevant listener). For each browser, you can then open a separate proxy history window in Burp, and set the filter to show only requests from the relevant proxy listener. As you use the application in each browser, each history window will show only the items for the associated user context.

To open a new proxy history window, use the context menu on the main proxy history tab:

You can then use the filter bar on each window to show requests from different proxy listeners:

So far, this just gives you a way of separating the series of requests made by different user contexts. To actually evaluate access controls over a multi-stage process, you need to test each stage individually, reissuing the request in a different user context, and determining how this is handled by the application. This can often be a laborious process, and involves walking through the multi-stage process repeatedly using your browser, intercepting requests using the proxy, and updating the cookie in one or more requests so that they are issued in a different user context.

A further feature in the new version of Burp takes away some of this pain. It lets you select a request anywhere within Burp, and reissue the identical request from within your browser, within the browser's current session with the application. Using this feature, you can test access controls over a multi-stage process in the following way:

  • In your Burp proxy history, find the series of requests that are made when a higher privileged user performs the multi-stage process.

  • For each request, select the "request in browser in current session" option from the context menu. This gives you a unique URL to paste into your browser.

  • Paste the URL into the address bar of a browser that is logged in to the application as a lower privileged user who should not be able to perform the action. (This browser must be configured to use Burp as its proxy.)

  • If the application lets you, follow through the remainder of the multi-stage process in the normal way, using your browser.

  • Review the results to determine whether the lower privileged user was successful in performing the action.

This methodology, of manually pasting a series of URLs from Burp into your browser, is normally a lot easier than repeating a multi-stage process over and over, and modifying cookies manually using the proxy. In general, the new feature provides a highly efficient means of grabbing a request from the site map, or proxy history, or anywhere else, and repeating the request within your current browser session, to see how it is handled.

For people who are interested, the new feature is implemented as follows. When your browser requests the unique URL provided by Burp, Burp returns a redirection to the URL in the original request that you selected. When your browser follows the redirection, Burp replaces the outgoing request with the full request that you originally selected (headers and body), with the exception of the Cookie header, which is unmodified. When the browser receives the application's response to this request, it processes it in the context of the original URL, so all relative links, DOM access, etc. work correctly. Neat, huh?
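
To make the substitution step concrete, here is a small Python sketch (hypothetical data structures, not Burp code): the stored request replaces everything in the outgoing request except the Cookie header, which is left as the browser sent it:

# "outgoing" is the request the browser makes after following the redirect;
# "stored" is the request originally selected in Burp. Both are modelled here
# as simple dicts purely for illustration.
def rewrite_outgoing_request(outgoing, stored):
    browser_cookie = outgoing["headers"].get("Cookie")   # keep the browser's own session
    rewritten = {
        "method": stored["method"],
        "url": stored["url"],
        "headers": dict(stored["headers"]),
        "body": stored["body"],
    }
    if browser_cookie is not None:
        rewritten["headers"]["Cookie"] = browser_cookie
    return rewritten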

Monday, March 21, 2011

Burp v1.4 preview - Comparing site maps

Somewhat later than planned, as is customary, Burp v1.4 is nearly ready, and it's time to share with you the highlights of what is coming. This release focuses on a small number of frequently requested features which, though you may not use them every day, can in some situations really make your life easier. Over the next few days, I'll be blogging about different features, to whet your appetite. Then I'll release a beta version for Pro users to play with. Everyone with a current license will receive an automatic upgrade to the new version.

The first broad area of new functionality in Burp v1.4 is various features to help test access controls. Fully automated tools generally do a terrible job of finding access control vulnerabilities, because they do not understand the meaning or context of the functionality that is being tested. For example, an application might contain two search functions - one that returns extracts from recent news articles, and another that returns sensitive details about registered users. These functions might be syntactically identical - what matters when evaluating them is the purpose of each function and the nature of the information involved. These factors are way beyond the wit of today's automated tools.

Burp does not try to identify any access control bugs by itself. Instead, it provides ways of automating much of the laborious work involved in access control testing, and presents all of the collected information in a clear form, allowing you to apply your human understanding to the question of whether any actual vulnerabilities exist.

One exciting new feature to help with access control testing is the facility to compare two site maps and highlight differences. This feature can be used in various ways to help find different types of access control vulnerabilities, and identify which areas of a large application warrant close manual inspection. Some typical use-cases for this functionality are as follows:

  • You can map the application using accounts with different privilege levels, and compare the results to identify functionality that is visible to one user but not the other.

  • You can map the application using a high-privileged account, and then re-request the entire site map using a low-privileged account, to identify whether access to privileged functions is properly controlled.

  • You can map the application using two different accounts of the same type, to identify cases where user-specific identifiers are used to access sensitive resources, and determine whether per-user data is properly segregated.

You can access the new feature using the context menu on the main site map:

This opens a wizard that lets you configure the details of the site maps you want to compare, and how the comparison should be done. When selecting the site maps you want to compare, the following options are available:

  • The current site map that appears in Burp's target tab.

  • A site map loaded from a Burp state file that you saved earlier.

  • Either of the above, re-requested in a different session context.

You can choose to include all of the site map's contents, or you can restrict it to only selected or in-scope items. If you choose to re-request a site map in a different session context, it is particularly important not to include requests that might disrupt that context - for example, login, logout, user impersonation functions, etc.

To perform the comparison, Burp works through each request in the first site map, and matches this with a request in the second site map, and vice versa. The responses to matched requests are then compared to identify any differences. Any unmatched items in either site map are flagged as deleted or added, respectively. The exact process by which this is done is highly configurable, allowing you to tailor the comparison to features of the target application.

The options for configuring how Burp matches requests in the two site maps are shown below:

The default options shown will work well in most situations, and match requests based on URL file path, HTTP method and the names of parameters in the query string and message body. For some applications, you will need to modify these options to ensure that requests are correctly matched. For example, if an application uses the same base URL for various different actions, and specifies the action using the values of query string parameters, you will need to match requests on the values of these parameters as well as their names.
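
The default matching behaviour is easy to express as a key function. The Python sketch below (an illustration of the idea, not Burp's code) builds a matching key from the HTTP method, URL file path, and the names of query string and body parameters:

# Two requests are treated as matching if they produce the same key.
from urllib.parse import urlsplit, parse_qsl

def match_key(method, url, body=""):
    parts = urlsplit(url)
    param_names = sorted(
        {name for name, _ in parse_qsl(parts.query)} |
        {name for name, _ in parse_qsl(body)}
    )
    return (method.upper(), parts.path, tuple(param_names))

# match_key("GET", "https://mdsec.net/search?term=x") ==
# match_key("GET", "https://mdsec.net/search?term=y")   -> True (names match, values differ)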

The options for configuring how Burp compares the responses to matched requests are shown below:

Again, the default options will work in most situations. These options ignore various common HTTP headers and form fields that have ephemeral values, and also ignore whitespace-only variations in responses. The default options are designed to reduce the noise generated by inconsequential variations in responses, allowing you to focus attention on differences that are more likely to matter.
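
Conceptually, the comparison first normalises each response and then diffs what remains. The Python sketch below shows one way to express that; the list of ignored headers is an illustrative assumption, not Burp's actual list:

# Strip headers whose values are typically ephemeral, collapse whitespace-only
# variations in the body, and compare what is left.
import re

EPHEMERAL_HEADERS = {"date", "set-cookie", "expires", "etag", "last-modified"}

def normalise_response(headers, body):
    kept = {k.lower(): v for k, v in headers.items()
            if k.lower() not in EPHEMERAL_HEADERS}
    return kept, re.sub(r"\s+", " ", body).strip()

def responses_differ(resp_a, resp_b):
    return normalise_response(*resp_a) != normalise_response(*resp_b)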

The results of a simple site map comparison are shown below. This shows an application that has been mapped out with administrative privileges, and the resulting site map re-requested with user-level privileges. The results contain a colourised analysis of the differences between the site maps, and show items that have been added, deleted or modified between the two maps. (In this case, since the whole of the first site map was re-requested, there are no added or deleted items in the maps themselves.) For modified items, the table includes a “diff count” column, which is the number of edits required to modify the item in the first map into the item in the second map. When you select an item, the corresponding item in the other site map is also selected, and each response is highlighted to show the locations of the differences:

Interpreting the results of the site map comparison requires human intelligence, and an understanding of the meaning and context of specific application functions. For example, the screenshot above shows the responses that are returned to each user when they view their home page. The two responses show a different description of the logged-in user, and the administrative user has an additional menu item. These differences are to be expected, and they are neutral as to the effectiveness of the application’s access controls, since they only concern the user interface.

The screenshot below shows the response returned when each user requests the top-level admin page. Here, the administrative user sees a menu of available options, while the ordinary user sees a “not authorised” message. These differences indicate that access controls are being correctly applied:

The screenshot below shows the response returned when each user requests the “list users” admin function. Here, the responses are identical, indicating that the application is vulnerable, since the ordinary user should not have access to this function and does not have any link to it in their user interface:

As this example shows, simply exploring the site map tree and looking at the number of differences between items is not sufficient to evaluate the effectiveness of an application’s access controls. Two identical responses may indicate a vulnerability (for example, in an administrative function that discloses sensitive information), or may be harmless (for example, in an unprotected search function). Conversely, two different responses may still mean that a vulnerability exists (for example, in an administrative function that returns different content each time it is accessed), or may be harmless (for example, in a page showing profile information about the currently logged-in user). All of these scenarios may coexist even in the same application. This is why fully automated tools are so ineffective at identifying access control vulnerabilities.

So Burp does not relieve you of the task of closely examining the application's functionality, and evaluating whether access controls are being properly applied in each case. What the site map comparison feature does is to automate as much of the process as possible, giving you all the information you need in a clear form, and letting you apply your knowledge of the application’s functionality to identify any actual vulnerabilities.