
Burp Suite, the leading toolkit for web application security testing

PortSwigger Web Security Blog

Sunday, 30 November 2008

[MoBP] Pro beta version now available


A beta version of the new release of Burp Suite Professional is now available to licensed users. The free edition will be made available in two or three weeks' time. If you just can't wait that long to get your hands on the new Burp, there is an easy solution!

If you bought or renewed your Burp license within the last year, you should today have received the new beta. If you think you have missed out, or have any other licensing questions, please email us directly. If you have feedback about the beta, including bugs, either email us or use the comments below.

There are still one or two bugs that we are aware of, and presumably many that we are not, so today will see a freeze on functionality changes, and a focus on ironing out any glitches. The final edition will be released just as soon as we are happy with it, but the beta release is highly stable and suitable for day-to-day working right away. Have fun!

Saturday, 29 November 2008

[MoBP] Saving and restoring state

Here is a feature that has been frequently requested, and which, like Burp Scanner, will be restricted to the professional edition of Burp.

The new version of Burp lets you save the state and configuration of the key tools, and restore this on another occasion. This facility is of huge benefit to penetration testers, enabling you to seamlessly resume yesterday's work, perform backups of key information throughout a job, and take a complete archive of the information accumulated at the end of an engagement. You can also create as many customised configurations of suite tools as you want, and save each configuration separately for reuse on future occasions.

The items that can be saved include:
  • The target site map, which includes all of the content discovered via the proxy and spider.
  • The proxy history.
  • The issues identified by the scanner.
  • The contents and histories of the repeater tabs.
  • The configuration of all suite tools.
The save and restore process is fully configurable, and really easy. First off, you select "save state" from the Burp menu:

[screenshot]

This launches a wizard which lets you choose which items you want to save the state and configuration of:

[screenshot]

You then choose your output file, and Burp does the rest:

[screenshot]

To restore previously saved state and configuration, you select "restore state" from the Burp menu. Each save file can include the state and configuration for any combination of tools, and Burp lets you choose which tools you want to restore, and how to do it (whether to add to or replace their existing state):

[screenshot]

Again, Burp goes to work and restores everything you have selected:

[screenshot]

In our testing, this new feature is working beautifully. Obviously, the files can grow pretty large, because they include the full requests and responses accumulated within the tools you are saving. A few hours' testing will typically save or restore in a minute or two. You can make this process leaner and quicker by deleting unneeded items from the site map and proxy history before performing a save.

I hope that everyone who has this feature will get into the habit of using it. Picking up exactly where you left off the night before is a time saver. Being able to re-open your work from a completed job, to answer a client question or re-test a fixed issue, is a real benefit. If a colleague has fully enumerated an application's content, they can save this out for you to add to your own instance of Burp, saving you from duplicating the effort. If you have worked on a problem for a while and got stuck, you can pass your entire work to someone else to think about. And team leaders can optimise Burp's configuration for a particular engagement, including fine-grained target scope definition, and pass this configuration straight to other team members to begin testing. You can create configuration templates designed for different kinds of task, and switch between these easily. There are probably plenty of other applications of this new functionality - let me know if you think of any.

Friday, 28 November 2008

[MoBP] Improved memory handling

A significant implementation challenge in the creation of software like Burp concerns efficient memory handling, and resilience in the face of low memory conditions. As the software becomes more functional, this challenge grows in importance.

If you use Burp in anger for a few hours, it will typically generate thousands of HTTP requests and responses, and sometimes many times more. Keeping track of this raw data, and the associated information that is analysed from it, requires a huge amount of storage. The current release of Burp already makes extensive use of temporary files for persisting raw HTTP messages, and in-memory structures are designed to be as lean and non-duplicative as possible. However, the new release grows up a little in its memory handling, providing you with feedback about actual or potential memory problems, and recovering from memory allocation failures wherever possible.

If you start Burp by double-clicking the JAR file, the Java runtime will allocate a relatively modest memory heap, which is usually way smaller than your computer is capable of supporting. In future, Burp will alert you to this fact, and remind you of the command line switches which you can use to allocate a larger heap:
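
For example, a command line like the following (adjusting the jar filename to match your own copy, and the heap size to suit your machine) starts Burp with a 512Mb maximum heap:

java -Xmx512m -jar burp.jar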

As well as encouraging you to use more memory in the first place, Burp also alerts you to any memory problems that arise at runtime, enabling you to save* your work while you are still able to:

Now, you might suppose that memory allocation failures are a pretty fatal condition, indicating an imminent crash or loss of functionality. But they needn't be. Many web applications include a few downloadable resources that are huge relative to a typical response - things like flash movies, ZIP archives, office documents, etc. If you cause Burp to request and process these (for example, by requesting them via the proxy, or spidering them), then Burp will ask the Java runtime for large amounts of memory. If you do this enough in close succession, then the runtime will start to reject these requests. When this happens, Burp's handling of the affected item will obviously fail. But other, more modestly sized, memory requests will normally succeed, and so other items can still be processed as normal. In the new release, Burp is much more defensive in catching failed memory allocations, keeping the affected thread alive to see another day. Whereas in the past, a critical component like the proxy request queue might have crashed on memory failure, leaving behind only a command line stack trace, in future only individual requests will be lost, triggering an alert that memory is low, and allowing you to take appropriate action.

*See tomorrow's post.
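
To illustrate the general approach, here is a simplified sketch of catching a failed allocation so that only the affected item is lost. This is not Burp's actual code - QueueItem, processItem() and alert() are hypothetical stand-ins:

// Simplified sketch - not Burp's actual code.
// QueueItem, processItem() and alert() are hypothetical stand-ins.
private void handleQueuedRequest(QueueItem item)
{
    try
    {
        // May allocate large buffers if the response is huge.
        processItem(item);
    }
    catch (OutOfMemoryError e)
    {
        // Drop only this item; the thread and the rest of the queue survive.
        alert("Memory low: failed to process " + item.getUrl());
    }
}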

Thursday, 27 November 2008

[MoBP] Flexible redirection

Burp currently lets you configure Intruder and Repeater to automatically follow redirects to targets on the same host and port. This restriction to on-site URLs was implemented to prevent you from inadvertently attacking third-party web sites, by following off-site redirectors which relayed your attack payloads in the redirection URL.

However, you often need more flexibility than this restriction allows. For example, if you are performing a brute force attack against a login request which uses HTTPS, you may be redirected to an HTTP URL to receive the login result. Alternatively, if the application uses single sign-on, you may submit credentials to a central authentication host, and need to follow a redirection back to the actual application to determine the result of each login attempt. In larger applications, you may be redirected between different hosts for all kinds of reasons.

In other situations, you may only be following redirects on the same host, but need more fine-grained control over which you follow. For example, if you are fuzzing an interesting function, you may be redirected sometimes to a dynamic error page, and sometimes to the logout function. You will want Intruder to follow redirects to the error page, to harvest error messages, but avoid redirects to the logout function, which would kill your session.

In the new release, Burp gives you full control over how redirects are followed, enabling you to work effectively in these situations. You can tell Burp to follow only on-site redirects, or only in-scope redirects, or to follow every redirect, or none of them:

The "in-scope" option is normally the easiest way to ensure that you follow relevant redirects while avoiding inappropriate ones. If you have configured the suite-wide target scope correctly, then simply selecting this option within Repeater or Intruder will be right for you, and Burp will only follow redirects to the hosts and URLs that you are willing to attack.

Wednesday, 26 November 2008

[MoBP] Deflate support

Burp has always been able to unpack GZIP-encoded responses, but for some reason never supported deflate encoding, which you see occasionally. Joe Hemler wrote a handy plugin to do the job, but the new release will support this encoding natively. So while before you would have seen this:

in future you will see this:

If time permits, the Decoder will also be updated to unpack deflate-encoded content for you.

Tuesday, 25 November 2008

[MoBP] Burp Extender extended

Burp Extender is an interface which allows third-party code to extend Burp's functionality. As it currently stands, the interface is fairly basic, but several people have used it to do cool stuff. I would like to see this interface get a lot more sophisticated, and the new release sees a step in this direction.

The IBurpExtender interface now has a new method:

public void registerExtenderCallbacks(burp.IBurpExtenderCallbacks callbacks);

This is invoked on startup, and passes to implementations an instance of the new IBurpExtenderCallbacks interface, which provides methods that may be used by the implementation to perform various actions. The IBurpExtenderCallbacks interface currently looks like this, but it may change slightly before release:

package burp;

public interface IBurpExtenderCallbacks
{
    public byte[] makeHttpRequest(
            String host,
            int port,
            boolean useHttps,
            byte[] request) throws Exception;

    public void sendToRepeater(
            String host,
            int port,
            boolean useHttps,
            byte[] request,
            String tabCaption) throws Exception;

    public void sendToIntruder(
            String host,
            int port,
            boolean useHttps,
            byte[] request) throws Exception;

    public void sendToSpider(
            java.net.URL url) throws Exception;

    public void doActiveScan(
            String host,
            int port,
            boolean useHttps,
            byte[] request) throws Exception;

    public void doPassiveScan(
            String host,
            int port,
            boolean useHttps,
            byte[] request,
            byte[] response) throws Exception;

    public void issueAlert(String message);
}

As you can see, the new methods enable you to pro-actively interface with several of the Burp tools. Note that the new way of making HTTP requests replaces the old, rather clunky method, so anyone who has used the old method will need to tweak their code a little.
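
To make this concrete, here is a minimal sketch of an extension using the new callbacks. It assumes the usual Burp Extender convention of a public class named BurpExtender in the burp package, and uses a placeholder host and request - it is an illustration, not production code:

package burp;

public class BurpExtender
{
    // Burp invokes this method on startup, supplying the callbacks object.
    public void registerExtenderCallbacks(IBurpExtenderCallbacks callbacks)
    {
        try
        {
            byte[] request = "GET / HTTP/1.0\r\n\r\n".getBytes();

            // Issue a request directly and examine the raw response.
            byte[] response = callbacks.makeHttpRequest(
                    "www.example.org", 80, false, request);

            // Queue the same request in a named Repeater tab for manual follow-up.
            callbacks.sendToRepeater(
                    "www.example.org", 80, false, request, "example");

            callbacks.issueAlert("Extension loaded: received "
                    + response.length + " bytes");
        }
        catch (Exception e)
        {
            callbacks.issueAlert("Extension error: " + e);
        }
    }
}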

The next phase of development for Burp Extender will see several new ways in which Burp can call out to your code, to enable custom implementations of key tasks. Unfortunately, these are unlikely to make an appearance in the forthcoming release, but they are on the list for the future:

  • Methods like the existing processProxyMessage method, but for other tools. These will enable custom modifications of HTTP requests and responses. So, for example, if you are using Intruder against an application which employs an unusual session handling mechanism, you can write a request preprocessor to log in a new session and add the relevant tokens to the request.

  • When sending a request to Intruder, a method to specify custom payload positions. This will enable implementations to handle custom data formats and place attack payloads in locations that are applicable to that format.

  • During Intruder attacks, a method to perform custom manipulation of attack payloads prior to placement in requests. Also a method to process responses and return custom result information which will appear in the attack UI.

  • When the Spider is parsing response content for links, a method to perform custom parsing and return a list of discovered URLs.

If anyone has further suggestions for extensibility, do let me know.

Monday, 24 November 2008

[MoBP] Alert overload

While we're on the subject of network errors, take another look at the alerts table in yesterday's post. Pretty messy, yeah? Although it's essential to know that your work has run into problems, you probably don't need to be told about the same error several hundred times per second.

We've seen already how the new release of Burp will help you filter out unnecessary noise, and the alerts table is another example. In future, when an alert is received which matches the previous alert, the two are simply aggregated, and a counter is shown. You can determine the nature, time and extent of the problem, without your UI filling with thousands of table entries. Here is what the alerts table now looks like when the target server stops responding during an Intruder attack:

Sunday, 23 November 2008

[MoBP] Windows socket exhaustion

If you've used Burp Sequencer or Intruder much on Windows, the chances are you've encountered this problem:

It happens when you are issuing a large number of requests in quick succession, as Sequencer and Intruder are designed to do. Windows seems to be running out of sockets for you to connect with, resulting in connection failures. The cause is that when you call close() on a Windows socket, the OS leaves it lingering in a waiting state for a short period. If you open and close a huge number of sockets, even in a single thread, then you can exhaust the pool of available sockets, because they are all in an open state waiting to close. If you run the command netstat -an, you will see something like this - screen after screen of connections stuck in the TIME_WAIT state:

Now, you can apparently tweak the Registry to prevent this behaviour, but I bet hardly anyone does so. The good news is that in the new release, Burp detects the problem and deals with it for you, by rapidly throttling back its requests. When normal network service is resumed, usually in under a minute, Burp reissues the requests that failed and picks up where it left off. This is particularly beneficial in Intruder - for example, if you are performing a lengthy data harvesting exercise, you don't want to worry about large chunks of your attack getting lost because of network problems. In future, you won't have to.
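
For reference, the Registry values usually cited for this tweak live under the TCP/IP parameters key (defaults vary by Windows version, so treat these as indicative):

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
    TcpTimedWaitDelay   DWORD - seconds a closed socket lingers (default 240)
    MaxUserPort         DWORD - highest ephemeral port available (default 5000)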

Saturday, 22 November 2008

[MoBP] Scan optimisation

I guess some people are getting bored of hearing about how Burp Scanner is going to solve all of the world's ills, so this will be the last post on the subject. There are still a few other rabbits to pull out of the hat during the rest of the month.

One of my complaints about existing scanners was that they give you limited feedback or control over what is happening during scans. Burp tries to address this problem in a few ways.

First, take another look at the active scan queue:

For each request that is being scanned, you can monitor the progress of that individual request. The table shows you the number of "insertion points" where Burp is placing payloads, and the number of attack requests that have been generated. The latter is not a linear function of the former - observed application behaviour feeds back into subsequent attack requests, just as it would for a human tester.

This information lets you quickly see whether any of your scans are progressing too slowly, and understand the reasons why. Given this information, you can then take action to optimise your scans. Within the scan queue, there is a context menu which lets you cancel or re-prioritise individual items. You can also tweak the scanner configuration based on what you have learnt about the application.

A key factor in the speed and effectiveness of scans is the selection of attack insertion points. Burp gives you fine-grained control over the locations within the base request where attack payloads will be placed. You can tell Burp to skip time-consuming server-side injection tests for specific parameters which you believe are not vulnerable (client-side tests like XSS are always performed because they impose minimal overhead for non-vulnerable parameters). You can also tell Burp to intelligently select attacks based on the base values of parameters - for example, if a parameter's value contains characters that don't normally appear in filenames, Burp will skip file path traversal checks. The UI for all of this configuration looks like this:

The rest of the Scanner configuration lets you fine-tune other aspects of attacks. You can configure the size of the scanning thread pool, and the way Burp should handle dropped connections. You can also turn on or off each individual category of check. So if you know that an application does not use any LDAP, you can turn off LDAP injection tests. Or you can configure Burp to do a quick once-over of an application, checking only for XSS and SQL injection in URL and body parameters, before returning later to carry out more comprehensive testing. The key point is that you can see everything that the Scanner is doing, and control exactly how its actions should be applied to different areas of the application.

Friday, 21 November 2008

[MoBP] Bespoke vulnerability advisories

When Burp Scanner finds an issue, it generates a fully customised advisory containing all relevant detail about the vulnerability, and how to reproduce it. This is in a format, and level of detail, that you can copy directly into a penetration testing report if you desire.

Let's see an example. Below, Burp has found a reflected XSS vulnerability:

The advisory tells us:

  • The request parameter in which the attack input is supplied (SearchTerm).

  • The syntactic context in which the input is returned in the response (within a piece of JavaScript, in a single-quote-delimited string).

  • That the application escapes any single quote characters in our input, but fails to escape the backslash, allowing us to circumvent the filter.

  • The exact proof-of-concept payload which Burp submitted to the application, and the form in which this payload was returned.

  • That the original request used the POST method, and Burp was able to convert this to a GET request to facilitate demonstration and exploitation of the issue.

The advisory also provides some custom remediation advice, based on the observed features of the vulnerability. And in addition to the customised content, the advisory includes a "standard" description of the issue, and general defences for preventing it, enabling a less knowledgeable report reader to understand the nature of the vulnerability:

Alongside the advisory, Burp shows the requests and responses that were used to identify the issue, with relevant portions highlighted. These can be sent directly to other tools to manually verify the issue, or fine-tune the proof-of-concept attack that was generated by Burp:

When you have finished testing, you can export a report of vulnerability advisories in HTML format. To do this, you select the desired issues from the aggregated results display (you can multi-select individual hosts, folders, issues, etc.) and select "report issues" from the context menu. The reporting wizard lets you choose screen- or printer-friendly output, the level of issue description and remediation to include, whether to show request and response details in full, or extracts, or not at all, and whether to organise issues by type, severity or URL. Here is the report extract for the issue illustrated above, with all detail turned on, and showing extracts of application responses in printer-friendly format:

Thursday, 20 November 2008

[MoBP] Live scanning as you browse

On Monday, I described one way in which Burp Scanner can integrate with the actions you carry out when testing an application - you can select individual requests and send them for active or passive scanning. There are several other ways too.

You can configure Burp Scanner to automatically scan selected requests while you are browsing an application. When running in this mode, each unique request (based on URL and parameter names) that you make via Burp Proxy is sent for scanning without any action by you. You can configure different settings for active and passive scanning, and you can use the suite-wide target scope, or define a custom scope for each kind of scan. Below, we have configured Burp to actively scan every request we make to www.myapp.com, with the exception of login requests, and to passively scan every request we make to any destination whatsoever:

When you use the live scanning feature, you will see the scanner tab flash each time a vulnerability is identified (with a colour indicating the severity of the issue). All you need to do is browse around the application's content in the normal way, and Burp will check for vulnerabilities whose detection can be reliably automated, leaving you to focus on test activities that require human intelligence to perform.

Configuring Burp to perform live passive scanning of every request you make is particularly interesting. As you browse around random sites on the web, you will see the scanner tab constantly flashing with issues that have been identified without sending a single malicious request:

A further way in which you can initiate scans against interesting targets is via the target site map. After you have browsed around an application, and built up the site map with all of its content, you can select hosts and folders within the tree view to perform active or passive scans of the entire branch. Or you can select multiple items within the table view to do the same:

Used in the ways described, Burp Scanner gives you fine-grained control over everything that it does, and fits right in to your existing testing activities. It lets you prioritise areas of an application that interest you, by browsing them using live scanning, or selecting them for scanning from the site map. And it provides immediate feedback about those areas to inform your manual testing actions.

Wednesday, 19 November 2008

[MoBP] Passive vulnerability scanning

Burp Scanner divides the checks it performs into active and passive checks. With active checks, Burp sends various crafted requests to the application, derived from the base request, and analyses the resulting responses looking for vulnerable behaviour. With passive checks, Burp doesn't send any new requests of its own - it merely analyses the contents of the base request and response, and deduces vulnerabilities from those.

There are numerous issues which can be identified using solely passive techniques, including:

  • Clear-text submission of passwords.

  • Insecure cookie attributes, like missing HttpOnly and secure flags.

  • Liberal cookie scope.

  • Cross-domain script includes and Referer leakage.

  • Forms with autocomplete enabled.

  • Caching of SSL-protected content.

  • Directory listings.

  • Submitted passwords returned in later responses.

  • Insecure transmission of session tokens.

  • Leakage of information like internal IP addresses, email addresses, stack traces, etc.

  • Insecure ViewState configuration.

  • Ambiguous, incomplete, incorrect or non-standard Content-type directives.

Burp checks passively for all of these issues, and more. Many of them are relatively unexciting, and recording them is pretty dull and repetitive for a human. But as penetration testers we are obliged to report them. Having an automated tool to reliably mop up these issues as you browse an application is a time and sanity saver. (By the way, if you don't think your clients need these kinds of low-rent issues reported to them, then read this.)

Being able to carry out passive-only vulnerability scans is beneficial in a range of situations. Passive scans won't send any new requests to the application. If you are testing a critical production application, you may want total control of every request that you send. You don't want an automated scanner running amok and knocking things over. So you can use passive scanning to pick up a wide range of issues, while probing manually for those that require active interaction with the application.

Similarly, some applications are aggressive in reacting to attacks, by terminating your session or locking your account every time an apparently malicious request is received. In this situation, you may be restricted to piecemeal manual testing, but you can still use passive scanning to pick up a number of issues without causing any problems.

Finally, if you don't (yet) have authorisation to attack a target, you can use passive scanning to identify vulnerabilities purely by browsing the application as a normal user. For example, if you are putting together a proposal for a new penetration testing engagement, then knowing in advance that you already have a dozen reportable issues in the bag can give you some comfort that you won't be looking into a bare cupboard when you start the work.

Tuesday, 18 November 2008

[MoBP] Can we automate? Yes we can

Anyone who has read chapter 19 of The Web Application Hacker's Handbook knows what I think about the limitations of automated vulnerability discovery. And I haven't had a Damascene conversion to a blind faith in the virtues of web scanners. So Burp Scanner was designed with a clear awareness of the kinds of issues that scanners can reliably look for.

The issues that Burp Scanner is able to identify mostly fall into three categories:

  1. Input-based vulnerabilities targeting the client side, like XSS, CRLF injection, open redirection, etc.

  2. Input-based vulnerabilities targeting the server side, like SQL injection, OS command injection, file path traversal, etc.

  3. Non-input-based vulnerabilities which can be deduced directly by inspecting application requests and responses, like clear-text password submission, insecure cookie attributes, information leakage, etc.

Issues in category #1 can typically be detected by automated scanners with a very high degree of reliability. In most cases, everything that is relevant to finding the bug is visible on the client side. For example, with reflected XSS, the scanner can submit some benign input to the application, and look for this being echoed in responses. If it is echoed, the scanner can parse the response content to determine the context(s) in which the echoed input appears. It can then supply various modified inputs to determine whether strings that constitute an attack in those contexts are also echoed. If the scanner has knowledge of the wide range of broken input filters, and associated bypasses, that arise with web applications, it can check for all that apply to the context. By implementing a full decision tree of checks, driven by feedback from preceding checks, the scanner can effectively emulate the actions that a skilled and methodical human tester would perform. The only bugs that the scanner will miss are those with some unusual feature requiring intelligence to understand, such as a custom scheme for encapsulating inputs.
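
In outline, that decision tree looks something like the following sketch. All of the helper names here are hypothetical - this is an illustration of the approach, not Burp's implementation:

// Hypothetical sketch of feedback-driven reflected XSS checking.
// submit(), findEchoes(), attackStringsFor() and reportIssue() are stand-ins.
String probe = "xsk93b1q"; // benign marker, unlikely to occur naturally
HttpResponse baseline = submit(baseRequest.withValue(param, probe));

// Determine each syntactic context in which the probe is echoed.
for (EchoContext context : findEchoes(baseline, probe))
{
    // Try attack strings appropriate to that context, including filter bypasses.
    for (String attack : attackStringsFor(context))
    {
        HttpResponse response = submit(baseRequest.withValue(param, attack));
        if (findEchoes(response, attack).contains(context))
            reportIssue(param, context, attack, response); // attack syntax survives
    }
}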

Issues in category #2 are inherently less amenable to automated detection, because in many cases the behaviours that are relevant to identifying the bugs occur only on the server, with little manifestation on the client side. For example, SQL injection bugs may return nice database errors in responses, or they may be fully blind. Burp employs various techniques to identify blind server-side injection issues, by inducing time delays, changing boolean conditions and performing fuzzy response diffing, etc. These techniques are inherently more error prone than the methods that are available in category #1. Nevertheless, Burp Scanner achieves a high success rate in this area. In fact, based on our pre-release testing, I'm willing to make a bold claim: Burp Scanner performs markedly better than the big commercial scanners that you have heard of.
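
For instance, generic probe pairs along the following lines (illustrative examples, not Burp's exact payloads) can surface blind SQL injection, either through differing responses to the two boolean variants, or through a measurable time delay:

foo' AND '1'='1    /    foo' AND '1'='2      -- flip a boolean condition
foo'; waitfor delay '0:0:30'--               -- induce a time delay (MS-SQL syntax)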

Issues in category #3 can generally be reported with near-perfect reliability by an automated scanner, because they are visible within the application's own requests and responses, and only require good content parsers and a robust set of rules concerning what issues to infer from what observed behaviours.

Every issue that Burp Scanner reports is given a rating both for severity (high, medium, low, informational) and for confidence (certain, firm, tentative). When an issue has been identified using a less reliable technique, Burp makes you aware of this, by dropping the confidence level. Burp also shows you the full requests and responses that were used to identify each issue, with relevant sections of these highlighted, enabling you to quickly understand the application's behaviour, and reach your own conclusions.

Users of some other web scanners will notice that some issues which those scanners attempt to report do not appear on the above lists, most notably broken access controls and cross-site request forgery. In my experience, scanners do a terrible job of identifying these issues, reporting literally hundreds of false positives and missing many genuine bugs. These are vulnerabilities which require genuine intelligence to understand and identify, because their existence depends upon the context and meaning of the application's behaviour, and today's computers do not understand these features. As was described yesterday, Burp Scanner is designed for hackers - for users who know how to attack an application, but can benefit from using automation to speed up parts of their work. In my opinion, it is preferable to leave areas like access controls and XSRF to the human tester. Let Burp automate everything that can be reliably automated, giving you confidence in its output, and leaving you to focus on the aspects of the job that require human experience and intelligence to deliver.

Monday, 17 November 2008

[MoBP] Hacker-oriented web scanning

Next month will see an exciting addition to the Burp family: a brand new web application vulnerability scanner.

Before going any further, I'll note that this new product will only be available to users who pay a nominal subscription to use the commercial version of Burp, so if for some reason you object to people writing software for a living, please look away now.

Burp Scanner is designed for hackers, and fits right into your existing techniques and methodologies for performing semi-manual penetration tests of web applications. You have fine-grained control over each request that gets scanned, and direct feedback about the results.

Using most web scanners is a detached exercise: you provide a start URL, click "go", and watch a progress bar update until the scan is finished and a report is produced. Using Burp Scanner is very different, and is much more tightly integrated with the actions you are already carrying out when attacking an application. Let's see how.

When you are testing an application and find an interesting request, you might intercept it with Burp Proxy to modify parts of the request. You might send it to Repeater to reissue the same request with different inputs. Or you might send it to Intruder to perform various automated custom attacks. In future, you can also send the request to Burp Scanner to scan for a wide range of vulnerabilities within that single request. The results of the scan are shown immediately, and can inform your other actions in relation to that request. You can even modify the base request in arbitrary ways, and re-scan it, combining your own knowledge and understanding of how web applications commonly fail with Burp Scanner's powerful capabilities for discovering many types of vulnerabilities.

It's time for some eye candy. Below, we have intercepted a request, and sent it to Burp Scanner to perform an active scan (this is one which sends crafted requests to the target application, and analyses its responses looking for vulnerabilities):

All requests sent for active scanning land in the scan queue. Below, we have sent a large number of requests for scanning. A typical request with a dozen parameters is scanned in a couple of minutes, and the scan queue is processed by a configurable thread pool, so the number of waiting items rarely grows very large:

As each item is scanned, the scan queue table indicates its progress - the number of requests made, the percentage complete, and the number of vulnerabilities identified. This last value is colourised according to the significance and confidence attached to the most serious issue. We can double-click any item in the scan queue to display the issues identified so far:

Each issue contains a bespoke advisory, and also the full requests and responses which Burp Scanner used to identify the vulnerability. Of course, you can send these requests to other tools, to further understand the issue and perform follow-up attacks:

In addition to tracking the issues that are identified for each individual scanned request, Burp maintains a central record of all the issues it has discovered, organised in a tree view of the target application's site map. Selecting a host or folder in the tree shows a listing of all the issues identified for that branch of the site, enabling you to quickly locate interesting vulnerable areas of the application for further exploration:

Used in this way, Burp Scanner can be of huge benefit when you are testing a web application. Being able to perform quick and reliable scans for many common vulnerabilities on a per-request basis reduces your manual effort, enabling you to direct your human expertise towards vulnerabilities whose detection cannot be reliably automated. This mode of scanning also addresses a common frustrating problem, in which a monolithic automated scan takes an age to complete, with little assurance over whether the scan has worked, or whether it encountered problems that impacted on its effectiveness. By controlling exactly what gets scanned, and by monitoring in real time both the scan results and the wider effects on the application, you combine the virtues of reliable automation with intuitive human intelligence, often with devastating results.

In this post, I've described just one way in which Burp Scanner can be used to help automate the discovery of web application vulnerabilities. In the next few days, I'll explore some other key aspects of its functionality, and other ways in which it can be used to help you hack web applications.

Sunday, 16 November 2008

[MoBP] Sucky scanners

How many people have used a commercial scanner to look for vulnerabilities in web applications? Lots of you, right?

And who thinks that the scanner they use is as good as it could possibly be?

Anyone? Anyone? Bueller?

I often talk to people about their experience with web scanning products, and these are the complaints I hear:

  • They are too slow, and provide little feedback or control over what they are doing during scans.

  • They try to perform checks that can't be reliably automated, resulting in too many false positives.

  • Even with the core input-based bugs that should be their bread-and-butter, they miss too much low hanging fruit.

  • Their issue reporting is often vague and generic, requiring a lot of manual work to confirm issues and produce write-ups that you can give to a customer.

  • They are too expensive.

If you would like to see a web scanner that addresses some of these issues, then watch this space. If you would like to see one that addresses all of them, then experience a pleasurable quickening of the heart rate. And still watch this space.

Saturday, 15 November 2008

[MoBP] New message analysis views

Here's another small new feature, before moving back to some more weighty stuff.

Anywhere in Burp where you can view web requests and responses, you have a number of tabs showing different views and analysis of the message (plain text, hex, headers, etc.). These tabs now appear and disappear based on the content of the message being displayed, and whether it supports the relevant view. There are also some new tabs for analysing messages, including colourised display of HTML and XML content. This view makes it much easier to visually scan a piece of HTML to see what it contains:

If you are editing an HTML response within the Proxy, you can also use the auto-colourising feature to sense-check that you have preserved the syntactic validity of the content, before sending it on to the browser.

Friday, 14 November 2008

[MoBP] SOAP parameter parsing

When you send a request to Intruder to perform custom automated attacks, it makes a guess at where you will want to place your attack payloads. By default, it places payload markers around the values of each URL and body parameter, and each cookie value. If you've ever tried to attack a SOAP request using Intruder, you'll know that this auto-placement doesn't help you very much.

In the new release, auto-placement also supports XML request bodies, and by default places payload markers around the values of each XML element and attribute. If you need to fuzz several SOAP requests, this will now be a simple task of sending each request to Intruder, and starting an attack using the default payload positions:

[screenshot]

The new support for parameters in XML request bodies is used elsewhere within the new release, including automated vulnerability scanning, of which more shortly.
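
To picture the new auto-placement, here is a hypothetical SOAP body with Intruder's payload markers (§) added in the default positions - around each XML element and attribute value:

<soap:Body>
    <GetAccount xmlns="http://myapp.com/ws" currency="§GBP§">
        <accountId>§12345§</accountId>
        <includeHistory>§true§</includeHistory>
    </GetAccount>
</soap:Body>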

Thursday, 13 November 2008

[MoBP] Spidering authenticated applications

Related to yesterday's post is a further enhancement to the way the Spider handles form submission. In the new version, you can control how Burp handles login forms, separately from the configuration for forms in general. You can tell the Spider to perform one of four different actions when a login form is encountered:

  • You can ignore the login form, if you don't have credentials, or are concerned about spidering sensitive protected functionality.

  • You can prompt for guidance interactively, enabling you to specify credentials on a case-by-case basis.

  • You can treat login forms in the same way as any other form, using the configuration and auto-fill rules you have configured for those.

  • You can automatically submit specific credentials in every login form encountered.

In the last case, any time Burp encounters a form containing a password field, it will submit your configured password in that field, and will submit your configured username in the text input field whose name most looks like a username field. The UI for configuring application login looks like this:
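
The "most looks like a username field" decision need be no cleverer than a keyword heuristic, roughly as sketched below. This is illustrative only, not Burp's actual rule:

// Illustrative heuristic - not Burp's actual rule.
private static final String[] USERNAME_HINTS =
        { "username", "user", "login", "email", "account" };

static int usernameScore(String fieldName)
{
    String name = fieldName.toLowerCase();
    for (int i = 0; i < USERNAME_HINTS.length; i++)
        if (name.indexOf(USERNAME_HINTS[i]) != -1)
            return USERNAME_HINTS.length - i; // earlier hints score higher
    return 0;
}

// The text field scoring highest receives the configured username.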

Wednesday, 12 November 2008

[MoBP] Custom form filling rules

One cool feature that Burp Spider always had was the ability to submit HTML forms whilst spidering, either by prompting the user to supply suitable values, or by automatically filling in text fields with a default value.

This feature is now getting more flexible, with the ability to configure customised rules for filling in forms, by specifying the value that should be used based on the name of the individual form field:

Supplying valid input in forms is particularly important when spidering a web application, to increase the likelihood that the input is accepted by the application, enabling the spider to access the content that is reached by submitting the form. Burp comes with a set of default rules that have proven successful when automatically submitting form data to a wide range of applications. Of course, you can modify these or add your own rules if you encounter form field names that you want to submit specific values in.

Tuesday, 11 November 2008

[MoBP] Intelligent MIME type recognition

The new version of Burp employs heuristic rules to recognise most types of content commonly used in web applications. Information about response MIME types is used in various ways, for example:

  • Display filters in various locations allow you to show or hide different MIME types.

  • The Spider uses MIME type information to perform tailored content parsing.

  • You can define Proxy interception rules based on MIME type.

  • Vulnerability analysis performs different checks and actions based on a response's MIME type.

Applications typically include a Content-type header in their responses, which announces the MIME type of the content in the response body. However, it is best not to trust this header, because it is often wrong. Look at the following very common example. The response's Content-type header states that it contains HTML. However, in the MIME type column of the proxy history, the content is correctly identified as JavaScript. If we trusted the MIME type stated by the application, we would handle the response incorrectly, potentially missing some interesting vulnerabilities.

[screenshot]
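
The kind of heuristic involved is easy to sketch. The following is an illustration of the idea rather than Burp's actual rules:

// Illustrative content-sniffing heuristic - not Burp's actual rules.
static String sniffMimeType(byte[] body, String declaredType)
{
    String head = new String(body, 0, Math.min(body.length, 512))
            .trim().toLowerCase();

    if (head.startsWith("<!doctype html") || head.startsWith("<html"))
        return "HTML";
    if (head.startsWith("<?xml"))
        return "XML";
    if (head.startsWith("function") || head.startsWith("var ")
            || head.startsWith("document."))
        return "script";
    if (body.length > 1 && (body[0] & 0xff) == 0x89 && body[1] == 'P')
        return "PNG image";

    return declaredType; // fall back to the Content-type header
}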


Monday, 10 November 2008

[MoBP] The all new Burp Spider

In its current incarnation, the Spider is the weakest of the core Burp tools, with more than its fair share of old buggy code, and several obstacles to usability. I don't use it much myself, and I doubt if too many of you do either.

In the new release, Spider has been completely rewritten from scratch, with much improved content parsing and several new features. Spidering is now driven entirely via the target site map and other tools.

When you first map out a new application's content and functionality, it is generally best to work manually with your browser, giving you full control over the requests that are issued, and ensuring that you comply with any input validation, navigational structure and other constraints imposed on normal usage of the application. As you do this, Burp will passively compile its site map of all the items you have requested, as well as those which it has inferred from the application's responses.

When you have explored all of the content you can find with your browser, you will typically see a site map containing a number of unrequested items - these are shown in grey in the tree and table. At this point, you can still proceed manually, copying the relevant URLs into your browser and exploring further. Or you can let the Spider do its work to map out the rest of the application's content. The easiest way to do this is to select one or more nodes within the tree, and choose "spider from here" from the context menu:

When you tell Burp to spider a branch of the site map, it will perform the following actions:

  • Request any unrequested URLs identified within the branch.

  • Submit any forms whose action URLs lie within the branch.

  • Re-request any items which previously returned 304 status codes, to retrieve a fresh (uncached) copy of the application's responses.

  • Parse all content retrieved to identify new URLs and forms.

  • Recursively repeat these steps as new content is identified.
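
Taken together, these steps amount to a classic worklist algorithm, roughly as sketched here (java.util imports assumed; the helper names are hypothetical):

// Rough sketch of the spidering loop - helper names are hypothetical.
Deque<SpiderTask> queue = new ArrayDeque<SpiderTask>(initialTasksFor(branch));
Set<String> seen = new HashSet<String>();

while (!queue.isEmpty())
{
    SpiderTask task = queue.poll();
    if (!scope.contains(task.getUrl()) || !seen.add(task.getKey()))
        continue; // stay in scope, and request each unique item only once

    HttpResponse response = issue(task); // request the URL, or submit the form
    for (SpiderTask discovered : parseLinksAndForms(response))
        queue.add(discovered); // recurse on newly discovered content
}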

Sunday, 9 November 2008

[MoBP] Search me

Several of the Burp tools accumulate a wealth of information about the applications you access. Digging through these different repositories to find specific items just got a whole lot easier, with the addition of a suite-wide search function. You can access this from the "burp" menu:

[screenshot]

The search function is nice and simple. You just enter an expression, and tell Burp where to look - whether in request headers, response bodies, etc., in specific tools, or everywhere. The key details of each search match are shown in a sortable table, with a preview pane where you can see the full request and response, including highlighted matches for your search term. The usual context menus can be used to initiate attacks against specific items, or send them to other tools for further analysis.

One recent situation where I found the new search function useful was when looking for leakage of specific information from a target application. I was looking at an application which held users' credit card numbers, and these were supposed to be masked everywhere following the point of initial submission, to mitigate the impact of a user's account being compromised. Testing whether this was the case was a simple matter of stepping through all of the relevant functionality using my browser, with proxy interception turned off, and then using the search function to look for the credit card number I had earlier registered. Although the number was masked everywhere on-screen, the search function identified a couple of obscure locations where the number was transmitted to the client within HTML source.


Saturday, 8 November 2008

[MoBP] Tabbed repeating

Now that every browser has jumped on the tabs bandwagon, it was about time that Burp caught up.

Burp Repeater was always intended to be a very simple tool for performing manual attacks, providing the facility to reissue a single request over and over, manually editing its contents, and keeping a history of the requests made and responses received. And in most situations, this is all that a skilled hacker needs to fine-tune a manual attack.

Occasionally, however, being restricted to a single request window and history is an annoying limitation. Sometimes, your attack involves more than one step, and you need to issue multiple manual requests in sequence. Other times, you need to submit a payload in one request, then issue a different request to establish its impact. Trying to manage two manual requests in the same repeater window, constantly clicking backwards and forwards through the history, is a real pain.

Enter tabbed repeating. In the new version, when you send a request to Repeater from another tool, that request gets its own tab. Each tab has its own request and response windows, and its own history. You can rename tabs to help you keep track of what is where. And you can manually add new tabs or delete old ones, as required. Other than the tabs, Repeater is unchanged:

Friday, 7 November 2008

[MoBP] The new proxy history

If I had a pound for every time someone has asked me if you can clear the Burp Proxy history, I'd have quite a few quid by now.

Well, the Proxy history just got a whole lot more powerful, and yes, you can even clear it if you want to.

Without further ado, here is what the Proxy history now looks like:

The most obvious addition is the preview pane, which means you can quickly see the contents of requests and responses by selecting an individual item in the table. As previously, you can still double-click an item to pop up a new window showing the request and response details.

There are also a few new columns, showing the response MIME type, HTML page title and the time of day. The table content is now forward- and reverse-sortable by clicking on any column header, enabling you to quickly locate what you are looking for.

There is a filter bar above the table which works in the same way as the site map filter, allowing you to filter on MIME type or HTTP status code, or to show only requests containing parameters, only items that are within the defined attack scope, etc.

The context menu is improved with several new items including ... drum roll ... the facility to delete the selected item(s), so you can clear any or all unnecessary items from the history. By combining column sorting, multi-select, and item deletion, you can quickly eliminate items from the history that you don't need there:

Thursday, 6 November 2008

[MoBP] Automated HTML rewriting

Here are a bunch of handy new functions in Burp Proxy that you can use to achieve various tasks by automatically rewriting the HTML in application responses (all are off by default):

[screenshot]

Unhiding hidden fields enables you to edit their values directly in the browser, rather than by intercepting subsequent requests. Similarly with enabling disabled fields, and removing length limitations. Here is what the Google Blogger application looks like with hidden fields unhidden:

[screenshot]

On the web hacking course which Marcus and I deliver at Black Hat, we have around a dozen labs illustrating various cases of unsafe reliance on client-side controls. The first few examples involve easy scenarios like hidden and disabled fields, length limits and client-side input validation. Well, those labs will get even easier with the new version of Burp, because solving them will be a simple matter of checking the relevant box in Burp's configuration!

Wednesday, 5 November 2008

[MoBP] Invisible proxying

Standard intercepting proxies generally work fine in almost any situation where you are using a standard web browser to access an application. You simply need to configure your browser to use the proxy listening on your loopback interface, and you are away.

Things get trickier if the application employs a thick client component that runs outside of the browser, or makes its own HTTP requests outside of the browser's framework. Sometimes, these clients don't support HTTP proxies, or don't provide an easy way to configure them to use one.

In this situation, you face two distinct problems. The first is that the client's requests are being sent straight to the destination host, rather than to your loopback interface (or wherever your intercepting proxy is listening). This problem can usually be quickly resolved by redirecting the client's requests lower down the stack - e.g. by adding an entry to your hosts file, or changing your routing configuration.

The second problem is that the client typically generates standard HTTP requests rather than proxy-style requests. A proxy-style request looks like this:

GET http://myapp.com/foo.php HTTP/1.1
Host: myapp.com

whereas the corresponding non-proxy request looks like this:

GET /foo.php HTTP/1.1
Host: myapp.com

HTTP proxies need to receive the full URL on the first line of the request in order to determine which destination host to forward the request to. Proxies do not (if they follow the standards) look at the Host header to determine the destination.

This means that if your intercepting proxy complies with the standards, it will fail to process non-proxy style requests. And this is what the currently available version of Burp does. So even if you can redirect your thick client's requests to the Burp Proxy listener, it will ignore them.

In the new version, you can configure Burp to support invisible proxying, meaning that it will tolerate non-proxy style requests. When such a request is received, Burp will by default parse out the contents of the Host header, and use that as the destination host for that request. Alternatively, you can specify a host and port to which all requests should be forwarded, regardless of the Host header. You can also use this redirection function even for regular proxy-style requests, if you want to redirect all traffic to a different host than the one the browser seeks to connect to.
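
The core of that logic is straightforward; a sketch (illustrative, not Burp's code) might look like this:

// Illustrative: choose the destination for a non-proxy-style request.
static String destinationHost(Map<String, String> headers, String forcedHost)
{
    if (forcedHost != null)
        return forcedHost; // a configured redirect host overrides everything

    // Non-proxy requests carry only a relative URL on the first line,
    // so the Host header is the only clue to the intended destination.
    String host = headers.get("Host");
    if (host == null)
        throw new IllegalArgumentException(
                "Non-proxy request with no Host header");
    return host;
}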

The UI for configuring proxy listeners now looks like this, with an example of an invisible proxy listener and redirector configured on port 8888:

More perceptive readers may have wondered: what about SSL? If a non-proxy-aware client uses SSL, it obviously won't issue CONNECT requests to the Burp Proxy listener, but will attempt to negotiate SSL directly with the listener. Does this mean we need to configure the listener to expect SSL or plain HTTP connections?

Actually, you don't. When operating in invisible proxy mode, Burp is clever enough (what else would you expect?!) to figure out whether each incoming request is HTTP or HTTPS, and to handle the SSL negotiation seamlessly in the latter case. You can even configure a different server SSL certificate for each proxy listener, if your thick client requires a particular server certificate.
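
One common way of telling the two apart (not necessarily Burp's exact method) is to peek at the first byte of the incoming connection - an SSLv3/TLS handshake starts with the record-type byte 0x16, whereas an HTTP request starts with an ASCII letter. In this fragment, clientSocket is an accepted java.net.Socket:

// Peek at the first byte to distinguish an SSL/TLS handshake from plain HTTP.
java.io.PushbackInputStream in = new java.io.PushbackInputStream(
        clientSocket.getInputStream(), 1);
int first = in.read();
in.unread(first);

// 0x16 is the SSL/TLS "handshake" record type; HTTP methods start with A-Z.
boolean looksLikeTls = (first == 0x16);
// If so, wrap the connection with an SSLSocket before reading the request.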

If the client you are testing issues both HTTP and HTTPS requests, to different ports, you will need to configure a separate Proxy listener on each relevant port. Again, not a problem now that Burp supports multiple listeners.

In summary, this is a feature that will hardly ever be required, but will occasionally be a life saver and enable you to continue using Burp with many kinds of unusual thick client components.

Tuesday, 4 November 2008

[MoBP] Suite-wide target selection

Burp can do lots of things to make your life easier when you are attacking a web application. Often, you want Burp to just go ahead and do these without being prompted. But, if you value your freedom, you don't want Burp going after just any target. Rather, you want Burp to know what is in scope for your attacks and what isn't.

In the new version, you can define at the Suite level what your targets are for your current activity. You can specify hosts, IP ranges, URL regexes, etc., as being in scope or out of scope. Currently, the UI looks like this, but I will hopefully make this a bit more sophisticated if time permits:

The target scope which you define here can affect the behaviour of the individual Burp tools in numerous ways. You can set display filters to show only in-scope items. You can tell the Proxy to intercept only in-scope requests. The Spider will only follow links that are in scope. You can automatically initiate vulnerability scans of in-scope items. You can configure Intruder and Repeater to follow redirects to any in-scope targets. And so on.

In all these cases, you can fine tune the target scope and the associated behaviour at the level of individual tools, or you can let them go after whatever is within the suite-wide scope. This provides a quick and easy way to tell Burp what is fair game and what is off limits, whilst also enabling the usual fine-grained control over everything that Burp does, if you need it.

Monday, 3 November 2008

[MoBP] Filtering and deleting content

One frequent complaint about Burp is that it can easily accumulate a huge amount of data, in locations such as the proxy history. After lengthy usage, these repositories can become unwieldy, making it hard to find what you are looking for. Further, in noisy applications, such as those making frequent asynchronous Ajax requests, browsing a few pages can generate hundreds of individual requests, causing interesting items to get lost.

The new version of Burp uses display filters to address this problem. For example, at the top of the site map, there is a filter bar. Clicking on this shows a popup enabling you to configure exactly what content will be displayed within the map:

[screenshot]

You can choose to display only requests with parameters, or which are in-scope for the current target (of which more shortly). You can filter by MIME type and HTTP status code. If you set a filter to hide some items, these are not deleted, only hidden, and will reappear if you unset the relevant filter. This means you can use the filter to help you systematically examine a complex site map to understand where different kinds of interesting content reside.

Sometimes, however, you accumulate data within Burp that you just don't need - for example, if you have browsed to off-target domains. In this situation, you can permanently delete the superfluous items using the context menu. For example, you can select multiple hosts or folders within the tree view and delete them altogether:

You can do the same thing by selecting single or multiple items in the tree view, proxy history, etc.

Sunday, 2 November 2008

[MoBP] The new target site map

The first difference you will notice when you fire up Burp is the new "target" tab. This is where you can view all of the information which Burp has gathered about the application you are attacking. This includes all the resources which have been directly requested, and also items which have been inferred by analysing the responses to those requests. For example, if you open your browser and make a single request to the front page of BBC news, you will see the following in the target site map:

[screenshot]

Items that your browser requested are shown in black; those which Burp has inferred are shown in grey. Clearly, from browsing to a single page, we can deduce a large amount of information about the target application.

The site map interface works pretty much like a graphical email client. A tree view of hosts and directories is shown on the left. Selecting one or more nodes in the tree view causes all of the items below these nodes to be shown in table form on the top right. This table includes the key detail about each item (URL, status code, page title, etc.) and allows the items to be sorted according to any column. Selecting an item in the table causes the request and response for that item to show in a preview pane on the bottom right. This preview pane contains all of the functions familiar from elsewhere in Burp - analysis of headers and parameters, text search, media rendering, etc.

As well as displaying all of the information gathered about your target, the site map enables you to control and initiate specific attacks against it, using the context menus that appear everywhere. For example, you can select a host or folder within the tree view, and perform actions on the entire branch of the tree, such as spidering or scanning:

[screenshot]

Or you can select an individual file within the tree or table, and send the associated request to other tools, such as Intruder or Repeater. If the item has not yet been requested by your browser, Burp will construct a default request for the item, based on the URL and any cookies received from the target domain:

[screenshot]

Much of this information and functionality is present somewhere within the current release of Burp. But having everything accessible together via a single prominent and powerful interface will hopefully make it easier to keep track of your target's attack surface, and initiate the right attacks against it.
