Wednesday, January 16, 2008

John Heasman is blogging

John Heasman, one of my colleagues at NGS, has just started blogging. For anyone who doesn't know him, John is one of the most talented and inventive security researchers around, having reported numerous bugs in enterprise software products, and developed new ideas in areas such as rootkit research.

John is going to be talking about all kinds of software security, including webappsec topics like browser security and Java. He also shows his good education in his choice of blog title. I urge everyone to check it out.

Sunday, January 6, 2008

When good XSRF defence turns bad

Talk about cross-site request forgery bugs seems to be in vogue, with various explanations of what the vulnerability is, and how to avoid it. As awareness has increased, and more developers attempt to defend against XSRF, it is not uncommon to find cases where someone has followed a standard piece of advice, but achieved nothing in terms of preventing attacks.

An application is vulnerable to XSRF if an "important" user action is performed using requests all of whose non-cookie parameters can be determined in advance by an attacker. For example, in a banking application a user might perform a funds transfer using the following request:

POST /TransferFunds.asp HTTP/1.0
Content-Length: 48
Cookie: sessid=191r309ru13d10219029r31r90f1re029e

toAccount=12345678&amount=1000&reference=Payment

An attacker wishing to induce a victim to transfer funds to his account can forge a request containing all of the necessary parameters with the exception of the cookie containing the session token. If this request is initiated from a web site the attacker controls, at a time when the user is logged in to the banking application, then the user's browser will automatically add the cookie parameter, and so the funds transfer will be carried out.
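As a sketch of how this plays out, the attacker's page might look something like the markup below (built here as a Python string for illustration; the bank's URL and the form field names are hypothetical). The victim's browser submits the form cross-site and attaches the sessid cookie of its own accord:

```python
# Minimal sketch of an XSRF attack page. When a logged-in victim loads it,
# the onload handler submits the hidden form to the bank, and the browser
# automatically appends the victim's session cookie to the request.
attack_page = """
<html>
  <body onload="document.forms[0].submit()">
    <form action="https://bank.example/TransferFunds.asp" method="POST">
      <input type="hidden" name="toAccount" value="12345678">
      <input type="hidden" name="amount" value="1000">
    </form>
  </body>
</html>
"""
```

Note that nothing in this page needs to read any response from the bank; it only needs the browser to send the request.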

Now, a common recommendation for preventing XSRF attacks is that "important" actions like funds transfers should be implemented in two steps. In response to the first request, the application sends a nonce (an unpredictable value) to the client, which is submitted as a parameter in the second request. The application verifies the nonce in the second request, and only performs the action if this is valid.
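A minimal server-side sketch of this two-step scheme might look as follows (assuming session state held in memory; the function names are illustrative, not from any particular framework). The nonce is issued in step one, stored against the session, and consumed when it is verified in step two:

```python
import secrets

# Sketch of the two-step nonce defence. Nonces are stored per session and
# are single-use: the first request issues one, the second must present it.
_pending = {}  # session id -> nonce issued in step one

def issue_nonce(session_id):
    """Step one: generate an unpredictable nonce and remember it."""
    nonce = secrets.token_hex(16)
    _pending[session_id] = nonce
    return nonce

def verify_nonce(session_id, submitted):
    """Step two: perform the action only if the submitted nonce matches."""
    expected = _pending.pop(session_id, None)  # pop makes the nonce single-use
    return expected is not None and secrets.compare_digest(expected, submitted)

n = issue_nonce("sessid-abc")
assert verify_nonce("sessid-abc", n)      # genuine second step succeeds
assert not verify_nonce("sessid-abc", n)  # replay fails: nonce already consumed
```

As the rest of this post explains, the mechanics above are necessary but not sufficient: everything depends on *how* the nonce gets from the first response into the second request.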

The thought behind this defence is that the two-step approach confirms that the action originated from an authentic user, and not from a third-party web site. Although code running in an attacker's web page can initiate requests to the bank, the browser's same origin policy prevents it from accessing the bank's responses, and so the attacker's code will be unable to retrieve the nonce required for the second request. However, given the vague way in which the defence is often characterised, a developer who isn't thinking for themselves may be forgiven for getting it wrong.

I recently came across an application which had previously been full of XSRF flaws. Developers had reimplemented numerous functions using two steps, by issuing and validating a nonce. However, to enhance usability, the second step was implemented as an HTTP redirect. So the preceding request returned a response like the following:

HTTP/1.0 302 Object Moved
Location: /TransferFunds2.asp?nonce=120491746317280

The user's browser follows the redirect, thereby submitting the nonce (together with the user's session cookie), which is validated by the server. But the defence achieves nothing, because the user's browser behaves in just the same way if the first request originated from a third-party web site. The fact that the same origin policy prevents the attacker's code from accessing the bank's responses is irrelevant because it does not need to - it just relies upon the browser to process and resubmit the nonce in the normal way. A lot of development time had been wasted.

For the nonce-based defence to be effective, the request in which the nonce is resubmitted must result from some informed user interaction. For example, instead of performing a redirect, the first response could display the details of the proposed transfer, with the nonce in a hidden form field, which is submitted using "confirm" or "cancel" buttons. Because code on the attacker's web page cannot access this response, it cannot parse out the nonce and resubmit it. If performing actions over two stages is undesirable for usability reasons, then the nonce can be placed into the original form used to initiate the action. Provided the application properly ties the nonce to the session in which it was issued, then (in the absence of another vulnerability) an attacker will be unable to determine all of the necessary parameters to the original request, and so the main prerequisite for an XSRF attack is not fulfilled.
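The single-step variant can be sketched like this (the function and parameter names are illustrative only). A token tied to the session is embedded in the form that initiates the action, and the action is performed only if the submitted token matches the one issued to that same session:

```python
import hmac
import secrets

# Sketch of a per-session anti-XSRF token embedded in the original form.
_session_tokens = {}  # session id -> token issued to that session

def token_for(session_id):
    """Issue (or reuse) the anti-XSRF token for this session."""
    return _session_tokens.setdefault(session_id, secrets.token_hex(16))

def render_transfer_form(session_id):
    """The nonce travels in a hidden field; because the attacker's page
    cannot read this response, it cannot learn the token's value."""
    return ('<form action="/TransferFunds.asp" method="POST">'
            f'<input type="hidden" name="nonce" value="{token_for(session_id)}">'
            '...</form>')

def transfer_allowed(session_id, submitted_nonce):
    """Reject the request unless the nonce was issued to this very session."""
    expected = _session_tokens.get(session_id)
    return expected is not None and hmac.compare_digest(expected, submitted_nonce)

victim = "sessid-victim"
render_transfer_form(victim)
assert transfer_allowed(victim, token_for(victim))
# An attacker can obtain a token in their own session, but it does not
# validate against the victim's session:
assert not transfer_allowed(victim, token_for("sessid-attacker"))
```

The tie between token and session is the crucial part: a token that validates regardless of which session it was issued to is just the redirect flaw in another guise.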

Friday, January 4, 2008

Business as usual

As a UK-based creator of "hacking tools", I have more than a passing interest in the new amendments to the Computer Misuse Act. These have been on the statute book for over a year, but have not yet come into force. The new law makes it illegal to supply software "believing that it is likely to be used to commit an offence".

Burp was downloaded 10,483 times last month. Were all of these used for lawful purposes? I would say it is absolutely certain that some people who download Burp use it unlawfully, and the same goes for any other popular security tool.

The arguments about "dual use" software are well worn, and scarcely need repeating. The same tools that are used by criminal attackers are also used in legitimate security testing. Demonstrating what attackers can do helps people to defend against them. Blanket restrictions will only penalise the good guys.

The same situation exists in many other domains which, being more familiar, do not invite such ill-considered legislation. Kitchen knives can be used for chopping food or for stabbing people. Manufacturers know that it is likely that some of their products will be used unlawfully. But we don't ban the production of kitchen knives - we just make it illegal to stab people.

The British Crown Prosecution Service has this week published its guidance on the new law, which responds to the preceding objections. The guidance notes the existence of a "legitimate industry" producing software "to test and/or audit hardware and software". This software may have a "dual use" and so prosecutors need to ascertain that a suspect has a criminal intent.

How can this be done? The following factors are relevant, says the CPS:

  • Does the distributor have in place robust and up to date contracts, terms and conditions or acceptable use policies?

  • Are users made aware of the Computer Misuse Act and what is lawful and unlawful?

  • Do users have to sign a declaration that they do not intend to contravene the CMA?

  • What thought did the suspect give as to who would use the software; for example, was it circulated to a closed and vetted list of IT security professionals or posted openly?

  • Has the software been developed primarily, deliberately and for the sole purpose of gaining unauthorised access to computer material?

  • Is the software available on a wide scale commercial basis and sold through legitimate channels?

  • Is the software widely used for legitimate purposes?

  • Does it have a substantial installation base?

  • What was the context in which the article was used to commit the offence compared with its original intended purpose?

This is a weird set of considerations, several of which can be trivially complied with by any criminal wishing to cover themselves. Some of the other factors apparently assume that "good" software must be sold commercially and widely used, and hence presumably that small-scale, freely distributed tools are "bad".

The function of CPS guidance is not to determine what is legal, but rather to advise prosecutors who to pursue. Taken together, the law and guidance leave a huge amount of discretion within the legal process. The net of literal illegality is cast very widely, and prosecutors are told to ask a set of vague questions about an individual's intentions when deciding whether to take action. In other words, everyone producing hacking tools is criminalised, and it will be up to prosecutors to pursue whichever people they don't like the look of. Most legal processes involve some discretion, but too much can be a bad thing, particularly when the parties involved don't really understand the subject matter. Would you like to take your chances against the British judge in a computer crime trial who asked lawyers to explain what a website is?

I don't plan to stop distributing or updating Burp any time soon. This is clearly a crap law, but I'm guessing that prosecutions will be rare, and that I'll be some way down anyone's target list. Oh, and keep it legal, kids.