Security – Simulating And Protecting Against A DoS Attack

On a recent project, I created a web service which parsed a set of financial statements from an XBRL document into name/value pairs. The complexity of the XBRL specification means that parsing an XBRL document takes approximately 90 seconds on a mid-spec server. To avoid users having to wait 90 seconds for the data, the parsed data was saved to a database. However, since the service covers over 5,000 companies it would need to store approximately 100,000 sets of financial statements, which places a strain on database resources as each set of statements contains several thousand data points. The obvious solution is to store only the most commonly requested financial statements in the database and load the others on demand, with a message to users that the financial statements are being loaded. However, this innocuous design decision creates a major security vulnerability.

Slow Loading Pages = Security Vulnerability
One of the hardest attacks to defend against is a Denial of Service (DoS) attack, which is simply a repeated set of requests for a server resource such as a web page. The repeated requests generate an excessive load on the server, which is then rendered incapable of serving other legitimate site users, and hence the web site is taken down.
The difficulty in launching a DoS attack is that most web pages involve very little CPU processing, so the flood of requests necessary to bring the site down would need to be extremely large and hard to generate. Attackers will therefore target slow loading pages which consume a lot of server resources.

Simulating a DoS Attack
To test a web application's vulnerability to DoS attacks we can use a tool to issue multiple simultaneous requests. For this demo I will use LOIC (Low Orbit Ion Cannon), with the source code available at https://github.com/NewEraCracker/LOIC.
LOIC comes with a simple Windows application front-end, although you could just use the source code in the XXPFlooder.cs file to programmatically launch the attack (note the simplicity of the code for running DoS attacks – the XXPFlooder.cs file is a mere 89 lines).
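As an aside, it takes surprisingly little code to generate this kind of load. The sketch below is not the XXPFlooder.cs code, just a minimal illustration which fires a configurable number of parallel HTTP GET requests at a URL; the target URL and request count are placeholder values, and it should only ever be pointed at your own test server.

//Minimal request flooder sketch (for load testing your own server only)
//Illustrative only - this is not the LOIC XXPFlooder.cs implementation
using System;
using System.Net;
using System.Threading.Tasks;

class FloodDemo
{
    static void Main()
    {
        string targetUrl = "http://localhost/loadaccounts?id=1002"; //placeholder URL
        int requests = 500;                                         //placeholder request count

        Parallel.For(0, requests, i =>
        {
            try
            {
                using (var client = new WebClient())
                {
                    client.DownloadString(targetUrl); //issue the HTTP GET request
                }
            }
            catch (WebException)
            {
                //Ignore failures - the aim is simply to generate load
            }
        });
    }
}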
Using the LOIC Windows app, I configured it to make HTTP requests to the /loadaccounts?id=1002 page on my development server:

This quickly overwhelmed the development server, bringing the CPU utilization close to 100%:

Solution
DoS attacks are one of the most challenging security issues as there are numerous different ways of launching a DoS attack. Typically the defense will be at the network or server level, but in the case of a resource intensive page there are several potential application level optimisations. Firstly, the resource intensive page can be configured as a separate application with a dedicated application pool. An attack will therefore be isolated to the specific page (or area of the website) and leave the rest of the site unaffected.
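On IIS, for example, the isolation could be set up along these lines; the pool name, site name and physical path below are purely illustrative and the exact commands depend on your IIS version and site layout.

REM Create a dedicated application pool for the resource intensive page
%windir%\system32\inetsrv\appcmd add apppool /name:AccountsPool

REM Convert the /loadaccounts folder into its own application running in that pool
%windir%\system32\inetsrv\appcmd add app /site.name:"Default Web Site" /path:/loadaccounts /physicalPath:"C:\inetpub\wwwroot\loadaccounts"
%windir%\system32\inetsrv\appcmd set app "Default Web Site/loadaccounts" /applicationPool:AccountsPool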
Next, the page itself can be protected by using a short term in-memory cache. After executing the resource intensive operation, the resulting object can be placed in memory for a short period and subsequent requests will be served from the in-memory cache. A basic outline of the code is below:

//Requires: using System.Runtime.Caching;
ObjectCache cache = MemoryCache.Default;
FinancialStatements financialStatements;

//First test if there is an object in the cache
if (cache["FinancialStatements" + id] == null)
{
    //If there is no object in the cache, create it and then load it into the cache
    financialStatements = LoadFinancialStatements(id); //This is the resource intensive operation
    CacheItemPolicy policy = new CacheItemPolicy();
    policy.AbsoluteExpiration = DateTime.Now + TimeSpan.FromMinutes(1);

    cache.Add(new CacheItem("FinancialStatements" + id, financialStatements), policy);
}
else
{
    //If the object is already in the cache, simply load it from the
    //cache, which is a quick, low impact operation.
    financialStatements = (FinancialStatements)cache["FinancialStatements" + id];
}

Thus the first request will run the resource intensive operation, but subsequent requests made within 60 seconds will simply be served from memory.

Now running the LOIC attack again results in the below CPU resource utilisation:

Note that the CPU now only reaches a manageable 31%, and only one of the cores is under significant load.

Website Performance Optimisation – Core Concepts

When it comes to performance tuning a site, there are a multitude of possible optimisations, so I thought it best to distill these down to several core concepts.

Central to most of these concepts is an overview of how a web page is loaded in a user's browser. The below 'waterfall' diagram shows the loading process for a page. The bars represent the total time to load each page asset; the first request is for the page itself, and subsequent separate requests are then made for each asset referenced in the page (such as image files, CSS files, JavaScript files etc).

Note that in this article I will focus solely on front-end optimisations, since for most web pages server processing typically accounts for only between 10% and 15% of the total page load time.

Distribute Requests
Older browsers were only permitted to open a maximum of two simultaneous connections to a domain. Thus if the page contained references to numerous assets (such as images, CSS and JavaScript files) these would be queued and loaded two at a time. A quick optimisation was therefore to distribute the assets across subdomains (e.g. images could be on img.mydomain.com), which would allow more files to be loaded simultaneously.
Modern browsers allow more files to be loaded concurrently (Chrome for example allows six per domain), but this is still a very powerful optimisation, although you should now use a Content Delivery Network (CDN) to host static files. A CDN is a globally distributed network of servers which caches static files and serves them to a user from the closest physical server (or 'edge location'). Using a CDN has the benefit of reducing the network latency in loading static files, since the site visitor will be served the static files (such as page images) from a server close to them.

One caveat in using a CDN is that it can be problematic when working with CSS and JavaScript files which may need to be updated. A CDN caches files at its various edge locations around the world, and even after a file is updated the old cached file may still be served until the TTL (Time To Live) expires, which can often be days or even weeks. Thus site visitors may be served out of date CSS and JavaScript files after a site update. One remedy to this issue is to version the files (e.g. adopting a naming convention such as 'myjsfile1_001.js') and so create a fresh file name for each update, for which no cache exists.
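One lightweight way to automate this is a small helper which builds the versioned file name from a version value bumped on each deployment. The sketch below is only illustrative; the class name, CDN host and version value are assumptions.

//Illustrative helper for building versioned, CDN-hosted asset URLs
public static class AssetUrl
{
    private const string CdnHost = "https://cdn.mydomain.com"; //hypothetical CDN host
    private const string Version = "001";                      //bump on each deployment

    public static string Versioned(string fileName, string extension)
    {
        //e.g. Versioned("myjsfile1", "js") -> "https://cdn.mydomain.com/myjsfile1_001.js"
        return string.Format("{0}/{1}_{2}.{3}", CdnHost, fileName, Version, extension);
    }
}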

Reduce The Number of Requests
In the below excerpt of the waterfall diagram of the page load, the yellow portion of each bar is the time taken to download the file, and the blue portion at the start is the time taken to open the connection to the server. Note that for the bottom three files the largest portion of the load time is opening the connection.

The time to open and close each connection is unrelated to the file size, so the time to open a connection to download a large file is the same as for a small file. Thus a lot of load time can be saved if numerous small files are combined into larger files. The most obvious candidates for this are CSS and JavaScript files. When working with templates and frameworks, dozens of these files are often requested, each of which requires separate load-time overhead.
Combining these files is often problematic in development, since working with a single huge JavaScript file would be very inefficient; however there are numerous solutions for combining these files upon deployment. ASP.NET MVC 4 ships with inbuilt bundling, or open source solutions such as SquishIt could also be used.
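As a rough sketch of the inbuilt ASP.NET MVC 4 approach, bundles are registered at application start-up; the bundle names and file paths below are illustrative.

//Illustrative bundle registration (typically in App_Start/BundleConfig.cs)
//Requires: using System.Web.Optimization; - the file paths are placeholders
public static void RegisterBundles(BundleCollection bundles)
{
    //Combine several JavaScript files into a single request
    bundles.Add(new ScriptBundle("~/bundles/site").Include(
        "~/Scripts/jquery-{version}.js",
        "~/Scripts/site.js"));

    //Combine the stylesheets into a single request
    bundles.Add(new StyleBundle("~/Content/css").Include(
        "~/Content/site.css"));

    //Bundles are served combined and minified when optimisations are enabled
    //(e.g. compilation debug="false" in web.config)
}

The bundles are then referenced in views via @Scripts.Render and @Styles.Render, so a single request replaces the dozens of individual file requests.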


Reduce The Size of Files
Reducing the size of the files served to the user is an obvious and necessary step in performance optimisation. The first step is to look at the HTML itself: ensure there are no inline styles in the HTML; these are not only inefficient for development purposes but they also impair performance, since styles in external stylesheets are cached by the user's browser and so do not need to be loaded on each page request.

Image files should be in an appropriate format. In general JPG files are larger and should only be used for photographs or graphics which make heavy use of gradients. A quality setting above 80 is almost always overkill (although this may change with retina displays); typically a JPG quality setting of 60-70 is the sweet spot for the quality/size trade-off. Simpler graphics such as logos or screenshots should be in either GIF or PNG format; PNG is certainly the preferred format now for images of any complexity since it offers very good image quality (the screenshots in this article are in PNG format). The simplest page elements such as arrows, pointers and lines should normally be GIF, since this format is capable of the smallest image file sizes (note that these elements can be combined into a single larger image using CSS sprites).

Static files such as CSS and JavaScript files benefit from minification (which typically involves removing whitespace and replacing long variable names with shorter ones). There are several open-source minifiers, such as the YUI Compressor.
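As an illustration, the YUI Compressor is a Java jar run from the command line along these lines (the jar version and file names here are placeholders):

java -jar yuicompressor-2.4.8.jar site.js -o site.min.js
java -jar yuicompressor-2.4.8.jar site.css -o site.min.css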


Perceived Performance

Web pages load sequentially, so placing large JavaScript files at the top of the page can block the HTML below them from rendering until the file is fully loaded. Placing JavaScript files at the bottom of the page does not affect the total page load time, but it allows the HTML content to be shown to the user while the JavaScript files are downloaded. Note that this technique should not be used for CSS files, which are generally integral to the page being displayed.
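In an ASP.NET MVC layout, for example, this might look something like the sketch below, reusing the illustrative bundle names assumed in the earlier bundling example: stylesheets are rendered in the head, and scripts just before the closing body tag.

<!-- Illustrative _Layout.cshtml structure; the bundle names are assumptions -->
<head>
    @Styles.Render("~/Content/css")    @* CSS stays in the head *@
</head>
<body>
    @RenderBody()
    @Scripts.Render("~/bundles/site")  @* scripts load after the visible content *@
</body>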