Five Reasons Not To Be On The Cloud

Hosting applications on the ‘Cloud’ is often viewed as a no-brainer. However, after a mini counter-revolution of companies moving applications off the cloud, I thought I would compile a list of the main reasons to be cautious before moving your application there.

Performance

All major cloud providers use generic hardware – that is, low-spec IDE drives, slow memory and old processors. On top of the low-spec servers, you will be sharing system resources with other customers, and virtualisation cannot fully isolate users – hard drives in particular cannot be dedicated to a single user.

The result of all this is sluggish performance. Performance issues can of course be addressed with techniques such as Content Delivery Networks, a variety of caching strategies and aggressive front-end optimisation, but the issue remains that, out of the box, a cloud setup will almost certainly be less responsive and performant than a dedicated setup.

Another persistent issue is variable performance. Cloud customers repeatedly report performance ‘black holes’, when system response times suddenly dip for several hours for no apparent reason. The cause is likely a ‘noisy neighbour’ (i.e. another cloud user hogging resources before being shut down) or system maintenance/updates, but it remains a constant frustration as it is impossible to predict when performance will suddenly drop.

Cloud providers usually quote CPU capacity in standard units which are not comparable to the cores of modern server CPUs. Amazon Web Services, for example, quotes a ‘Compute Unit’ as equivalent to an “early-2006 1.7 GHz Xeon processor”. Thus even a Large Instance with 4 compute units will be significantly inferior to a 2012 quad-core processor. Adding more cloud instances (i.e. servers) or increasing the size of the instance may seem a solution, but only if the application can take advantage of running work in multiple threads. Websites are usually good candidates for splitting work across multiple threads (as each request for a page can be handled by a different thread). However, more processor-intensive work, such as statistical analysis, requires significant development effort to run efficiently on multiple threads.
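The partition-and-combine effort involved can be sketched in a few lines of Python. This is an illustrative example, not any particular library's API – `chunked_sum` is a hypothetical name, and worker processes are used because, in CPython, extra threads do not speed up CPU-bound work:

```python
from concurrent.futures import ProcessPoolExecutor

def chunked_sum(data, workers=4):
    """Split a CPU-bound reduction across worker processes.

    Each chunk is summed independently, then the partial results are
    combined -- the partition and combine steps are exactly the extra
    development effort that adding more instances demands.
    """
    chunk_size = max(1, len(data) // workers)
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, chunks)  # one partial sum per worker
    return sum(partials)                  # combine the partial results

if __name__ == "__main__":
    print(chunked_sum(list(range(1_000_000))))
```

A web server gets this split for free, one request per thread; a statistical job only benefits from more cores once someone has written the chunking and combining logic by hand.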


Unpredictable Costs

The issue isn’t the high cost of cloud hosting; it is that it is so difficult to estimate. For starters, it is difficult to purchase a server with specs resembling the ‘instances’ offered by cloud providers. As noted above, cloud providers spec their offerings in CPU ‘units’ corresponding to a five or six-year-old processor. How do these CPU units compare with what is offered on a dedicated server today? At a guess I’d say 8 compute units would be the equivalent of a quad-core Intel ‘Ivy Bridge’ Xeon – but it is impossible to know without benchmarking the actual application you plan to run.

A dedicated host server comes bundled with most of the features needed to get an application live – storage, a bandwidth allocation, IP addresses and so on. Cloud providers instead offer an a-la-carte menu of myriad options and charges, so it is difficult to estimate the final monthly cost. There are several third-party services, such as the Cloud Comparison Tool, for estimating cloud costs.
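To see why the a-la-carte model makes estimation hard, it helps to line the charges up. The sketch below uses entirely made-up rates (they are not any provider's actual pricing); the point is that the bill is a sum of several independently metered quantities, each of which must be guessed in advance:

```python
# Hypothetical rate card -- illustrative numbers only.
RATES = {
    "instance_hour": 0.32,   # per large-instance hour
    "storage_gb":    0.10,   # per GB-month stored
    "bandwidth_gb":  0.12,   # per GB transferred out
    "io_million":    0.10,   # per million I/O requests
}

def monthly_estimate(usage):
    """Sum the a-la-carte line items into one monthly figure."""
    return sum(RATES[item] * qty for item, qty in usage.items())

# Two instances running all month, plus storage, bandwidth and I/O.
estimate = monthly_estimate({
    "instance_hour": 2 * 730,   # ~730 hours in a month
    "storage_gb":    200,
    "bandwidth_gb":  500,
    "io_million":    40,
})
print(round(estimate, 2))
```

Get any one of the usage guesses wrong – bandwidth and I/O requests are the usual culprits – and the final bill drifts well away from the estimate, which is exactly the problem a flat-priced dedicated server avoids.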

An important cost which is often overlooked is support. With a typical provider of dedicated physical servers, support is included in the price (although premium support costs extra); most cloud providers, however, offer no support by default and charge even for basic email support.

A major benefit of the cloud is that testing and development servers can be spun up and down on demand, minimising the cost of keeping dedicated servers for functions which only use server resources for a small portion of the month. In practice, cloud servers often get spun up and then forgotten about, silently running up the cloud bill. Cloud providers offer only minimal tooling to track and prevent this, so purchasing third-party monitoring tools is often essential.
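The kind of check those third-party tools perform is simple enough to sketch. The function below is hypothetical – it works on plain records rather than any real provider's API, and `find_stale_instances` is an illustrative name – but it shows the idea: flag anything that has been running longer than you would expect a test box to live:

```python
from datetime import datetime, timedelta

def find_stale_instances(instances, max_age_days=7, now=None):
    """Flag instances running longer than max_age_days.

    `instances` is a list of dicts with 'name' and 'launched' keys --
    a stand-in for whatever your provider's API actually returns.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=max_age_days)
    return [inst["name"] for inst in instances if inst["launched"] < cutoff]

now = datetime(2012, 9, 1)
fleet = [
    {"name": "web-prod",    "launched": datetime(2012, 8, 30)},
    {"name": "test-forgot", "launched": datetime(2012, 7, 2)},
]
print(find_stale_instances(fleet, max_age_days=7, now=now))  # ['test-forgot']
```

Run something like this on a schedule and the forgotten test server shows up in a report instead of on the invoice.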

Scaling

The cloud is often touted as a packaged solution for scaling out applications. In reality, most cloud solutions currently provide little in the way of built-in scaling beyond an easy method to spin up servers on demand. All the tricky scaling issues, such as maintaining user state, remain: these are part of the application architecture and have little to do with the infrastructure the cloud provides.

On the contrary, the cloud can often force developers to handle scaling issues before they would otherwise need to. Both Amazon and Microsoft require a minimum of two instances to be running before their SLA (Service Level Agreement) becomes valid. In addition, virtual servers on clouds are brittle and will regularly be destroyed and automatically rebuilt. The result is that an application needs to be designed to run across multiple servers from the moment it lands on the cloud. Forcing developers to consider multi-server scaling early can be a benefit, but it is worth noting that a decently specified server can cope with an impressive load – a content-focused site such as a news/magazine site should be able to comfortably serve 10 million page views a month (roughly four page requests per second on average) from a single physical server.
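User state is the classic example of the work this forces on you. The minimal sketch below (hypothetical names, a plain dict standing in for an external store such as Redis or a database) shows why: sessions held in one server's memory vanish when the next request lands on – or the instance is rebuilt as – a different server, so state has to move out of the process:

```python
class Server:
    """A toy web server holding user sessions.

    With no shared store, each server keeps sessions in its own
    memory -- state is lost when a request lands elsewhere. Passing
    the same external store to every server fixes that.
    """
    def __init__(self, store=None):
        self.sessions = store if store is not None else {}

    def handle(self, user, key, value=None):
        session = self.sessions.setdefault(user, {})
        if value is not None:
            session[key] = value      # write to the session
        return session.get(key)       # read back from the session

shared = {}                           # stand-in for Redis / a database
a, b = Server(shared), Server(shared)
a.handle("alice", "cart", ["book"])   # request hits server A...
print(b.handle("alice", "cart"))      # ...follow-up hits server B: ['book']
```

With two isolated `Server()` instances instead, the second lookup returns `None` – which is exactly the bug a two-instance SLA requirement surfaces on day one.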


Support

Your site is down and it looks like a server issue, so who do you call? For a lot of cloud providers there is no one, and you are left to the online forums to troubleshoot issues. Amazon (AWS) acquired a notorious reputation for referring all but the largest customers (who paid a princely $15k per month for support) to its online forums for assistance. The only worthwhile support from AWS starts at 10% of the monthly bill. A notable exception is RackSpace, who provide 24/7 online (IM/chat) support at no charge.

Troubleshooting is also trickier with cloud services. A physical server is isolated from other users, and identifying issues is comparatively straightforward. If an application is running on the cloud, it can be difficult to distinguish between an issue caused by the underlying network and one caused by the application itself.


One Size Fits All

Different types of applications have very different hardware requirements. A Content Management System powering a news website would not require much CPU but would benefit from a large amount of memory (since a massive performance gain can be had from keeping content in memory rather than on disk). By contrast, a statistical analysis application would be a heavy user of processing power.
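The memory-for-content trade the CMS makes is easy to illustrate. A minimal sketch, assuming a hypothetical `render_article` function standing in for whatever expensive database-and-template work a real CMS does – Python's `functools.lru_cache` keeps rendered pages in memory so repeat requests never touch the disk:

```python
from functools import lru_cache

CALLS = {"count": 0}   # counts how often the "expensive" path runs

@lru_cache(maxsize=1024)               # keep up to 1024 rendered pages in RAM
def render_article(article_id):
    CALLS["count"] += 1                # pretend DB/disk fetch happens here
    return f"<html>article {article_id}</html>"

render_article(1)
render_article(1)                      # second hit served from memory
print(CALLS["count"])                  # the expensive path ran only once
```

The more RAM the server has, the larger that cache can be – which is why the CMS wants memory, not cores, and why a one-size-fits-all instance type serves it poorly.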

Specifying servers for different use cases is possible when physical servers are purchased or rented, as each server can be custom-configured with suitable processors and memory. Cloud providers offer only a small set of generic configurations. AWS is the only major provider to offer instances targeting both processor- and memory-intensive applications, but it is a limited range with no opportunity for customisation.

Cloud providers built their networks using only generic commodity hardware and so do not offer access to cutting-edge hardware. A current example is SSD drives, which have been available on physical servers for several years but are still not available in standard cloud setups. This performance benchmark shows the enormous benefit of hosting a database on an SSD.