
Making Applications Scaleable

A great deal is made of the word "scaling" by application developers and engineers, but for most people it remains shrouded in mystery.

Today we're going to try to make it easy to understand!

All applications need to meet the demands of their customers: we want performance to be good enough without wasting resources (and money!).  So scaling is important.

In the past the options were rather limited.  We either over-provisioned a system (so we had capacity for the highest peak demand), accepted that peak periods would mean a poorer experience for customers, or manually added more computing resources as required (with the inevitable delay and cost this brings).

These days we use cloud services to keep costs as low as possible while making the system (automatically) flexible.  We keep the baseline capacity at an absolute minimum, allow it to "scale up" as needed and, perhaps more importantly, scale down (i.e. remove cost and resource) as soon as peak demand subsides.

There's no mystery to it, but this is the power of the cloud: you pay more per machine than you would for equivalent owned hardware (although arguably the enhanced security and much-reduced maintenance/engineering costs offset that), but you only pay for what you use.

Let's take a hypothetical example and assume we run an ecommerce store that's busy around new product launches and Christmas but sees lower volumes for the remainder of the year.

Using that example let's map the two potential ways of working:

- Traditional - We know that we need to handle 1,000 users per second at our peak load, which means running 20 "machines".  We buy 20 machines, and pay for the software and other costs alongside them, safe in the knowledge that they'll handle our peak load.

- Cloud - We provision 2 "machines" and set them to scale up (i.e. add more machines) and scale down (i.e. remove unneeded machines/resources) as quickly as possible.  For peak periods we know are coming, we can also pre-scale to add capacity before the torrent starts!
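To make the cloud approach concrete, here's a minimal sketch of what a scale-up/scale-down rule might look like.  The function name, thresholds and limits are purely illustrative assumptions for this article, not any particular cloud provider's API:

```python
# Illustrative autoscaling rule: the thresholds (75% / 25%) and the
# 2-20 machine limits are assumptions matching the example in this article.
def desired_machines(current: int, cpu_utilisation: float,
                     minimum: int = 2, maximum: int = 20) -> int:
    """Scale up when busy, scale back down as soon as demand subsides."""
    if cpu_utilisation > 0.75:        # approaching peak load: add capacity
        return min(current + 1, maximum)
    if cpu_utilisation < 0.25:        # demand has subsided: shed cost
        return max(current - 1, minimum)
    return current                    # steady state: no change

print(desired_machines(2, 0.90))   # 3  -> scale up
print(desired_machines(5, 0.10))   # 4  -> scale down
```

Real autoscalers evaluate a rule like this on a timer against live metrics (CPU, queue depth, requests per second), but the principle is the same: capacity follows demand in both directions.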

Let's also pick some rough numbers to illustrate the difference:

- Traditional - a cost of £10,000 per machine plus £10,000 per software installation.

- Cloud - a cost of £2,000 per machine per month (that's important) including the software rental costs.

Over a 1-year period the traditional model costs (£10,000 + £10,000) × 20 machines - an initial outlay of £400,000.

The cloud is harder to calculate, but let's presume that peak load lasts one month of the year (probably over-generous!).  Our yearly cost then becomes £2,000 x 11 (months) x 2 (machines) + £2,000 x 1 (month) x 20 (machines) - that's £44,000 + £40,000 = £84,000.
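If you'd like to check the year-1 arithmetic yourself, it's a one-liner each way (using the article's rough figures):

```python
# Year-1 cost check using the rough figures from the article.
traditional_year_1 = (10_000 + 10_000) * 20        # hardware + software, 20 machines
cloud_year_1 = 2_000 * 11 * 2 + 2_000 * 1 * 20     # 11 quiet months + 1 peak month
print(traditional_year_1, cloud_year_1)  # 400000 84000
```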

In years 2 and 3 the traditional cost barely moves: the main ongoing cost is software maintenance, typically around 10% of the licence price per year.  For years 2 and 3 (together) we therefore add £10,000 x 10% (maintenance) x 20 (machines) x 2 (years) - another £40,000.

For cloud, years 2 and 3 are the same as year 1, so we add another £84,000 x 2.

What this shows is a total cost over 3 years of:

- Traditional - £400,000 + £20,000 + £20,000 => £440,000*

- Cloud - £84,000 + £84,000 + £84,000 => £252,000
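The whole three-year comparison can be written as a small model.  The constants below are the article's rough figures; the 10% maintenance rate is the assumption made above for years 2 and 3:

```python
# Illustrative 3-year cost comparison using the figures from the article.
MACHINE_COST = 10_000      # one-off hardware cost per machine (traditional)
LICENSE_COST = 10_000      # one-off software licence per machine (traditional)
MAINTENANCE_RATE = 0.10    # annual licence maintenance from year 2 onwards
PEAK_MACHINES = 20
BASE_MACHINES = 2
CLOUD_MONTHLY = 2_000      # per machine per month, software rental included
PEAK_MONTHS = 1            # months per year at peak load

def traditional_total(years: int) -> int:
    upfront = (MACHINE_COST + LICENSE_COST) * PEAK_MACHINES
    maintenance = LICENSE_COST * MAINTENANCE_RATE * PEAK_MACHINES * (years - 1)
    return int(upfront + maintenance)

def cloud_total(years: int) -> int:
    yearly = CLOUD_MONTHLY * ((12 - PEAK_MONTHS) * BASE_MACHINES
                              + PEAK_MONTHS * PEAK_MACHINES)
    return yearly * years

print(traditional_total(3))  # 440000
print(cloud_total(3))        # 252000
```

Putting the model in code also makes the later "what if" questions easy to answer - just change the constants and re-run.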

In this way we can show a cost saving, but there are more benefits to a cloud-first approach.

What if the business is doing fantastically well and demand doubles?  For traditional that's a lot of work and requires purchasing another £400,000 batch of hardware and licences (assuming scaling is linear across the system, of course).  For cloud - if both the baseline and the peak double - we simply add another £84,000 per year.

What if the business shrinks?  In that case traditional costs stay constant while cloud costs decrease as we scale down.

What if we calculate over a 5-year period?  In that case we could argue the costs approach parity.  However, can we predict demand over 5 years?  Is flexibility and agility key to the business?  Will the machines even last 5 years?


Hopefully this brief article has helped, but we'd love feedback or an opportunity to discuss it further.  If so, get in touch!


NOTE: * - we've ignored hardware failures/refresh, network equipment, electricity, cooling, rental/purchase of racking, and DDoS protection and load balancing for those servers.  These will obviously incur additional costs, but adding them further complicates the example, so they're ignored!

About the author

Stuart Muckley

I’ve been a programmer and IT enthusiast for 30 years (since the ZX Spectrum) and concentrated on AI (neural nets & genetic algorithms) at university. My principal skills are in Enterprise and Solution Architecture and managing effective developer teams.

I enjoy the mix between the technical and business aspects: how technology enables the business and how that (hopefully) improves profit/EBITDA and reduces cost-per-transaction, the impact upon staff and how to manage go-live and handover, and risk identification and mitigation. My guiding principle is “Occam’s Razor”: simplicity is almost always the best option, reducing complexity, time to build, organisational stress and longer-term costs.
