26 Jun 2018

...new features

We've rolled out a bunch of new features designed to make it quicker and easier to monitor large numbers of websites - you can now add, edit and delete website monitors in bulk.

Thanks to everyone who has submitted feature requests - this helps us focus the development of Downtime Monkey on the areas that you want.

bulk imports

Bulk Add/Import Monitors

There are two ways to add website monitors in bulk - you can import from a spreadsheet (as a CSV file) or add manually as a comma separated list.

Import from a CSV file

You can add hundreds of monitors in seconds by importing from a CSV file. It's the best way to add monitors if you have a spreadsheet of all your websites.

Simply save the spreadsheet as a .csv file and upload.
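As an illustration (the post doesn't prescribe an exact layout), a single column of URLs saved as CSV is all that's needed - the example addresses below are placeholders:

```
https://example.com
https://example.com/contact
https://another-site.org
```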

Add Manually as a Comma Separated List

It's also possible to add multiple monitors manually. Simply follow each website with a comma in the input form... piece of cake!

bulk add website monitors

Don't worry about duplicates...

Downtime Monkey removes any duplicate URLs before creating new monitors - therefore you'll only ever have one monitor per webpage, making things easier to keep track of.

...or mistakes

Downtime Monkey checks that all URLs are valid. Invalid URLs aren't added as monitors but are shown as invalid in the results, so that you can find and correct them easily.

...or other text

Plain text (or any text that isn't a URL) is ignored too - this means that you can confidently import from a spreadsheet that contains other text as well as website URLs. Only the valid URLs will be imported.
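The import cleaning described above - removing duplicates, flagging invalid URLs and ignoring plain text - can be sketched like this. This is an illustration only, not Downtime Monkey's actual code:

```javascript
// Illustrative sketch of a bulk-import cleaner: de-duplicate, validate,
// and ignore anything that isn't a URL at all.
function cleanImportList(entries) {
  var seen = {};
  var result = { valid: [], invalid: [] };
  entries.forEach(function (raw) {
    var s = raw.trim();
    if (s === '' || seen[s]) return;   // skip blanks and duplicates
    seen[s] = true;
    try {
      var u = new URL(s);
      if (u.protocol === 'http:' || u.protocol === 'https:') {
        result.valid.push(s);          // a monitor would be created for this
      } else {
        result.invalid.push(s);        // reported as invalid in the results
      }
    } catch (e) {
      // plain text / non-URL input is silently ignored
    }
  });
  return result;
}
```

For example, feeding it a list containing a duplicate, some plain text and an `ftp://` address would create monitors only for the unique `http(s)` URLs.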

...or dead websites

Every URL that you add is visited to check that the webpage is real and operational. If the page is redirected or there is no response, the monitor won't be created - you'll be informed in the import results.

bulk import website monitors from CSV file

Bulk Edit Settings

You can now update the settings of all your website monitors at once. Sign up and login, go to your monitors (Pro) and select 'Edit All'.

bulk edit website monitors

You can turn email and SMS alerts on or off, select the email address and phone number for alerts and customise when alerts should be sent - click the button and all your monitors will be updated to the new settings.

It's still possible to apply individual settings to specific monitors - simply edit the settings of the individual monitor as before.

Bulk Delete Monitors

Much the same as bulk edit, you can now delete all your monitors at once.

Note that when you delete a monitor there's no going back and all uptime stats will be deleted along with the monitor. If in doubt, we'd recommend keeping the monitors but turning alerts off.

bulk delete website monitors

Pro Features

These features will be most useful for power users who monitor lots of websites and for this reason we've rolled them out to Pro users. We intend to develop some more features for power users in the coming weeks and months and also something for our Free users.

If you have any feature requests, please let us know via the feedback form on the help and support page.

28 May 2018

...it's GDPR

Last week, we bet you received an onslaught of "we've updated our privacy policy" emails.

If you're a website manager maybe you've been writing those emails and ensuring your site is compliant with the new regulations.

It's been interesting (for us anyway) to listen to reactions to GDPR. It seems that people are split between "what a nightmare - so much paperwork" and "this is great - it protects our privacy" and there is no doubt that we have felt a bit of both.

tired of paperwork


There is an entire section of the tech industry that has a business model of: provide a free app, get people's personal information and sell it to anyone who'll pay.

You know... that app that wants access to your location, all the details of your contacts, your emails and browsing history so that you can play virtual ping-pong.

But free ping-pong is awesome and what's the harm anyway?

If the Cambridge Analytica scandal is anything to go by, then enough to threaten the fabric of democracy in the developed world!


At Downtime Monkey we've always considered privacy and security important and right from the start we've put a fundamental principle at the heart of our service:

We don't ask for personal information unless we truly need it.

We keep third parties to a minimum - we use a few to enable us to provide our service, and we apply the same principle to them.

Here's an example:

We use a text message API to send downtime alerts because the service is reliable worldwide and it saves us having to reinvent the wheel.

If your website goes down and you've set up SMS alerts, Downtime Monkey relays your phone number and the website URL to the API and a text message is sent to your phone. We don't include your name or any other details - just the information needed to get the job done.


Following this principle means that there has been very little for us to change for GDPR. We haven't had to make any changes to the functionality of our application.

However, we have had to check our systems, ensure that all the third parties that we use are GDPR compliant, and put some documentation in place.

We dedicated a few days' work to this and although we'd rather have spent the time developing our services, we're happy that privacy is being taken seriously by (some) regulators.

So will GDPR fix the tech industry's privacy problem?

Although GDPR may help, it's unlikely to fix the situation completely. It's predictable that organisations whose income comes from collecting users' personal information and selling it to the highest bidder will find a way around the regulation, or simply accept the fines that come their way.

One way or another we can't see this section of the tech industry disappearing overnight.

On a positive note though, maybe the regulation will encourage more developers and startups to use a model of business that puts customer privacy first.

To support our privacy-centric model of business, sign up and then upgrade to a Pro Account. If you're already a Pro user, thanks for your support - you make it all possible!

Oh yes... and we've updated our privacy policy.

25 Apr 2018

Downtime occurs. It's an unfortunate fact of online life.

No website is able to provide 100% uptime - even tech giants like Google suffer downtime, albeit very occasionally.

So, some amount of downtime is inevitable, but how much is acceptable?

This question is obviously subjective - downtime that's acceptable for one person may be intolerable for another. Therefore, we undertook a little research...

apollo reliability quote

The Survey

We ran polls across 14 different Google+ communities, asking the question "What's the minimum level of acceptable uptime for a website?"

The options for answering were: 99%, 99.9%, 99.99%, 99.999% and 'other'.

A big thanks to everyone who took the time to respond!

A community was selected for the poll if it was active, responsive, welcoming and if the topic of website uptime was considered relevant to the community.

website offline quote

Overall Results

Here are the combined results from all communities that were polled (the total votes for each option are shown in the chart):

acceptable uptime results

Average Result

We can see that, although the most popular result was 99.999%, there was no 'runaway winner'. Therefore, we calculated an average result that would take all votes into consideration.

Note that simply taking the mean of all the results would have led to an average that was skewed towards the lowest option of 99%. To avoid this we calculated a meaningful average that allocated all votes an equal weight - you can see the method used at the end of the article.

Here is the average result as a percentage and as actual downtime:

Average acceptable uptime: 99.95%
Downtime per week: 4min 50sec
Downtime per month: 21min 2sec
Downtime per year: 4hr 12min

acceptable uptime results

Top Comments

"For e-commerce 5 nines for sure, but for a personal blog 99% would be acceptable."

"How many nines can you afford?"

"How much does it cost if the site is down?"

"99.999% (or even more) is pretty doable as long as you have the right architecture. I highly recommend reading the book 'Site Reliability Engineering: How Google Runs Production Systems'"

"Fun fact, the Apollo space program had 99.9% reliability as a goal while airlines today achieve 99.99999% reliability"

"It depends on when the downtime happens"

"It's not about what is acceptable, it's about 'what-it-is'"

"If a site goes offline on the web and no one is around to see it, does it make a 503?"

Results By Community

Here are the results broken down by community... or to go straight to the conclusions click here.

From the communities with more than 20 responses, 'Cloud Computing' had the highest result (99.977% average acceptable uptime) and 'Web Design' had the lowest result (99.898% average acceptable uptime):


(The number of votes and the average acceptable uptime for each community are shown in its chart.)

Programming

programming acceptable uptime results

Computer Programmers

computer programmers acceptable uptime results

PHP Programmers

PHP programmers acceptable uptime results

Web Development

web development acceptable uptime results

Computer Science

computer science acceptable uptime results

Cloud Computing

cloud computing acceptable uptime results

Web Design

web design acceptable uptime results

Web Performance

web performance acceptable uptime results

Web Designers

web designers acceptable uptime results

Entrepreneurs, Self-Employed & Small Business

enterpreneurs business acceptable uptime results

Internet Marketing

internet marketing acceptable uptime results

Entrepreneurs/Self-Employed Community

enterpreneurs self employed acceptable uptime results

Blog Community

blog community acceptable uptime results


Conclusions

Uptime of 99.95% was the average result from the survey and this seems like a reasonable value, allowing just over 4 hours of downtime per year.

However, not all websites are the same. Busy business sites will require higher availability, while 99% uptime may be acceptable for casual sites with few visitors.

Not All Downtime Is The Same

Two hours of downtime at 4am on a Sunday may affect fewer users than 5 minutes of downtime on a Tuesday afternoon. So, if downtime is inevitable, say for essential maintenance, it makes sense to schedule it during off-peak hours.

Accurate Monitoring

Setting an uptime goal is one thing but making sure you achieve it is another. You should monitor your site and check the uptime stats regularly. Our free accounts have 3-minute checks and uptime stats to 1 decimal place and our pro accounts have checks every minute and uptime stats to 3 decimal places.

When To Be Alerted

Consider your acceptable uptime when setting custom alert times. For a website that requires 99.99% uptime or more you will probably want to be alerted the instant the site goes down, but for a website that requires 99% uptime you could schedule alerts to be sent when the site has been down for 10 minutes.

Appendix - The Meaningful Average

To avoid an average that was skewed towards the lowest option of 99% each option was allocated a weighted value on a linear scale. 99% was given a value of x, 99.9% a value of 2x, 99.99% a value of 3x and 99.999% a value of 4x.

A curve was then plotted of the weighted value against the percentage uptime.

The mean weighted value was calculated by multiplying the number of votes for each option by their weighted value, adding the products together and dividing the total by the total number of votes.

The mean weighted value was then applied to the curve and the corresponding percentage uptime was found.
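The method can be sketched in a few lines of code. The vote counts below are made up for illustration - they are not the survey's actual data:

```javascript
// Sketch of the 'meaningful average' calculation with hypothetical votes.
var options = [99, 99.9, 99.99, 99.999]; // percentage uptime options
var weights = [1, 2, 3, 4];              // weighted values on a linear scale (x = 1)
var votes   = [10, 12, 15, 20];          // hypothetical vote counts

var totalVotes = votes.reduce(function (a, b) { return a + b; }, 0);
var meanWeight = votes.reduce(function (sum, v, i) {
  return sum + v * weights[i];
}, 0) / totalVotes;

// read the mean weighted value back off the curve by interpolating
// linearly between the two options it falls between
var i = Math.min(Math.floor(meanWeight) - 1, options.length - 2);
var averageUptime = options[i] +
  (meanWeight - weights[i]) * (options[i + 1] - options[i]);
// with these made-up votes, averageUptime comes out at roughly 99.971%
```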

21 Mar 2018

Not all websites are the same. From personal blogs to business websites, online shops to community forums, SaaS applications to video streaming services, websites come in all shapes, sizes and flavours.

It follows that not all websites have the same uptime requirements. If a personal blog goes down for 20 minutes it might not be a big problem, but the same downtime for a popular online shop could be a major concern.

With this in mind we developed a feature which enables the timing of alerts to be customised specifically for each website monitor.

lots of penguins

Customise Your Alert Times

For each monitor, you can set a custom alert time. This is the time that the website must remain down before an alert is sent.

What are the options?

The alert time can be set to instant, 1, 2, 3, 5, 10, 15, 30 or 60 minutes. If a monitor's alert time is set to instant, an alert would be sent as soon as the website goes down. If it's set to 5 minutes, an alert would be sent if the site stays down for longer than 5 minutes.

Set different times for different websites

If you monitor multiple websites you can set different thresholds for each monitor. You could set your personal blog's alert time to 15 minutes and your business website's to 3 minutes.

Set different times for email and SMS alerts

It's possible to set different alert times for email and SMS alerts on the same monitor. For example, the email alert time could be set to 3 minutes while the SMS alert time is set at 10 minutes. If the website stays down for 5 minutes then you'd receive an email alert but no SMS alert. However, if the site stays down for 12 minutes, you'd receive both email and SMS alerts.
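The decision logic amounts to a simple threshold comparison - a sketch using the example thresholds from the text, not the actual implementation:

```javascript
// Which alerts fire for a given downtime duration (all values in minutes)?
function alertsFor(downMinutes, emailAfterMin, smsAfterMin) {
  return {
    email: downMinutes > emailAfterMin, // e.g. email alert time of 3 minutes
    sms: downMinutes > smsAfterMin      // e.g. SMS alert time of 10 minutes
  };
}
```

With thresholds of 3 and 10 minutes, a 5-minute outage triggers only the email alert, while a 12-minute outage triggers both.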

Do alert times affect stats & logs?

No. Alert times only affect when alerts are sent. Every downtime, no matter how short, is recorded and can be viewed in the monitor's logs and stats.

For example, if you set an alert time to 5 minutes and the website goes down for 4 minutes, no alert would be sent but the details of the downtime would be logged and the 24 hour stats would show as 99.722% uptime.
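That figure checks out arithmetically:

```javascript
// the example from the text: 4 minutes of downtime in a
// 24-hour window (1440 minutes)
var uptimePercent = (1 - 4 / 1440) * 100;
// shown to 3 decimal places this is 99.722, matching the stats above
```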

Great For Slow Websites

Custom alert times are really useful for monitoring websites on servers that are slow. There are many reasons that a website can be slow including hosting on overloaded servers or using bulky content management systems.

In an ideal world we'd all have super-fast websites but in reality, budget and time constraints mean that a lot of websites run slowly - 30% of the web now runs on WordPress!


When a website fails to return a response to Downtime Monkey within 30 seconds it is marked as down due to timeout. 30 seconds is a long time for a site to respond (we're not talking total page load time here - just the time for the page to respond) and for a well optimised site this should almost never occur (see how to speed your website up).

However, for slow websites this can occur fairly regularly with the result that they may receive a lot of alerts.

The default alert time for email alerts is 1 minute so alerts will be sent if a website times out for two consecutive checks, one minute apart. For SMS alerts, the default setting is instant so every time a site times out an alert will be sent.

With a custom alert time in place, you can decide how long a site should be allowed to be down before you're alerted. Setting a slightly longer period of, say, 5 minutes can really cut down on the number of alerts received from slow sites, but means you'll still be informed if a prolonged downtime takes place.

Ideal For Bulk Monitoring

If you monitor several websites then setting a custom alert time is a good way to cut down on the number of alerts that you receive, while still getting notified if a site goes down for a longer period.

For users who manage a lot of sites we also recommend that you rate limit your SMS alerts.

How To Set A Custom Alert Time

This is an advanced feature and is only available to Pro users.

Step 1 (when adding a new monitor)

Login and navigate to Add Monitor.

Step 1 (when editing an existing monitor)

Login, navigate to Monitors, and select the gear icon to edit the monitor's settings.

Step 2 - Set Email Alert Time

Select a time from the dropdown menu under 'Downtime before sending email'. Here the alert time is set to 5 minutes.

custom email alert time

Step 3 - Set SMS Alert Time

Select a time from the dropdown menu under 'Downtime before sending SMS'. Here the alert time is set to 10 minutes.

custom SMS alert time

Step 4

Click 'Add Monitor' or 'Update Monitor'

update monitor

If you'd like to propose an additional feature for Downtime Monkey, login and submit a feature request via the feedback form.

12 Mar 2018

Update 19 Mar 2018:

The Downtime Monkey server upgrade was successfully completed over the weekend. The process was started on Saturday at 8pm as scheduled, although the actual migration took place overnight on Sunday. There were two short periods when the site was offline around midnight and 3am UTC.

Uptime stats for users' websites were affected for 24 hours after the upgrade but all inaccuracies have now been corrected and all website stats are now accurate. Overall, the transition was smooth and all our users can now enjoy the benefits of a faster, more powerful server.

We'll start with a big thanks to everyone who has signed-up recently - we really appreciate it!

The Downtime Monkey server now requires an upgrade and we've decided that this should take place during off-peak hours this coming weekend. The upgrade is scheduled for Saturday, 17th March beginning some time after 8pm UTC.

server upgrade

Downtime Monkey will be offline for a short period while the maintenance is carried out. We loathe downtime and although we are aware that occasionally it is unavoidable we're sorry for any inconvenience.

The good news is that all users will enjoy the benefits of a new faster, more powerful server after the changes have been made.

Thanks again and see you on the other side!

21 Feb 2018

A couple of weeks ago we launched a new feature for Pro users: rate limiting SMS alerts. Back then we promised that free users wouldn't be left out and we've just rolled out the planned upgrade to all free accounts: we've increased the available number of monitors from 20 to 60.

That's right, you can now monitor up to 60 websites completely free.

hot air balloons

How do I get the upgrade?

If you already have a free account you don't need to do anything - we've upgraded your account and you can take advantage of the extra monitors right away.

If you don't have an account yet, simply sign up here. All new users will be on the upgraded plan with 60 monitors.

I'm a Pro user - do I get an upgrade?

Although this upgrade is just for free accounts we are planning to release some different upgrades for Pro users over the next few months. We aim to continually develop and improve Downtime Monkey - standing still is not an option!

We are aware that free accounts now come with more monitors than some Pro accounts - the Pro30 and Pro50 plans have 30 and 50 monitors. At first this may seem strange and we considered removing the Pro30 and Pro50 options and making the Pro100 the entry-level Pro account.

However, after some thought, we decided that keeping all Pro options was best because some Pro users only monitor a few websites but want access to advanced Pro features. Features like fine tuning of alert settings, detailed uptime stats, reasons for every downtime, 1 minute monitoring intervals, and alerts to multiple emails and phones are some of the reasons that users choose a Pro account when monitoring fewer than 60 websites.

By keeping the Pro30 and Pro50 accounts we continue to provide a cost-effective Pro plan for users who only need to monitor a few websites but who want access to powerful monitoring features.

If you'd like to propose an additional feature for Downtime Monkey, login and submit a feature request via the feedback form.

06 Feb 2018

We've just rolled out a brand new feature: the ability to set rate limits on SMS alerts.

Before we tell you all about it, a big thank you to everyone who has provided feedback or submitted feature requests - it helps us focus on making the improvements that people really want.

The video gives a short overview with full details of the feature below:

Rate Limiting SMS Alerts

You can now set an SMS rate limit on your account - this rate limit sets the maximum number of 'down' text message alerts that will be sent in an hour.

What are the options?

The options are: 2, 3, 4, 5, 10, 20 or unlimited 'down' alerts/hour

Are 'up' alerts limited?

Yes. For every down alert that is sent, a corresponding 'up' alert will always be sent when the website goes back up.

So if you set the rate limit at 3/hour, you'll receive a maximum of 3 'down' alerts and 3 corresponding 'up' alerts - a maximum of 6 text messages in total. This way you'll always be kept informed of each website's status.
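A rolling-hour limiter along these lines could look like this - an illustrative sketch, not Downtime Monkey's implementation:

```javascript
// Allow at most `limit` 'down' SMS alerts per rolling hour. An 'up' alert
// is only paired with a 'down' alert that was actually sent, so 'up'
// alerts can never exceed the limit either.
function makeSmsLimiter(limit) {
  var sentDownTimes = []; // timestamps (ms) of 'down' alerts already sent
  return function allowDownAlert(nowMs) {
    var hourAgo = nowMs - 60 * 60 * 1000;
    // keep only the alerts sent within the last hour
    sentDownTimes = sentDownTimes.filter(function (t) { return t > hourAgo; });
    if (sentDownTimes.length >= limit) return false; // limit reached
    sentDownTimes.push(nowMs);
    return true;
  };
}
```

With a limit of 3, the first three 'down' alerts in an hour are sent and the fourth is suppressed; once an hour has passed, alerts are sent again.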

lots of penguins

Monitoring multiple webpages

Rate limits will be really useful for people who monitor a lot of webpages, for example:

1) Web Agencies or Webmasters who manage multiple websites that are hosted with the same hosting provider or on the same server.

2) Website Owners or Managers who monitor multiple pages on a single website.


If a web server has a problem, all the websites on that server usually go down around the same time.

If you monitor 30, 100 or 1000 websites and they all go down at once, without rate limiting you'd be bombarded with text messages.

With a rate limit in place, a few alerts would be sent before the limit is reached so you'll still know that there is a major issue. You'll be able to take action quickly, whether this is a phone call to your hosting provider or a reboot of your own server. However, you'll not get flooded with alerts.

When each site comes back up you'll receive an 'up' message - you can then relax knowing that your sites are online.

How To Set The Rate limit

This feature is only available to Pro users but to make sure Free account holders don't feel left out we're rolling out a different upgrade for free accounts very soon - to stay in the loop follow us on Twitter @DowntimeMonkey.

It's really simple to set a rate limit for your account:

Step 1

Login and navigate to SMS Alert Settings

Step 2

Select a rate limit from the dropdown menu, then click 'Set Rate Limit'.

rate limit SMS alerts

That's it - if you were expecting something more complicated we're sorry to disappoint :)

If you'd like to propose an additional feature for Downtime Monkey, login and submit a feature request via the feedback form.

Main photo: 'King penguins' by Brian Gratwicke under Creative Commons Licence

17 Jan 2018

...and how you can too!

Over the past couple of weeks we've been optimising the Downtime Monkey website to reduce page load time. We've had some excellent results: in the best case scenario we cut page load time by 58% and even in the worst case, the page load was still 9% faster.

All of the changes that were made are straightforward and we've provided in-depth details of the optimisations so that you can apply them to your own website.

If you have any questions, ask us on Twitter @DowntimeMonkey.

Starting Point

Our starting point was a site that was already developed to be light. All the code was hand-written from scratch: our own CSS framework instead of Bootstrap, vanilla JavaScript instead of jQuery, no PHP framework and no CMS.

Images had been used sparingly and the filesizes kept small.

However, we'd not optimised specifically for page speed so we were hoping for some improved performance.

Selected Pages

To find our starting point we chose two different pages to use as benchmarks:

The Plans page was selected because it was a good benchmark for many pages on the site: it had an average amount of text, fonts and images.

The Home page was chosen because it is the main landing page and therefore is important from the perspectives of 'first page load' and SEO. It's also one of the heaviest pages on the site with a large header image and embedded YouTube video.

Also note that the Plans page is specifically for the US (see /en-us/ in the URL) and the Home page is specifically for the UK (/en-gb/) - we wanted to benchmark from different locations across the world so it is vital to use the correct landing page for each location.

Note that 'hreflang' is used so that Google search results serve the correct page for each location, removing the need for users to be redirected to their country-specific page. This is important from a page speed perspective since redirects cost time and are best kept to a minimum.
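For reference, hreflang annotations are just link elements in the head of each page - the URLs below are illustrative, using the /en-gb/ and /en-us/ paths mentioned above:

```html
<!-- hreflang tags so search engines serve the right regional page -->
<link rel="alternate" hreflang="en-gb" href="https://downtimemonkey.com/en-gb/">
<link rel="alternate" hreflang="en-us" href="https://downtimemonkey.com/en-us/">
```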

To get detailed load time data we went to Web Page Test, entered the URL and hit 'Start Test'. For the Plans page test location, we selected a US location and for the Home page we chose a UK location.

Benchmark Load Times

These benchmarks are the load times before optimisation took place - the load times after the optimisations are shown in the results section at the end of the post.

Plans Page: 2.105 seconds, Home Page: 2.712 seconds

The waterfall diagrams show the page load times broken into different requests - this helped identify the slowest parts of the pages to load:

1) Font Awesome is slow to load on both pages - taking over 600ms.

2) There are too many CSS files. Because each one requires its own request to the server, multiple files really slow things down.

3) Images reduce speed on the home page, especially the large header image.

4) The slowest aspect of page load is the embedded YouTube video, however, it turns out that this is not a major issue (find out why later in the post).

Recommended Optimisations

With the bottlenecks identified we headed over to Google PageSpeed Insights, entered the URL and hit 'Analyze'.

PageSpeed Insights gives recommendations for ways to increase page speed. Read on to see the actions that were taken...

Consolidate and Minify CSS: saving 100-200ms

The page load breakdowns helped us identify that we had too many CSS files. There were separate CSS files for the responsive grid, modifiers, content, forms and media queries. This organisation makes life easy when you need to adjust the CSS of a website but for speed it is better to consolidate all these into one file.

To do this we simply copied and pasted the CSS from each individual file into one single file.

The important thing to remember when consolidating multiple CSS files is: the order that you add the code from each file must be the same as the order that the original files were linked from the webpage.

For example, originally the head section of each webpage linked to the stylesheets like this:

<!-- Styles -->
<link href="framework.css" rel="stylesheet">
<link href="general-modifiers.css" rel="stylesheet">
<link href="content.css" rel="stylesheet">
<link href="form.css" rel="stylesheet">
<link href="media-queries.css" rel="stylesheet">

We created a file called all-styles.css and copied the code from the framework.css file first, then added the code from the general-modifiers.css file, then content.css etc., finishing with media-queries.css.

With the stylesheets consolidated it was time to minify the code.

Well written code is organised to be easy to read so it will include spacing, formatting and comments that are useful for the developer. Minifying simply removes everything that isn't actual code to be executed in order to reduce the filesize.

For example, this code:

/* registration form */

.registration-form {
	width: 100%;
}

.registration-form input[type="text"] {
	height: 36px;
}
When minified will become:

.registration-form{width: 100%;}.registration-form input[type="text"]{height: 36px;}

To minify our code we went over to CSS Minifier, pasted the code from the file into the Input CSS field and hit Minify. We then created a file called all-styles.min.css and pasted the minified code into it.

A word of warning here - many minifiers don't handle media queries well. CSS Minifier does convert normal media queries correctly but it struggles with complex media queries such as:

@media only screen and (-webkit-min-device-pixel-ratio: 2)

The minifier removes the spaces next to "only" and "and" which are essential. If your code contains complex media queries like this you'll need to add this spacing in manually. This is not a big problem and with search and replace just takes a couple of minutes.

With the CSS consolidated and minified it was just a case of updating the head of the webpages and uploading the new files to the server:

<!-- Styles -->
<link href="all-styles.min.css" rel="stylesheet">

Serve Font Awesome From CDN: saving 200-550ms

Font Awesome is a great way to provide icons for a website. It's quick and easy to work with and makes for a lighter, faster site. However, the font itself is quite large and was slow to load.

The solution was to serve Font Awesome from a content delivery network (CDN) - a network of servers across the globe that each have the Font Awesome files ready to serve to users closest to their location.

Using a CDN not only decreases the load time of Font Awesome but also increases the likelihood of a user already having the font cached on their computer, even if they haven't visited our site - if they've visited another site which uses the same CDN then Font Awesome should be on their computer already.

There are several free CDNs that serve Font Awesome. In the past we've used Font Awesome's own CDN but found it unreliable (it regularly went down for several hours) so this time we opted for the Bootstrap CDN.

We went to https://www.bootstrapcdn.com/fontawesome/ and copied the link into the head of our webpages. We replaced this:

<!-- Icon Fonts -->
<link href="path-xxx/font-awesome/css/font-awesome.min.css" rel="stylesheet">	

With this:

<!-- Icon Fonts -->
<link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css" rel="stylesheet" integrity="sha384-wvfXpqpZZVQGK6TAh5PVlGOfQNHSoD2xbE+QkPxCAFlNEevoEH3Sl0sibVcOQVnN" crossorigin="anonymous">

There is a downside to using a CDN: if the CDN itself suffers downtime your site will be affected - in our case this would mean all the icons on the Downtime Monkey site disappearing! As ever though, there is a solution: if the CDN fails, we'll serve a fallback copy.

In other words, if Font Awesome fails to load from the CDN then the webpage will revert to the copy of Font Awesome that is stored on our own server.

This is achieved by adding a few lines of JavaScript to the .js file which is included at the bottom of every web page:

// fallback function for Font Awesome
function ensureCssFileInclusion(cssFileToCheck) {
    var styleSheets = document.styleSheets;
    for (var i = 0, max = styleSheets.length; i < max; i++) {
        if (styleSheets[i].href == cssFileToCheck) {
            return; // the CDN stylesheet loaded - nothing more to do
        }
    }
    // no matching stylesheet was found, so add a new HTML link element
    // pointing at the local copy to the HEAD section of the page
    var link = document.createElement("link");
    link.rel = "stylesheet";
    link.href = "https://downtimemonkey.com/path-xxx/font-awesome/css/font-awesome.min.css";
    document.getElementsByTagName("head")[0].appendChild(link);
}

// call the function, passing the CDN URL to check for
ensureCssFileInclusion("https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css");
The function loops through the stylesheets that are included and if a stylesheet with a matching path to the Bootstrap CDN is present then no more code runs.

If however, there is no matching stylesheet that means that the CDN has failed and a link to the locally stored version of Font Awesome is added to the head of the webpage.

You can just copy and paste the code making sure to replace the URL of the fallback with the URL of your own fallback.

Async Load Google Fonts: saving 150-300ms

We use a Google Font called 'Cabin' for most of the text on Downtime Monkey.

Google Fonts enables websites to deliver varied typography without having to worry about compatibility - gone are the bad old days when websites were tied to Times New Roman and Arial.

The 'Cabin' font is loaded from Google's CDN and in the unlikely event that the Google CDN goes down (it's possibly the most reliable CDN on the planet), text is displayed in 'Sans Serif' instead... so no need for a JavaScript fallback.
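That fallback comes for free from the standard CSS font stack - a sketch of the kind of rule involved (the exact selectors on our site may differ):

```css
body {
    /* if Cabin fails to load, the browser falls back to its default sans-serif font */
    font-family: 'Cabin', sans-serif;
}
```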

However, PageSpeed Insights flagged up one issue that related to our use of Google Fonts: "Eliminate render-blocking JavaScript and CSS in above-the-fold content" was the recommendation.

The issue was that we linked to the Google Font from the head of the webpages like any other CSS file:

<!-- custom fonts -->
<link href="https://fonts.googleapis.com/css?family=Cabin" rel="stylesheet">	

All the styles in the head of a webpage are loaded before the page itself renders and each of the stylesheets effectively blocks the rendering of the page.

Asynchronous loading allows the page to begin rendering before a stylesheet is loaded - the page starts to load quicker but the downside is that all the styles aren't ready.

For this reason asynchronous loading is not good for many stylesheets - for example, if the styles for the Top Menu were loaded asynchronously the menu would first appear as a mess before changing to the correctly styled menu.

In many cases this makes for a bad user experience, however for a font the only visible effect will be a quick (0.2 second) flash of unstyled text as the page loads initially. In this case asynchronous loading gives a better user experience than a delay in page load.
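For reference, a widely used alternative to a loader library is the media-swap trick, where the stylesheet is requested with a non-matching media type (so it doesn't block rendering) and switched once loaded - shown here as a general sketch, not the approach we took:

```html
<link href="https://fonts.googleapis.com/css?family=Cabin" rel="stylesheet" media="print" onload="this.media='all'">
```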

To load the Google font asynchronously we used Web Font Loader which is a library developed jointly by Google and Typekit.

Web Font Loader is also hosted on the Google CDN - we headed to the GitHub repository, which has all the latest information, and simply copy/pasted the following JavaScript into our .js file. The only edit we had to make was adding our font family, 'Cabin':

WebFontConfig = {
    google: { families: ['Cabin'] }
};

(function(d) {
    var wf = d.createElement('script'), s = d.scripts[0];
    wf.src = 'https://ajax.googleapis.com/ajax/libs/webfont/1.6.26/webfont.js';
    wf.async = true;
    s.parentNode.insertBefore(wf, s);
})(document);

Enable Gzip Compression: saving 100-400ms

When a file is compressed, the information within it is encoded using fewer bits and the filesize is reduced - and smaller filesizes mean faster load times.

In fact, for speed gains vs effort required this was the best performing optimisation we made.

Compression is enabled at server level and there are several methods that can be used:

1) If you have a VPS or a dedicated server that runs Apache you can edit the Apache configuration file, httpd.conf.

2) If you're on shared hosting you can edit the .htaccess file at the root of your website.

Simply add this code to either file:

<IfModule mod_filter.c>
    AddOutputFilterByType DEFLATE "application/atom+xml" \
                                  "application/javascript" \
                                  "application/json" \
                                  "application/ld+json" \
                                  "application/manifest+json" \
                                  "application/rdf+xml" \
                                  "application/rss+xml" \
                                  "application/schema+json" \
                                  "application/vnd.geo+json" \
                                  "application/vnd.ms-fontobject" \
                                  "application/x-font-ttf" \
                                  "application/x-javascript" \
                                  "application/x-web-app-manifest+json" \
                                  "application/xhtml+xml" \
                                  "application/xml" \
                                  "font/collection" \
                                  "font/eot" \
                                  "font/opentype" \
                                  "font/otf" \
                                  "font/ttf" \
                                  "image/bmp" \
                                  "image/svg+xml" \
                                  "image/vnd.microsoft.icon" \
                                  "image/x-icon" \
                                  "text/cache-manifest" \
                                  "text/calendar" \
                                  "text/css" \
                                  "text/html" \
                                  "text/javascript" \
                                  "text/plain" \
                                  "text/markdown" \
                                  "text/vcard" \
                                  "text/vnd.rim.location.xloc" \
                                  "text/vtt" \
                                  "text/x-component" \
                                  "text/x-cross-domain-policy" \
                                  "text/xml"
</IfModule>

We got this code from GitHub again - this time from the HTML5 Boilerplate Server Configs repository, which provides "a collection of boilerplate configurations that can help your server improve the web site's performance and security".

There are a whole range of config settings for many different types of server. These particular settings were taken from the Apache server configs, in the folder: 'src' > 'webperformance' > 'compression.conf'.

The repository is kept up to date so it is worth checking out to make sure you have the latest settings.

Note that another method is available to cPanel users. If the idea of editing code makes you squirm or you don't know where to find your .htaccess file then this is the best option: Login to cPanel and select 'Optimise website'. Choose 'Compress the specified MIME types', input the list of MIME types (e.g. text/html etc.) shown in the above code and hit 'update settings'.

Note that this will compress most website files but not PHP files. Compression of PHP files was enabled by editing the PHP INI settings. This is easily done in cPanel: login and select 'MultiPHP INI Editor' and turn 'php_flag zlib.output_compression' to 'On'.
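If you edit php.ini directly rather than through cPanel, the equivalent setting is the standard zlib flag (a minimal sketch):

```ini
; php.ini - compress PHP output with gzip
zlib.output_compression = On
```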

Optimise Images: saving 20-100ms

Modern cameras with tens of Megapixels take pictures that have massive filesizes - think megabytes rather than kilobytes. Trying to load a 3MB header image over the internet every time a webpage loads is a bad idea and can reduce page load to the speed of a sloth!

Our starting point was pretty good though: all images were already under 100kB, and most under 10kB. However, PageSpeed Insights still flagged up that some of the images could be smaller.

There were two parts to our image optimisations - sizing and compression.

Image Sizing

Images with smaller dimensions have smaller filesizes. With that in mind we followed two rules:

1) Never place oversized images on the server and scale them down in the browser - this will cause slow page loads.

2) Never place undersized images on the server and scale them up in the browser - this will cause blurry images.

Our sizing optimisations involved finding the dimensions (in pixels) of the image when displayed on the webpage and making sure that the actual image that was stored on the server had the same dimensions.
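A quick way to audit this is to compare each image's natural (stored) dimensions with its displayed dimensions - a small sketch using standard DOM properties (the function name is ours, not from any library):

```javascript
// Return images whose stored size differs from their displayed size
function findMisSizedImages(doc) {
    return Array.from(doc.images).filter(function (img) {
        return img.naturalWidth !== img.width || img.naturalHeight !== img.height;
    });
}
```

Run against `document` in the browser console, it lists every image that is being scaled up or down by the browser.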

When finding the display dimensions of responsive images, we used the image dimensions as displayed for desktop screens because these were the biggest dimensions needed for the particular image.

We were surprised to make some gains by resizing as we thought all our images had been sized correctly.

However, we'd used our logo on multiple pages throughout the website and on some pages only a small logo was needed (maximum size 26x26 pixels) but we had stored a larger one on the server (180x180 pixels). This was easily fixed by creating a separate image for the small logo that was just 26x26 pixels.

Image Compression

Our compression optimisation involved reducing the filesize of each image while maintaining the dimensions. Note that .jpg compression is 'lossy' so caused a reduction in visual quality, while .png compression is 'lossless' and visually the compressed image was identical to the uncompressed version.

A nice feature of PageSpeed Insights is that when it runs, it creates a folder for download which contains optimised versions of all the images that it deems too large. We looked at these images to see the filesize that we needed to achieve for each image.

Note that we didn't just go ahead and use the images produced by PageSpeed Insights - better visual results can be achieved when a human looks at every image.

Image optimisation is a balance between visual quality and file size - images need to be as small as possible but still look sharp to the human eye. We compressed to the point just before an image lost visual quality.

To optimise our .jpg images we used Adobe Fireworks for both resizing and compression - it gives fine-grained control over image compression and quality. There are plenty of alternatives though - http://jpeg-optimizer.com is a good online optimiser that is easy to use.

To optimise .png images we used Fireworks for resizing and Pngyu for compression.

Pngyu is open-source software which allows batch compression - we used this to compress a folder of images of flags of all countries in the world with one click.

After compression all our images were under 45kB, with most under 7kB.

Minify JavaScript: saving 0-10ms

Minifying JavaScript is basically the same as minifying CSS - removing all the whitespace, comments and other non-essentials from the code.
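As a made-up illustration (not our actual code), here is the same function before and after minification - the behaviour is identical but the minified form is far fewer bytes (it's renamed here only so both versions can coexist):

```javascript
// Before minification: whitespace, comments and descriptive names
function addNumbers(firstValue, secondValue) {
    // add the two values and return the result
    return firstValue + secondValue;
}

// After minification: same logic, fraction of the size
function addNumbersMin(a,b){return a+b}
```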

Our starting point for JavaScript was pretty good: a single .js file of just 4.5kB was included at the bottom of every webpage. Compared with writing scripts directly on the page this has the advantage of being cacheable.

Minifying the .js was really easy - in fact, PageSpeed Insights did it for us. When we analysed a page with the non-minified .js file a minified version was produced for download. Thanks!

Time-wise this didn't gain us much because the file was already very small. However, for sites with a lot of JavaScript the gains can be huge.

Enable Browser Caching: saving 0 or 500-850ms

When browser caching is enabled on a server and a user visits a page, some of the files that make up the webpage are stored in the cache on the user's computer. If the user visits the site again, those files are loaded directly from their computer, saving a round trip to the server.

Therefore for 'first view' there are no gains in page speed by enabling cache. However, gains are huge when the user revisits the webpage or visits another webpage that uses the same files (i.e. any other page on the same website).

Similar to enabling compression, browser caching is enabled by file type at server level by editing the httpd.conf or .htaccess files (assuming Apache).

We added this code to the httpd.conf file:

# Cache Control: one year for image files
<filesMatch "\.(jpg|jpeg|png|gif)$">
Header set Cache-Control "max-age=31536000, public"
</filesMatch>

# Cache Control: one month for CSS and JS
<filesMatch "\.(css|js)$">
Header set Cache-Control "max-age=2628000, public"
</filesMatch>

# Cache Control: one week for the favicon, which can't be renamed
<filesMatch "\.(ico)$">
Header set Cache-Control "max-age=604800, public"
</filesMatch>

#Cache Expires for Fonts 1 month
<IfModule mod_expires.c>
ExpiresActive on
# Embedded OpenType (EOT)
ExpiresByType application/vnd.ms-fontobject         "access plus 1 month"
ExpiresByType font/eot                              "access plus 1 month"

# OpenType
ExpiresByType font/opentype                         "access plus 1 month"
ExpiresByType font/otf                              "access plus 1 month"

# TrueType
ExpiresByType application/x-font-ttf                "access plus 1 month"
ExpiresByType font/ttf                              "access plus 1 month"

# Web Open Font Format (WOFF) 1.0
ExpiresByType application/font-woff                 "access plus 1 month"
ExpiresByType application/x-font-woff               "access plus 1 month"
ExpiresByType font/woff                             "access plus 1 month"

# Web Open Font Format (WOFF) 2.0
ExpiresByType application/font-woff2                "access plus 1 month"
ExpiresByType font/woff2                            "access plus 1 month"
</IfModule>

The first section contains 3 blocks of code which use Cache Control to set the length of time that specific filetypes are cached for. We set images to be cached for a year, CSS and JavaScript files to be cached for a month and favicons to be cached for just a week.
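The per-filetype rules above can be summarised as a small helper (a hypothetical sketch for illustration - the real decisions are made by Apache, not JavaScript):

```javascript
// Mirror the Apache Cache-Control rules: 1 year for images, 1 month for CSS/JS, 1 week for the favicon
function cacheControlFor(path) {
    if (/\.(jpg|jpeg|png|gif)$/.test(path)) return 'max-age=31536000, public'; // 365 days
    if (/\.(css|js)$/.test(path)) return 'max-age=2628000, public';            // ~30 days
    if (/\.ico$/.test(path)) return 'max-age=604800, public';                  // 7 days
    return 'no-cache'; // anything else: no explicit caching
}
```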

Note that if we do change the CSS or JavaScript on the site and it is essential that all users see the new version, we can get around caching by changing the name of the file. For example, we could rename all-styles.min.css to all-styles-2.min.css.
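A tiny sketch of that renaming idea (the helper name and versioning scheme are ours - any consistent scheme works):

```javascript
// Insert a version number before the .min.css extension to bust the browser cache
function versionedName(filename, version) {
    return filename.replace(/\.min\.css$/, '-' + version + '.min.css');
}

console.log(versionedName('all-styles.min.css', 2)); // all-styles-2.min.css
```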

Although we don't intend to change our favicon, a favicon file can't be renamed (it's always favicon.ico), so we used a shorter expiry time.

The second section of code uses ExpiresByType to set the length of time that fonts (identified by MIME type) should be cached for - in all cases we set fonts to be cached for a month.

For more information check out this webpage on Cache Control or visit the HTML5 Boilerplate Server Configs repository on GitHub, specifically: Apache server configs, in the folder: 'src' > 'webperformance' > 'expires_headers.conf'.

YouTube Video: No Change

The embedded YouTube video was the slowest aspect of the home page but we didn't change it.

It is possible to 'lazy load' YouTube videos by replacing the video with an image and loading the video when the user clicks the image.

But we didn't do it - we must be the lazy ones, right?

No! The first, and most important reason we didn't lazy load the video is that embedded videos provide SEO juice but there is no SEO benefit if 'lazy loading' is used. The second reason is that the video is below the fold and by the time a user has scrolled down the video will be completely loaded.


After making all these changes we ran our benchmark pages through Web Page Test again - this time we selected 'First View and Repeat View' from the Advanced Settings, so that we could see the effects of caching.

Before Optimisation:

Plans Page: 2.105 seconds, Home Page: 2.712 seconds.

After Optimisation, First View:

Plans Page: 1.288 seconds, Home Page: 2.466 seconds.

After Optimisation, With Cache:

Plans Page: 0.876 seconds, Home Page: 1.664 seconds.

The best result was a 58% improvement (repeat view of the Plans page) and the worst result was a 9% improvement (first view of the Home page).

We're pretty happy with the first view results for the Plans page - the 39% improvement in page speed is reflected in many of the pages across the site.

We're also happy with improvements in the home page - everything except for the YouTube video is loaded in under 1.3 seconds.

Here are the 'first view' waterfall diagrams after optimisation:

19 Dec 2017

In an ideal world every site would have 100% uptime, 24/7, 365. However, the reality is not so perfect – hardware failures, DNS issues, DDoS attacks, server maintenance, software problems and poor hosting are among the many causes of downtime.

It’s not all doom and gloom though – by following a few practical steps you can really cut down on your downtime:

website downtime 503 error

Avoid poor hosting

Poor hosting is the most common cause of downtime. It is simple to rectify by using a better quality host and, although it can be a hassle to move a website, it's worth making the effort if your site suffers from regular downtime.

Traditional shared servers, although cheap, are usually quite susceptible to downtime, while private servers and cloud servers with self-healing technology can maintain uptime round the clock if managed properly.

A good web host should guarantee uptime through an SLA (service level agreement). The percentages can be misleading though – a guarantee of 99% uptime may sound good at first but actually allows over 7 hours of downtime each month! Aim for at least a 99.99% uptime guarantee and find out what compensation is provided if the guarantee is broken.
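The arithmetic behind those figures is straightforward - a quick sketch:

```javascript
// Downtime allowed by an uptime SLA over a 30-day month, in hours
function allowedDowntimeHours(uptimePercent) {
    return (1 - uptimePercent / 100) * 30 * 24;
}

console.log(allowedDowntimeHours(99));    // 99% uptime: ~7.2 hours of downtime per month
console.log(allowedDowntimeHours(99.99)); // 99.99% uptime: ~0.072 hours (about 4 minutes)
```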

If you manage your own servers then the quality of the hardware and the team that manages them will be paramount.

Monitor your website

Monitoring your website is a very important step towards reducing website downtime.

A good service will send an alert if your website goes down. Ideally, alerts will be customizable – if your business has support staff in place day and night then email alerts are a good solution. SMS alerts may be more useful for small businesses where text messages can be sent to an emergency phone number outside office hours.

It’s useful to be able to view the details of each downtime as well as uptime statistics so that the performance of your site can be reviewed. This will enable the cause of the downtime to be examined, addressed and fixed.

Website monitoring needn't cost the earth – you can get a free account and start monitoring your websites right away.

Take backups and test restores

Taking regular backups is something we all know is important – these should be automated so that they aren’t missed. It’s also good to test your backups and be familiar with the restore procedure so that if the worst happens you can restore your site quickly and with confidence.

Update CMSs with care

If your website uses a content management system, keeping it up to date with the latest version is one of the most important steps that can be taken to avoid leaving your site vulnerable to exploits.

However, another common cause of website downtime is automated updates of the CMS – incompatibilities of new versions with plugins and themes are known to be a problem and can bring a website down. So it's best to schedule updates for off-peak times and to be on hand when they take place... and always be ready to roll back to the last working version if there are problems.

Keep an eye on bandwidth

It’s a good idea to monitor the bandwidth that is used by visitors to your site. If an unusually large amount of bandwidth is being used it may be a spike in legitimate traffic but it may also be traffic from bad bots.

Comment spam bots are a common problem and although easily avoided by disabling or requiring moderation of comments, if they manage to post successfully they can inundate a site, slowing it to a halt. They often continue to bombard a site even after comments have been disabled.

DDoS attacks have become more prevalent and sophisticated in the last few years – large botnets have been used to bombard sites with traffic and have brought some high profile sites down.

Prevention is better than cure when dealing with both DDoS and spam bots. If a content delivery network (CDN) is a good fit for your site and is affordable, it is a tried and tested solution to both problems.

Have a plan

If your website goes down, having a plan of action in place will reduce the time it takes to get the site live again and is also likely to make life less stressful for those involved. Having clearly defined roles is important – know who will be alerted if the site goes down and have a checklist of actions that they need to take.