12 Mar 2018

Update 19 Mar 2018:

The Downtime Monkey server upgrade was successfully completed over the weekend. The process was started on Saturday at 8pm as scheduled, although the actual migration took place overnight on Sunday. There were two short periods when the site was offline around midnight and 3am UTC.

Uptime stats for users' websites were affected for 24 hours after the upgrade, but all inaccuracies have now been corrected and every website's stats are accurate again. Overall, the transition was smooth and all our users can now enjoy the benefits of a faster, more powerful server.

We'll start with a big thanks to everyone who has signed-up recently - we really appreciate it!

The Downtime Monkey server now requires an upgrade and we've decided that this should take place during off-peak hours this coming weekend. The upgrade is scheduled for Saturday, 17th March beginning some time after 8pm UTC.

[Image: server upgrade]

Downtime Monkey will be offline for a short period while the maintenance is carried out. We loathe downtime and, although we know it is occasionally unavoidable, we're sorry for any inconvenience.

The good news is that all users will enjoy the benefits of a new faster, more powerful server after the changes have been made.

Thanks again and see you on the other side!

21 Feb 2018

A couple of weeks ago we launched a new feature for Pro users: rate limiting SMS alerts. Back then we promised that free users wouldn't be left out and we've just rolled out the planned upgrade to all free accounts: we've increased the available number of monitors from 20 to 60.

That's right, you can now monitor up to 60 websites completely free.

[Image: hot air balloons]

How do I get the upgrade?

If you already have a free account you don't need to do anything - we've upgraded your account and you can take advantage of the extra monitors right away.

If you don't have an account yet, simply sign up here. All new users will be on the upgraded plan with 60 monitors.

I'm a Pro user - do I get an upgrade?

Although this upgrade is just for free accounts we are planning to release some different upgrades for Pro users over the next few months. We aim to continually develop and improve Downtime Monkey - standing still is not an option!

We are aware that free accounts now come with more monitors than some Pro accounts - the Pro30 and Pro50 plans have 30 and 50 monitors. At first this may seem strange and we considered removing the Pro30 and Pro50 options and making the Pro100 the entry-level Pro account.

However, after some thought, we decided that keeping all Pro options was best because some Pro users only monitor a few websites but want access to advanced Pro features. Features like fine tuning of alert settings, detailed uptime stats, reasons for every downtime, 1 minute monitoring intervals, and alerts to multiple emails and phones are some of the reasons that users choose a Pro account when monitoring fewer than 60 websites.

By keeping the Pro30 and Pro50 accounts we continue to provide a cost-effective Pro plan for users who only need to monitor a few websites but who want access to powerful monitoring features.

If you'd like to propose an additional feature for Downtime Monkey, login and submit a feature request via the feedback form.

06 Feb 2018

We've just rolled out a brand new feature: the ability to set rate limits on SMS alerts.

Before we tell you all about it, a big thank you to everyone who has provided feedback or submitted feature requests - it helps us focus on making the improvements that people really want.

The video gives a short overview; full details of the feature are below:

Rate Limiting SMS Alerts

You can now set an SMS rate limit on your account - this rate limit sets the maximum number of 'down' text message alerts that will be sent in an hour.

What are the options?

The options are: 2, 3, 4, 5, 10, 20 or unlimited 'down' alerts/hour.

Are 'up' alerts limited?

Yes. For every down alert that is sent, a corresponding 'up' alert will always be sent when the website goes back up.

So if you set the rate limit at 3/hour, you'll receive a maximum of 3 'down' alerts and 3 corresponding 'up' alerts - a maximum of 6 text messages in total. This way you'll always be kept informed of each website's status.
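To illustrate the behaviour, here's a minimal sketch of the pairing logic in JavaScript - a hypothetical example rather than our actual implementation (sendSms, onSiteDown and onSiteUp are stand-in names):

// hypothetical sketch of rate-limited 'down' alerts with paired 'up' alerts
var RATE_LIMIT = 3;   // maximum 'down' alerts per hour
var downSent = 0;     // 'down' alerts sent in the current hour
var pendingUp = {};   // monitors that were sent a 'down' alert and are owed an 'up'

function sendSms(message) {
    console.log('SMS: ' + message);   // stand-in for a real SMS gateway call
}

function onSiteDown(monitorId) {
    if (downSent < RATE_LIMIT) {
        downSent++;
        pendingUp[monitorId] = true;
        sendSms(monitorId + ' is DOWN');
    }
    // over the limit: downtime is still recorded, just no text message
}

function onSiteUp(monitorId) {
    if (pendingUp[monitorId]) {   // an 'up' text is only owed if a 'down' text was sent
        delete pendingUp[monitorId];
        sendSms(monitorId + ' is UP');
    }
}

// reset the hourly counter
setInterval(function () { downSent = 0; }, 60 * 60 * 1000);

With RATE_LIMIT set to 3 this sends at most 3 'down' texts and 3 corresponding 'up' texts per hour - 6 messages in total, as described above.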

[Image: lots of penguins]

Monitoring multiple webpages

Rate limits will be really useful for people who monitor a lot of webpages, for example:

1) Web Agencies or Webmasters who manage multiple websites that are hosted with the same hosting provider or on the same server.

2) Website Owners or Managers who monitor multiple pages on a single website.


If a web server has a problem, all the websites on that server usually go down around the same time.

If you monitor 30, 100 or 1000 websites and they all go down at once, without rate limiting you'd be bombarded with text messages.

With a rate limit in place, a few alerts would be sent before the limit is reached so you'll still know that there is a major issue. You'll be able to take action quickly, whether this is a phone call to your hosting provider or a reboot of your own server. However, you'll not get flooded with alerts.

When each site comes back up you'll receive an 'up' message - you can then relax knowing that your sites are online.

How To Set The Rate Limit

This feature is only available to Pro users but to make sure Free account holders don't feel left out we're rolling out a different upgrade for free accounts very soon - to stay in the loop, follow us on Twitter @DowntimeMonkey.

It's really simple to set a rate limit for your account:

Step 1

Login and navigate to SMS Alert Settings

Step 2

Select a rate limit from the dropdown menu, then click 'Set Rate Limit'.

[Image: rate limit SMS alerts]

That's it - if you were expecting something more complicated we're sorry to disappoint :)

If you'd like to propose an additional feature for Downtime Monkey, login and submit a feature request via the feedback form.

Main photo: 'King penguins' by Brian Gratwicke under Creative Commons Licence

17 Jan 2018

...and how you can too!

Over the past couple of weeks we've been optimising the Downtime Monkey website to reduce page load time. We've had some excellent results: in the best case scenario we cut page load time by 58% and even in the worst case, the page load was still 9% faster.

All of the changes that were made are straightforward and we've provided in-depth details of the optimisations so that you can apply them to your own website.

If you have any questions, ask us on Twitter @DowntimeMonkey.

Starting Point

Our starting point was a site that was already developed to be light. All the code was hand-written from scratch using: our own CSS framework as opposed to Bootstrap, vanilla JavaScript as opposed to jQuery, no PHP framework and no CMS.

Images had been used sparingly and the filesizes kept small.

However, we'd not optimised specifically for page speed so we were hoping for some improved performance.

Selected Pages

To find our starting point we chose two different pages to use as benchmarks:

The Plans page was selected because it was a good benchmark for many pages on the site: it had an average amount of text, fonts and images.

The Home page was chosen because it is the main landing page and therefore is important from the perspectives of 'first page load' and SEO. It's also one of the heaviest pages on the site with a large header image and embedded YouTube video.

Also note that the Plans page is specifically for the US (see /en-us/ in the URL) and the Home page is specifically for the UK (/en-gb/) - we wanted to benchmark from different locations across the world so it was vital to use the correct landing page for each location.

Note that 'hreflang' is used so that Google search results serve the correct page for the location therefore removing the need for users to be redirected to their country specific page. This is important from a page speed perspective since redirects cost time and are best kept to a minimum.
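For reference, the hreflang annotations sit in the head of each page and look something like this (the URLs here are illustrative, not our exact paths):

<!-- hreflang annotations (illustrative URLs) -->
<link rel="alternate" hreflang="en-gb" href="https://downtimemonkey.com/en-gb/plans">
<link rel="alternate" hreflang="en-us" href="https://downtimemonkey.com/en-us/plans">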

To get detailed load time data we went to Web Page Test, entered the URL and hit 'Start Test'. For the Plans page test location, we selected a US location and for the Home page we chose a UK location.

Benchmark Load Times

These benchmarks are the load times before optimisation took place - the load times after the optimisations are shown in the results section at the end of the post.

Plans Page: 2.105 seconds, Home Page: 2.712 seconds

The waterfall diagrams show the page load times broken into different requests - this helped identify the slowest parts of the pages to load:

1) Font Awesome is slow to load on both pages - taking over 600ms.

2) There are too many CSS files. Because each file requires its own request to the server, multiple CSS files really slow things down.

3) Images reduce speed on the home page, especially the large header image.

4) The slowest aspect of page load is the embedded YouTube video, however, it turns out that this is not a major issue (find out why later in the post).

Recommended Optimisations

With the bottlenecks identified we headed over to Google PageSpeed Insights, entered the URL and hit 'Analyze'.

PageSpeed Insights gives recommendations for ways to increase page speed. Read on to see the actions that were taken...

Consolidate and Minify CSS: saving 100-200ms

The page load breakdowns helped us identify that we had too many CSS files. There were separate CSS files for the responsive grid, modifiers, content, forms and media queries. This organisation makes life easy when you need to adjust the CSS of a website but for speed it is better to consolidate all these into one file.

To do this we simply copied and pasted the CSS from each individual file into one single file.

The important thing to remember when consolidating multiple CSS files is: the order that you add the code from each file must be the same as the order that the original files were linked from the webpage.

For example, originally the head section of each webpage linked to the stylesheets like this:

<!-- Styles -->
<link href="framework.css" rel="stylesheet">
<link href="general-modifiers.css" rel="stylesheet">
<link href="content.css" rel="stylesheet">
<link href="form.css" rel="stylesheet">
<link href="media-queries.css" rel="stylesheet">

We created a file called all-styles.css and copied in the code from the framework.css file first, then added the code from the general-modifiers.css file, then content.css etc., finishing with media-queries.css.

With the stylesheets consolidated it was time to minify the code.

Well written code is organised to be easy to read so it will include spacing, formatting and comments that are useful for the developer. Minifying simply removes everything that isn't actual code to be executed in order to reduce the filesize.

For example, this code:

/* registration form */

.registration-form {
    width: 100%;
}

.registration-form input[type="text"] {
    height: 36px;
}

When minified will become:

.registration-form{width:100%}.registration-form input[type="text"]{height:36px}

To minify our code we went over to CSS Minifier, pasted the code from the file into the Input CSS field and hit Minify. We then created a file called all-styles.min.css and pasted the minified code into it.

A word of warning here - many minifiers don't handle media queries well. CSS Minifier does convert normal media queries correctly but it struggles with complex media queries such as:

@media only screen and (-webkit-min-device-pixel-ratio: 2)

The minifier removes the spaces next to "only" and "and", which are essential. If your code contains complex media queries like this you'll need to add the spacing back in manually. This is not a big problem - with search and replace it just takes a couple of minutes.
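For instance, the query above might come back from the minifier like this, and needs the space after 'and' restored by hand:

/* minifier output - the missing space breaks the query */
@media only screen and(-webkit-min-device-pixel-ratio:2) { /* styles */ }

/* corrected by hand */
@media only screen and (-webkit-min-device-pixel-ratio:2) { /* styles */ }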

With the CSS consolidated and minified it was just a case of updating the head of the webpages and uploading the new files to the server:

<!-- Styles -->
<link href="all-styles.min.css" rel="stylesheet">

Serve Font Awesome From CDN: saving 200-550ms

Font Awesome is a great way to provide icons for a website. It's quick and easy to work with and makes for a lighter, faster site. However, the font itself is quite large and was slow to load.

The solution was to serve Font Awesome from a content delivery network (CDN) - a network of servers across the globe that each have the Font Awesome files ready to serve to users closest to their location.

Using a CDN not only decreases the load time of Font Awesome but also increases the likelihood of a user already having the font cached on their computer even if they haven't visited our site - if they've visited another site which uses the same CDN then Font Awesome should be on their computer already.

There are several free CDNs that serve Font Awesome. In the past we've used Font Awesome's own CDN but found it unreliable (it regularly went down for several hours) so this time we opted for the Bootstrap CDN.

We went to https://www.bootstrapcdn.com/fontawesome/ and copied the link into the head of our webpages. We replaced this:

<!-- Icon Fonts -->
<link href="path-xxx/font-awesome/css/font-awesome.min.css" rel="stylesheet">	

With this:

<!-- Icon Fonts -->
<link href="https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css" rel="stylesheet" integrity="sha384-wvfXpqpZZVQGK6TAh5PVlGOfQNHSoD2xbE+QkPxCAFlNEevoEH3Sl0sibVcOQVnN" crossorigin="anonymous">

There is a downside to using a CDN: if the CDN itself suffers downtime your site will be affected - in our case this would mean all the icons on the Downtime Monkey site disappearing! As ever though, there is a solution: if the CDN fails, we'll serve a fallback copy.

In other words, if Font Awesome fails to load from the CDN then the webpage will revert to the copy of Font Awesome that is stored on our own server.

This is achieved by adding a few lines of JavaScript to the .js file which is included at the bottom of every web page:

// fallback function for Font Awesome
function ensureCssFileInclusion(cssFileToCheck) {
    var styleSheets = document.styleSheets;
    for (var i = 0, max = styleSheets.length; i < max; i++) {
        if (styleSheets[i].href == cssFileToCheck) {
            return; // the stylesheet loaded from the CDN - nothing more to do
        }
    }
    // no matching stylesheet was found, so add an HTML link element pointing
    // at the locally hosted copy to the HEAD section of the page
    var link = document.createElement("link");
    link.rel = "stylesheet";
    link.href = "https://downtimemonkey.com/path-xxx/font-awesome/css/font-awesome.min.css";
    document.getElementsByTagName("head")[0].appendChild(link);
}

// call the function, passing the CDN URL to check for
ensureCssFileInclusion("https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css");

The function loops through the stylesheets that are included and, if a stylesheet whose path matches the Bootstrap CDN is present, no more code runs.

If however, there is no matching stylesheet that means that the CDN has failed and a link to the locally stored version of Font Awesome is added to the head of the webpage.

You can just copy and paste the code making sure to replace the URL of the fallback with the URL of your own fallback.

Async Load Google Fonts: saving 150-300ms

We use a Google Font called 'Cabin' for most of the text on Downtime Monkey.

Google Fonts enables websites to deliver varied typography without having to worry about compatibility - gone are the bad old days when websites were tied to Times New Roman and Arial.

The 'Cabin' font is loaded from Google's CDN and in the unlikely event that the Google CDN goes down (it's possibly the most reliable CDN on the planet), text is displayed in 'Sans Serif' instead... so no need for a JavaScript fallback.

However, PageSpeed Insights flagged up one issue relating to our use of Google Fonts: "Eliminate render-blocking JavaScript and CSS in above-the-fold content" was the recommendation.

The issue was that we linked to the Google Font from the head of the webpages like any other CSS file:

<!-- custom fonts -->
<link href="https://fonts.googleapis.com/css?family=Cabin" rel="stylesheet">	

All the styles in the head of a webpage are loaded before the page itself renders, and each stylesheet effectively blocks the rendering of the page.

Asynchronous loading allows the page to begin rendering before a stylesheet is loaded - the page starts to load quicker but the downside is that all the styles aren't ready.

For this reason asynchronous loading is not good for many stylesheets - for example, if the styles for the Top Menu were loaded asynchronously the menu would first appear as a mess before changing to the correctly styled menu.

In many cases this makes for a bad user experience, however for a font the only visible effect will be a quick (0.2 second) flash of unstyled text as the page loads initially. In this case asynchronous loading gives a better user experience than a delay in page load.

To load the Google font asynchronously we used Web Font Loader which is a library developed jointly by Google and Typekit.

Web Font Loader is also hosted on the Google CDN - we headed to the Github Repository which has all the latest information and simply copy/pasted the following JavaScript into our .js file. The only edit we had to make was adding our font family, 'Cabin':

WebFontConfig = {
    google: { families: ['Cabin'] }
};

(function(d) {
    var wf = d.createElement('script'), s = d.scripts[0];
    wf.src = 'https://ajax.googleapis.com/ajax/libs/webfont/1.6.26/webfont.js';
    wf.async = true;
    s.parentNode.insertBefore(wf, s);
})(document);

Enable Gzip Compression: saving 100-400ms

When a file is compressed, the information within it is encoded using fewer bits, which reduces the filesize - and smaller filesizes mean faster load times.

In fact, for speed gains vs effort required this was the best performing optimisation we made.

Compression is enabled at server level and there are several methods that can be used:

1) If you have a VPS or a dedicated server that runs Apache you can edit the Apache configuration file, httpd.conf.

2) If you're on shared hosting you can edit the .htaccess file at the root of your website.

Simply add this code to either file:

<IfModule mod_filter.c>
    AddOutputFilterByType DEFLATE "application/atom+xml" \
                                  "application/javascript" \
                                  "application/json" \
                                  "application/ld+json" \
                                  "application/manifest+json" \
                                  "application/rdf+xml" \
                                  "application/rss+xml" \
                                  "application/schema+json" \
                                  "application/vnd.geo+json" \
                                  "application/vnd.ms-fontobject" \
                                  "application/x-font-ttf" \
                                  "application/x-javascript" \
                                  "application/x-web-app-manifest+json" \
                                  "application/xhtml+xml" \
                                  "application/xml" \
                                  "font/collection" \
                                  "font/eot" \
                                  "font/opentype" \
                                  "font/otf" \
                                  "font/ttf" \
                                  "image/bmp" \
                                  "image/svg+xml" \
                                  "image/vnd.microsoft.icon" \
                                  "image/x-icon" \
                                  "text/cache-manifest" \
                                  "text/calendar" \
                                  "text/css" \
                                  "text/html" \
                                  "text/javascript" \
                                  "text/plain" \
                                  "text/markdown" \
                                  "text/vcard" \
                                  "text/vnd.rim.location.xloc" \
                                  "text/vtt" \
                                  "text/x-component" \
                                  "text/x-cross-domain-policy" \

We got this code from Github again - this time from the HTML5 Boilerplate Server Configs repository which provides "a collection of boilerplate configurations that can help your server improve the web site's performance and security".

There are a whole range of config settings for many different types of server. These particular settings were taken from the Apache server configs, in the folder: 'src' > 'webperformance' > 'compression.conf'.

The repository is kept up to date so it is worth checking out to make sure you have the latest settings.

Note that another method is available to cPanel users. If the idea of editing code makes you squirm or you don't know where to find your .htaccess file then this is the best option: Login to cPanel and select 'Optimise website'. Choose 'Compress the specified MIME types', input the list of MIME types (e.g. text/html etc.) shown in the above code and hit 'update settings'.

Note that this will compress most website files but not PHP files. Compression of PHP files was enabled by editing the PHP INI settings. This is easily done in cPanel: login and select 'MultiPHP INI Editor' and turn 'php_flag zlib.output_compression' to 'On'.
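If you'd rather edit the INI settings directly than go through cPanel, the equivalent is a single directive:

; php.ini - the same setting as the cPanel option above
zlib.output_compression = On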

Optimise Images: saving 20-100ms

Modern cameras with tens of Megapixels take pictures that have massive filesizes - think megabytes rather than kilobytes. Trying to load a 3MB header image over the internet every time a webpage loads is a bad idea and can reduce page load to the speed of a sloth!

Our starting point was pretty good though: all images were already under 100kB, and most under 10kB. However, PageSpeed Insights still flagged up that some of the images could be smaller.

There were two parts to our image optimisations - sizing and compression.

Image Sizing

Images with smaller dimensions have smaller filesizes. With that in mind we followed two rules:

1) Never place oversized images on the server and scale them down in the browser - this will cause slow page loads.

2) Never place undersized images on the server and scale them up in the browser - this will cause blurry images.

Our sizing optimisations involved finding the dimensions (in pixels) of the image when displayed on the webpage and making sure that the actual image that was stored on the server had the same dimensions.

When finding the display dimensions of responsive images, we used the image dimensions as displayed for desktop screens because these were the biggest dimensions needed for the particular image.

We were surprised to make some gains by resizing as we thought all our images had been sized correctly.

However, we'd used our logo on multiple pages throughout the website, and on some pages only a small logo was needed (maximum size 26x26 pixels) but we'd loaded a larger one (180x180 pixels) onto the server. This was easily fixed by creating a separate 26x26 pixel image for the small logo.

Image Compression

Our compression optimisation involved reducing the filesize of each image while maintaining the dimensions. Note that .jpg compression is 'lossy' so caused a reduction in visual quality, while .png compression is 'lossless' and visually the compressed image was identical to the uncompressed version.

A nice feature of PageSpeed Insights is that when it runs, it creates a folder for download which contains optimised versions of all the images that it deems too large. We looked at these images to see the filesize that we needed to achieve for each image.

Note that we didn't just go ahead and use the images produced by PageSpeed Insights - better visual results can be achieved when a human looks at every image.

Image optimisation is a balance between visual quality and file size - images need to be as small as possible but still look sharp to the human eye. We compressed to the point just before an image lost visual quality.

To optimise our .jpg images we used Adobe Fireworks for both resizing and compression - it gives fine-grained control over image compression and quality. There are plenty of alternatives though - http://jpeg-optimizer.com is a good online optimiser that is easy to use.

To optimise .png images we used Fireworks for resizing and Pngyu for compression.

Pngyu is open source software which allows batch compression - we used it to compress a folder containing flag images for every country in the world with one click.

After compression all our images were under 45kB, with most under 7kB.

Minify JavaScript: saving 0-10ms

Minifying JavaScript is basically the same as minifying CSS - removing all the whitespace, comments and other non-essentials from the code.

Our starting point for JavaScript was pretty good: a single .js file of just 4.5kB was included at the bottom of every webpage. Compared with writing scripts directly on the page this has the advantage of being cacheable.

Minifying the .js was really easy - in fact PageSpeed Insights did it for us. When we analysed a page with the non-minified .js file, a minified version was produced for download. Thanks!

Time wise this didn't gain us much because the file was already very small. However, for sites with a lot of JavaScript gains can be huge.

Enabling Browser Caching: saving 0 or 500-850ms

When browser caching is enabled on a server and a user visits a page, their browser stores some of the files that make up the webpage in its cache. If the user visits the site again, those files are loaded directly from the user's computer, saving a round trip to the server.

Therefore for 'first view' there are no gains in page speed from enabling caching. However, the gains are huge when the user revisits the webpage or visits another webpage that uses the same files (i.e. any other page on the same website).

Similar to enabling compression, browser caching is enabled by file type at server level by editing the httpd.conf or .htaccess files (assuming Apache).

We added this code to the httpd.conf file:

# Cache Control: one year for image files
<filesMatch "\.(jpg|jpeg|png|gif)$">
Header set Cache-Control "max-age=31536000, public"
</filesMatch>

# Cache Control: one month for CSS and JS
<filesMatch "\.(css|js)$">
Header set Cache-Control "max-age=2628000, public"
</filesMatch>

# Cache Control: one week for the favicon (it can't be renamed)
<filesMatch "\.(ico)$">
Header set Cache-Control "max-age=604800, public"
</filesMatch>

# Cache Expires: one month for fonts
<IfModule mod_expires.c>
ExpiresActive on
# Embedded OpenType (EOT)
ExpiresByType application/vnd.ms-fontobject         "access plus 1 month"
ExpiresByType font/eot                              "access plus 1 month"

# OpenType
ExpiresByType font/opentype                         "access plus 1 month"
ExpiresByType font/otf                              "access plus 1 month"

# TrueType
ExpiresByType application/x-font-ttf                "access plus 1 month"
ExpiresByType font/ttf                              "access plus 1 month"

# Web Open Font Format (WOFF) 1.0
ExpiresByType application/font-woff                 "access plus 1 month"
ExpiresByType application/x-font-woff               "access plus 1 month"
ExpiresByType font/woff                             "access plus 1 month"

# Web Open Font Format (WOFF) 2.0
ExpiresByType application/font-woff2                "access plus 1 month"
ExpiresByType font/woff2                            "access plus 1 month"
</IfModule>

The first section contains 3 blocks of code which use Cache Control to set the length of time that specific filetypes are cached for. We set images to be cached for a year, CSS and JavaScript files to be cached for a month and favicons to be cached for just a week.

Note that if we do change the CSS or JavaScript on the site and it is essential that all users see the new version, we can get round caching by changing the name of the file. For example we could rename all-styles.min.css to all-styles-2.min.css.
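The head of each page would then link to the renamed file, which browsers treat as a brand new resource:

<!-- Styles: renamed file, fetched fresh by all browsers -->
<link href="all-styles-2.min.css" rel="stylesheet">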

Although we don't intend on changing our favicon, a favicon file can't be renamed (it's always favicon.ico) so we used a shorter expiry time.

The second section of code uses ExpiresByType to set the length of time that fonts (identified by MIME type) should be cached for - in all cases we set fonts to be cached for a month.

For more information check out this webpage on Cache Control or visit the HTML5 Boilerplate Server Configs repository on Github, specifically: Apache server configs, in the folder: 'src' > 'webperformance' > 'expires_headers.conf'.

YouTube Video: No Change

The embedded YouTube video was the slowest aspect of the home page but we didn't change it.

It is possible to 'lazy load' YouTube videos by replacing the video with an image and loading the video when the user clicks the image.

But we didn't do it - we must be the lazy ones, right?

No! The first, and most important reason we didn't lazy load the video is that embedded videos provide SEO juice but there is no SEO benefit if 'lazy loading' is used. The second reason is that the video is below the fold and by the time a user has scrolled down the video will be completely loaded.
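For anyone who does want to lazy load a video, here's a minimal sketch of the technique - a hypothetical example that assumes a clickable placeholder image with the video's ID in a data attribute (not something we use on this site):

// hypothetical lazy load: swap a placeholder image for the YouTube iframe on click
// assumes <img class="video-placeholder" data-video-id="..."> in the page
var placeholder = document.querySelector('.video-placeholder');
placeholder.addEventListener('click', function () {
    var iframe = document.createElement('iframe');
    iframe.src = 'https://www.youtube.com/embed/' +
                 placeholder.getAttribute('data-video-id') + '?autoplay=1';
    iframe.width = placeholder.offsetWidth;
    iframe.height = placeholder.offsetHeight;
    iframe.setAttribute('frameborder', '0');
    iframe.setAttribute('allowfullscreen', '');
    placeholder.parentNode.replaceChild(iframe, placeholder);
});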


Results

After making all these changes we ran our benchmark pages through Web Page Test again - this time we selected 'First View and Repeat View' from the Advanced Settings, so that we could see the effects of caching.

Before Optimisation:

Plans Page: 2.105 seconds, Home Page: 2.712 seconds.

After Optimisation, First View:

Plans Page: 1.288 seconds, Home Page: 2.466 seconds.

After Optimisation, With Cache:

Plans Page: 0.876 seconds, Home Page: 1.664 seconds.

The best result was a 58% improvement (repeat view of the Plans page) and the worst result was a 9% improvement (first view of the Home page).

We're pretty happy with the first view results for the Plans page - the 39% improvement in page speed is reflected in many of the pages across the site.

We're also happy with improvements in the home page - everything except for the YouTube video is loaded in under 1.3 seconds.

Here are the 'first view' waterfall diagrams after optimisation:

19 Dec 2017

In an ideal world every site would have 100% uptime, 24/7, 365. However, the reality is not so perfect – hardware failures, DNS issues, DDoS attacks, server maintenance, software problems and poor hosting are among the many causes of downtime.

It’s not all doom and gloom though – by following a few practical steps you can really cut down on your downtime:

[Image: website downtime 503 error]

Avoid poor hosting

Poor hosting is the most common cause of downtime. It is simple to rectify by moving to a better quality host and, although moving a website can be a hassle, it's worth the effort if your site suffers from regular downtime.

Traditional shared servers, although cheap, are usually quite susceptible to downtime, while private servers and cloud servers with self-healing technology can maintain uptime round the clock if managed properly.

A good web host should guarantee uptime through an SLA (service level agreement). The percentages can be misleading though – a guarantee of 99% uptime may sound good at first but actually allows over 7 hours of downtime each month (1% of a 30-day month is roughly 7.2 hours)! Aim for at least a 99.99% uptime guarantee and find out what compensation is provided if the guarantee is broken.

If you manage your own servers then the quality of the hardware and the team that manages them will be paramount.

Monitor your website

Monitoring your website is a very important step towards reducing website downtime.

A good service will send an alert if your website goes down. Ideally, alerts will be customizable – if your business has support staff in place day and night then email alerts are a good solution. SMS alerts may be more useful for small businesses where text messages can be sent to an emergency phone number outside office hours.

It’s useful to be able to view the details of each downtime as well as uptime statistics so that the performance of your site can be reviewed. This will enable the cause of the downtime to be examined, addressed and fixed.

Website monitoring needn't cost the earth – you can get a free account and start monitoring your websites right away.

Take backups and test restores

Taking regular backups is something we all know is important – these should be automated so that they aren’t missed. It’s also good to test your backups and be familiar with the restore procedure so that if the worst happens you can restore your site quickly and with confidence.

Update CMSs with care

If your website uses a content management system it is very important to keep it up to date – keeping your CMS up to date with the latest version is one of the most important steps that can be taken to avoid leaving your site vulnerable to exploits.

However, another common cause of website downtime is automated updates of the CMS – incompatibilities of new versions with plugins and themes are known to be a problem and can bring a website down. So it’s best to schedule updates for off peak times and to be on hand when they take place... and always be ready to roll back to the last working version if there are problems.

Keep an eye on bandwidth

It’s a good idea to monitor the bandwidth that is used by visitors to your site. If an unusually large amount of bandwidth is being used it may be a spike in legitimate traffic but it may also be traffic from bad bots.

Comment spam bots are a common problem and although easily avoided by disabling or requiring moderation of comments, if they manage to post successfully they can inundate a site, slowing it to a halt. They often continue to bombard a site even after comments have been disabled.

DDoS attacks have become more prevalent and sophisticated in the last few years – large botnets have been used to bombard sites with traffic and have brought some high profile sites down.

Prevention is better than cure when dealing with both DDoS and spam bots. If a content delivery network (CDN) is a good fit for your site and is affordable, it is a tried and tested solution to both problems.

Have a plan

If your website goes down, having a plan of action in place will reduce the time it takes to get the site live again and is also likely to make life less stressful for those involved. Having clearly defined roles is important – know who will be alerted if the site goes down and have a checklist of actions that they need to take.

07 Dec 2017

A short interview where Ryan Glass, lead developer on Downtime Monkey and director of Big Toe Web Design, talks about taking Downtime Monkey from idea to reality.

[Image: Ryan Glass, Web Developer]

Where did the idea for Downtime Monkey come from?

The idea came in 2016 when we had issues with a server and several of the websites that we managed went down. Once we were aware of the situation it was really easy to sort the problem and we were able to get the websites back up and running in just a few minutes. However, the sites had been down for nearly an hour before we became aware of the problem - in fact the first we knew of the issue was when one of our clients called to let us know that their site had gone down - not good! It was quite embarrassing, a little stressful and at the time I remember thinking, 'this can't happen again!'

So then you decided to develop a website monitoring service?

Not right away - at first I looked for ways to be notified when one of our websites went down.

I asked our hosting provider if they would call us if a website went down (in theory the issue was with them as it was a server problem). Although they were a good quality web host with guaranteed 99.9% uptime the best we could get from them was that if they noticed there was a problem they'd email us. In reality, most web hosts don't monitor the websites that are on their servers and if they detect a server problem they try to fix it before their customers notice.

Next, I looked for third party monitoring services but didn't have much luck. There were several options out there but none did quite what we needed. Some were expensive - so expensive that they didn't show prices on their websites! We found some free options and tried out a bunch: a few just didn't work and others looked like they weren't maintained. Some worked for emails but text messages were either unavailable, unreliable or it wasn't clear what costs were involved in receiving texts. It was often difficult to know the costs of text messages if the monitoring service was overseas.

Are text message alerts a big factor?

Very much so. I like to be able to switch off from emails outside of work: if I'm out to dinner I don't want to have to look at my emails but I need to know ASAP if a bunch of websites have gone down. Also, if I'm out of the city I get a phone signal almost everywhere but 3G is much less widespread. I figured text alerts to my phone were the best option.

So what next?

After drawing a blank I decided to write some scripts, just for us to use in house. The scripts monitored our websites and sent out emails and text messages if a site timed-out or went down. It worked really well and we refined things so that uptime stats were recorded and we could view the uptime of each site.

Then, I thought that if I need this then there are probably others in my position too.

And how long did it take to develop Downtime Monkey?

From the start it has been just under a year - when we took the plunge I expected it to take about three months but it took eight in total. Once development was completed we tested the site for about six weeks in house before going live and then carried out several months of public beta testing which went smoothly.

We added some nice features for Pro users such as being able to customise when an alert gets triggered. We found that some websites (maybe those that use cheap hosting providers or bulky content management systems) can run slowly and that time-outs would trigger alerts quite often. Nobody wants to get 50 emails every day saying that their site has dropped for one minute so we made it possible for users to delay alerts until a site has been confirmed down for a length of time that they specify, say 5 minutes. We also made more detailed stats so that people can view the reason for each downtime - it's useful to be able to see if a downtime was recorded because of a timeout or because of a 404.

We've just started promoting the site and it feels great to get to this stage but I realise that now is not the time to celebrate and have a holiday! There are a few other features that we want to implement but the most important thing now is to get the news out that Downtime Monkey is here and to get as many people as possible to sign up and start using our free service.

20 Nov 2017

Setting up a new website monitor is quick and easy - we ran some tests and found that on average it takes just 5-10 seconds. Here is a very short video with instructions for adding a new website monitor using a Downtime Monkey Free account. There are also written instructions below:

Step 1. Login and navigate to 'Your Monitors' (from the homepage you can find this in the drop-down menu at the top right of the page)

[Image: select website monitors]

Step 2. Select 'Add Monitor' from the main menu

Step 3. Type the web address of the webpage that you want to monitor - be sure to include the whole web address including 'http://' or 'https://'

[Image: how to monitor website]

Step 4. Choose whether to turn email alerts on or off

Step 5. Choose whether to turn SMS alerts on or off

Step 6. Select 'Add Monitor'

That's it! Now that your monitor is set up you'll receive alerts if it goes down and we'll record any downtime that takes place so you can view uptime stats for the website you've added. Pro users have more settings to choose from and receive more detailed stats. Setting up Pro monitors is still quick though, our tests show it usually takes under 30 seconds.

24 Oct 2017

One of the main challenges we faced when developing Downtime Monkey was internationalization (or depending on your point of view: internationalisation, internacionalización, internasjonalisering or 国際化). The aim was to provide our services to as many people as possible across the world - we learned a few things in the process and we thought we'd share some tips...

[Image: website internationalisation]

Email - it's easy

Email is universal and isn't affected by country boundaries - an email sent locally can be treated the same way as one sent across the world. The language the email is read in may be different though, which takes us to the next point.

Languages - there are more than you think

At the time of writing there are 6909 living languages - that's over 35 times the total number of countries. We won't go as far as saying that delivering content in all these languages is impossible - we'll go with infinitely improbable. This illustrates that even with a worldwide service it is important to target your audience. We decided on English as the only language because attempting multiple translations would be a lot of work at a stage when we needed to focus on developing our services. In the future we intend to roll out content in Spanish, French and German as these languages are widely spoken throughout the world.

Content - tailor where possible, adapt where essential

Even though it's not practical to provide translations for everyone, it is possible to tailor content to users in specific countries. Downtime Monkey has specific pages with unique content for (in alphabetical order) Australia, Canada, Ireland, New Zealand, United Kingdom and the United States. We also have adaptive content for almost every other country in the world so that users in Switzerland, for example, will see prices in Swiss Francs, and users in India will see prices in Rupees.
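The mechanics of the adaptive pricing are simple in principle - here's a minimal sketch (hypothetical names and a truncated lookup table; detecting the visitor's country is a separate problem):

// hypothetical sketch: choose a display currency from a visitor's country code
var currencyByCountry = { CH: 'CHF', IN: 'INR', GB: 'GBP', US: 'USD' }; // truncated

function displayCurrency(countryCode) {
    return currencyByCountry[countryCode] || 'USD';   // fall back to a default
}

console.log(displayCurrency('CH'));   // "CHF" - a Swiss visitor sees Swiss Francs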

Payments - use the customer's local currency

People don't like hidden charges and if a customer pays in a foreign currency (for them) their credit card provider will probably charge a foreign currency transaction fee. Also, if a potential customer sees prices in a foreign currency they won't know the exchange rate that will be applied by their card provider, so they won't know the real cost of their purchase. Downtime Monkey supports payments in over 100 local currencies so that users all over the world can buy in their own currency.

SMS - make sure they get through

Unlike email, there can be geographical and network barriers to text messages. When sending texts to users is a part of the service they need to know that they'll receive them. Having a list of countries (and mobile phone networks) that can receive texts and letting customers know whether they can receive texts is important. Our SMS Alerts page shows how we do this. Also, there is no substitute for testing... providing a way for customers to send a test SMS to make sure they receive it is a useful feature.

28 Sep 2017

If you're reading this then Downtime Monkey is live! We're kicking-off with a short period where the site is in beta. Everything has been tested in-house so we're not expecting any surprises but there is nothing better than the experience of real users to help us improve.

If you'd like to get involved the best way is to sign-up, use the service and give us some feedback (via the feedback form on the Help & Support page). We'd be really happy to receive feature requests, questions that we can add to the FAQs or information about any problems or bugs that you come across. If you send us some really useful feedback we'll upgrade your account for free.