There’s no surer way to destroy conversion rate than maintaining a slow website.
An endless number of studies have been published on the effects of a slow website on conversion rate and, if you’re a webmaster who likes to make a profit, the conclusions tell you that failing to address page speed is a bit like walking around wearing trousers with holes for pockets.
What’s more, if your interests extend to appealing to our new internet overlord (Google), page speed is a ranking factor, and Google has recently split its index into a mobile index and a desktop index.
It has also recently deployed (and discontinued) new technologies like SPDY (a complement to the HTTP protocol) and AMP (Accelerated Mobile Pages).
These terms may sound like gobbledygook but don’t worry, we explain them below!
HTTP Archive has been analysing page size since 2010, taking a sample from Alexa’s top 1 million websites.
According to HTTP Archive data from January 2018, the average size of a desktop web page from their sample of 400,000+ websites is about 3,545kB (over 3MB).
Worryingly, according to the same sample over the same timeframe, the average size of a mobile web page is 3,069kB.
What’s particularly apparent is that even over the last year – between 2017 and the beginning of 2018 – the average size of a web page has increased by over 1MB (from 2,476kB to 3,558kB).
If you want to identify the biggest culprit for the increase in download times, look no further than imagery.
As a webmaster, keeping your website fast should probably be your most important objective. Imagine you own a retail store – if your customers can’t access your store, you make less money (or none at all).
The easiest way to articulate this is by taking a trip into the past and looking at Aberdeen Group’s highly popular report titled: ‘The Performance of Web Applications: Customers are Won or Lost in One Second.’
This report was written in 2008, and detailed, even ten years ago, how important a factor speed is in influencing conversions.
The pressure to provide quality information in a timely fashion has only become more apparent with the advent of a seemingly endless range of new portable devices like tablets, smartphones, smart watches and smart speakers.
Users want more data portability and they want to access information wherever and whenever they need it.
Somewhere between small image land and big image land is optimised image land – a utopian location full of fast and attractive web pages.
It’s not always easy to reach optimised image land. The terrain can be rough, the journey can be long, and the motivation to reach your destination is hard to come by when you don’t understand how you’ll be rewarded when you get there.
What’s more, big image land is full of party animals pursuing a carefree existence while ferociously asserting their rights to artistic integrity.
The temptation to join the party is strong and many succumb but with age they come to realise the party can’t go on forever if no-one can afford the entrance fee.
Okay, I’ve taken the metaphor too far but you get the point.
Alongside changing web host, image optimisation is the single best way to improve page load speed. If you take a look at HTTP Archive you’ll quickly discover that images account for over 51% of the total size of all assets downloaded to render a web page.
The total size of all image requests is 1,818kB, compared to the total size of all requests, which is 3,545kB.
Imagine the performance gains you could achieve if you reduced your image file sizes by half, or even 75%.
When it comes to image optimization there are a few things you should know, starting with the difference between raster and vector images.
Before you can use an image you need to decide whether you need a vector image or a raster image, and there are benefits and drawbacks to each.
Vector images are popularly represented by filetypes like SVG and AI, while raster images are popularly represented by a range of filetypes including PNG, JPEG, GIF and PSD.
It’s important to understand that every raster image is simply a grid of ‘squares’ (pixels). Every square is assigned an RGBA (red, green, blue, alpha) value, and each pixel occupies 4 bytes.
On this basis, it’s relatively easy to work out that a 100 x 100 image will comprise 10,000 pixels and 39Kb ((10,000 x 4)/1024).
However, a larger 1,000 x 1,000 image will comprise 3,906kB (which is close to 4MB, i.e. larger than the total size of requests for the average web page!).
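The arithmetic above is easy to reproduce yourself. Here’s a quick sketch of the calculation, assuming 4 bytes (RGBA) per pixel for an uncompressed raster image:

```python
# Rough uncompressed raster-image size: 4 bytes (RGBA) per pixel.
def raw_size_kb(width, height, bytes_per_pixel=4):
    return (width * height * bytes_per_pixel) / 1024

print(round(raw_size_kb(100, 100)))    # 39 (kB) - a small thumbnail
print(round(raw_size_kb(1000, 1000)))  # 3906 (kB) - close to 4MB
```

Real file sizes will be smaller than this once compression is applied, which is exactly the point of the next section.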
This is where image compression comes in.
A number of image editing packages, including Adobe Photoshop, give you the option to sacrifice some quality for significant gains in performance by applying compression settings to raster images.
Take the example of the Wikipedia image on this page. At the highest quality setting the image size is 382Kb. However, at the 3rd lowest quality setting the image size reduces to 85.3Kb (we’ve scaled this image and further reduced its size beyond this).
The key to compression is reducing the number of bits occupied by each RGBA channel. A standard pixel is made up of 4 channels (RGBA) each comprising 8 bits (8 bits x 4 = 4 bytes per pixel). If we reduce the number of bits per pixel we can dramatically reduce the filesize of the image and we do this by ‘compressing’ the image.
Image compression offers the user an artificial ‘representation’ of the original image, and there are two forms of compression: lossy and lossless.
According to HTTP Archive, JPEG is the image type you’re most likely to find on websites.
JPEG is a lossy filetype. A number of image editing packages, including Adobe Photoshop, let you sacrifice some quality for significant gains in performance by applying lossy compression to JPEG images.
PNG was developed as an alternative to the much lower quality GIF format. PNGs are usually utilised for logos, drawings and icons.
One of the benefits of PNG is that it supports transparency (think of the background of a logo – PNG allows a transparent background layer to ensure a graphic blends seamlessly with whatever background it’s placed on).
PNG is also a lossless compression format; however, this comes with a drawback – PNG filesizes are typically a lot larger than JPEG.
The GIF format typically produces a much smaller filesize than JPEG, largely because a GIF image can only store 256 colors. GIF is a lossless format and is used predominantly for uncomplicated imagery – logos, line drawings etc. Another advantage of GIF is that it supports animation.
This is a blog post in itself; however, you have a number of options:
We don’t typically recommend installing plugins unless you absolutely have to, however in the case of image optimization, and weighing up the relative benefits and drawbacks, it’s wholly worth it if the outcome is faster page load time.
At this stage it’s worth noting that different audiences require different image dimensions. For example, while an image width of 400px may be suitable for users on mobile devices, it’s not going to be the most appropriate choice for desktop users with wider screens.
This is where srcset, an HTML attribute, comes in. Srcset allows you to specify multiple images in your HTML and vary the image served dependent on the width and resolution of the user’s viewport.
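A minimal sketch of what this looks like in practice (the filenames here are hypothetical – substitute your own image variants):

```html
<!-- Serve a smaller image to narrow viewports, a larger one to wide screens. -->
<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
     sizes="(max-width: 600px) 400px, 800px"
     alt="Example photo">
```

The browser picks the best candidate from `srcset` based on the viewport width and device pixel density, falling back to `src` in older browsers.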
WordPress automatically creates a number of versions of your uploaded image to serve to users of different device types (using srcset). However, it’s not foolproof and it can cause additional problems – it’s also well beyond the scope of this article (we’ll cover responsive images in full in a future blog post).
There are a host of WordPress plugins which will ‘automate’ image optimization, however here are two options we’ve tried in the past:
If you have a copy of Photoshop then it’s remarkably easy to compress your own images (Photoshop costs about $19 per month). Photoshop features a compression slider with values 1-12 which enables you to specify the ‘quality’ of an image.
The save for web feature has been marked as ‘legacy’ in the latest version of Photoshop (‘save for web’ allowed you to specify a value of between 1 and 100).
To adjust the size of a JPEG file, simply click File > ‘Save As’ and then ‘Save’; before the image is saved, Photoshop will ask you to select the level of compression.
You can also use the ‘Save for Web’ feature, which can be found under File >> Export >> ‘Save for Web (Legacy)’ (or hit Ctrl + Alt + Shift + S on your keyboard).
If you don’t have a copy of Photoshop, and you don’t want to use a plugin, there are a host of free image editors available online – one of the more popular options is GIMP (gimp.org).
A content delivery network is a collection of servers dedicated to distributing cached content to local audiences.
Traditional shared web hosting relies on a single server to facilitate all incoming requests; with a content delivery network, responses are delivered from servers local to the user making the request.
One of the primary advantages of a CDN is that a local, cached request will typically be served a lot faster than a request routed to a server thousands of miles away.
One of the disadvantages to a CDN is that the content served isn’t always the most up to date version – a cached version may not refresh for hours at a time.
CDNs used to be the preserve of big corporations; however there are now a number of free ‘CDNs’ dedicated not only to distributing content but to facilitating more secure web presences for websites of all sizes.
CloudFlare is probably your best option. It’s free and seamlessly integrated with a number of web hosts.
CloudFlare doesn’t merely distribute your content across multiple servers, it also protects your website against a range of threats (primarily DDoS). They have also recently updated their free package to support HTTPS (this service used to be available only to premium subscribers).
Every time you want to visit a website, something called a DNS lookup is performed.
The DNS (domain name system) is responsible for mapping human-readable domain names to IP addresses (the numerical address of a server) – without DNS we’d all have to type IP addresses to reach websites, and it would become very confusing very quickly.
When you point your nameservers to CloudFlare (you can change nameservers with your web host) the initial request for your website will be routed to a CloudFlare IP address local to the user’s request.
Prior to this, CloudFlare will ‘scan’ the requesting IP to ensure it isn’t malicious. Provided everything is okay, it will route the user to a local version of your content.
While this sounds like a fairly complicated process, it all happens in a split second.
To find out more about implementing CloudFlare just visit this guide on their website.
It’s worth noting some complaints have emerged online about the drawbacks of CloudFlare’s service.
While feedback is typically overwhelmingly positive, some people have expressed concerns about website content being served from ‘bad neighbourhoods’.
With CloudFlare you no longer have control over which IP address your content is being served from, and the concern from SEOs is that Google may ‘penalise’ websites based on association (ie other CloudFlare websites being served from the same IP).
There are a number of case studies from people who have tested pre- and post-Cloudflare traffic levels, a small number of which have reported significant gains after disabling CloudFlare.
A script execution occurs when server-side code – e.g. PHP – runs through an interpreter on a server.
An interpreter is a module – software – on a web server that ‘translates’ programming languages into output (HTML).
Most small websites are hosted on a shared hosting server; a shared server, unlike a dedicated server, facilitates hosting for upwards of hundreds of websites (good web hosts limit the number of websites hosted on a single server; bad web hosts pile websites high to maximise short-term profits).
Under a platform called CloudLinux (something most web hosts use but very rarely discuss), all shared hosting accounts exist within an LVE (lightweight virtual environment); in other words, they are ‘compartmentalised’ on the server.
Under CloudLinux, each shared hosting account is granted a limited amount of server resources (e.g. CPU, hard disk, memory, I/O etc) – if it exceeds its allocated resources then the website can become temporarily unavailable.
If this seems abstract and divorced from reality, just check out the number of websites which become unavailable after exceeding their restrictions.
One of these restrictions is ‘entry processes’ – the number of simultaneous script executions a web host will permit. An entry process is counted when a script runs and is released once the process completes.
Web hosts restrict entry processes to between 5 and 100, and plugins are a primary reason why users exceed their entry process quotas. Using too many plugins doesn’t merely harm page load time, it can also affect the availability of your website.
There’s no strict limit on the number of plugins you should use, and using plugins isn’t in any way wrong per se; however, it’s always best to accept that what you gain in functionality you may lose in page speed and even uptime.
This is particularly the case if the plugin is reliant on downloading assets like scripts or stylesheets or making additional database queries.
In short, if you don’t absolutely need it then don’t use it.
HTTP/2 is the new version of the HTTP protocol. HTTP stands for Hypertext Transfer Protocol and a protocol is basically an agreed upon communication standard between connected devices.
You’ll likely have come across HTTP in your daily browsing, and while an update to HTTP may not sound like a big deal, it really is.
HTTP/2 was the first update to the HTTP/1.1 protocol since 1999 and it was long overdue; it was also influenced by SPDY, a protocol developed by Google in response to the numerous problems it encountered with HTTP/1.1.
Well a lot really, however a full overview is beyond the scope of this article so we’ll give you the most important points.
Traditional HTTP requests are synchronous, which means they are responded to (by the web server) in the order they are made (eg a request for an image is made, the server delivers the image, a request for a CSS file is made, the server delivers the CSS file, etc).
When a web browser requests files from the server under HTTP/1.1, it needs to make all of those requests one by one.
If a web page makes 100 requests (according to HTTP Archive the average number of requests per page in January 2018 was 108), and every file takes 100ms to download, that’s a page load time of 10 seconds.
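The back-of-the-envelope maths is worth seeing laid out. This sketch assumes every file takes the same 100ms (real pages vary wildly) and that the browser opens roughly six parallel HTTP/1.1 connections, which is a common browser default:

```python
requests = 108          # average requests per page (HTTP Archive, Jan 2018)
per_request_ms = 100    # assumed time to download each file

# Strictly one-by-one over a single connection:
serial_ms = requests * per_request_ms

# With ~6 parallel HTTP/1.1 connections (ceiling division for the batches):
parallel_ms = -(-requests // 6) * per_request_ms

print(serial_ms / 1000)    # 10.8 seconds
print(parallel_ms / 1000)  # 1.8 seconds
```

HTTP/2 multiplexing goes further still, because many requests share one connection and responses can interleave rather than queue behind each other.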
Modern browsers can counteract this problem by opening multiple HTTP/1.1 connections to a server, however HTTP/2 has more elegant and efficient solutions.
HTTP/2 facilitates multiplexing which means multiple requests can be made over the same connection.
The server can also respond asynchronously which means it can deliver files in whatever order it chooses (this means one file can’t block every other file from being downloaded as it would under HTTP/1.1).
What are some of the other features of HTTP/2?
Implementing HTTP/2 is typically very simple.
Minification is the process of removing meaningless bits of data like whitespace, comments or superfluous characters from a file.
While minification shouldn’t affect the execution of a script, it can create additional complications dependent on how you minify and what you’re minifying.
This said, if you want to eke out every performance gain then it’s usually worth doing and, considering the time investment is minimal, there’s nothing to be lost.
If you choose to minify manually then be careful with filenames – scripts and stylesheets in WordPress are typically enqueued using functions like wp_enqueue_script() and wp_enqueue_style(). These functions will expect your files to be in the location specified in functions.php.
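To make the idea concrete, here’s a deliberately naive minifier sketch – it strips comments and collapses whitespace from CSS. A real minifier (cssnano, UglifyJS and friends) handles far more edge cases, so treat this as illustration only:

```python
import re

def minify_css(css: str) -> str:
    """Toy CSS minifier: strip comments, collapse whitespace."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # remove /* comments */
    css = re.sub(r"\s+", " ", css)                   # collapse runs of whitespace
    css = re.sub(r"\s*([{};:,])\s*", r"\1", css)     # trim space around punctuation
    return css.strip()

print(minify_css("body {\n  color: red; /* brand colour */\n}"))
# body{color:red;}
```

None of this changes what the stylesheet does; it only shrinks the bytes sent over the wire.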
Alternatively, you can utilise W3 Total Cache (see below).
GZip compression is another option you can utilise to compress files transferred from your server.
GZip compression can ‘reduce’ file size by upwards of 70%. Compatible browsers will send a header (message) to the server when they request a page stating they’ll accept a compressed version of the page if it’s available.
The server checks to see if a compressed version is available and, if it is, it will send the compressed version for the browser to unzip.
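You can see why this helps with a quick experiment. Repetitive markup (which HTML is full of) compresses extremely well – this sketch uses Python’s standard gzip module on a contrived, repetitive payload:

```python
import gzip

# A repetitive HTML-like payload; real pages vary, but markup is full of repetition.
html = ("<div class='item'><p>Hello world</p></div>\n" * 200).encode("utf-8")
compressed = gzip.compress(html)

print(len(html), "bytes uncompressed")
print(len(compressed), "bytes compressed")
```

The compressed payload here is a small fraction of the original, which is exactly the saving the browser and server negotiate via the Accept-Encoding and Content-Encoding headers.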
The two easiest options are to enable compression via cPanel (your web host’s admin panel) or edit your .htaccess file (a configuration file which usually sits at the root of your server).
With SiteGround – our web host – enabling GZip is as simple as clicking a button; however, if you need to modify your .htaccess file, it’s worth searching your web host’s help pages for further information on what you need to do.
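If the .htaccess route applies to you, the snippet usually looks something like this (this assumes an Apache server with mod_deflate enabled – check your host’s documentation, as setups differ):

```apache
# Hypothetical .htaccess snippet: compress common text-based responses.
<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
```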
To test whether GZip is implemented correctly you can use the GZip testing tool from GIDNetwork.
Expires headers tell a browser how long it should store a particular file in the browser cache.
In other words, if a browser has already visited your website and downloaded a number of linked-to assets, an expires header will set a timeframe for when those assets need to be downloaded again (eg you can specify a browser shouldn’t attempt to download an image again for another month).
Needless to say, this has no impact on first-time visits, but it will have an impact on repeat visits – if the browser pulls images from its cache rather than the server, it will obviously make a substantial reduction to the total size of the accumulated server responses.
To enable expires headers you’ll need to modify your .htaccess file – don’t worry, it’s really easy. As above, your .htaccess file is typically found at the root of your server. All it takes is adding a few lines to specify how long particular file types should be cached before being downloaded again.
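As a rough illustration, the added lines tend to look like this (this assumes Apache with mod_expires available – the timeframes are examples, so adjust them to suit your content):

```apache
# Hypothetical .htaccess snippet: tell browsers how long to cache each type.
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType image/jpeg "access plus 1 month"
  ExpiresByType image/png  "access plus 1 month"
  ExpiresByType text/css   "access plus 1 week"
</IfModule>
```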
See this article for more information.
You’ve probably heard of caching – it’s simply storing a copy of a file locally to make subsequent requests faster.
For example, rather than relying on new HTTP requests to serve all of the files a page needs, a browser can simply look to its cache – if a user has already visited the website, the browser has already done the hard work of downloading large files like images from the server, and by leveraging its cache it doesn’t need to do so again (thus significantly speeding up page load).
Smaller, cached versions of pages can also be delivered from servers (see Cloudflare above). Whether or not a browser reverts to a cached version of a file or relays a new request to the server is dependent on the server response and your HTTP caching configuration.
HTTP caching is a substantial topic and could take up an entire blog post on its own.
If you want to get started with caching then we recommend looking at W3 Total Cache – it’s a very popular caching plugin for WordPress, however it can be complicated for new users to get to grips with (we’ll be publishing a blog post on this too).
Hotlinking refers to another website sending an HTTP request to download an image directly from your server. Needless to say, if other webmasters are serving images from your web server then it depletes the resources you’ve been assigned.
Most cPanel configurations allow you to enable hot link protection and prevent other websites from serving your images.
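If your host doesn’t offer a one-click option, hotlink protection can also be added via .htaccess. A common sketch (assuming Apache with mod_rewrite – replace example.com with your own domain) blocks image requests whose referer is another site:

```apache
# Hypothetical .htaccess hotlink protection: deny image requests
# that come from pages outside your own domain.
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteCond %{HTTP_REFERER} !^$
  RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
  RewriteRule \.(jpe?g|png|gif)$ - [F,NC]
</IfModule>
```

The empty-referer condition lets direct visits and some privacy-conscious browsers through, which is usually the behaviour you want.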