Slides from talk at NGINX summit
March 6th, 2014
nginx, pagespeed, tech
A major reason to use NGINX is that it's fast. NGINX really is very fast at what it does: moving bytes to the client. But that's not the only thing, or even the main thing, that makes your site feel fast to a visitor. What matters more is what you put on your site, how many bytes that adds up to, how many round trips the client needs to download it all, and how much of that work the browser can do in parallel.
The most common approach here is manual optimization. To compress your images so they require fewer bytes to download, you "save them for the web".
To speed up page loads when visitors load other pages on your site or come back later, you can set up long caching. This means setting Cache-Control headers that allow the browser to keep a copy of the thing it just downloaded in its cache for a long time. When you're using this approach, however, you need to make sure that when you do change something, visitors will see the new version. This means changing the URL whenever you change the content.
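In NGINX, long caching might look something like the sketch below; the file extensions and the one-year lifetime are just illustrative, not a recommendation.

    # Illustrative long-caching config: serve static assets with a
    # far-future Cache-Control header.  Only safe if the URL changes
    # whenever the content does (e.g. a version number or hash in the
    # filename).
    location ~* \.(css|js|png|jpg|gif)$ {
        add_header Cache-Control "public, max-age=31536000";
    }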
There are also a lot of bytes transferred that the browser just doesn't need. JavaScript and CSS, as written by humans, are full of whitespace that the browser simply ignores. Removing it by hand, character by character, is far too tedious, but setting up a custom processing pipeline is also a pain.
Cutting out round trips is another good way to speed up a site. One way to do this is to include small resources directly in the HTML. This is very efficient, but it can also be a maintainability nightmare, with little bits of CSS and JavaScript scattered throughout your codebase.
Another way to reduce round trips is to combine all your images into one larger image, with spriting. Again, this is a big hassle to do on your own.
PageSpeed can help with this.
The idea is that you write your page naturally, not worrying about caching headers, minifying your CSS, or optimizing your images, and then use PageSpeed to apply these optimizations on the fly.
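With ngx_pagespeed that can be as little as a few lines in your nginx config. Here's a rough sketch; the cache path is a placeholder and the filter list is just one reasonable choice (several of these filters are in the default set anyway):

    # Minimal ngx_pagespeed setup: turn it on, give it somewhere to cache
    # optimized resources, and enable some filters.
    pagespeed on;
    pagespeed FileCachePath /var/ngx_pagespeed_cache;

    # Minify and recompress CSS, JavaScript, and images.
    pagespeed EnableFilters rewrite_css,rewrite_javascript,rewrite_images;

    # Long caching with automatic URL versioning, plus round-trip savers.
    pagespeed EnableFilters extend_cache,inline_css,inline_javascript,sprite_images;

Here extend_cache takes care of the caching headers and URL versioning from above, rewrite_css and rewrite_javascript handle minification, and inline_css, inline_javascript, and sprite_images cut out round trips.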
There are also really powerful things you can do once your front-end optimization is consolidated into one tool. A big one is that you can now run experiments to figure out exactly which optimizations are right for your site. While some changes (minifying CSS) are always good, other changes (inlining, spriting) are generally good but depend on the site. Running an A/B test to see whether a given optimization improves your loading times can answer this for your site, and PageSpeed makes this relatively easy.
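ngx_pagespeed ships with an experiment framework for exactly this. Roughly, a config like the sketch below splits traffic between your default configuration and a variant with extra filters, reporting results through Google Analytics; the IDs, percentages, and analytics property here are placeholders.

    # Illustrative A/B test: 45% of visitors get the default configuration,
    # 45% also get inlining, and the rest are left out of the experiment.
    pagespeed RunExperiment on;
    pagespeed AnalyticsID "UA-XXXXXX-Y";
    pagespeed ExperimentSpec "id=1;percent=45;default";
    pagespeed ExperimentSpec "id=2;percent=45;enabled=inline_css,inline_javascript";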
Another advantage to a dynamic optimization tool is that some optimizations can only be performed dynamically. Say you currently use one big stylesheet, main.css, for your whole site, but most pages only use a few rules from it. PageSpeed can inline just those rules, avoiding a round trip that delays rendering.
To do this it injects JavaScript into the page as it passes by, which then runs on the client to analyze the page. That client-side code determines which selectors actually apply and beacons the results back to the server, where they're used to optimize future pages. Without loading the page in a real browser you generally can't know which rules are needed and which aren't, but in a dynamic setup we can get the client to help out.
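This is the prioritize_critical_css filter; turning it on is one more line in the config sketch above:

    # Inline only the CSS rules each page actually needs, learned from
    # client-side beacons.
    pagespeed EnableFilters prioritize_critical_css;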
The takeaway: use ngx_pagespeed and stop optimizing by hand.