Speedy Mondays (Super Mondays July 2013)

To be honest, the mood at the Beehive at Newcastle University was anything but speedy, as the region basked in the ongoing heatwave. Regardless, the July Supermondays had a need for speed, from optimisation to psychology. There was also mention of a new usergroup (JSNortheast, meeting the first Monday of every month).

Richard Powell: Speed and Front End Development

Richard Powell gave a three-pronged approach to considering speed in development: load time, perception of speed, and back-end development. Of these, load time is the most important: apparently 80-90% of website load time is spent on the front end. (Even Tumblr doesn't get this right.)

Much like Stephen Jones at a recent WordPress meetup, he recommended optimising and minifying files (for CSS, using sprites, icon fonts, or base64 encoding; for JS and CSS, concatenating files), obfuscating JS variables, and gzipping text files (around a 70% file-size reduction). He also discussed being defensive about plugins (rather than pulling in a tab navigation plugin, for instance, the same thing can be done in 3-4 lines of code), loading JS last (even with async, it doesn't always work), and using lazy loading (sketched below). And think about coding efficiently!
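By way of illustration (mine, not Powell's code), a minimal lazy-loading sketch might swap in images only as they approach the viewport, assuming each one carries a hypothetical data-src attribute holding the real URL:

```typescript
// Images start with a placeholder src and a data-src attribute (an assumed
// convention) holding the real URL.
function loadVisibleImages(): void {
  document.querySelectorAll<HTMLImageElement>("img[data-src]").forEach((img) => {
    const rect = img.getBoundingClientRect();
    if (rect.top < window.innerHeight + 200) { // within ~200px of the viewport
      img.src = img.dataset.src!;      // swap in the real image
      img.removeAttribute("data-src"); // so it's never loaded twice
    }
  });
}

window.addEventListener("scroll", loadVisibleImages);
loadVisibleImages(); // pick up anything already in view
```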

As regards runtime, his analogy for DOM interaction was memorable: "think of it like taking the dog for a walk and it making a mess: you have to touch it, but you don't really want to."
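The standard way to minimise those touches (a sketch of my own, not from the talk) is to build content off-document and attach it in a single operation:

```typescript
// Build a large list off-document, then attach it with one DOM touch.
const list = document.createElement("ul");
const fragment = document.createDocumentFragment();

for (let i = 0; i < 1000; i++) {
  const item = document.createElement("li");
  item.textContent = `Row ${i}`;
  fragment.appendChild(item); // no reflow here: the fragment is off-document
}

list.appendChild(fragment); // one touch instead of a thousand
document.body.appendChild(list);
```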

He pointed out that CSS positioning can be expensive: opacity, transforms, and, surprisingly, static positioning (the browser has to recalculate it on each load). One nice way to stop something being slow is to give it a rotation of 0 (which promotes it onto its own compositing layer).
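As a sketch of that trick (the .carousel selector is my hypothetical example of a frequently-repainting element):

```typescript
// A no-op transform is enough to promote the element onto its own
// compositing layer; "translateZ(0)" is a common variant of the same trick.
const heavyEl = document.querySelector<HTMLElement>(".carousel");
if (heavyEl) {
  heavyEl.style.transform = "rotate(0deg)";
}
```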

He made a plea for API devs to think about how they serve up content, and not to send people off on a wild goose chase to other files for more info.

He made some notes on perception (a theme carried on in the following talk): a site with progressive loading feels faster than one without, and don't block on the API.

Of the three, it was load time that he emphasised most (and improving it requires collaboration).

Finally, he pointed out that it's worth thinking about the tools we can use for testing: Chrome Dev Tools are good, and there are sites such as JSLint for checking your code.

Graham Morely: The Psychology of Speed

Being in a car can be fast, but nothing feels quite so fast as being in a go-kart careening down a hill. Morely focused on how web designers and developers can make a site feel faster.

As it turns out, the speed of a site can have a serious business impact. Examples Morely cited included Amazon tests showing that 100ms of extra load time meant a 1% drop in sales; a page on Yahoo being 400ms slower causing a 3-9% increase in 'back clicks'; and Mozilla getting 60m more downloads by increasing the speed of their download page for IE users (and, conversely, the cost of a 1s delay: 7% in conversions, 11% in page views, and a 16% decrease in satisfaction).

Interestingly, speed isn’t always important: ATMs that dispensed money too quickly weren’t trusted.

He quoted Souders' rule that satisfaction = perception – expectations and used it as a guideline for work: people are happier with a site that feels faster than they expected it to be. That said, this can be helped along with information tricks: on a search site, for example, going beyond "search hotels" to "search 52,420 hotels" with a loader makes the search feel faster.

He cited Nielsen's studies on page times (though noting they're perhaps 10 years old):

  • 0.1-0.2s = instantaneous
  • 0.5-1s = immediate
  • 2-5s = flow (it takes about 2s for a person to turn a page and find their position)
  • 7-10s = (has to be) captivating

You should only spend 10s or more if it's a natural break in the user's flow (you may wish to offer alternatives, such as letting the user leave the page and emailing them when the task is done).

He helpfully gave a number of resources to investigate.

Mobile design guru Luke Wroblewski has also just written about the perception of speed on mobile.

Oli Wood: Optimising Canddi

Oli Wood spoke about a recent project (Canddi) and his trials and tribulations in attempting to optimise it. Above all, his key messages were to measure what's important (for them, how many inbound customer requests can be processed) and to just attack it ("back of a fag packet calculations can be good enough").

There are no silver bullets (they tried expensive machines, hosting, PHP-FPM, all sorts of things; none really worked)

… aim for a silver shotgun cartridge (lots of little things that can each be nailed).


More practically, he pointed out the importance of testing somewhere that isn't live (as the team use AWS, they can clone their setup and get "good enough" results) and of using realistic data (get just enough onto the test site to be good enough, no more) with defined test scenarios.

Build a pipeline view (find where the bottlenecks are).
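One crude way to get such a view (my sketch, not Wood's tooling) is to time each stage of a request so the slow link shows itself in the logs:

```typescript
// Wrap each pipeline stage in a timer; the stage names below are hypothetical.
async function timed<T>(stage: string, work: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await work();
  } finally {
    console.log(`${stage}: ${Date.now() - start}ms`);
  }
}

// Usage (assumed stages):
// const user = await timed("auth", () => authenticate(request));
// const data = await timed("db", () => fetchDashboard(user));
// const html = await timed("render", () => render(data));
```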

Identify symptoms (what you can see) but solve problems (your 100% CPU usage could mean you need more machines, or just that you've written crappy code).

Do big things less often (run big commands only when you have to).

Do frequent things much faster, avoid waiting, and pull less data ("who writes MySQL? Who writes SELECT *?"). Hunt for collisions.
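On pulling less data, a sketch (the Db interface is my stand-in, not CANDDi's code): name the columns the page actually needs rather than reaching for SELECT *:

```typescript
// "Db" stands in for whatever MySQL client the app uses (assumed interface).
interface Db {
  query(sql: string, params: unknown[]): Promise<Record<string, unknown>[]>;
}

async function recentVisitors(db: Db, siteId: number) {
  // Naming columns pulls less data off the wire than SELECT *, and the
  // query may be servable from a covering index.
  return db.query(
    "SELECT id, name, last_seen FROM visitors WHERE site_id = ?",
    [siteId]
  );
}
```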

Cache the painful thing: in memory (which can be very effective, even in PHP), with tools such as Redis ("almost one-click install, insanely quick") or memcache (which may be slightly better, as it spreads across machines), or with URL/browser caching.
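A minimal sketch of the in-memory version (my illustration; Redis or memcache do the same job across machines):

```typescript
// Cache the painful thing in process memory, with a crude expiry.
const resultCache = new Map<string, { value: unknown; expires: number }>();

function cached<T>(key: string, ttlMs: number, compute: () => T): T {
  const hit = resultCache.get(key);
  if (hit && hit.expires > Date.now()) return hit.value as T; // still fresh
  const value = compute(); // the expensive call we only want to pay for once
  resultCache.set(key, { value, expires: Date.now() + ttlMs });
  return value;
}

// Hypothetical stand-in for the painful computation being cached.
function computeStats(): number {
  return Math.random();
}

// Usage: recompute the stats at most once a minute.
const stats = cached("stats", 60_000, computeStats);
```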

Use the tools:

  • ab ("installs with one command on Apache, and does a quick and dirty hit"; useful for testing in the background) plus Siege (far more detailed in terms of flags, which can help pinpoint where the breaking points are), run from EC2 instances: "install 2 or 3 Ubuntu boxes on EC2 and then test".
  • iostat (will tell you pretty much everything), sar (its -P flag is very useful, as it can tell you how busy each of multiple cores is), strace ("terrifies the life out of me", as it tells you what happens inside the process: "run it, get the text file, google the crap out of it"), iftop (for networks), and xdebug + webgrind (don't run them on a live server!), as well as Mongo tools such as mongosniff ("terrifying, powerful, but go to Google Groups for it"). Also, nginx is faster than Apache (sadly).
  • They were on PHP, then moved to Node.js with Redis (if you can get it).

Your aim is to create loosely coupled components which scale horizontally, to make the business work (much like the 80/20 rule: beyond a certain point, "it just turns into geekery"). They managed to make the site 10x faster.

Paul Callaghan: Elixir

Paul Callaghan discussed Elixir, a new programming language that runs on the Erlang VM. It's still in its early stages but has been adopted by SoundCloud amongst other companies, and looks to be to Ruby programmers what CoffeeScript is to Python devs. He pointed out a few useful concepts from the language, such as pipelines (a series of actions whose progress can be tracked at various points).
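Elixir writes pipelines with its |> operator; a rough TypeScript analogue (my sketch, not Callaghan's example) shows the shape:

```typescript
// Each step takes the previous step's output, so the flow reads top to
// bottom and any stage can be logged or timed in isolation.
const pipeline = <T>(value: T, ...steps: Array<(v: T) => T>): T =>
  steps.reduce((acc, step) => step(acc), value);

// Example: normalise a title through a series of small, named steps.
const slug = pipeline(
  "  Speedy Mondays  ",
  (s) => s.trim(),
  (s) => s.toLowerCase(),
  (s) => s.replace(/\s+/g, "-")
);
console.log(slug); // "speedy-mondays"
```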

Stefan Dantchev: The Birthday Attack, or when randomisation probably helps

The night finished with the high stakes of cryptography breaking. Well, not quite. Dantchev's examples were more theoretical than practical, but they were an interesting exploration of what we need to be aware of when it comes to code breaking and hashes. He used the Birthday Attack scenario (given a room of people, how many need to be there before it's likely that two share a birthday?) to show how this works: you take the running product (a falling factorial) of the probabilities of a collision not happening, and watch for when it dips below 50%. For birthdays, that number comes out at 23 people, at which point the chance of no shared birthday falls just under 50%.
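As a worked sketch of that calculation (mine, not Dantchev's slides):

```typescript
// Multiply the odds that each new person avoids every birthday taken so far,
// and stop once a collision is more likely than not.
function peopleForLikelyCollision(days = 365): number {
  let noCollision = 1;
  for (let people = 1; ; people++) {
    noCollision *= (days - (people - 1)) / days; // person n avoids n-1 taken days
    if (1 - noCollision >= 0.5) return people;
  }
}

console.log(peopleForLikelyCollision()); // 23 (no-collision odds drop to ~49.3%)
```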