The tough truth of reality

I make no secret of the fact I’m a huge progressive enhancement believer. The fundamental reason why I believe the vast majority of web sites (yes, and web apps) should be written using progressive enhancement principles is that we just don’t know how they will be accessed.

We live in a time where many of us have powerful multi-core computers in our pockets, ridiculously fast Internet connections, high-resolution screens and advanced web browsers supporting the latest web technologies. But even if we are among those lucky few, things go wrong.

Networks drop packets, servers go down, DNS fails. Resources on pages (most notably JavaScript) can fail to load or fail to run because of all manner of errors – both within and outside our control. Why would you not want to build your site in such a way that makes it as robust as possible?

This is how many developers believe the modern world looks:

[Figure: “spectrum-lies” – capability climbing steadily as time goes on]

The vertical axis is the capabilities of networks, devices and browsers. The horizontal axis is time. As time goes on, networks, devices and browsers get better, faster and more fully-featured. That’s true: these things do make progress.

But this belief gives developers a false sense of security, thinking they don’t need to worry about the same things they worried about in the past: crappy network connections, devices with hardly any RAM, browsers that don’t support modern JavaScript APIs etc. They put their trust in The Blessed Void Of Obsolescence.

But the harsh reality is more like this:

[Figure: “spectrum-truth” – a wide, messy spread of capability at any given time]

We should think about the massive possible spectrum of circumstances under which a user could interact with our sites.

Because even with a modern device things still go wrong. A user with the latest version of Google Chrome will get frustrated if your site doesn’t work because a CDN fails to deliver a crucial JavaScript file. All they will know is that your site doesn’t work.

Progressive enhancement is not about “what if users turn JavaScript off” but “what happens when the page is loaded in sub-optimal circumstances”.

Progressive enhancement matters

I’m a bit of a progressive enhancement nut. Some people think it doesn’t matter. It does. A lot. Here’s a very recent example.

I just tried to pay for a quick weekend break next year, booked through TripAdvisor. The “pay now” button didn’t work.

[Screenshot: the TripAdvisor payment page with the broken “pay now” button]

Why did it fail? Because it’s not a submit button:

<input class="ftlPaymentButtonInner sprite-btn-yellow-43-active"
       value="Continue to PayPal"
       type="button"
       name="submit"
       onclick="clearAllGhostText(); ftl.sendActionRecord('VR_CLICK_SUBMIT_PMT_PP_REDIRECT'); return payment.submitPayPalPayment(this);"
       onmouseout="$(this).addClass('sprite-btn-yellow-43-active'); $(this).removeClass('sprite-btn-yellow-43-hover');"
       onmouseover="$(this).addClass('sprite-btn-yellow-43-hover'); $(this).removeClass('sprite-btn-yellow-43-active');">

And how could this be fixed? Probably as simply as this:

<input class="ftlPaymentButtonInner sprite-btn-yellow-43-active"
       value="Continue to PayPal"
       type="submit"
       name="submit"
       onclick="clearAllGhostText(); ftl.sendActionRecord('VR_CLICK_SUBMIT_PMT_PP_REDIRECT'); return payment.submitPayPalPayment(this);"
       onmouseout="$(this).addClass('sprite-btn-yellow-43-active'); $(this).removeClass('sprite-btn-yellow-43-hover');"
       onmouseover="$(this).addClass('sprite-btn-yellow-43-hover'); $(this).removeClass('sprite-btn-yellow-43-active');">

Hardly advanced web development.

By the way, this happened on a Windows 8 laptop with 12GB of RAM, using the latest version of Chrome.

For most people this failure would have been enough to put them off paying altogether, and it certainly didn’t impress me. However, I was determined to pay, so I switched to my phone and paid on there. Job done, but it certainly wasn’t a good experience.

So, you still don’t think progressive enhancement matters?

Spotlight: jQuery plugin

Recently I worked on a website help system, the main feature of which was to highlight particular elements or areas of the page. You know the kind of thing: ‘Click the highlighted search button to search your data’. The designs I was given showed the web page covered by a semi-transparent grey overlay, except for the areas that needed highlighting.

Here’s the problem. The shapes of the un-highlighted bits weren’t just rectangles; they were circles. So my immediate idea of using a bunch of absolutely-positioned <div> elements with opacity:0.6 wasn’t going to cut it.

I decided to use the <canvas> element, and after some digging found this excellent page on the Mozilla developer docs site that explains the different modes available for compositing multiple shapes in a single canvas element. This was the answer.
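
As a rough illustration of the technique (a minimal sketch, not the plugin’s actual source; the element selector is hypothetical), the 'destination-out' compositing mode makes new shapes erase what’s already been painted, so a circle drawn in that mode punches a hole in the overlay:

// Cover the viewport with a canvas overlay
var canvas = document.createElement('canvas');
canvas.width = document.documentElement.clientWidth;
canvas.height = document.documentElement.clientHeight;
var ctx = canvas.getContext('2d');

// 1. Paint the semi-transparent grey overlay over everything
ctx.fillStyle = 'rgba(0, 0, 0, 0.6)';
ctx.fillRect(0, 0, canvas.width, canvas.height);

// 2. From now on, new shapes erase existing pixels instead of covering them
ctx.globalCompositeOperation = 'destination-out';

// 3. Punch a circular hole around the element we want to spotlight
var target = document.querySelector('#search-button'); // hypothetical element
var box = target.getBoundingClientRect();
ctx.beginPath();
ctx.arc(box.left + box.width / 2, box.top + box.height / 2,
        Math.max(box.width, box.height) / 2 + 10, 0, Math.PI * 2);
ctx.fill();

// The canvas needs position: fixed, top/left 0 and a high z-index via CSS
document.body.appendChild(canvas);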

<canvas> is supported by IE9 and above, which was acceptable for the project I was working on. If you need support for older IEs then this looks like a good solution.
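
If you’d rather just skip the effect where it isn’t supported, a simple feature detection will do (a sketch, independent of whichever fallback you choose):

// Detect 2D canvas support before drawing anything
function supportsCanvas() {
  var el = document.createElement('canvas');
  return !!(el.getContext && el.getContext('2d'));
}

if (supportsCanvas()) {
  // draw the spotlight overlay as above
} else {
  // no spotlight – the help text alone still has to make sense
}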

Anyway, I thought this kind of approach might be useful for others so I’ve written a small jQuery plugin called Spotlight which allows you to put a spotlight on elements on your page. See a quick demo here.

See the plugin on GitHub.

Progressive Enhancement

I recently sent this as an email to a colleague. You’ll be glad to know we sorted things out :0)

The little chat we had last week about AngularJS has been playing on my mind. As you know I’m not against JavaScript (I love it, and have even written an Open Source JavaScript library) and I have a personal project partly planned to try out AngularJS, which looks awesome.

However, your comment that “progressive enhancement is one way to do it” (emphasis mine) bothers me. A lot. I’ve heard this attitude from a lot of developers, and I believe it’s wrong. Not because I believe every website or app (more on the difference between those two things later) should never use JavaScript, but because it ignores the fundamental layers of the web which cry out for a progressive enhancement approach. These layers are:

[Figure: the layers of the web – HTML at the bottom, then CSS, then JavaScript]

Here’s how a browser works (more detail).

  1. It requests a URL and receives an HTML document
  2. It parses that document and finds the assets it needs to download: images and other media, CSS, JavaScript
  3. It downloads these assets (there are rules and constraints about how this happens which vary slightly from browser to browser, but here are the basics):
    1. CSS files are downloaded and parsed, generally very quickly, meaning the page is styled
    2. JavaScript files are downloaded one-by-one, then parsed and executed in turn
    3. Images and other media files are downloaded

Let’s look at the absolute fundamental layer: the HTML document.

HTML

Way back in the beginning of the web there was only the HTML document on a web page; no CSS, no JavaScript (very early HTML didn’t even have images). In fact without HTML there is no web page at all: it’s the purpose of HTML to make a web page real. And a set of URLs (for example, the URLs under a domain like my-site.com) that doesn’t return at least one HTML document is arguably not a website.

Yes, URLs can also be used to download other resources such as images, PDF files, videos etc. But a website is a website because it serves HTML documents, even if those documents just act as indexes of the URLs of other (types of) resources.

HTML is the fundamental building block of the web, and software which can’t parse and render HTML (we’ve not even got to CSS or JavaScript yet) can’t call itself a web browser. That would be like software which can’t open a .txt file calling itself a text editor, or software which doesn’t understand .jpg files calling itself an image editor. So we have software – web browsers – which use URLs to parse and render HTML. That is the fundamental, non-negotiable, base layer of a web page, and therefore the web.

One of the great things about all HTML parsing engines is that they are very forgiving about the HTML they read. If you make a mistake in the HTML (leave out a closing tag, whatever) they will do their best to render the entire page anyway. It’s not like XML, where one mistake makes the whole document invalid. In fact that’s one of the main reasons why XHTML lost out to the looser HTML5 standard – when XHTML was served with its proper MIME type, a single syntax mistake would make the page invalid.

And if a web browser encounters elements or attributes it doesn’t recognise it will just carry on. This is the basis on which Web Components are built.

So even serving a broken HTML page, or one with unknown elements or attributes, will probably result in a readable document.
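
You can even watch this forgiveness happen from JavaScript (a small sketch; DOMParser’s 'text/html' mode needs a reasonably modern browser):

// Deliberately broken markup: missing </li> tags and an unclosed <p>
var doc = new DOMParser().parseFromString(
  '<ul><li>one<li>two</ul><p>no closing tag', 'text/html');

console.log(doc.querySelectorAll('li').length);  // 2 – both items recovered
console.log(doc.querySelector('p').textContent); // "no closing tag"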

CSS

The next layer is CSS. At this point we’ve left the fundamentals and are onto optional components. That’s right – for a working web page CSS is optional! The page might not *look* very nice, but a well-structured page of HTML will be entirely usable even with no styling, in the same way that water is entirely drinkable without any flavourings.

Almost every browser supports CSS (the basics of it, at least), so why do I treat it as optional? There are a few reasons. Firstly, CSS might be turned off (very unlikely, but possible). But more importantly, the CSS file might not be available or parsable:

  • The server hosting the CSS file may be offline
  • The file may be temporarily unreadable
  • The URL for the file may be wrong
  • DNS settings may be incorrect
  • The CDN may be down
  • The file may be empty
  • The file may contain something that isn’t CSS, or a syntax error

There may be lots of other problems; I’m sure you can think of some.

Now, any one of those errors could also be a problem with an HTML document. In fact if the document is being served from a CMS then there are a lot more things that could go wrong. But if any of those errors happen for an HTML document then the browser doesn’t have a web page to parse and render at all. Browsers have mechanisms to handle that, because everyone knows that URLs change all the time (even if they shouldn’t):

image007

So CSS is optional; it is layered on top of the fundamental layer (HTML) to provide additional benefits to the user – a nicely styled page.

And in the same way that web browsers are forgiving about the HTML they parse, they are also forgiving about CSS. You can serve a CSS file with syntax errors and the rendering engine will apply the parts it *can* parse correctly and ignore the rest.

So if an HTML document links to a CSS file which contains partially broken syntax the page will still be partially styled.
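
A quick way to convince yourself (a sketch; it assumes the page contains at least one <p> element):

// One broken declaration, one valid rule in the same stylesheet
var style = document.createElement('style');
style.textContent = 'p { color: ???; } p { font-style: italic; }';
document.head.appendChild(style);

// The invalid color declaration is silently dropped; the valid rule applies
console.log(getComputedStyle(document.querySelector('p')).fontStyle); // "italic"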

JavaScript

Now we come to the top layer, the client-side script.

I don’t need to tell you that JavaScript is sexy at the moment, and rightly so – it is a powerful and fun language. And the fact it’s built into every modern web browser arguably gives it a reach far wider than any other development platform. Even Microsoft are betting their house on it, with Windows 8 apps built on HTML and JavaScript.

But what happens when JavaScript Goes Bad on a web page? Here’s the same quick list of errors I gave for CSS files above:

  • The server hosting the JS file may be offline
  • The file may be temporarily unreadable
  • The URL for the file may be wrong
  • DNS settings may be incorrect
  • The CDN may be down
  • The file may be empty
  • The file may contain something that isn’t JavaScript, or a syntax error

Let’s stop right there and look at the final error. What happens if a browser is parsing JavaScript and finds a syntax error (not quite all errors, but a lot) or something it can’t execute? It dies, that’s what happens:

“JavaScript has a surprisingly simple way of dealing with errors – it just gives up and fails silently”

Browser JavaScript parsing engines are still pretty forgiving, but *way* less forgiving than HTML and CSS parsers. And different browsers forgive different things. Now, you might think “Just don’t have syntax errors” but the reality is bugs happen. Even something as innocuous as a trailing comma in an array will cause older Internet Explorer to fail, but not other browsers.
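
To make that concrete, here’s a hedged sketch of a single runtime error taking the rest of a script down with it (the id and function names are hypothetical):

function onPay() { /* handle the click */ }
function setupValidation() { /* attach the client-side validators */ }

var button = document.getElementById('pay-now'); // null if the id ever changes
button.addEventListener('click', onPay);         // TypeError when button is null

setupValidation(); // never reached once the line above has thrown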

So JavaScript, while powerful and cool, is also brittle and can easily break a page. Therefore it *has to be optional*.

Web apps

You might be thinking “Yes, but my site is a Web Application, so I need JavaScript.” OK, but what is a web app exactly? I can’t say it better than Jeremy Keith, so I’ll just link to his article.

Progressive enhancement

This is the crux of progressive enhancement. Here’s the recipe:

  • Start with a basic web page that functions with nothing but the HTML, using standard semantic mark-up. Yes, it will require full-page GET or POST requests to perform actions, but we’re not going to stop there – we’re going to enhance the page.
  • Add CSS to style the page nicely; go to town with CSS3 animations if you want
  • Add JavaScript to enhance the UI and provide all the modern goodies: AJAX, client-side models, client-side validation and so on (a minimal sketch follows below)
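
Here’s what that last step can look like (a minimal sketch with hypothetical ids and URLs, using jQuery since I use it elsewhere). The form works with a full-page POST on its own; the script simply takes over when it loads and runs successfully:

// Assumed markup, fully working without JavaScript:
//   <form id="search" action="/search" method="post"> ... </form>
//   <div id="results"></div>
$(function () {
  $('#search').on('submit', function (e) {
    e.preventDefault(); // only happens when JavaScript is alive and well
    $.post(this.action, $(this).serialize())
      .done(function (html) { $('#results').html(html); })
      .fail(function () { e.target.submit(); }); // fall back to full-page POST
  });
});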

The benefits are obvious:

  • If the JavaScript or CSS files (or both) fail for any reason whatsoever the page still works
  • The use of semantic HTML means the page is fully understandable by search engine spiders
  • Because everything is rendered in HTML, not built up in JavaScript, it is understandable immediately by assistive devices
  • Serving fully-rendered HTML is quicker than building that same HTML client-side
  • Built-in support for older – and newer – browsers and devices

The best web developers on the planet all argue that progressive enhancement is the best way to approach web development. I honestly have no idea why anyone would think otherwise. There’s a good article (it’s actually the first chapter of Filament Group’s “Designing With Progressive Enhancement” book) on the case for progressive enhancement here.

Modern JavaScript

Some people use juicy headlines (this is called “link-baiting”), which doesn’t help those developers who are trying to promote progressive enhancement; instead it causes JavaScript-loving developers to proclaim “you hate JavaScript!”. You know what developers are like: they get hot-headed. It’s much better to think clearly and objectively about development and come up with solutions based on real data.

The reality is that most modern JavaScript libraries don’t support progressive enhancement out of the box: AngularJS included. If there were a way to render standard HTML with real links and forms and then enhance that HTML with Angular I would be all over it, but unfortunately I haven’t found anything that explains how to do it yet.

This is something which I’ve been thinking about a lot, and I did a little proof of concept for a basic data-binding system. I wonder if it would be possible to apply the same techniques to Angular.
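
I won’t reproduce the proof of concept here, but the shape of the idea is roughly this (a sketch with hypothetical attribute names): the server renders complete HTML, and the script only wires up live updates on top of it:

// Assumed server-rendered markup, already complete and readable:
//   <input data-bind="title" value="Hello">
//   <h1 data-bound="title">Hello</h1>
function enhanceBindings(root) {
  var inputs = root.querySelectorAll('[data-bind]');
  Array.prototype.forEach.call(inputs, function (input) {
    var name = input.getAttribute('data-bind');
    var targets = root.querySelectorAll('[data-bound="' + name + '"]');
    input.addEventListener('input', function () {
      Array.prototype.forEach.call(targets, function (t) {
        t.textContent = input.value;
      });
    });
  });
}

enhanceBindings(document); // without this script the page is still complete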

Personally, I’m a big believer in progressive enhancement, not just for accessibility reasons but for front-end performance as well. I recognise it will probably add time to development, but the same can be said for Test Driven Development. The goal of progressive enhancement and TDD is the same: a better, more stable foundation for systems.

New WordPress plugins

I’ve released a couple of new WordPress plugins recently which I thought I’d waffle on about.

Theme Reset

Not long ago, on a WordPress Multisite network I run, I deleted some themes while some sites were still using them. I needed to reset all the sites to use the same theme, but there wasn’t an easy way to do it. So I made a plugin.

And here it is: Theme Reset. There’s not much to it; you have to be a network admin to get the option, and you can choose any installed theme. That’s it.

Child Themes

The other plugin I released is also theme related. This one allows you to create a child theme from any installed theme. Just click “Create child theme” on the theme you want to be a parent, fill in a simple form and boom – the new child theme is created and installed.

[Screenshots: creating a child theme with the Child Themes plugin]

I’m no designer (as you can probably tell) but this seemed like a good idea that could save people some time.