Support

A conversation at work last week got me thinking about what we, as web developers and designers, mean when we talk about websites supporting certain browsers. I feel that the word ‘support’ is misunderstood, and has a number of meanings depending on the context in which it is used.

Does it work?

In my experience, people mostly use ‘support’ to describe in which browsers a website will work. However, ‘work’ isn’t an adequate word. Websites rarely do or don’t work in their entirety. Websites are complex collections of dozens, often hundreds, of different commands and API calls. Any mixture of them may or may not be available in a browser accessing the website, depending on the browser type and version. It’s rarely a binary situation where the site works or doesn’t work.

We need to take a more nuanced approach, and realise there are levels of ‘working’ that may or may not make a difference to the user of the website. Those users are the ones for whom the website exists, after all.

For example, many years ago when rounded corners were being introduced in CSS, some pragmatic web developers added the code for rounded corners to their CSS styles knowing that if a browser didn’t understand that code it would ignore it. The corners would be square, but no error would be thrown. Users, unless they were eagle-eyed and knew that the corners were meant to be round, wouldn’t even know the difference.
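
In CSS terms it looked something like this – a from-memory sketch using the modern property (at the time vendor-prefixed versions were doing the rounds too), with a made-up class name:

/* Browsers that don't understand border-radius simply skip this
   declaration and draw square corners – no error, no broken page. */
.panel {
  background: #fff;
  border: 1px solid #ccc;
  border-radius: 8px;
}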

That was made possible because of the declarative nature of CSS. HTML works the same way – if a browser doesn’t understand a particular element it will render the contents of the tag as text and move on. No error will be thrown. Here’s an example of that flexibility:

<audio src="postman-pat-grime-remix.mp3">This will be displayed in browsers that don't understand the 'audio' element</audio>

JavaScript, on the other hand, doesn’t work like that; it’s imperative. This means that if the browser doesn’t understand a particular piece of JavaScript code it is being asked to run, an error is thrown. That error may stop further JavaScript being executed on the page. So there’s a big difference in how developers should approach the use of CSS and HTML versus JavaScript. Nuance is the key.
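
For example – a sketch, with made-up selectors – a single unsupported call can take completely unrelated code down with it:

// In a browser that doesn't implement IntersectionObserver, this line
// throws a ReferenceError...
const observer = new IntersectionObserver((entries) => {
  // lazy-load images, say
});

// ...and nothing below it in the same script runs, including this
// entirely unrelated menu toggle.
document.querySelector('.nav-toggle').addEventListener('click', () => {
  document.querySelector('.nav').classList.toggle('open');
});

Wrapping that first call in a feature check – if ('IntersectionObserver' in window) { … } – keeps the rest of the script alive in browsers that don’t support it.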

This nuanced approach understands that not all functionality is created equal. For example, for some sites the ability to re-order a table of data instantly (i.e. without a trip to the server and back) is crucial to the functionality of the site. Or, perhaps a particular site absolutely cannot function without CSS grid layout. But these cases are, in my experience, rare. Most sites – not all, but most – require only basic functionality to work, even if they get nicer to use with additional ‘bells and whistles’.
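
And where something like grid genuinely is a nice-to-have rather than a necessity, it can be layered on top of a simpler layout that every browser understands. A sketch, with a made-up class name:

/* A basic layout that works almost everywhere. */
.products {
  display: flex;
  flex-wrap: wrap;
}

/* The nicer layout, applied only where the browser supports grid. */
@supports (display: grid) {
  .products {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(12rem, 1fr));
    gap: 1rem;
  }
}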

We have to ask tough questions about what our bells and whistles are, and whether the bells and whistles we are adding to a site are really required – especially if they stop users of some browsers from using that site.

Can we test it?

The other context in which people use the word ‘support’ is when talking about which browsers we are going to test. This is a difficult subject, as we don’t have a hope of testing the huge range of combinations of browsers, operating systems, devices and so on out there in ‘the wild’.

Here, we have to be pragmatic. We should look at the site statistics to determine the browsers, operating systems, and devices people are using. But we should bear in mind that if a particular browser or device doesn’t seem to be used much, it might be due to parts of the site not working well for those users – even if they want to use it!

We should also pay attention to global browser usage trends, particularly in the region or demographics our site is aimed at.

So, rather than asking which browsers we choose to support, let’s ask what functionality we need to use. We should make tough choices about the functionality our site actually needs – right the way down to code level: individual JavaScript API calls, CSS properties and values, HTML elements. Let’s remember there are often many ways to achieve a particular outcome, and that users just want to do the job for which they visited the website.

We’ll then find that, rather than just ‘supporting’ a narrow range of browsers, we allow users with a much wider range of browsers, operating systems, devices – yes, and assistive technologies – to use our sites. Accessibility for all is a fundamental principle of the web. Let’s not break it.

Bells and whistles are great, but if they get in the way of the user accomplishing their task then they are nothing but a waste of time and effort.

Mind the gap

Most modern software solutions consist of multiple layers or tiers. Each of these has responsibility for processing inputs and outputs in different ways. For web applications you’ll find a user interface, one or more APIs which serve data, and probably multiple tiers handling data on the server.

These tiers can be completely separate, as in the case of a web UI and its API. Sometimes they are very closely tied together, for example data access and repository pattern layers in a single component. In all cases these tiers have to – and I realise how much I’m stating the obvious – communicate with each other. So they have to know how to communicate with each other.

Nothing ground-breaking there. But I’ve found that this is exactly where software projects can fail. Lots of thought and information goes into how each of the components works, but not so much thought goes into how they will communicate. Here’s an analogy:

Towbar fitted to the back of a car

The humble towbar. Doesn’t look much, does it? A curved bit of metal, with just enough of a shape to allow something to be fixed loosely to the end of it.

Yet this simple bit of technology is responsible for joining two huge components – a car and a caravan or trailer. In many cases the two components it joins are hugely expensive and complicated pieces of machinery – but this simple hook of metal means they can work together.

It’s not glamorous or highly technical, but the specification of this hook had to be known to both of the components it was joining. Without a known and agreed specification there was little hope of successful communication between the components.

Let’s translate that to software. Imagine you’re on a team who need to deliver a web app. The web app must call an API to get some data crucial to the app. The API is being built by a different team, over whom you have no control. An architect may put together a diagram explaining when the components should communicate:

Example sequence diagram showing a client app, API, and business rules server

But without the how, this is of little use to the development team. The how is the actual specification of those request and response messages – what gets sent, and what gets returned.

This detail is crucial and must be discussed and agreed early in the project. This detail is the system. Without known, agreed specifications for all of the communication points between the different components you’re in grave danger of building a bunch of cogs which don’t quite work together.

The specification of those messages allows a number of important things to be discussed and checked off:

  • What data do I need to send?
    • What is optional? What is required?
    • What are the bounds of the data? Are there any value constraints?
  • What data do I get back?
    • Is everything there that I need?
    • What are the bounds of the data? Are there any value constraints?
    • Are there values which I need to translate in any way?
  • What about errors? (see the sketch after this list)
    • What possible error responses could I get?
    • What if no response is returned?
    • Is there a timeout I need to cater for?
  • Is this even right?
    • Does this request even need to be made? Am I requesting data I already have?
    • Is there a more efficient or robust mechanism to do the same thing?
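
As a sketch of what catering for that group of error questions can look like on the calling side (the endpoint and the five-second budget are made up):

// Call the API with a timeout, and handle the failure modes explicitly.
async function fetchVehicle(vin) {
  const controller = new AbortController();
  const timeout = setTimeout(() => controller.abort(), 5000);

  try {
    const response = await fetch(`/api/vehicles/${vin}`, {
      signal: controller.signal,
    });
    if (!response.ok) {
      // An agreed error shape makes this branch much easier to write.
      throw new Error(`The API responded with ${response.status}`);
    }
    return await response.json();
  } catch (err) {
    // Covers network failures, timeouts (AbortError) and bad statuses.
    return null; // or surface a meaningful message to the user
  } finally {
    clearTimeout(timeout);
  }
}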

Working through these questions makes the difference between a system which is deployed riddled with potential runtime bugs, and one which is prepared for as many scenarios as possible.

OK, how do we agree on and document this specification so everyone is on the same page? Or at least, in the same book.

(Thanks to Craig Milner for that last line. He took it further: “I’ve known teams who were not just not on the same page, they weren’t even in the same library.”)

I think there’s a lack of what I’m going to call “3D software architecture documentation”, or at least I’m not aware of any. What I mean is documents which, like the sequence diagram above which shows two axes, also allow the viewer to go deeper into more detail. Imagine if you could click any of those request/response arrows and view the specification for that message. Then click “back” and go back to the more zoomed-out level.

I guess what I’m describing is a web page. Yes, I’ve just invented links. Go me!

And the specification for messages? That’s easy: for REST APIs (which a lot of the time is what we’re talking about) you should use OpenAPI – a standard for describing APIs.
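
To give a flavour, here’s a minimal, entirely hypothetical example – a vehicle-lookup endpoint – rather than any real API:

openapi: 3.0.3
info:
  title: Vehicle Data API (hypothetical)
  version: 1.0.0
paths:
  /vehicles/{vin}:
    get:
      summary: Look up a vehicle by its VIN
      parameters:
        - name: vin
          in: path
          required: true
          schema:
            type: string
            minLength: 17
            maxLength: 17
      responses:
        '200':
          description: The vehicle was found
          content:
            application/json:
              schema:
                type: object
                required: [vin, make, model]
                properties:
                  vin:
                    type: string
                  make:
                    type: string
                  model:
                    type: string
                  firstRegistered:
                    type: string
                    format: date
        '404':
          description: No vehicle exists with that VIN

A document like this can be rendered as browsable documentation, and real requests and responses can be validated against it.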

This is what I used when defining the API for a large automotive data company. I wrote the specification for the API using OpenAPI; it could then be “navigated” using an OpenAPI viewer and discussed by the team before we built anything. Once a part of the API was built we could then compare the output with the original spec.

Sometimes, pragmatic changes had to be made so the real API was slightly different to the specification – these changes were always discussed during development. But more often than not, because adequate thought had gone into a high-enough resolution specification, the developers knew exactly what to build.

This approach is known as “design-first API development”, as you design what the API is going to look like up front, before you break ground writing any code. This same approach can be used for different types of components – GraphQL APIs, SOAP, even code-level interfaces.

So the takeaway here is to spend some time early in a project to talk about and document how components will communicate. That’s the detail which can make or break a solution.

I’m learning React

Some people who know me well may not quite have heard me correctly, so I’ll repeat:

I’m learning React.

Does this mean I’m turning my back on the principle of Progressive Enhancement? Does it mean I’m going to start building websites which are 99% JavaScript and 1% everything else? Have I given in?

No, dear reader, my opinions – shaped by luminaries like Jeremy Keith, Alex Russell, Tim Kadlec and many others – have not changed. The web is neck-deep in JavaScript and sinking deeper all the time, and organisations drinking the SPA framework kool-aid think that browsers are waving not drowning.

There are two reasons I’m learning this framework – and I’ll be looking at Vue as well (I already have some commercial experience with Angular).

Firstly, when talking with teams and encouraging them to reduce their reliance on JavaScript for core functionality, I’ve repeatedly received a “but you don’t understand it!” response. I do understand it; my 20+ years of building websites haven’t been spent hiding under a rock. JavaScript is cool, I get it.

The “but we’re building a web app, not a static site” argument is a common fallacy, and is in large part fuelling the current untenable position. I’ve not yet found anyone who can explain the difference between an “app” and a “site”, and most grudgingly accept there’s a big grey area between the extremes of a rarely-updated content site and, say, GMail. Most projects involve a mixture of slow- and fast-moving information.

If I learn React then I can counter the lack-of-understanding argument. I can speak the language of die-hard Reactians (is that the right word?) and – hopefully – put across some reasons why core functionality should be delivered using the simplest technology possible (generally server-side generated HTML).

Secondly, I expect to fail to convince many people to use less JavaScript. So I want to have some practical examples of apps that use React (or any JavaScript framework) in a less all-or-nothing way.

After all, I don’t see any reason why this should be delivered to a browser:

<!doctype html>
<html>
<head>
<title>MY 'app'</title>
<script src="my-huge-bundle.js"></script>
</head>
<body>
<div id="app"></div>
</body>
</html>

If I learn React maybe I can implement some new patterns that will incorporate a Progressive Enhancement mindset.
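
A minimal sketch of the kind of pattern I mean – assuming React 18, a JSX build step, and a hypothetical FilterPicker component (the element ids are made up too):

import { createRoot } from 'react-dom/client';
import FilterPicker from './FilterPicker';

// The server renders a basic, working form. React only takes over if
// this script actually downloads, parses and runs.
const basicForm = document.querySelector('#filter-form');
const enhancedMount = document.querySelector('#filter-enhanced');

if (basicForm && enhancedMount) {
  basicForm.hidden = true; // keep it in the page as the fallback
  createRoot(enhancedMount).render(<FilterPicker />);
}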

I may fail at both those aims, of course. In which case I will still have learnt a new technology.

Designing in the open

It’s been a while since I last redesigned (or should I say, realigned) this site. Six years, in fact. My regular visitor, if they are still regular, will have noticed that this site has been somewhat broked for a week or so.

I’m not sure what I did, but I clearly mangled something. Anyway, it’s an excuse to realign.

This time I have some simple requirements for myself:

  1. Mobile first. The reality is that most browsing is done on a mobile device of some kind, so I want to primarily cater to those constraints. That means mobile-first CSS (see the sketch after this list), Service Workers, small images only where necessary, and so on.
  2. Performance second. Closely related to the mobile thing, good performance is a must. I’m aiming for sub-second render times. I also want to use no JavaScript – this is a content site, why would I need it?
  3. More emphasis on the IndieWeb. I’ve started doing this, by pulling in my tweets. But I want to go much further down that road.
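
For the CSS, ‘mobile first’ simply means the base styles are the small-screen styles, and anything for wider viewports gets layered on via min-width media queries. A sketch, with a made-up class name:

/* Base styles: a single column, no media query needed. */
.posts {
  display: block;
}

/* Wider viewports get the fancier layout. */
@media (min-width: 40em) {
  .posts {
    display: grid;
    grid-template-columns: 2fr 1fr;
    gap: 1.5rem;
  }
}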

And I’m doing all this in the open, live on the site. I may fail completely, in which case it will be a public humiliation. But maybe it will force me to get on with it!

Server-side rendering is only half a solution

We live in a fallen world. We are surrounded by faults, some of which we may not notice – others stop us in our tracks. The severity of some faults is dependent on context.

For example, a crack in the windscreen of a toy car is unlikely to cause much consternation. A crack in the windscreen of a space shuttle is more serious. When cooking, a little too much chilli in your con carne and a fussy child won’t eat it. A little too much nut in a supposedly “nut-free” factory can lead to many people being badly affected. Context matters.

On the web we have these three technologies:

  • HTML
  • CSS
  • JavaScript

One of these is not like the others. HTML and CSS are declarative: they are just hints to the browser about how content should be displayed. If the browser hits a fault of any kind it tries to recover itself, and for the most part succeeds. Syntax errors, missing files, DNS issues, unknown properties and elements; all of these faults and more are soaked up by the forgiving nature of HTML and CSS parsers.

Not so with JavaScript. With great power comes great responsibility, and an imperative technology like JavaScript – which dictates to the execution environment exactly what it should do – is designed to fail if any faults are encountered. This is right and proper; it would be hard to use a programming language which continued merrily on its way whenever a fault occurred.

So, we use Progressive Enhancement principles to ensure we’re creating web sites which are not brittle and will be resilient to the faults which they will inevitably encounter. You’ve heard me preach about this stuff many, many times before.

One of those principles is to use server-side rendering, which means that the initial response for a web site should be a populated HTML document, not just an empty shell. This is a no-no:

<!doctype html>
<html>
	<head>
		<title>My Cool App!</title>
	</head>
	<body>
		<div id="app"></div>
		<script src="app-all-the-things.js"></script>
	</body>
</html>

Server-side rendering is a win for performance, as well as ensuring your web site isn’t entirely dependent on JavaScript for its initial render. But there’s a danger here: that we treat server-side rendering as a complete solution to protect us against ALL possible JavaScript failures. Believing server-side rendering to be a panacea is a mistake.

Progressive Enhancement isn’t just about the initial response, it applies to the entire lifecycle of a page: whether that’s a traditional page of content, or a view of a “Single Page App”. Because, in a runtime environment you as a developer don’t control, errors can happen at any time. Not just in the initial render, but even while the user is interacting with the page.

This is often because of third-party scripts, but it can also be caused by your own code – for example, a line of JavaScript being executed which the browser doesn’t understand, or a failed API request. As professionals we try to mitigate such faults, but they will happen anyway despite our best efforts, because we don’t control the runtime environment of the browser.

So as these on-page faults will happen, what can we do? In the words of Stefan Tilkov:

…build a classic web application, including rendering server-side HTML, and use JavaScript only sparingly, to enhance browser functionality where possible.

Yes, we go old-school. We use <form> and <a> elements just as if JavaScript doesn’t exist. We handle form submissions and routing on the server, just as if JavaScript doesn’t exist. Because – when an on-page fault occurs – JavaScript doesn’t exist for that interaction.

So, render your content server-side; it’s a sensible thing to do. But don’t forget that the rendered HTML must be functional even if everything else breaks. Going back to our example page above, you could server-side render content like this (truncated) example:

<!doctype html>
<html>
	<head>
		<title>My Cool App!</title>
	</head>
	<body>
		<div id="app">
			<h1>My Cool App!</h1>
			<p>Choose a filter and upload your image below for fun and good times!</p>
			<div id="filters"></div>
			<div id="image"></div>
		</div>
		<script src="app-all-the-things.js"></script>
	</body>
</html>

Yes, the content is rendered, but the app still isn’t usable unless all the JavaScript downloads, parses and executes correctly. What you’ve provided the user is not nothing, as in the previous example, but it’s not functional either.

If you provide a server-side rendered HTML page containing a form that is functional irrespective of whether any additional resources on the page work correctly (and I’m including images and CSS as well as JavaScript), then you’ve implemented your functionality in the simplest possible technology and protected yourself against unforeseen faults. Like this:

<!doctype html>
<html>
	<head>
		<title>My Cool App!</title>
	</head>
	<body>
		<div id="app">
			<h1>My Cool App!</h1>
			<p>Choose a filter and upload your image below for fun and good times!</p>
			<form action="/imagify" method="post" enctype="multipart/form-data">
				<p>
					<label for="filters">Choose a filter</label>
					<select id="filters" name="filters">
						<option>Catify</option>
						<option>Dogify</option>
						<option>Horsify</option>
					</select>
				</p>
				<p>
					<label for="image">Choose an image</label>
					<input id="image" name="image" type="file" />
				</p>
				<p>
					<button type="submit">Imagify!</button>
				</p>
			</form>
		</div>
		<script src="app-all-the-things.js"></script>
	</body>
</html>

The great news about this approach is that it doesn’t prevent you from going absolutely crazy with the very latest bells and whistles! You can use all the modern JavaScript techniques you like (checking that the browser supports them, of course), while knowing that your trusty HTML and server-side logic is the safety net. It bakes resilience into your app at the foundational level.
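
For instance, here’s a sketch of enhancing that upload form with fetch (the page-update step is left as a comment), with the plain form post as the safety net:

// Enhance the server-rendered form only if the APIs we need exist. If
// this script never runs, or fails later, the plain POST to /imagify
// still works.
const form = document.querySelector('form[action="/imagify"]');

if (form && 'fetch' in window && 'FormData' in window) {
  form.addEventListener('submit', async (event) => {
    event.preventDefault();
    try {
      const response = await fetch(form.action, {
        method: 'POST',
        body: new FormData(form),
      });
      if (!response.ok) {
        throw new Error(`The API responded with ${response.status}`);
      }
      // ...update the page with the filtered image here...
    } catch (err) {
      // The enhancement failed at runtime – fall back to the
      // full-page submission the HTML already handles.
      form.submit();
    }
  });
}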

I hope I’ve given you some food for thought, and demonstrated that while server-side rendering is a good thing to do it’s not the be-all-and-end-all of Progressive Enhancement. You, the developer, should think about ALL the ways in which faults could affect your users throughout the entire lifecycle of the page.