Mind the gap

Most modern software solutions consist of multiple layers or tiers. Each of these has responsibility for processing inputs and outputs in different ways. For web applications you’ll find a user interface, one or more APIs which serve data, and probably multiple tiers handling data on the server.

These tiers can be completely separate, as in the case of a web UI and its API. Sometimes they are very closely tied together, for example data access and repository pattern layers in a single component. In all cases these tiers have to – and I realise how much I’m stating the obvious – communicate with each other. So they have to know how to communicate with each other.

Nothing ground-breaking there. But I’ve found that this is exactly where software projects can fail. Plenty of thought and information goes into how each of the components works, but not so much into how they will communicate. Here’s an analogy:

Towbar fitted to the back of a car

The humble towbar. Doesn’t look much, does it? A curved bit of metal, with just enough of a shape to allow something to be fixed loosely to the end of it.

Yet this simple bit of technology is responsible for joining two huge components – a car and a caravan or trailer. In many cases the two components it joins are hugely expensive and complicated pieces of machinery – but this simple hook of metal means they can work together.

It’s not glamorous or highly technical, but the specification of this hook had to be known to both of the components it was joining. Without a known and agreed specification there was little hope of successful communication between the components.

Let’s translate that to software. Imagine you’re on a team who need to deliver a web app. The web app must call an API to get some data crucial to the app. The API is being built by a different team, over whom you have no control. An architect may put together a diagram explaining when the components should communicate:

Example sequence diagram showing a client app, API, and business rules server

But without the how this is of little use to the development team. The how is the actual specification of those request and response messages – what gets sent, what gets returned.

This detail is crucial and must be discussed and agreed early in the project. This detail is the system. Without known, agreed specifications for all of the communication points between the different components you’re in grave danger of building a bunch of cogs which don’t quite work together.

The specification of those messages allows a number of important things to be discussed and checked off:

  • What data do I need to send?
    • What is optional? What is required?
    • What are the bounds of the data? Are there any value constraints?
  • What data do I get back?
    • Is everything there that I need?
    • What are the bounds of the data? Are there any value constraints?
    • Are there values which I need to translate in any way?
  • What about errors?
    • What possible error response could I get?
    • What if no response is returned?
    • Is there a timeout I need to cater for?
  • Is this even right?
    • Does this request even need to be made? Am I requesting data I already have?
    • Is there a more efficient or robust mechanism to do the same thing?

These things make the difference between a system which goes live riddled with potential runtime bugs, and one which is prepared for as many scenarios as possible.

OK, how do we agree on and document this specification so everyone is on the same page? Or at least, in the same book.

(Thanks to Craig Milner for that last line. He took it further: “I’ve known teams who were not just not on the same page, they weren’t even in the same library.”)

I think there’s a lack of what I’m going to call “3D software architecture documentation”, or at least I’m not aware of any. What I mean is documents which, like the sequence diagram above which shows two axes, also allow the viewer to go deeper into more detail. Imagine if you could click any of those request/response arrows and view the specification for that message. Then click “back” and go back to the more zoomed-out level.

I guess what I’m describing is a web page. Yes, I’ve just invented links. Go me!

And the specification for messages? That’s easy: for REST APIs (which a lot of the time is what we’re talking about) you should use OpenAPI – a standard for describing APIs.
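To make that concrete, here’s a minimal sketch of the kind of thing an OpenAPI document captures. Everything in it – the endpoint, field names and limits – is invented for illustration; the point is that required versus optional data, value constraints and error responses are all written down where both teams can see them.

openapi: 3.0.0
info:
  title: Example orders API        # invented for illustration only
  version: 1.0.0
paths:
  /orders/{orderId}:
    get:
      summary: Fetch a single order
      parameters:
        - name: orderId
          in: path
          required: true           # required vs optional is explicit
          schema:
            type: string
            maxLength: 20          # bounds and constraints are explicit
      responses:
        '200':
          description: The order was found
          content:
            application/json:
              schema:
                type: object
                required: [id, status]
                properties:
                  id:     { type: string }
                  status: { type: string, enum: [pending, shipped, cancelled] }
                  total:  { type: number, minimum: 0 }
        '404':
          description: No order with that id   # error cases are agreed up front too

A specification like this can be rendered by any OpenAPI viewer, which gives the team exactly the “click into the detail” navigation described above.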

This is what I used when defining the API for a large automotive data company. I wrote the specification for the API using OpenAPI; it could then be “navigated” using an OpenAPI viewer and discussed by the team before we built anything. Once a part of the API was built we could then compare the output with the original spec.

Sometimes, pragmatic changes had to be made so the real API was slightly different to the specification – these changes were always discussed during development. But more often than not, because adequate thought had gone into a high-enough resolution specification, the developers knew exactly what to build.

This approach is known as “design-first API development”: you design what the API is going to look like up front, before you break ground writing any code. The same approach can be used for different types of components – GraphQL APIs, SOAP, even code-level interfaces.

So the takeaway here is to spend some time early in a project to talk about and document how components will communicate. That’s the detail which can make or break a solution.

I’m learning React

Some people who know me well may not quite have heard me correctly, so I’ll repeat:

I’m learning React.

Does this mean I’m turning my back on the principle of Progressive Enhancement? Does it mean I’m going to start building websites which are 99% JavaScript and 1% everything else? Have I given in?

No, dear reader, my opinions – shaped by luminaries like Jeremy Keith, Alex Russell, Tim Kadlec and many others – have not changed. The web is neck-deep in JavaScript and sinking deeper all the time, and organisations drinking the SPA framework kool-aid think that browsers are waving not drowning.

There are two reasons I’m learning this framework – and I’ll be looking at Vue as well (I already have some commercial experience with Angular).

Firstly, when talking with teams and encouraging them to reduce their reliance on JavaScript for core functionality I’ve repeatedly received a “but you don’t understand it!” response. I do understand it; my 20+ years building websites haven’t been spent hiding under a rock. JavaScript is cool, I get it.

This “but we’re building a web app, not a static site” argument is a common fallacy, and is in large part fuelling the current untenable position. I’ve not yet found anyone who can explain the difference between an “app” and a “site”, and most grudgingly accept there’s a big grey area between the extremes of a rarely-updated content site and, say, GMail. Most projects involve a mixture of slow- and fast-moving information.

If I learn React then I can counter the lack-of-understanding argument. I can speak in the language of die-hard Reactians (is that the right word?) and – hopefully – put across some reasons why core functionality should be delivered using the simplest technology possible (generally server-side generated HTML).

Secondly, I expect to fail to convince many people to use less JavaScript. So I want to have some practical examples of apps that use React (or any JavaScript framework) in a less all-or-nothing way.

After all, I don’t see any reason why this should be delivered to a browser:

<!doctype html>
<html>
<head>
<title>MY 'app'</title>
<script src="my-huge-bundle.js"></script>
</head>
<body>
<div id="app"></div>
</body>
</html>

If I learn React maybe I can implement some new patterns that will incorporate a Progressive Enhancement mindset.
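For example – and this is purely a sketch, with an invented component name and endpoint rather than real project code – the server could send a working HTML form, and React could take over only when the browser is capable of running the enhancement:

import React from 'react';
import ReactDOM from 'react-dom';
// Hypothetical component that re-implements the server-rendered form
// with client-side niceties such as previews and inline validation.
import EnhancedUploadForm from './EnhancedUploadForm';

// The server has already delivered a plain, working <form> inside #app,
// so the page is usable even if this script never runs.
const container = document.querySelector('#app');

// Only enhance when the browser supports what the enhancement needs.
if (container && 'fetch' in window && 'FormData' in window) {
  ReactDOM.render(<EnhancedUploadForm action="/upload" />, container);
}

The important part is the order of events: the HTML works first, and React is layered on top rather than being the only way to get anything on screen.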

I may fail at both those aims, of course. In which case I will still have learnt a new technology.

Designing in the open

It’s been a while since I last redesigned (or should I say, realigned) this site. Six years, in fact. My regular visitor, if they are still regular, will have noticed that this site has been somewhat broked for a week or so.

I’m not sure what I did, but I clearly mangled something. Anyway, it’s an excuse to realign.

This time I have some simple requirements for myself:

  1. Mobile first. The reality is that most browsing is done on a mobile device of some kind, so I want to cater primarily to those constraints. That means mobile-first CSS, Service Workers, small images only where necessary, and so on.
  2. Performance second. Closely related to the mobile thing, good performance is a must. I’m aiming for sub-second render times. I also want to use no JavaScript. This is a content site, why would I need it?
  3. More emphasis on the IndieWeb. I’ve started doing this, by pulling in my tweets. But I want to go much further down that road.

And I’m doing all this in the open, live on the site. I may fail completely, in which case it will be a public humiliation. But maybe it will force me to get on with it!

Server-side rendering is only half a solution

We live in a fallen world. We are surrounded by faults, some of which we may not notice – others stop us in our tracks. The severity of some faults is dependent on context.

For example, a crack in the windscreen of a toy car is unlikely to cause much consternation. A crack in the windscreen of a space shuttle is more serious. When cooking, a little too much chilli in your con carne and a fussy child won’t eat it. A trace of nuts in a supposedly “nut-free” factory can lead to many people being badly affected. Context matters.

On the web we have these three technologies:

  • HTML
  • CSS
  • JavaScript

One of these is not like the others. HTML and CSS are declarative: they are just hints to the browser about how content should be displayed. If the browser hits a fault of any kind it tries to recover itself, and for the most part succeeds. Syntax errors, missing files, DNS issues, unknown properties and elements; all of these faults and more are soaked up by the forgiving nature of HTML and CSS parsers.

Not so with JavaScript. With great power comes great responsibility, and an imperative technology like JavaScript – which dictates to the execution environment exactly what it should do – is designed to fail if any faults are encountered. This is right and proper; it would be hard to use a programming language which continued merrily on its way whenever a fault occurred.

So, we use Progressive Enhancement principles to ensure we’re creating web sites which are not brittle and will be resilient to the faults which they will inevitably encounter. You’ve heard me preach about this stuff many, many times before.

One of those principles is to use server-side rendering, which means that the initial response for a web site should be a populated HTML document, not just an empty shell. This is a no-no:

<!doctype html>
<html>
	<head>
		<title>My Cool App!</title>
	</head>
	<body>
		<div id="app"></div>
		<script src="app-all-the-things.js"></script>
	</body>
</html>

Server-side rendering is a win for performance, as well as ensuring your web site isn’t entirely dependent on JavaScript for its initial render. But there’s a danger here: that we treat server-side rendering as a complete solution to protect us against ALL possible JavaScript failures. Believing server-side rendering to be a panacea is a mistake.

Progressive Enhancement isn’t just about the initial response; it applies to the entire lifecycle of a page, whether that’s a traditional page of content or a view of a “Single Page App”. Because, in a runtime environment you as a developer don’t control, errors can happen at any time. Not just in the initial render, but even while the user is interacting with the page.

This is often because of 3rd-party scripts, but it can also be caused by your own code – for example, a line of JavaScript being executed which the browser doesn’t understand, or a failed API request. As professionals we try to mitigate such faults, but they will happen anyway despite our best efforts, because we don’t control the runtime environment of the browser.

So as these on-page faults will happen, what can we do? In the words of Stefan Tilkov:

…build a classic web application, including rendering server-side HTML, and use JavaScript only sparingly, to enhance browser functionality where possible.

Yes, we go old-school. We use <form> and <a> elements just as if JavaScript doesn’t exist. We handle form submissions and routing on the server, just as if JavaScript doesn’t exist. Because – when an on-page fault occurs – JavaScript doesn’t exist for that interaction.

So, render your content server-side; it’s a sensible thing to do. But don’t forget that the rendered HTML must be functional even if everything else breaks. Going back to our example page above, you could server-side render content like this (truncated) example:

<!doctype html>
<html>
	<head>
		<title>My Cool App!</title>
	</head>
	<body>
		<div id="app">
			<h1>My Cool App!</h1>
			<p>Choose a filter and upload your image below for fun and good times!</p>
			<div id="filters"></div>
			<div id="image"></div>
		</div>
		<script src="app-all-the-things.js"></script>
	</body>
</html>

Yes, the content is rendered, but the app still isn’t usable unless all the JavaScript downloads, parses and executes correctly. What you’ve provided the user is not nothing, as in the previous example, but it’s not functional either.

If you provide a server-side rendered HTML page containing a form that is functional irrespective of whether any additional resources on the page work correctly (and I’m including images and CSS as well as JavaScript), then you have implemented your functionality in the simplest possible technology and protected yourself against unforeseen faults. Like this:

<!doctype html>
<html>
	<head>
		<title>My Cool App!</title>
	</head>
	<body>
		<div id="app">
			<h1>My Cool App!</h1>
			<p>Choose a filter and upload your image below for fun and good times!</p>
			<form action="/imagify" method="post">
				<p>
					<label for="filters">Choose a filter</label>
					<select id="filters" name="filters">
						<option>Catify</option>
						<option>Dogify</option>
						<option>Horsify</option>
					</select>
				</p>
				<p>
					<label for="image">Choose an image</label>
					<input id="image" name="image" type="file" />
				</p>
				<p>
					<button type="submit">Apply filter</button>
				</p>
			</form>
		</div>
		<script src="app-all-the-things.js"></script>
	</body>
</html>

The great news about this approach is it doesn’t prevent you going absolutely crazy with the very latest bells and whistles! You can use all the modern JavaScript techniques you like (checking that the browser supports them, of course) while knowing that your trusty HTML and server-side logic is the safety net. It bakes resilience into your app at the foundational level.
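As a sketch of what that might look like – assuming only the markup above, with everything else invented – a few lines of feature-detected JavaScript could take over the form submission when the browser is able, and leave the plain form post as the fallback when it isn’t:

// Enhance the server-rendered form above, but only if the browser
// supports the pieces this enhancement relies on.
var form = document.querySelector('#app form');

if (form && 'fetch' in window && 'FormData' in window) {
  form.addEventListener('submit', function (event) {
    event.preventDefault();

    fetch(form.action, { method: 'POST', body: new FormData(form) })
      .then(function (response) { return response.text(); })
      .then(function (html) {
        // Hypothetical nicety: show the result without a full page load.
        document.querySelector('#app').innerHTML = html;
      })
      .catch(function () {
        // If the enhanced path fails for any reason, fall back to the
        // plain form submission the HTML already supports.
        form.submit();
      });
  });
}

If fetch isn’t supported, or the script never arrives, the form still posts to /imagify and the server still does its job.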

I hope I’ve given you some food for thought, and demonstrated that while server-side rendering is a good thing to do it’s not the be-all-and-end-all of Progressive Enhancement. You, the developer, should think about ALL the ways in which faults could affect your users throughout the entire lifecycle of the page.

Technical Credit

There’s a well-known concept in programming that refers to the negative effects poorly-made decisions can have on the quality of software over time: Technical Debt. The Wikipedia article gives some examples of the causes and the consequences of technical debt.

This financial analogy is a useful one, as it nicely describes the long-term impact of debt – the longer you have it, the worse the problem becomes. Conversely you can have credit (e.g. savings) in your account for a long time, waiting for the proverbial “rainy day” to take advantage of your good planning. Want to splash out on a new pair of sparkly galoshes? No problem!

At the An Event Apart conference in Orlando in October 2016, Jeremy Keith spoke about "Resilience: Building a Robust Web That Lasts", a talk about progressive enhancement cleverly disguised by never using the phrase ‘progressive enhancement’. In that talk Jeremy dropped a knowledge bomb, describing building software using the principles of progressive enhancement as building up ‘technical credit’.

This, in my opinion, is genius. It’s a gloriously positive spin on technical debt, which is too often seen as the product of bad developers. It’s saying “you, developer, can make a better future”. I love that.

It appears there is little online which talks about this “technical credit” concept. In fact, the only decent resource I could find is a 2014 paper from the Conference on Systems Engineering Research entitled ‘On Technical Credit’. The author, Brian Berenbach, gives a brief but eloquent introduction to the idea that we should concentrate on what should be done, rather than what shouldn’t be done to make a system better.

From the abstract:

"Technical Debt" … refers to the accruing debt or downstream cost that happens when short term priorities trump long term lifecycle costs… technical debt is discussed mostly in the context of bad practices; the author contends that the focus should be on system principles that preclude the introduction, either anticipated or unanticipated, of negative lifecycle impacts.

Sounds great; let’s stop bad things happening. How? The abstract continues:

A set of heuristics is presented that describes what should be done rather than what should not be done. From these heuristics, some emergent trends will be identified. Such trends may be leveraged to design systems with reduced long term lifecycle costs and, on occasion, unexpected benefits.

Emphasis mine. I’ll wait here while you read the rest of the document.

At this point hopefully you can see the clear link to the principles of progressive enhancement. Let’s look at a few examples of emergent trends – which I’ll call ‘properties’, as the paper uses this term – and the (un)expected benefits that progressive enhancement may give. But first, a quick refresher on what progressive enhancement is.

The principles of progressive enhancement

I can’t put progressive enhancement in a neater nutshell than Jeremy does in his talk ‘Enhance!’:

  1. Identify the core functionality
  2. Implement it using the simplest technology possible
  3. Enhance!

For websites this boils down to practical principles like these:

  • Serve the core content as well-structured HTML, rendered on the server
  • Make sure forms and links work without JavaScript
  • Layer on CSS and JavaScript as enhancements, feature-detecting before relying on them

But there’s no hard-and-fast set of rules for progressive enhancement, because every site has different functionality. That’s why it’s considered a philosophy rather than a checklist. As Christian Heilmann said, progressive enhancement is about asking "if" a lot.

Emergent properties

Someone once said words to the effect of "the only constant is change", meaning that the only thing you can rely on is that things will not stay the same. That’s good! Progress is positive and brings with it new opportunities.

These opportunities can be seen as emergent properties – new or existing attributes of things which emerge as time goes on. For example, the increasing use of mobile computing devices and fast home connection speeds are emergent properties leading to opportunities for new types of business. Likewise, the prevalent use of social media and its unprecedented bulk collection of data about its users is allowing new models for advertising – and, unfortunately, more nefarious uses – to emerge.

These emergent properties are often very difficult, if not impossible, to predict. Progress can lead to unexpected outcomes. Technology in particular is often put to unanticipated uses and exhibits unexpected behaviour when used at scale.

Who, for example, could have predicted the explosion of new devices and form factors just a few years ago? Devices once the domain of science fiction are now commonplace, and the range of new input types – notably touch and voice – is revolutionising how people interact with technology.

While fixed-line download speeds are increasing, many in developing nations – arguably the ones who could benefit the most from widespread Internet access – are stuck with very slow speeds, if any at all. Clearly we have a long way to go to achieve parity in global access to the Internet.

(Un)expected benefits

With such a wide array of both expected and unexpected properties of the current technological revolution, building our systems in such a way that they are both resilient to potential failures and able to benefit from unanticipated events is surely a no-brainer. The ‘On Technical Credit’ paper defines this approach as Technical Credit:

Technical Credit is the investment in the engineering, designing and constructing of software or systems over and above the minimum necessary effort, in anticipation of emergent properties paying dividends at a later date.

This is Progressive Enhancement. It’s about putting some thought in up-front to ask those tricky "what if" questions. Questions such as:

  • What if the JavaScript fails to load, or throws an error part-way through?
  • What if an API request fails or times out?
  • What if the browser doesn’t support the feature you’re relying on?

Thinking about these, and many other, potential problems leads you to follow the recipe given by Jeremy which I quoted above:

  1. Identify the core functionality
  2. Implement it using the simplest technology possible
  3. Enhance!

Implementing core functionality using the simplest technology possible – in the case of a website by using well-structured semantic HTML markup generated on the server – gives some expected benefits:

  • A fast initial render, because the browser receives usable HTML straight away
  • Content that still works when JavaScript fails to load or run
  • Markup that any browser, old or new, already knows how to display

Plus it provides a strong foundation to take advantage of unexpected occurrences; those emerging properties mentioned earlier.

From brand new browsers to old browsers working in a new way, your well-structured HTML will deliver your content even if everything else fails. Support for new input types on a myriad of unimagined devices will be taken care of by the browser – rather than you having to find Yet Another JavaScript Component™ that adds the support you need. And as new APIs are added you can layer on support for these knowing the foundation of your site is rock solid.

Spending Technical Credit

So you’ve built your system carefully, thinking about the many ways in which it could fail. You’ve gone ‘over and above the minimum necessary effort’ and can now sit back, confident in the hope that, should a rainy day come, you’ve accrued enough technical credit to weather the storm.