Clara’s Den, Filey – holiday apartment

I’ve been building websites for a long time. Websites of all shapes and sizes, and sometimes for bricks and mortar businesses. My latest website is for a physical place, but this time – it’s MY place! Clara’s Den, a holiday apartment on an award-winning holiday village just south of Filey on the beautiful Yorkshire coast.

If you’ve not stayed on the Yorkshire coast, sampling the delights of Whitby and Scarborough, Filey and Bridlington, Staithes and Sandsend, then you’re missing out. There are loads of details on the website about what to find in The Bay village, along Filey Bay from the Brigg to Bempton Cliffs, and further afield along the east coast.

If you’re looking for a self-catering holiday apartment in walking distance of the beach then we’d love to welcome you to Clara’s Den.

Introducing Web Matters

Following the end of World War Two, the austerity measures which had been in place throughout the war years remained, causing considerable hardship for the British public. This austerity also caused hardship for British business, partly due to legislative restrictions which were slow to be lifted, and partly because businesses found it difficult to get hold of the materials needed for their trades.

One such industry adversely affected by belt-tightening was upholstery – the makers of “small furniture and soft furnishings”. Finding it difficult to continue their craft, they banded together to form a group that would represent their interests: the Association of Master Upholsterers and Soft Furnishers (AMUSF).

Their website continues the tale:

They felt the need for a group voice and mutual support to tackle the difficulties this [post-war austerity] caused and, in the highly regulated environment of the time, to ensure that they could influence local and national government in the framing and enforcement of laws which affected them.

In the years that followed they were successful in garnering support, and in influencing government to ensure that the conditions under which they carried out their trade were beneficial to both tradesman and consumer. A booklet written in 1967 chronicling the first 20 years of the AMUSF states:

During the first year, no fewer than 9 branches were formed over a wide area from Northumberland to the South Coast.

The Association continues to this day, and their member directory lists over 250 member businesses.

Everyone else has got one, why don’t we?

Membership of a trade body like the AMUSF is considered a normal part of life in many industries. You’d be hard pressed to find a doctor, nurse or teacher who isn’t a member of one industry body or another. For industries such as healthcare or education this is understandable: having an official organisation to call on is very helpful for professional development and advice. In engineering, too, there are dozens of unions, institutes, associations and societies you can join. So many that there is even an organisation called the Engineering Council, which grants licences to professional engineering institutions.

And there are hundreds if not thousands of industry bodies just in the United Kingdom. If you’re running a dating agency you may well be a member of the Association of British Introduction Agencies, or if you manufacture carpets for a living then the Carpet Foundation is where you belong.

What about hairdressers? Don’t worry, there’s the Hair Council who have your back. There’s even a Direct Selling Association which represents direct sellers and aims to ‘develop best practice and the highest level of business ethics in the industry’. There really is an industry body for everything.

Except the web. Don’t get me wrong; there are associations for designers and software professionals, project managers and testers. There are organisations that purport to represent technology workers, but none that I would consider to be of the web: open, transparent, representing the huge diversity of people whose livelihoods rely on the web, and run by the same people who build the web.

This is the time we need such an organisation the most. When Western governments are surpassing repressive regimes in their intrusive data collection policies and attitude to personal privacy. When decisions about online security are being mismanaged at a huge scale. When the very legal framework which underpins the work we do and how people use it is being thrown up in the air by Brexit.

Far be it from me to go on a political rant (sorely tempted though I am) – surely it is at least time for a knowledgeable voice to speak out against the constant stream of bad information, misunderstandings and misleading statements from our politicians and leaders. Now, thanks to a bunch of people having an idea over some drinks, there is.

The Web Matters

Web Matters is:

Web Matters is a new, independent, member-driven industry association for those who create and work on the open web.

There’s no high bar to jump to get in, no exams to pass. If you consider your work to be ‘on the web’ then this is a group for you.

We are developers, programmers, designers, and business owners across all languages, platforms, roles, and years of experience.

If you’re thinking that sounds a bit like Open Source software development then you’re right – it’s meant to. Everything is being done in the open, from the writing of a manifesto to the website code itself to the discussion forums; transparency is being built into the DNA of Web Matters.

This truly is an industry body by web people, for web people. It’s not there to pander to millionaire start-up playboys in London, nor designed for huge agencies and Fortune 500 companies to hold golf tournaments. It’s for the freelancer, the small business owner, the web developer in a medium sized business, the agency designer, the tester, the project manager, the business analyst, the UX designer, the API developer, the mobile app wrangler. It’s for anyone whose craft is the web, and who is therefore affected by any legislation which touches the web. Your membership belongs to you, irrespective of which company or client you’re working for. We’re putting web people at the heart of Web Matters.

At this moment there are millions of people online – booking holidays; viewing new baby photos; looking up information about healthcare; chatting with friends; searching for a new job; considering a purchase; researching for an exam; playing games. We’re the people that build the world they currently inhabit. A world of global data flow, of millions of miles of cables transmitting gazillions of zeroes and ones. A world with layers of complexity they don’t see, but which affects almost every area of modern life.

The web matters. And it’s time we stood up for this medium which is both shaping and being shaped by the current political and social landscape.

We need you as much as you need us

Let’s return to the AMUSF. The pamphlet chronicling the first 20 years of their organisation, written way back in 1967, says:

For any enterprise to succeed, it must have enthusiasm, perseverance, adaptability and vision. Enthusiasm among all involved from the head down to those who have a minor though very necessary role. Perseverance is essential; no matter how carefully a programme is prepared, there will be difficulties in giving effect to it. This will always apply but particularly in this era in which Governments interfere with the activities of the ordinary citizen, possibly to a greater extent than ever before.

Will you join us?

Server-side rendering is only half a solution

We live in a fallen world. We are surrounded by faults, some of which we may not notice – others stop us in our tracks. The severity of some faults is dependent on context.

For example, a crack in the windscreen of a toy car is unlikely to cause much consternation. A crack in the windscreen of a space shuttle is more serious. When cooking, a little too much chilli in your con carne and a fussy child won’t eat it. A little too much nut in a supposedly “nut-free” factory can lead to many people being badly affected. Context matters.

On the web we have these three technologies:

  • HTML
  • CSS
  • JavaScript

One of these is not like the others. HTML and CSS are declarative: they are just hints to the browser about how content should be displayed. If the browser hits a fault of any kind it tries to recover itself, and for the most part succeeds. Syntax errors, missing files, DNS issues, unknown properties and elements; all of these faults and more are soaked up by the forgiving nature of HTML and CSS parsers.
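To see that forgiveness in action, consider this deliberately faulty snippet (the misspelt property and the made-up element are illustrative): the browser silently skips what it doesn’t understand and renders the rest.

```html
<style>
  p {
    colour: blue; /* unknown property: silently ignored */
    color: blue;  /* valid declaration: applied as normal */
  }
</style>
<p>This paragraph is blue despite the faulty declaration above it.</p>
<made-up-element>Unknown elements still render their text content.</made-up-element>
```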

Not so with JavaScript. With great power comes great responsibility, and an imperative technology like JavaScript – which dictates to the execution environment exactly what it should do – is designed to fail if any faults are encountered. This is right and proper; it would be hard to use a programming language which continued merrily on its way whenever a fault occurred.
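A minimal sketch of that hard failure (the shout() function is illustrative): a single fault halts a script unless something catches it.

```javascript
// JavaScript is imperative: a single fault halts execution unless
// something catches it. The shout() function is illustrative.
function shout(name) {
  return name.toUpperCase();
}

let message;
try {
  message = shout(null); // TypeError: null has no toUpperCase
} catch (err) {
  message = "fault caught: " + err.name;
}
console.log(message); // "fault caught: TypeError"
```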

So, we use Progressive Enhancement principles to ensure we’re creating web sites which are not brittle and will be resilient to the faults which they will inevitably encounter. You’ve heard me preach about this stuff many, many times before.

One of those principles is to use server-side rendering, which means that the initial response for a web site should be a populated HTML document, not just an empty shell. This is a no-no:

<!doctype html>
<title>My Cool App!</title>
<div id="app"></div>
<script src="app-all-the-things.js"></script>

Server-side rendering is a win for performance, as well as ensuring your web site isn’t entirely dependent on JavaScript for its initial render. But there’s a danger here; that we treat server-side rendering as a complete solution to protect us against ALL possible JavaScript failures. Believing server-side rendering to be a panacea is a mistake.

Progressive Enhancement isn’t just about the initial response, it applies to the entire lifecycle of a page: whether that’s a traditional page of content, or a view of a “Single Page App”. Because, in a runtime environment you as a developer don’t control, errors can happen at any time. Not just in the initial render, but even while the user is interacting with the page.

This is often because of 3rd party scripts, but can also be caused by your own code: for example, a line of JavaScript the browser doesn’t understand, or a failed API request. As professionals we try to mitigate such faults, but they will happen anyway despite our best efforts, because we don’t control the runtime environment of the browser.
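One common mitigation, sketched here with a made-up safely() helper, is to contain an untrusted call so its failure can’t take the rest of your script down with it:

```javascript
// Hedged sketch: safely() is our own illustrative helper, wrapping
// a call we don't trust so its fault is contained rather than fatal.
function safely(fn, fallback) {
  try {
    return fn();
  } catch (err) {
    return fallback;
  }
}

const widget = safely(() => { throw new Error("CDN hiccup"); }, null);
console.log(widget); // null – and the rest of the script keeps running
```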

So, since these on-page faults will happen, what can we do? In the words of Stefan Tilkov:

…build a classic web application, including rendering server-side HTML, and use JavaScript only sparingly, to enhance browser functionality where possible.

Yes, we go old-school. We use <form> and <a> elements just as if JavaScript doesn’t exist. We handle form submissions and routing on the server, just as if JavaScript doesn’t exist. Because – when an on-page fault occurs – JavaScript doesn’t exist for that interaction.
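As a sketch (the route, status codes and function name are illustrative), the server-side handler answers the plain <form> POST itself, so the interaction works even when no client-side JavaScript ran:

```javascript
// Illustrative sketch: the server handles the form POST itself
// (POST/redirect/GET), so the flow works without any client JS.
function handleImagify(request) {
  if (request.method === "POST" && request.url === "/imagify") {
    // ...process the uploaded image here, then redirect...
    return { status: 303, headers: { Location: "/imagify/result" } };
  }
  return { status: 404 };
}

console.log(handleImagify({ method: "POST", url: "/imagify" }).status); // 303
console.log(handleImagify({ method: "GET", url: "/nope" }).status);     // 404
```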

So, render your content server-side; it’s a sensible thing to do. But don’t forget that the rendered HTML must be functional even if everything else breaks. Going back to our example page above, you could server-side render content like this (truncated) example:

<!doctype html>
<title>My Cool App!</title>
<div id="app">
	<h1>My Cool App!</h1>
	<p>Choose a filter and upload your image below for fun and good times!</p>
	<div id="filters"></div>
	<div id="image"></div>
</div>
<script src="app-all-the-things.js"></script>

Yes, the content is rendered, but the app still isn’t usable unless all the JavaScript downloads, parses and executes correctly. What you’ve provided the user is not nothing, as in the previous example, but it’s not functional either.

If you provide a server-side rendered HTML page containing a form that is functional irrespective of whether any additional resources on the page work correctly (and I’m including images and CSS as well as JavaScript), then you’ve implemented your functionality in the simplest possible technology and protected yourself against unforeseen faults. Like this:

<!doctype html>
<title>My Cool App!</title>
<div id="app">
	<h1>My Cool App!</h1>
	<p>Choose a filter and upload your image below for fun and good times!</p>
	<form action="/imagify" method="post" enctype="multipart/form-data">
		<label for="filters">Choose a filter</label>
		<select id="filters" name="filters"></select>
		<label for="image">Choose an image</label>
		<input id="image" name="image" type="file" />
		<button type="submit">Imagify!</button>
	</form>
</div>
<script src="app-all-the-things.js"></script>

The great news about this approach is that it doesn’t prevent you from going absolutely crazy with the very latest bells and whistles! You can use all the modern JavaScript techniques you like (checking that the browser supports them, of course) while knowing that your trusty HTML and server-side logic is the safety net. It bakes resilience into your app at the foundational level.
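That support check can be sketched like this – canEnhance and the feature list are illustrative names of my own, not a real API:

```javascript
// Hedged sketch: enhance only when every capability the enhancement
// needs actually exists in the environment.
function canEnhance(env, features) {
  return Boolean(env) && features.every((f) => typeof env[f] === "function");
}

// In a browser: if (canEnhance(window, ["fetch", "FormData"])) { /* enhance */ }
console.log(canEnhance({ fetch() {}, FormData() {} }, ["fetch", "FormData"])); // true
console.log(canEnhance({}, ["fetch"])); // false
```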

I hope I’ve given you some food for thought, and demonstrated that while server-side rendering is a good thing to do it’s not the be-all-and-end-all of Progressive Enhancement. You, the developer, should think about ALL the ways in which faults could affect your users throughout the entire lifecycle of the page.

Announcing inDIVisible, a new JavaScript framework/library

I am super excited to announce my latest Open Sauce JavaScript framework/library: inDIVisible. Let me tell you why it’s the answer to everything, and you should start using it right now.

Everyone knows HTML is old-skool. All those elements, bleurgh. Boring names, boring properties. And they aren’t dynamic – they just describe content! Clearly they are unfit for use in Modern Web Applications.

Instead, simplify your life AND get all the JavaScript goodness you crave using inDIVisible. Here’s how:

Yo peeps (not hello world)

<div>Yo peeps!</div>

Pretty simple, right? But this doesn’t do anything. Let’s bind it to our JavaScript model called div.

<div #div="div"></div>

Easy! Whenever your model changes your element updates automagically. This should be enough for you to write a Facebook killer, but there’s more!

Handling input

Want to update the div property in your div model? No problem!

<div [div@] ?div="div.div"></div>

This attribute:

[div@]

Tells inDIVisible to turn the element into an input field. That’s right! No need for pesky input elements.

And this attribute:

?div="div.div"

Updates the div.div property with the value the user enters. It couldn’t be easier!

What’s that? You need a checkbox? Piffle:

<div [div/] ?div="div.div"></div>

See that /? It looks like a tick, which is like a checkbox. Radio buttons are like this:

<div [div.] ?div="div.div"></div>

And if you want a button, just do this;

<div [div]></div>

But buttons *do* stuff, right? What you need is to call a function.


inDIVisible is a functional framework/library, too. You can invoke functions like this:

<div (div)="div.div"></div>

Just handle the click event (there aren’t any other events worth handling) you want to respond to like this:

(div)

Then name the function you want to run:

div.div
Woah! Genius!

Advanced usage

Need to handle loops? Behold:

<div #div="div.div" *div></div>

That cheeky little * tells inDIVisible to repeat the element for each item in the model.

And how about showing and hiding elements? It’s as easy as this to show the element only if the element is truthy:

<div #div="div.div" +div></div>

And you do this to hide the element only if the element is falsy:

<div #div="div.div" -div></div>


CLI is a Command Line Interface. But inDIVisible is better than that, so it has CLA – Command Line Awesomeness. Type commands like this in your bishbashbosh shell to make cool stuff happen:

div div /div divdiv-div

Honestly, why are you still here? Use it now, you bunch of hipster sheep.


No-one downloads anything now, so there’s no link to any files. Instead you should type one of these commands, whichever one is going to work for you:

grunt install inDIVisible
gulp install inDIVisible
snort install inDIVisible
sniffle install inDIVisible
fart install inDIVisible
bower install inDIVisible
bowel install inDIVisible
flump install inDIVisible
twerk install inDIVisible
plonk install inDIVisible

Then build it and run it. I’ll leave you to figure out those commands, because I have to go drink some organic kale and quinoa juice.

Technical Credit

There’s a well-known concept in programming that refers to the negative effects poorly-made decisions can have on the quality of software over time: Technical Debt. The Wikipedia article gives some examples of the causes and the consequences of technical debt.

This financial analogy is a useful one, as it nicely describes the long-term impact of debt – the longer you have it, the worse the problem becomes. Conversely you can have credit (e.g. savings) in your account for a long time, waiting for the proverbial “rainy day” to take advantage of your good planning. Want to splash out on a new pair of sparkly galoshes? No problem!

At the An Event Apart conference in Orlando in October 2016, Jeremy Keith spoke about "Resilience: Building a Robust Web That Lasts", a talk about progressive enhancement cleverly disguised by never using the phrase ‘progressive enhancement’. In that talk Jeremy dropped a knowledge bomb, likening building software using the principles of progressive enhancement to building up ‘technical credit’.

This, in my opinion, is genius. It’s a gloriously positive spin on technical debt, which is too often seen as the product of bad developers. It’s saying “you, developer, can make a better future”. I love that.

It appears there is little online which talks about this “technical credit” concept. In fact, the only decent resource I could find is a 2014 paper from the Conference on Systems Engineering Research entitled ‘On Technical Credit’. The author, Brian Berenbach, gives a brief but eloquent introduction to the idea that we should concentrate on what should be done, rather than what shouldn’t be done to make a system better.

From the abstract:

"Technical Debt" … refers to the accruing debt or downstream cost that happens when short term priorities trump long term lifecycle costs… technical debt is discussed mostly in the context of bad practices; the author contends that the focus should be on system principles that preclude the introduction, either anticipated or unanticipated, of negative lifecycle impacts.

Sounds great; let’s stop bad things happening. How? The abstract continues:

A set of heuristics is presented that describes what should be done rather than what should not be done. From these heuristics, some emergent trends will be identified. Such trends may be leveraged to design systems with reduced long term lifecycle costs and, on occasion, *unexpected benefits*.

Emphasis mine. I’ll wait here while you read the rest of the document.

At this point hopefully you can see the clear link to the principles of progressive enhancement. Let’s look at a few examples of emergent trends – which I’ll call ‘properties’, as the paper uses this term – and the (un)expected benefits that progressive enhancement may give. But first, a quick refresher on what progressive enhancement is.

The principles of progressive enhancement

I can’t put progressive enhancement in a neater nutshell than Jeremy does in his talk ‘Enhance!’:

  1. Identify the core functionality
  2. Implement it using the simplest technology possible
  3. Enhance!

For websites this boils down to practical principles like these:

  • Start with well-structured, semantic HTML generated on the server
  • Treat CSS as an enhancement, keeping the content usable without it
  • Layer on JavaScript to enhance what already works, never to replace it
  • Check for feature support before relying on newer browser APIs

But there’s no hard-and-fast set of rules for progressive enhancement, because every site has different functionality. That’s why it’s considered a philosophy rather than a checklist. As Christian Heilmann said, progressive enhancement is about asking "if" a lot.
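Asking "if" a lot can be sketched as code, too (all the names here are illustrative): each enhancement is guarded by a capability check, so an unsupported feature simply does nothing.

```javascript
// Illustrative sketch of "asking if": every enhancement runs only
// when its capability exists, so missing support costs nothing.
function applyEnhancements(env, enhancements) {
  const applied = [];
  for (const [feature, enhance] of Object.entries(enhancements)) {
    if (env && feature in env) { // the "if" question
      enhance(env[feature]);
      applied.push(feature);
    }
  }
  return applied;
}

const fakeBrowser = { geolocation: {} };
const applied = applyEnhancements(fakeBrowser, {
  geolocation: () => {},
  serviceWorker: () => {},
});
console.log(applied); // [ 'geolocation' ]
```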

Emergent properties

Someone once said words to the effect of "the only constant is change", meaning that the only thing you can rely on is that things will not stay the same. That’s good! Progress is positive and brings with it new opportunities.

These opportunities can be seen as emergent properties – new or existing attributes of things which emerge as time goes on. For example, the increasing use of mobile computing devices and fast home connection speeds are emergent properties leading to opportunities for new types of business. Likewise, the prevalent use of social media and its unprecedented bulk collection of data about its users is allowing new models for advertising – and, unfortunately, more nefarious uses – to emerge.

These emerging properties are often very difficult if not impossible to predict. Progress can lead to unexpected outcomes. Technology in particular is often put to unanticipated uses and exhibits unexpected behaviour when used at scale.

Who, for example, could have predicted the explosion of new devices and form factors just a few years ago? Devices once the domain of science fiction are now commonplace, and the range of new input types – notably touch and voice – is revolutionising how people interact with technology.

While fixed-line download speeds are increasing, many in developing nations – who arguably are the ones who could benefit the most from widespread Internet access – are stuck with very slow speeds, if any at all. Clearly we have a long way to go to achieve parity in global access to the Internet.

(Un)expected benefits

With such a wide array of both expected and unexpected properties of the current technological revolution, building our systems in such a way as to be both resilient to potential failures and able to benefit from unanticipated events is surely a no-brainer. The ‘On Technical Credit’ paper defines this approach as Technical Credit:

Technical Credit is the investment in the engineering, designing and constructing of software or systems over and above the minimum necessary effort, in anticipation of emergent properties paying dividends at a later date.

This is Progressive Enhancement. It’s about putting some thought in up-front to ask those tricky "what if" questions. Questions such as:

  • What if the JavaScript fails to download, or throws an error part-way through?
  • What if the user is on a slow or intermittent connection?
  • What if the browser doesn’t support an API you rely on?

Thinking about these, and many other, potential problems leads you to follow the recipe given by Jeremy which I quoted above:

  1. Identify the core functionality
  2. Implement it using the simplest technology possible
  3. Enhance!

Implementing core functionality using the simplest technology possible – in the case of a website by using well-structured semantic HTML markup generated on the server – gives some expected benefits:

  • It renders quickly, even on slow connections and modest devices
  • It works in browsers old and new, with or without JavaScript
  • It is accessible to assistive technologies and search engines alike

Plus it provides a strong foundation to take advantage of unexpected occurrences; those emerging properties mentioned earlier.

From brand new browsers to old browsers working in a new way, your well-structured HTML will deliver your content even if everything else fails. Support for new input types on a myriad of unimagined devices will be taken care of by the browser – rather than you having to find Yet Another JavaScript Component™ that adds the support you need. And as new APIs are added you can layer on support for these knowing the foundation of your site is rock solid.

Spending Technical Credit

So you’ve built your system carefully, thinking about the many ways in which it could fail. You’ve invested ‘over and above the minimum necessary effort’ and can now sit back, confident that, should a rainy day come, you’ve accrued enough technical credit to weather the storm.