Technical Credit

There’s a well-known concept in programming that refers to the negative effects poorly-made decisions can have on the quality of software over time: Technical Debt. The Wikipedia article gives some examples of the causes and the consequences of technical debt.

This financial analogy is a useful one, as it nicely describes the long-term impact of debt – the longer you have it, the worse the problem becomes. Conversely you can have credit (e.g. savings) in your account for a long time, waiting for the proverbial “rainy day” to take advantage of your good planning. Want to splash out on a new pair of sparkly galoshes? No problem!

At the An Event Apart conference in Orlando in October 2016, Jeremy Keith gave "Resilience: Building a Robust Web That Lasts" – a talk about progressive enhancement cleverly disguised by never using the phrase ‘progressive enhancement’. In that talk Jeremy dropped a knowledge bomb, likening building software using the principles of progressive enhancement to building up ‘technical credit’.

This, in my opinion, is genius. It’s a gloriously positive spin on technical debt, which is too often seen as the product of bad developers. It’s saying “you, developer, can make a better future”. I love that.

It appears there is little online which talks about this “technical credit” concept. In fact, the only decent resource I could find is a 2014 paper from the Conference on Systems Engineering Research entitled ‘On Technical Credit’. The author, Brian Berenbach, gives a brief but eloquent introduction to the idea that we should concentrate on what should be done, rather than what shouldn’t be done to make a system better.

From the abstract:

"Technical Debt" … refers to the accruing debt or downstream cost that happens when short term priorities trump long term lifecycle costs… technical debt is discussed mostly in the context of bad practices; the author contends that the focus should be on system principles that preclude the introduction, either anticipated or unanticipated, of negative lifecycle impacts.

Sounds great; let’s stop bad things happening. How? The abstract continues:

A set of heuristics is presented that describes what should be done rather than what should not be done. From these heuristics, some emergent trends will be identified. Such trends may be leveraged to design systems with reduced long term lifecycle costs and, on occasion, unexpected benefits.

Emphasis mine. I’ll wait here while you read the rest of the document.

At this point hopefully you can see the clear link to the principles of progressive enhancement. Let’s look at a few examples of emergent trends – which I’ll call ‘properties’, as the paper uses this term – and the (un)expected benefits that progressive enhancement may give. But first, a quick refresher on what progressive enhancement is.

The principles of progressive enhancement

I can’t put progressive enhancement in a neater nutshell than Jeremy does in his talk ‘Enhance!’:

  1. Identify the core functionality
  2. Implement it using the simplest technology possible
  3. Enhance!

For websites this boils down to practical principles like these:

  • Serve the core content as well-structured, semantic HTML generated on the server
  • Treat CSS as an enhancement – the content should still make sense without it
  • Treat JavaScript as an enhancement – core functionality should survive if it fails to load or run

But there’s no hard-and-fast set of rules for progressive enhancement, because every site has different functionality. That’s why it’s considered a philosophy rather than a checklist. As Christian Heilmann said, progressive enhancement is about asking "if" a lot.
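
As a minimal sketch of that recipe (the /search endpoint, element names and script are my own assumptions, not taken from Jeremy’s talk): the core functionality – a search – is a plain HTML form the server can handle on its own, and the JavaScript enhancement only kicks in after asking "if" the browser supports what it needs.

<form action="/search" method="get">
  <label for="query">Search</label>
  <input type="search" id="query" name="query">
  <button type="submit">Go</button>
</form>
<div id="results"></div>

<script>
// Ask "if" before enhancing: when the browser can't support the enhancement,
// the form still submits to the server and the user still gets results.
var form = document.querySelector('form');
if (form && 'fetch' in window) {
  form.addEventListener('submit', function (event) {
    event.preventDefault();
    fetch('/search?query=' + encodeURIComponent(form.elements.query.value))
      .then(function (response) { return response.text(); })
      .then(function (html) {
        document.getElementById('results').innerHTML = html;
      })
      .catch(function () {
        form.submit(); // fall back to the basic behaviour if the enhancement fails
      });
  });
}
</script>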

Emergent properties

Someone once said words to the effect of "the only constant is change", meaning that the only thing you can rely on is that things will not stay the same. That’s good! Progress is positive and brings with it new opportunities.

These opportunities can be seen as emergent properties – new or existing attributes of things which emerge as time goes on. For example, the increasing use of mobile computing devices and fast home connection speeds are emergent properties leading to opportunities for new types of business. Likewise, the prevalent use of social media and its unprecedented bulk collection of data about its users is allowing new models for advertising – and, unfortunately, more nefarious uses – to emerge.

These emergent properties are often very difficult, if not impossible, to predict. Progress can lead to unexpected outcomes. Technology in particular is often put to unanticipated uses and exhibits unexpected behaviour when used at scale.

Who, for example, could have predicted the explosion of new devices and form factors just a few years ago? Devices once the domain of science fiction are now commonplace, and the range of new input types – notably touch and voice – is revolutionising how people interact with technology.

While fixed-line download speeds are increasing, many in developing nations – arguably the ones who could benefit the most from widespread Internet access – are stuck with very slow speeds, if any at all. Clearly we have a long way to go to achieve parity in global access to the Internet.

(Un)expected benefits

With such a wide array of both expected and unexpected properties of the current technological revolution, building our systems in such a way that they are both resilient to potential failures and able to benefit from unanticipated events is surely a no-brainer. The ‘On Technical Credit’ paper defines this approach as Technical Credit:

Technical Credit is the investment in the engineering, designing and constructing of software or systems over and above the minimum necessary effort, in anticipation of emergent properties paying dividends at a later date.

This is Progressive Enhancement. It’s about putting some thought in up-front to ask those tricky "what if" questions. Questions such as:

  • What if the JavaScript fails to load, or throws an error?
  • What if the images never arrive?
  • What if the connection is so slow the site appears to be down?
  • What if the web font doesn’t load and the text is invisible?

Thinking about these, and many other, potential problems leads you to follow the recipe given by Jeremy which I quoted above:

  1. Identify the core functionality
  2. Implement it using the simplest technology possible
  3. Enhance!

Implementing core functionality using the simplest technology possible – in the case of a website by using well-structured semantic HTML markup generated on the server – gives some expected benefits:

  • Your content arrives even if the CSS or JavaScript never does
  • Assistive technologies and search engines can make sense of the page
  • Old and low-powered devices still get something usable

Plus it provides a strong foundation to take advantage of unexpected occurrences – those emergent properties mentioned earlier.

From brand new browsers to old browsers working in a new way, your well-structured HTML will deliver your content even if everything else fails. Support for new input types on a myriad of unimagined devices will be taken care of by the browser – rather than you having to find Yet Another JavaScript Component™ that adds the support you need. And as new APIs are added you can layer on support for these knowing the foundation of your site is rock solid.
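
As one small example of that layering (the /sw.js path here is hypothetical): offline support via a service worker can be added behind a simple "if", so browsers that understand the API get the enhancement and everything else just keeps receiving the HTML.

<script>
// Offline support is an enhancement, not a requirement; the page works without it.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}
</script>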

Spending Technical Credit

So you’ve built your system carefully, thinking about the many ways in which it could fail. You’ve put in effort ‘over and above the minimum necessary’ and can now sit back, confident that, should a rainy day come, you’ve accrued enough technical credit to weather the storm.

I Am The Very Model Of A Modern Web Developer

I am the very model of a modern web developer
I build my sites in Ember or in React or in Angular
The download size is massive but development is easier
My grunt and gulp and NPM all prove that I am geekier

I am the very model of a modern web developer
My animations cause offence in anyone vestibular
A carousel with massive pictures should be seen as de rigueur
And light grey text with background white will make my content sexier

Some people say my websites should all be enhanced progressively
But my developmental choices have been made deliberately
A 12 meg payload is a guarantee of exclusivity
I don’t want Luddite users who refuse to upgrade from 2G

I have a brand, you could say I’m an Internet celebrity
But try to load my site on anything but super fast 4G
You’ll just get empty pages on your Android or your iPhone 3
Upgrade your phone, you pauper, or you’ll never get a byte from me

With apologies to Gilbert and Sullivan.

HTML Matters

Rant time. No-one can deny that web development tooling has improved in leaps and bounds over the last few years. I’ll sound like a moaning old man if I talk about how primitive things were in the old days, so I won’t.

But despite this wealth of tools, loads of good-quality information online, and access to resources and training, why do I still regularly see HTML like this in new web projects:

<div class="footer">
<span><img src="twitter.gif" /></span>
<span><img src="facebook.gif" /></span>
<span><img src="instagram.gif" /></span>
</div>

It appears modern web developers have within their grasp a panoply of development and build tools – npm, Bower, gulp, grunt, etc. – but don’t have access to HTML elements which have been implemented in browsers for years. HTML matters!

It matters because structure matters. Meaning matters. Semantics matter (but don’t go overboard). Accessibility matters. For many projects, SEO matters.

A web page is, at its core, a structured document. Pile on all the fancy-pants JavaScript frameworks you want, but you’re still delivering HTML to a rendering engine built into a browser. If you’re making no effort to use appropriate HTML elements to mark up your content then you need to sharpen up your skills.
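
For comparison, here’s a rough sketch of how that footer snippet could be marked up with meaningful elements – a footer landmark, a list of links, and images with alternative text (the URLs and profile names are, of course, made up):

<footer>
  <ul>
    <li><a href="https://twitter.com/example"><img src="twitter.gif" alt="Twitter"></a></li>
    <li><a href="https://facebook.com/example"><img src="facebook.gif" alt="Facebook"></a></li>
    <li><a href="https://instagram.com/example"><img src="instagram.gif" alt="Instagram"></a></li>
  </ul>
</footer>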

Unit testing in WordPress

One of the things I really appreciate about developing in the .NET stack is the fantastic unit test support. Using mocking libraries like Moq to handle my dependencies, and leaning on the power of NUnit, means I can write unit tests that truly do test just the unit under test. True unit tests are useful because they verify three very important things:

  1. That the code is doing what it should
  2. That the code handles unexpected inputs correctly
  3. That after refactoring the code continues to do what it did before

A robust, extensive suite of tests – both unit and integration tests – is crucial for good-quality software and, fortunately, has been pretty common in the .NET development world for a long while.

When developing for WordPress, however, it’s not always been that way. I remember that not so many years ago testing of any kind wasn’t something often talked about in the WordPress community. I guess we were focussed on getting code shipped.

Things have changed, and automated testing is now a recognised part of the WordPress development workflow. One of the problems with the WordPress Unit Test Suite, as pointed out by Greg Boone, is that it’s not actually a suite of unit tests – it has dependencies like a MySQL database, so it would be more correctly called a suite of integration tests. Pippin also calls these kinds of tests “unit” tests, but they are definitely integration tests.

I’m at risk of over-egging this point, so please read this good description of the difference between unit and integration tests.

To ensure the large WordPress plugin I’m currently building is as good as it can be, I want to build a suite of (true) unit tests. That means I need a way of mocking WordPress functions (such as do_action, apply_filters and wp_insert_post) and globals such as $current_user and – crucially – $wpdb. It turns out there are a few options, which I’ve briefly investigated. I’ll be using WP_Mock and the PHPUnit test double features.

The well-known WP_Mock from the clever guys at 10up is the backbone of mocking WordPress. It allows you to mock any WordPress function with some simple syntax:

\WP_Mock::wpFunction( 'get_permalink', array(
    'args'   => 42,
    'times'  => 1,
    'return' => 'http://example.com/foo'
) );

This will mock the get_permalink function when the only argument is the integer 42, ensuring it is called exactly once, and returning the string ‘http://example.com/foo’. Clever stuff.
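
In practice that mock lives inside an ordinary PHPUnit test, with WP_Mock wired up in setUp and tearDown. A minimal sketch (the function under test, myplugin_get_link, is made up, and newer PHPUnit versions want ": void" return types on setUp and tearDown):

class Link_Test extends \PHPUnit\Framework\TestCase {

    public function setUp() {
        \WP_Mock::setUp();
    }

    public function tearDown() {
        \WP_Mock::tearDown(); // also verifies the 'times' expectations
    }

    public function test_it_builds_the_link_for_a_post() {
        \WP_Mock::wpFunction( 'get_permalink', array(
            'args'   => 42,
            'times'  => 1,
            'return' => 'http://example.com/foo'
        ) );

        // myplugin_get_link() is a hypothetical plugin function that calls get_permalink()
        $this->assertEquals( 'http://example.com/foo', myplugin_get_link( 42 ) );
    }
}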

There are other static methods in the WP_Mock class which allow you to:

  • Mock a method which returns the same value (a pass-through method)
  • Mock the calling of filters and actions
  • Mock the setting of actions and filters
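
To give a flavour of those (the hook names and callbacks below are invented – check the WP_Mock documentation for the exact details), they look something like this:

// A pass-through: esc_html() will simply return whatever it is given
\WP_Mock::passthruFunction( 'esc_html' );

// Assert that the code under test hooks a callback with add_action()
\WP_Mock::expectActionAdded( 'init', 'myplugin_register_post_types' );

// Assert that the code under test fires do_action( 'myplugin_loaded' )
\WP_Mock::expectAction( 'myplugin_loaded' );

// Control what apply_filters( 'myplugin_title', ... ) returns
\WP_Mock::onFilter( 'myplugin_title' )
    ->with( 'Original title' )
    ->reply( 'Filtered title' );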

Mocking $wpdb turns out to be pretty simple, as I can use the built-in test double functionality in PHPUnit. Sample code in the MockPress project wiki shows I can do this:

// in my test setUp method:
global $wpdb;
unset($wpdb);

// whenever I want to mock a $wpdb function I set up the method to mock:
$wpdb = $this->getMock('wpdb', array('get_var'));
// and set the value I want to be returned from my mock method:
$wpdb->expects($this->once())->method('get_var')->will($this->returnValue(1));

// now I can check the mock returns what I want:
$result = $wpdb->get_var("select anything from anywhere");
$this->assertEquals(1, $result);

I now just have to ensure my code is written in such a way as to make unit testing easy! I can highly recommend The Art of Unit Testing by Roy Osherove if you want to get into this deeply.
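
One approach (a sketch only – the class, table and method names are hypothetical) is to pass dependencies like $wpdb in, rather than reaching for globals inside the code under test, so a PHPUnit mock can be injected in exactly the way shown above:

class Order_Repository {

    private $db;

    // Take the database object as a constructor argument so tests can
    // hand in a mock wpdb instead of the real global.
    public function __construct( $db ) {
        $this->db = $db;
    }

    public function count_orders() {
        return (int) $this->db->get_var(
            "SELECT COUNT(*) FROM {$this->db->prefix}myplugin_orders"
        );
    }
}

// Production code wires in the real $wpdb...
global $wpdb;
$orders = new Order_Repository( $wpdb );

// ...while a test passes in the mock built with getMock() above.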

Crash Test Dummies

[Image: a crash test dummy reading "Crash Testing for Dummies"]

No, this isn’t a post about the band. It’s about real crash testing, also known as progressive enhancement testing.

Of course, this had to be Another Progressive Enhancement Post, didn’t it!

Ever thought about why car manufacturers test their cars under crash conditions? Is it because people deliberately drive their cars into walls or ditches? No; not usually, anyway. They test the safety of their cars because we live in an unpredictable world where things go wrong, all the time. Exceptional circumstances surround us every single day. Often we experience near misses – sometimes we’re not so lucky.

In fact, things go wrong on the roads so often that we’ve created thousands of laws and guidelines that try to minimise the possibility of these exceptional circumstances occurring. We have speed limits, and training before anyone can get behind the wheel of a car. We have street lighting and pedestrian crossings, kerbstones and crash barriers.

Yet things still go wrong on our roads. Sometimes through carelessness and stupidity, sometimes through negligence. Sometimes the blame can’t really be applied to anyone in particular.

Car manufacturers invest in making their cars safe, so that when the unexpected happens – which, at some point, it will – the occupants and other road users are kept as safe as possible. We expect nothing less, and safety features are rightly promoted to help sell cars. That’s good; we should strive to create a safer world where possible.

Yet on the web it’s a different story. No-one believes that things never go wrong online. In fact, in my experience there’s rarely a web browsing session where something doesn’t break. Images fail to load, sites respond so slowly they appear to be down, JavaScript throws an error because two scripts from different third parties can’t co-exist, web fonts don’t load and so text is invisible. The list of what could – and often does – go wrong when loading websites goes on, and on, and on.

What’s happening here? Do we as web developers, designers, business owners not realise the inherent unpredictability of the Internet? Do we not understand that the web was designed to be like this – to keep going even if all that is delivered to the browser is the HTML? No, many of us understand but sweep this reality under the carpet.

We are dummies.

We’re dummies because we chase after the latest JavaScript framework-du-jour without considering if it supports the core principles of the web. We overload our pages with unoptimised images and gargantuan CSS files generated by a pre-processor. We fail to deliver first and foremost what our users fundamentally require – the content.

We’re dummies because we leave the crash testing to our users – the very people we should be protecting from those exceptional circumstances! And then we have the gall to complain that they aren’t using the latest browser or operating system, or that their device is underpowered. Here’s the reality for you: even when browsing conditions are optimal, things still often go wrong.

So, in my opinion, are JavaScript frameworks bad? Do I detest CSS pre-processors? Do I have an allergy to beautiful imagery online? No, of course not. It’s our use of these tools which I rail against. Enhance your pages as much as you want, but start from the beginning: semantic HTML content and forms.

Don’t be a dummy.