I Am The Very Model Of A Modern Web Developer

I am the very model of a modern web developer
I build my sites in Ember or in React or in Angular
The download size is massive but development is easier
My grunt and gulp and NPM all prove that I am geekier

I am the very model of a modern web developer
My animations cause offence in anyone vestibular
A carousel with massive pictures should be seen as de rigueur
And light grey text with background white will make my content sexier

Some people say my websites should all be enhanced progressively
But my developmental choices have been made deliberately
A 12 meg payload is a guarantee of exclusivity
I don’t want Luddite users who refuse to upgrade from 2G

I have a brand, you could say I’m an Internet celebrity
But try to load my site on anything but super fast 4G
You’ll just get empty pages on your Android or your iPhone 3
Upgrade your phone, you pauper, or you’ll never get a byte from me

With apologies to Gilbert and Sullivan.

Making the ShopTalk Show theme tune

I’m a regular listener to the ShopTalk Show podcast, a really great show about web design and development hosted by Chris Coyier and Dave Rupert. A couple of years ago, not long after the show started, I offered Chris and Dave a very rough piece of music as a theme tune, and they’ve been using it ever since.

But the time has come to refresh the theme tune, and the guys asked me to record a new version. Specifically they wanted it “rootsier”, and around 20 seconds in length. Here is the new version:

If you’re interested in how I recorded it then read on!

Chris and Dave wanted me to incorporate a recording of a crowd shouting the show motto “Just Build Websites”. I heard in my head exactly how that sample could be used at the end of the tune, so I knew what I was aiming for.

I busked a quick bit of music in the style they wanted. I grabbed my guitar, started strumming an A chord, and very quickly had a chord progression I was happy with, including the ending that could include the “Just Build Websites” sample. I wrote the music down on the back of an envelope (as that’s a traditional thing to do) and added some ideas for an arrangement:

[Image: the chord progression and arrangement ideas, written on the back of an envelope]

Now I could start recording. I used the fantastic free Audacity software, as I wouldn’t need any fancy effects. It handles simple multitrack recording, and is really easy to use.

Click track

First I laid down a click track, to ensure I kept in time with myself. I recorded enough bars for a lengthy click intro before I started playing, and of course enough bars to run right to the end of the recording. If you’re playing along to a click track like this it’s a good idea to have at least 4 bars before you start playing. It gives your hands time to get from the keyboard to the instrument, and lets you settle into the tempo.

[Screenshot: the click track in Audacity]

Acoustic 1

Then I grabbed my trusty Line 6 Variax 500 and chose the Martin D-28 6-string acoustic guitar sample. That’s the main guitar you hear at the beginning.

[Screenshot: the first acoustic guitar track]

The Variax is great as it gives a really good impersonation of a real guitar. Yes, anyone with a good ear can hear it’s not a real Martin D-28, but then it didn’t cost ten grand. Plus it comes with a massive range of other stringed instrument samples, a couple of which I talk about below.

The first acoustic took a few takes to get right, but even then I wasn’t happy with the sound, so I decided to add a second acoustic guitar.

Acoustic 2

This time I chose a Gibson J-200 sample and double-tracked the first acoustic part, using my fingertips instead of a pick. I panned the second acoustic to the left and pulled its level down. This gave me a nice stereo spread of sound.

Bouzouki

I knew that for the main body of the piece I wanted a ringing guitar sound, probably double-strung and pitched higher than the guitars. I don’t have a bouzouki (although I think that will be my next instrument purchase; Jeremy will be pleased), but by choosing a Martin D12-28 12-string guitar sample and putting a capo on (I think) the 5th fret I got a reasonable rendition.

This took quite a few takes to get right as my fingers didn’t want to do what I told them. Sometimes that’s just the way it goes. Here are the two acoustics and bouzouki tracks.

[Screenshot: the two acoustic tracks and the “bouzouki” track]

Bass

The time had come to lay down the Groove Machine. Ahem, I mean record the bass. I’ve got a beautiful 1978 Fender Precision bass, and a couple of takes later I had it done.

[Screenshot: the bass track before compression]

Bass almost always needs some compression to even out the levels, so I applied some basic compression to the bass track:

[Screenshot: the compressor settings applied to the bass]

And ended up with something looking much fuller, although a couple of peaks were uncomfortably high:

[Screenshot: the bass track after compression, with two high peaks circled in red]

Here’s a simple trick. If you see a peak which is really high, like the ones in the red circles above, it’s likely that there are just one or two waves which are high. Zoom right in and select the high waves:

[Screenshot: zoomed right in, with the high waves selected]

Then pull the levels down by a notch:

[Screenshot: the Amplify effect pulling the selection down]

This is before:

[Screenshot: the peak before]

And this is after:

[Screenshot: the peak after]

I’ve found that this doesn’t affect the sound in any noticeable way, but it helps to stop clipping.

Banjo

To round out the “rootsy” nature of the recording I added a banjo part, using the Gibson Mastertone sample in the Variax. It’s pretty low in the mix, and it’s not complicated, as I ain’t a good finger-picker. But it’s a nice little addition to the piece, particularly at the beginning.

When I say I’m not a good finger-picker I mean I’m really awful, so getting this banjo part OK was Hard Work. I lost count of how many takes it took, but it was well over 20.

Just Build Websites (JBW)

Now I was ready to add the JBW sample. The problem was that the words are spoken faster than the tempo of the tune. When I was busking the chord progression I tried playing the music fast enough to fit the speed of the words, but it sounded manic.

You can see here the peaks of the JBW track don’t match with the peaks of the bass track above.

[Screenshot: the raw JBW sample, its peaks not matching the bass track]

What I needed to do was cut the words apart, so they are spoken at the same tempo as the music. I toyed with stretching the JBW sample, but it sounded awful.

Cutting the JBW sample into parts was easy, but then I got a horrible “clipped” effect after the first two words because of the artificially-inserted silence. What to do? Add reverb? Nope, that sounded horrible and false. Instead, I copied parts of the sample and shifted them along to mask the gap. Here you can see how I lined the words up with the bass notes, then covered the gap with my fake echo:

[Screenshot: the JBW words lined up with the bass notes, the gap masked by the fake echo]

I pulled the level of the fake echo down until it was reasonably unobtrusive but still masked the gap.

Drums

Next I used my Alesis DM5 digital drum kit to lay down a drum line. It’s mostly pretty simple, but I did do a sweet triplet fill before the last two notes.

The drum sound was a bit dry, so I added some small room reverb:

[Screenshot: small room reverb applied to the drum track]

Mixing

As the levels had been set while I added each track, there wasn’t much to do in terms of mixing. I did find the bass got lost, particularly in small headphones. By the way, it’s a good idea to check your mix not just on studio monitors but on headphones too; I used Grado SR60 headphones and (I think) RHA MA350 in-ears. A bit of EQ on the bass soon made it pop out again:

[Screenshot: EQ applied to the bass track]

After exporting my mix to 2 tracks I “topped and tailed” the resulting file to remove empty space at the beginning and fade out nicely at the end. Then I encoded to MP3 and emailed it to Chris and Dave. Job done.

Nearly.

The second acoustic was slightly out of tune on one or two notes, so I re-recorded it. I wasn’t happy with two of the notes on the bass either, so I re-recorded that too. Dave also made a great suggestion of adding a little banjo intro. My amazing (not) finger-picking skills once again saved the day.

As this banjo was stuck out on its own I added a touch of reverb to beef it up.

Now, listening to the individual parts you may think “hmm, these are a bit raw”. You’re right, they are. As part of that “rootsy” feel I didn’t want the overall piece to sound too polished – I wanted it to sound like a bunch of guys messing around with some recording gear late one night after a few beers. Hopefully I’ve achieved that, and hopefully listeners to the ShopTalk Show will like it. I certainly enjoyed recording it.

Progressive Enhancement

I recently sent this as an email to a colleague. You’ll be glad to know we sorted things out :0)

The little chat we had last week about AngularJS has been playing on my mind. As you know I’m not against JavaScript (I love it, and have even written an Open Source JavaScript library) and I have a personal project partly planned to try out AngularJS, which looks awesome.

However, your comment that “progressive enhancement is one way to do it” (emphasis mine) bothers me. A lot. I’ve heard this attitude from a lot of developers, and I believe it’s wrong. Not because I believe websites and apps (more on the difference between those two things later) should shun JavaScript, but because that attitude ignores the fundamental layers of the web, which cry out for a progressive enhancement approach. These layers are:

[Diagram: the layers of the web – HTML at the base, then CSS, then JavaScript]

Here’s how a browser works (more detail).

  1. It requests a URL and receives an HTML document
  2. It parses that document and finds the assets it needs to download: images and other media, CSS, JavaScript
  3. It downloads these assets (there are rules and constraints about how this happens which vary slightly from browser to browser, but here are the basics):
    1. CSS files are downloaded and parsed, generally very quickly, meaning the page gets styled
    2. JavaScript files are downloaded one by one, then parsed and executed in turn
    3. Images and other media files are downloaded

Let’s look at the absolute fundamental layer: the HTML document.

HTML

Way back at the beginning of the web there was only the HTML document on a web page: no CSS, no JavaScript (very early HTML didn’t even have images). In fact without HTML there is no web page at all: it’s the purpose of HTML to make a web page real. And a set of URLs – for example, the URLs under a domain like my-site.com – that doesn’t return at least one HTML document is arguably not a website.

Yes, URLs can also be used to download other resources such as images, PDF files, videos and so on. But a website is a website because it serves HTML documents, even if those documents just act as indexes of the URLs of other (types of) resources.

HTML is the fundamental building block of the web, and software which can’t parse and render HTML (we’ve not even got to CSS or JavaScript yet) can’t call itself a web browser. That would be like software which can’t open a .txt file calling itself a text editor, or software which doesn’t understand .jpg files calling itself an image editor. So we have software – web browsers – which use URLs to parse and render HTML. That is the fundamental, non-negotiable, base layer of a web page, and therefore the web.

One of the great things about all HTML parsing engines is that they are very forgiving about the HTML they read. If you make a mistake in the HTML (leave out a closing tag, whatever) they will do their best to render the entire page anyway. It’s not like XML, where one mistake makes the whole document invalid. In fact that’s one of the main reasons why XHTML lost out to the looser HTML5 standard: when XHTML was served with its proper MIME type, a single syntax mistake would make the whole page invalid.

And if a web browser encounters elements or attributes it doesn’t recognise it will just carry on. This is the basis on which Web Components are built.

So even serving a broken HTML page, or one with unknown elements or attributes, will probably result in a readable document.
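To see that forgiveness in action, here’s a tiny sketch (the fancy-note element name is something I’ve invented purely for illustration). The browser happily renders the contents of a tag it has never heard of, and a script can later upgrade it – exactly the behaviour Web Components build on:

```js
// The HTML might contain: <fancy-note>Remember to buy milk</fancy-note>
// Even with no script at all, that text renders fine; the unknown
// tag is simply treated as an anonymous inline element.

// Where Custom Elements are supported, a script can upgrade the tag:
if ('customElements' in window) {
  customElements.define('fancy-note', class extends HTMLElement {
    connectedCallback() {
      // Decorate the element without discarding its HTML content
      this.style.display = 'block';
      this.style.borderLeft = '3px solid goldenrod';
      this.style.paddingLeft = '0.5em';
    }
  });
}
```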

CSS

The next layer is CSS. At this point we’ve left the fundamentals and are onto optional components. That’s right – for a working web page CSS is optional! The page might not *look* very nice, but a well-structured page of HTML will be entirely usable even with no styling, in the same way that water is entirely drinkable without any flavourings.

Most browsers – though not every browser – support CSS (the basics of it, at least), so why do I treat it as optional? There are a few reasons. Firstly, CSS might be turned off (very unlikely, but possible). More importantly, the CSS file might not be available or parsable:

  • The server hosting the CSS file may be offline
  • The file may be temporarily unreadable
  • The URL for the file may be wrong
  • DNS settings may be incorrect
  • The CDN may be down
  • The file may be empty
  • The file may contain something that isn’t CSS, or may contain a syntax error

There may be lots of other problems too – I’m sure you can think of some.

Now, any one of those errors could also happen to an HTML document. In fact if the document is being served from a CMS there are a lot more things that could go wrong. But if any of those errors happens for an HTML document then the browser doesn’t have a web page to parse and render at all. Browsers have mechanisms to handle that, because everyone knows that URLs change all the time (even if they shouldn’t):

[Screenshot: a browser’s “page not found” error]

So CSS is optional; it is layered on top of the fundamental layer (HTML) to provide additional benefits to the user – a nicely styled page.

And in the same way that web browsers are forgiving about the HTML they parse, they are also forgiving about CSS. You can serve a CSS file with syntax errors, and the parts the rendering engine *can* parse correctly it will; the rest it will ignore.

So if an HTML document links to a CSS file which contains partially broken syntax the page will still be partially styled.

JavaScript

Now we come to the top layer, the client-side script.

I don’t need to tell you that JavaScript is sexy at the moment, and rightly so – it is a powerful and fun language. And the fact it’s built into every modern web browser arguably gives it a reach far wider than any other development platform. Even Microsoft are betting their house on it, with Windows 8 apps built on HTML and JavaScript.

But what happens when JavaScript Goes Bad on a web page? Here’s the same quick list of errors I gave for CSS files above:

  • The server hosting the JS file may be offline
  • The file may be temporarily unreadable
  • The URL for the file may be wrong
  • DNS settings may be incorrect
  • The CDN may be down
  • The file may be empty
  • The file may contain something that isn’t JavaScript, or may contain a syntax error

Let’s stop right there and look at that final error. What happens if a browser is parsing JavaScript and finds a syntax error (not all errors, but a lot of them) or something it can’t execute? It dies, that’s what happens:

“JavaScript has a surprisingly simple way of dealing with errors – it just gives up and fails silently”

Browser JavaScript engines are still pretty forgiving, but *way* less forgiving than HTML and CSS parsers. And different browsers forgive different things. Now, you might think “just don’t have syntax errors”, but the reality is that bugs happen. Even something as innocuous as a trailing comma in an array will cause older Internet Explorer to fail where other browsers don’t.
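To illustrate just how little it takes, here’s a hypothetical snippet – the navItems array and renderNav function are invented for this example:

```js
// A harmless-looking array literal...
var navItems = [
  'home',
  'about',
  'contact', // trailing comma: fine in modern engines, but older IE
             // adds a phantom undefined element here (and in object
             // literals a trailing comma is an outright syntax error)
];

function renderNav(items) {
  for (var i = 0; i < items.length; i++) {
    console.log('nav item: ' + items[i]);
  }
}

// A defensive habit: isolate enhancement code so a single failure
// can't take the whole page down with it.
try {
  renderNav(navItems);
} catch (e) {
  // The un-enhanced HTML page still works without this script
}
```

The try/catch is just a defensive habit for the sketch; the deeper point is how fragile this layer is compared to the two beneath it.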

So JavaScript, while powerful and cool, is also brittle and can easily break a page. Therefore it *has to be optional*.

Web apps

You might be thinking “Yes, but my site is a Web Application, so I need JavaScript.” OK, but what is a web app exactly? I can’t say it better than Jeremy Keith, so I’ll just link to his article.

Progressive enhancement

This is the crux of progressive enhancement. Here’s the recipe:

  • Start with a basic web page that functions with nothing but the HTML, using standard semantic mark-up. Yes, it will require full-page GET or POST requests to perform actions, but we’re not going to stop there – we’re going to enhance the page.
  • Add CSS to style the page nicely; go to town with CSS3 animations if you want
  • Add JavaScript to enhance the UI and provide all the modern goodies: AJAX, client-side models, client-side validation etc. (there’s a sketch of this step below)
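To make that last step concrete, here’s a minimal sketch (the form id and endpoint are invented for illustration): a standard HTML form that works with a full-page POST, upgraded to submit via AJAX only if the script actually arrives and runs:

```js
// Assumes the HTML contains a plain, working form:
// <form id="contact-form" action="/contact" method="post">...</form>
var form = document.getElementById('contact-form');

// Feature-detect before enhancing; if anything is missing,
// the browser just performs the normal full-page POST.
if (form && window.XMLHttpRequest && window.FormData) {
  form.addEventListener('submit', function (event) {
    event.preventDefault(); // only reached if the script arrived intact

    var xhr = new XMLHttpRequest();
    xhr.open('POST', form.action);
    xhr.onload = function () {
      form.innerHTML = '<p>Thanks! Your message has been sent.</p>';
    };
    xhr.send(new FormData(form));
  });
}
```

If the JavaScript never loads, the submit handler is never attached and the browser falls back to the full-page request – which is the whole point.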

The benefits are obvious:

  • If the JavaScript or CSS files (or both) fail for any reason whatsoever the page still works
  • The use of semantic HTML means the page is fully understandable by search engine spiders
  • Because everything is rendered in HTML, not built up in JavaScript, it is understandable immediately by assistive devices
  • Serving fully-rendered HTML is quicker than building that same HTML client-side
  • Built-in support for older – and newer – browsers and devices

The best web developers on the planet all argue that progressive enhancement is the best way to approach web development. I honestly have no idea why anyone would think otherwise. There’s a good article (it’s actually the first chapter of Filament Group’s “Designing With Progressive Enhancement” book) on the case for progressive enhancement here.

Modern JavaScript

Some people use juicy headlines (this is called “link-baiting”), which doesn’t help those developers who are trying to promote progressive enhancement; instead it causes JavaScript-loving developers to proclaim “you hate JavaScript!”. You know what developers are like: they get hot-headed. It’s much better to think clearly and objectively about development and come up with solutions based on real data.

The reality is that most modern JavaScript libraries don’t support progressive enhancement out of the box: AngularJS included. If there were a way to render standard HTML with real links and forms and then enhance that HTML with Angular I would be all over it, but unfortunately I haven’t found anything that explains how to do it yet.

This is something which I’ve been thinking about a lot, and I did a little proof of concept for a basic data-binding system. I wonder if it would be possible to apply the same techniques to Angular.
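For flavour, here’s the general shape such a proof of concept might take – entirely my own sketch, not Angular’s API, with the data-bind attribute invented for the example. The initial values live in the server-rendered HTML, and the script merely binds to them:

```js
// Toy data-binding over server-rendered HTML.
// Assumes markup like: <span data-bind="username">alice</span>
// The page is complete and readable even if this never runs.
function bind(model) {
  var nodes = document.querySelectorAll('[data-bind]');

  Array.prototype.forEach.call(nodes, function (node) {
    var key = node.getAttribute('data-bind');

    Object.defineProperty(model, key, {
      get: function () { return node.textContent; },
      set: function (value) { node.textContent = value; }
    });
  });

  return model;
}

// Usage: var user = bind({}); user.username = 'bob'; // the DOM updates
```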

Personally, I’m a big believer in progressive enhancement, not just for accessibility reasons but for front-end performance as well. I do recognise it will probably add time to development, but the same can be said of Test Driven Development. The goal of progressive enhancement and of TDD is the same: a better, more stable foundation for systems.

Book review: Digital Adaptation by Paul Boag

Almighty God, our heavenly Father,
we have sinned against you and against our fellow men,
in thought and word and deed,
through negligence, through weakness,
through our own deliberate fault.

It’s no secret to web design and development professionals that many organisations just don’t “get” the web. This shouldn’t be a surprise: I expect that chefs would say that many organisations don’t “get” cuisine, or newspaper editors would say that many organisations don’t “get” the media industry. That’s not necessarily a problem – we can’t expect everyone to know HTML and CSS.

Chefs are expected to know the nuances of many ingredients, but a restaurant manager only needs to know whether customers like the food and are willing to pay for it. Newspaper editors must understand the process of finding, investigating and writing stories, and of printing and distributing their publications, but a paper shop only needs to know which newspapers to stock. In the same way, most organisations don’t need to know the technical aspects of building websites, but they do need to know how they can adapt themselves to get the best from the digital world we now inhabit.

At the risk of being accused of hyperbole I’m going to stress that “adapt” is the right word here. It’s about a constant evolution to enable growth and long-term survival.

For many years Paul Boag has been one of the leading voices in the web industry (my cheque is in the post, right, Paul?) through his website and podcast, not to mention the great work he and his colleagues at Headscape have done for a wide range of clients. So it’s great news that those years of insight have been poured into his latest book: Digital Adaptation.

It’s a great read; full of nuggets of wisdom gleaned from countless meetings with organisations of all shapes and sizes. When it comes to clients who want websites, Paul has seen it all. Which brings me onto the quote above. Yes, there is a point to it.

You see, people – and that’s who we have to deal with: real, human people – are a complicated bunch. We have a huge range of attitudes and thoughts, all shaped by the vast array of experiences we each travel through. We’re not always perfect, and we certainly don’t always make the right decisions. We fail in a number of ways: in thought, in word, in deed.

Sometimes we think something is true when it’s not – for example thinking carousels are a good idea. Sometimes we think something is false when the reality is different.

More often than we care to admit, we say things that aren’t right either. We speak without thinking; we tear down rather than build up.

And, if we’re honest, our actions rarely live up to what our best selves would do. We react badly, act thoughtlessly, ignore opportunities to do good.

And the reasons why are as varied as our failures. We fail through negligence, through weakness, and sometimes through our own deliberate fault.

But I’m not here to be a tree-hugging motivational speaker, I’m reviewing Paul’s book! Which is about … people. People who misunderstand the web, who don’t know what is possible, who sometimes (it has to be said) want their own way or the highway.

Organisations are just groups of people. And whatever the reasons why organisations don’t “get” the web, and irrespective of how those reasons take form, those people can be challenged and educated to start to “get” it. That’s what Paul’s book is about at its heart: kick-starting the changes in attitude that are required for organisations to adapt to the digital landscape.

I’ve already said it’s a great book, but I do have a central problem with it. Paul repeatedly uses the word “digital” as a noun, but most people are used to it being used as an adjective. This might be a barrier to some people. Here’s an example:

Similar trends are occurring in the newspaper industry and among cable TV companies, as digital changes how consumers read the news and watch TV.

Wrapped up in that word “digital” is a whole set of other more definite nouns: the Internet and web, digital downloading, email and SMS, smartphones, tablets and much more. But while we in the digital industry understand that “digital” as a noun encompasses these things, and much more yet to be invented, my hunch is that those outside the industry (shall we call them “Muggles”?) may not get this.

Paul also uses digital in its more traditional form as an adjective: ‘digital team’, ‘digital strategy’ etc. This, I feel, will be more understandable to those we’re trying to educate. But on the whole this is a small nit-pick.

At 176 pages the book is brief enough to read in a couple of sittings, and it feels great: printed on thick paper, with Veerle Pieters’ gorgeous illustrations. Even the main title on the front cover has a subtle texture to it. It’s a quality product.

Paul starts with an honest appraisal of the problem we face as digital professionals; that organisations can be ignorant, scared or indifferent to the changing world. His insight into the thought processes of the management in these organisations is enlightening and thought-provoking.

One thing I must particularly highlight is Paul stressing that the management attitudes digital professionals often struggle with may not all be deliberate (remember that “through negligence, through weakness, through our own deliberate fault” thing?). This is timely and much needed; understanding and good communication must work both ways.

The bulk of the book consists of practical and well-reasoned arguments why organisations should not just embrace the digital revolution but fully engage with it – to the extent that it informs their entire strategy. This is “digital by default”, a motif which runs throughout the book and the premise of which I fully agree with.

This, however, is my second bugbear with Digital Adaptation. I know from experience that design teams are sometimes perceived by outsiders as unapproachable, uninterested in the wider aims of the organisation, and unwilling to be pragmatic about design. Of course that is rarely the truth. But we have to face facts: the offices of professionals in most other industries don’t contain beer fridges, games consoles or loud music. Perhaps those things are necessary for a creative environment. I’m no designer, so I can’t really comment.

Digital Adaptation nods towards these attitudes and tries to cut through the frippery to the real heart of the matter; that designers are serious professionals who are crucial to the ongoing success of an organisation in the changing culture we inhabit. The problem for me is that I don’t think enough is said to distance the design industry from the perceptions I’ve encountered in more traditional industries.

There are excellent and strong examples of a modern digital strategy doing wonders for organisations, notably in the discussion of the gov.uk work done by Martha Lane Fox, Mike Bracken and the gov.uk team. And there are plenty of stories from a range of modern businesses – Twitter, Google, Mailchimp and others – to further strengthen Paul’s central message that digital (yes, as a noun) matters.

All in all this is a great book which I believe will further and enhance the conversations happening in boardrooms, design studios and development offices in organisations all over the world.

Summit Events

There’s a great little – although rapidly growing – web industry meetup in Leeds called Hey!Stac, run by the nice chaps from We Are Stac. I’m a regular attendee and really enjoy the mix of design, front-end development and deeper technical talks that are delivered each month by all kinds of interesting speakers. It’s a great event, you should come along if you can.

But it’s the big-picture talks that I enjoy the most. In February 2014 Vincent Pickering unveiled his ideas around a new type of event that is neither conference nor hack day but somewhere in between, all framed around "The Need for Conversation". The slides for that talk are here, and there’s also a blog post by Vincent here.

His central premise was that conferences are great for getting the word out; for distributing knowledge in a one-to-many transmission. However, the larger they get, the less effective they become in two crucial ways:

  1. Sparking conversation between the speaker – the one proposing the ideas or questions – and the audience
  2. Resulting in real change or concrete innovations

I’m no regular at the kinds of conferences Vincent is talking about. In fact I’ve only ever been to one of them – a Macromedia event at the Royal Armouries in, wow, 2000 or 2001 I think. So I’m not qualified to make judgements about the current crop of web industry conferences, but I do see the firehose of commentary about these events as it flows through my Twitter stream. Judging by the comments I’ve seen, I think Vincent has a very good point.

It may be possible – perhaps even inevitable – to attend these kinds of events and come away with more questions than you arrived with. More things you’d like to try but don’t have a clue how to get started on. Lots of information, but nothing practical set down that can be used to spread the ideas more widely. In the worst cases I can imagine conferences being little more than hot air generators.

But it doesn’t have to be that way. What Vincent proposes is a new kind of event, one centred around the conversations that need to happen to foster tangible developments in our, or indeed any, industry. Here’s the process Vincent is proposing (with a fair number of assumptions added by me), all based around a website that captures problems which may be worked on at the event:

  1. People submit problems to the website. These could be questions that need answering, gaps in technology that need to be filled or conundrums about processes or methodology.
  2. The event organisers consider the problems submitted and choose ones suitable for working on
  3. At the start of the event the organisers assign each of the selected problems to small teams – probably around 5 people – for discussion with the aim of producing some kind of output by the end of the event
  4. At the end of the event some or all of the teams present their findings, developments and discussions with the wider group
  5. All output from the event is placed back onto the website to allow conversations and developments to continue with the wider community

This might sound a lot like a hack day, but it’s a much larger vision than just hacking around an API or modding hardware. This format could be applied to non-technical problems, with the events run by non-technical folk. And that, for me, is where things get really interesting.

The humble suggestion box has a long but sadly tainted history in many companies. In my experience, because most suggestion boxes have no inherent feedback loop, many employees treat them as nothing more than a joke: an irrelevant and shallow attempt by management to make employees feel like they have a say in the direction of the company. Which, in some cases, they are.

The problem here is that the suggestion box is a "black box". You put your suggestions in, but don’t know what happens to them and very rarely get any response. This is partly what Vincent’s suggestion is aiming to address: by making the entire conversation around the problems submitted to the website fully open and collaborative the proposers, organisers and community members have full visibility of every part of the process. If an idea is going to be rejected it will only be rejected after full consideration in plain sight.

Some organisations are already embracing online forms of collaborative brainstorming: from a web-based suggestion box for Portsmouth University Library to the Town Hall sessions of WordPress, the web is a platform that lends itself perfectly to the discussion of ideas. Some are even making a business out of it. But all these attempts are still variants of black boxes, with no guarantee of real change, innovation, answers or progress.

That’s the point of Vincent’s Summit Event. The event itself is time dedicated to fixing the problems suggested, and to researching and collaborating on innovations – making something real happen. The aim is to have something tangible by the end of the event to present back to the attendees, which would then be put back on the website for the wider community to continue the conversation.

Even if the output could be construed as a failure, the fact that people have discussed and wrestled with the problem means it’s not wasted time and, as Vincent mentioned in his talk, will save others toiling down dead ends. So even a failure isn’t a failure, which Thomas Edison had something to say about.

Admittedly this is a bit of an abstract idea to get to grips with. Some examples would help. Here are a few web industry-based problems I can think of that could be worked on at one of these events:

  • So what’s the practical, robust solution for responsive images?
  • Is there a right way to get Grunt working on a Windows machine?
  • Getting content from a client is always the hardest bit of a website project. How can we make that better?
  • Is it possible to maintain very high standards for page performance when the users of the CMS are, in a word, clueless?

Considering other industries, especially problems that are not technology related, shows how this format can lead to real innovation and progress. Here are some ideas:

  • Many people don’t trust solicitors. What can we do to increase trust in the profession?
  • Is it possible to make flying more secure, but without intrusive body scanners and searches?
  • There isn’t a simple way to compare houses one may be interested in buying across multiple estate agents. What can we do to make that possible?
  • Opening a standard can of tuna in oil is a very messy affair. Let’s fix it!

Some of these examples may be a bit stupid, but they illustrate that almost anything could be tackled by a suitably sized group given a tight time constraint. Let’s imagine a team is assigned a really complex problem. There may be no hope of the problem being fixed during the event, but getting interested, motivated people in a room together to get their heads around it and move the conversation on is bound to be beneficial – especially when the output from that discussion is made public.

I think it’s interesting that Vincent chose the word "summit" to describe this new type of event. While summits are for heads of state to get together for a chin-wag, Vincent’s Summit Events are different because a) anyone can attend and b) they are designed to actually achieve something. Therefore they are more akin to jazz summits, where multiple performers of a particular instrument get together to produce a recording. Jazz summits are both a celebration of the instrument and an opportunity to collectively encourage and enhance the skill of the performers.

So I’m really interested to see what comes out of this idea, specifically empowering grass-roots movements to fix problems and improve the world – whether that’s new developments for the web industry or innovations for industries we haven’t even thought of.