Tips for non-designers…

Just like Garrett I don’t think I’m much of a designer. I wish I was, or rather I wish I had better graphic design skills to use in conjunction with my technical skills. But that isn’t to be, I fear. And anyone who can do both sides of the web design/development coin well should be slapped. Hard. Otherwise life would not be able to continue with that level of imbalance in the universe.

So articles like this can be very helpful. They aren’t a magic bullet, but they do give some pointers. And I’d agree with the points he made, especially that “content is king”. Gah, how many times have I found myself waiting too long for a client to give me some text – any text! – so I can build a site for them. I find it very strange. After all, you wouldn’t hire a plasterer and decorator to redecorate your house and then refuse to let them in, would you? Gah, again. Rant over. For now.

So, good article. Although I think that Andy has a good point. But he must work on bigger projects than I do ;0)

Partially rethinking the web…

Earlier today I read this article at Digital Web. My initial reaction was “What is this guy on?!”, but the author, Dirk Knemeyer, is well respected and so his ideas deserve to be taken seriously.

I agree with the sentiment that the web is broken, in the sense that user experiences (shudder, that phrase makes me go cold) are in general somewhat lacking. Several very high profile sites – which, incidentally, thousands of people still use on a daily basis – are actually pretty naff when it comes to providing an intuitive user interface. Maybe this web generation is pretty forgiving when it comes to slightly off-key site design.

However, the fact that people will use rubbish if they are given it doesn’t mean we should settle for rubbish. Too many times I’ve heard a newbie to the web complain that it’s all too complicated, all these usernames and passwords, URLs, buttons, links, text, adverts etc. And they’re right, it is too complex. Even experienced webbies get confused sometimes because – in essence – the real problem with the web in my view is that there is too much information, and it’s not easy to find the bit you want. That’s not helped by the proliferation of spam that continues to roll in like an unstoppable tide.

Knemeyer’s point is that the web, certainly in its current form, can’t support the kind of rich applications that people want. And there are certainly technical constraints – not just download times, but response times in general for web servers can vary greatly. And eventually everyone will get annoyed with a website that works well one day and badly the next. Or will they? I hear mutterings from colleagues all round me most days about how their computer has “got it in for me today”. If we expected the same standard of reliability from our computers that we expect from our microwaves, fridge-freezers or DVD players (Ed: well done for not using ‘cars’ as an example in that list) then there would be a lot more very busy computer support people. The fact is that most people expect computer technology to be flaky, prone to downtime. And this, unfortunately, is true when it comes to desktop software.

In the article the following list appears:

  1. Web applications only have one advantage over desktop applications: universal access and no need for a local installation.
  2. Desktop applications have many advantages over Web applications, including: more powerful, faster, denser information displays; more robust interaction models; lusher presentation environments; easier natural integration into customized information and personal data collection.
  3. Given the ubiquity of connectivity – the ability to be online almost anywhere, at any time, on any digital device – the one advantage the Web has is reduced to a software issue. A client-side application can leverage the interactive powers of the Web just as easily as a server-side application.

While I agree with portions of that, I think Dirk is very dismissive of web applications. For one thing they have several other advantages in addition to the one (two?) he mentioned. Firstly, as they require no installation, they need no upgrading or patching either. The application exists in one place, and is maintained in just that one place. Any changes are automatically sent to the client. Surely that is a massive advantage over having, potentially, thousands of different versions of a piece of software on desktops all over the world.

And because the demand on system resources from a web browser can be much smaller than that of a desktop app, web apps can be run on slower machines, ones without oodles of RAM and a large hard drive. Try running pretty much any modern desktop software, such as Microsoft Office, on a Pentium 233 and see how fast it feels to use. Web apps can also provide many of the features that are found in desktop apps, such as dense information displays (although doesn’t that go against the goal of making the web more usable?) and lush presentation environments. In fact I would say that web apps inherently have a better presentation framework when using CSS to its full advantage. (That reminds me, I have a web app GUI stylesheet that I was going to make available. I must put that online soon.) It’s certainly easier to modify the graphics, fonts, layout and menu system of a web app than a desktop app, even with clever use of XML configuration files. That very problem has recently been causing headaches for the .net software developers I work with.

One thing I’m not sure about at all is this:

Instead of designing, creating and deploying a site at the business level, content and specifications can be prepared and pushed forward, converted by the browser or application into the interactive form that each individual customer has specified is preferable.

Is Dirk suggesting that all data, from all websites, can be formatted in a sensible manner by client-side systems? Surely this would be a challenge – for one thing there is a wealth of data out there, much of which can be formatted in similar ways (that’s why RSS is able to do what it does), but there is also much that has to be handled in a very specific, customised way. Relational databases are successful in storing many different types of data because they are so flexible: they understand enough to know that they don’t understand what data they might be called upon to store, so they leave their options open. Would a client-side content presentation system be as flexible when it may not understand the data it is receiving? If so then great, if not then we’re asking for trouble. Most users, I would guess, don’t want to spend ages setting up their “content reader” to format blogs one way, technical manuals another way, e-books another way and product price lists another way. And even if they did, wouldn’t it be better for the producer of that content to provide both – a pre-formatted version and direct access to that data for the more tech-savvy users to manipulate how they wish? Like a well-presented web page with an optional XML feed of some kind.

Of course, that’s what most blogs and RSS aggregators do at the moment: take raw content and style it. With varying degrees of success, I might add. I happen to use Bloglines, but I see the constraints that are found there. Not least of which is a clunky and old-fashioned interface. Maybe we should be drawing together all these disparate threads and creating a standard list of microformats to handle different types of data – each data type would then have a default presentation style which could be modified by users. That system could be encapsulated in a series of standards for web apps, so people could subscribe to lots of different data sources, stick with the default data presentation format or format data in the way they want, and travel around having the same view of their data anywhere they go. Maybe there are already specifications like that for some data types. Just as there are specifications for blog feeds (RSS/RDF/Atom) there could be specs for product data feeds put together by a working group consisting of representatives from many industries. Or specs for technical documentation, help files, presentations and any type of data you want. And that data need not all be textual, either. Interesting thoughts.

Because I can feel myself getting all protective about web apps I’d better talk about my ideas for the future. Firstly let’s look at the great web apps that are available at the moment: Google Maps, Flickr, Google Suggest, Basecamp etc. These are all making waves by proving that powerful, flexible, easy-to-use software can run inside a web browser. They all have their problems and foibles, but they are still paving the way for future rich web applications. And good on them, it’s great to see technologies that have been around for a while – JavaScript, mainly – being used to make something wholly new. This year is a very exciting time; I think someone coined the phrase “year of the web app” and I don’t think they’re far wrong.

However, and this is where I backtrack a bit, desktop software has a lot still to say. The new raft of great little tools that plug into your browser – be they for search, syndication, page customising etc – are only the beginning. Using the power of open file standards and simple protocols, data can be shared very effectively, and the distinction between web and desktop blurred much more than is currently the case. I experimented a while ago with writing my own browser (based, I am ashamed to say, on Internet Explorer) with the aim of loading extra buttons and functions in from a customisable XML file. The idea was to run web applications in a custom browser framework with some user-customisable options. It was never finished, but the idea is possibly one of the ways in which desktop apps and the web can be combined. And then there’s this new GreaseMonkey thing, which is already very, very interesting.

So, is the web really broken? I don’t think so, although we have a lot of challenges ahead of us to make sure it meets the needs of people.

Finally, may I apologise for the number of buzzwords in this article. I think I must have leveraged them to death.

Busy busy busy…

Well, this last week has been extremely busy. As you may be able to tell from the title of this entry… (Ed: get on with it, foolish gibberer.) In no particular order I have:

  • Worked on two intranet systems non-stop
  • Looked round 9 houses
  • Shortlisted 2 of the said 9 houses as potential new Maisons de Chris
  • Had an offer on the house we are selling
  • Provisionally accepted said offer
  • Cleared out the wash-house
  • Driven to my parents to deliver a tree
  • Moved offices

It might not sound like much, but I feel shattered. And it’s only Wednesday. I did want to talk about a really interesting cartoon explaining micropayment models that I saw a few days ago, but I can’t find it. If anyone knows where it is I’d like to see it again.

In the meantime I will leave you with this.

Online form protection…

I was going to call this definitive form protection, but as soon as you call something definitive someone else writes something much better. And this isn’t really that definitive anyway, it’s just a collection of ideas I’ve picked up about how to protect your forms.

So, to business. You have a form on your website. It does what almost all forms do – allows users to submit data for processing, be it searching a database, sending a message, updating some information etc. However forms by their very nature are pretty insecure, and users – as we all know – are not to be trusted. So we should make sure that what is being selected or typed into the form is safe for use. If we don’t then we risk being wide open to, among other things, SQL injection attacks.
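
To make the danger concrete, here’s a rough sketch (the table and field names are made up, and I’m assuming PHP with the MySQL extension and an open connection) of how a raw form value can wreck a query, and how escaping it first helps:

    <?php
    // Made-up example: if $_POST['username'] goes straight into the query,
    // typing   ' OR '1'='1   as a username turns it into a query that
    // matches every row in the table.
    $unsafe = "SELECT * FROM members WHERE username = '" . $_POST['username'] . "'";

    // Escaping the value first closes that particular hole.
    $username = mysql_real_escape_string($_POST['username']);
    $safe = "SELECT * FROM members WHERE username = '" . $username . "'";
    ?>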

Traditionally web developers have used JavaScript as a way of checking that users are typing what they should into boxes. Using a JavaScript function to check that each form field has the right kind of data in it when the user submits the form – telling them that they’ve forgotten to enter their name, that their email address doesn’t look right, or that they need to select their favourite colour, for example – is great. It means the user gets instant feedback on any mistakes they have made. Because, let’s face it, if there is opportunity for a mistake to be made on a website, it will be made by someone. You might think your system is foolproof, but there’s always a better fool than you out there.

The best script I’ve seen to check form input using JavaScript is this one, which not only adds a little warning icon to each field you get wrong, but turns that field red. Oh, and it gives a message back on the screen as well. Fantastic work.

But, there’s always a but, that’s not very secure. People could copy your form, paste it into their own page, remove your classy JavaScript and still submit whatever info they want. So what you need is a second line of defence, and a touch of server-side processing does the trick quite nicely.

When your form processing script receives the information, you need to run a couple of checks on it. Firstly make sure that the page request type is “POST” (all forms should be sent using POST rather than GET, as it’s a bit safer). Secondly you should check that the form has come from where you expect it to, using the referrer information. If the form data comes from somewhere weird, or isn’t a POST request, then you can either display a message to the user, stop processing entirely, redirect the user to somewhere else, or create a clever system to jump out of the user’s CD drive and splat them in the face with a custard pie. That last one might only be available on Internet Explorer, though.
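
Something along these lines would do it in PHP – a minimal sketch, with www.example.com standing in for your own domain and the responses kept deliberately blunt:

    <?php
    // Check 1: insist on a POST request.
    if ($_SERVER['REQUEST_METHOD'] != 'POST') {
        die('Sorry, this form must be submitted using POST.');
    }

    // Check 2: the referrer can be spoofed or missing entirely, so treat
    // this as a speed bump rather than a locked door.
    $referrer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
    if (strpos($referrer, 'http://www.example.com/') !== 0) {
        die('Sorry, this form was not submitted from the expected page.');
    }

    // ...carry on and check the individual fields...
    ?>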

Then, once you think your data is coming from the right place in the right way, you should check each field to make sure it is in the format you expect. This is where regular expressions are invaluable. Make sure that email addresses have an @ symbol and a full stop. If you offer a select list in your form, make sure the value submitted is one of the available ones. If you offer the user an input text box or textarea, make sure you escape any dodgy characters. PHP has a fantastic range of string functions to allow you to sanitise text in many different ways.
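
As a rough sketch of what those per-field checks might look like in PHP – the field names, the email pattern and the colour list are invented for the example, not prescriptions:

    <?php
    $errors = array();

    // Email: insist on something@something.something, nothing fancier.
    if (!preg_match('/^[^@\s]+@[^@\s]+\.[^@\s]+$/', $_POST['email'])) {
        $errors[] = 'That email address does not look right.';
    }

    // Select list: only accept values we actually offered.
    $colours = array('red', 'green', 'blue');
    if (!in_array($_POST['favourite_colour'], $colours)) {
        $errors[] = 'Please choose a colour from the list.';
    }

    // Free text: trim it and escape anything dodgy before it gets stored
    // or echoed back to a page.
    $comments = htmlspecialchars(strip_tags(trim($_POST['comments'])));

    if (count($errors) > 0) {
        // show the errors and stop, rather than processing anything
    }
    ?>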

Make sure that any textual input isn’t longer than it needs to be, and don’t create any database connections until you absolutely have to. If in doubt, do not process anything. Don’t allow people to send you a 500 character field value if all they are entering is a username. Your primary concern is protecting your system, but nicely-formatted messages letting a user know that they entered something wrong can be very helpful. Remember, all browsers have a back button that allows the user to try again.
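
Again as a sketch (the 30-character limit and the connection details are placeholders), the idea is simply to refuse over-long values and only touch the database once everything else has passed:

    <?php
    $username = trim($_POST['username']);

    // Refuse anything empty or suspiciously long before doing real work.
    if (strlen($username) == 0 || strlen($username) > 30) {
        die('Usernames must be between 1 and 30 characters.');
    }

    // Only now, with everything checked, open a database connection.
    $link = mysql_connect('localhost', 'dbuser', 'dbpass');
    ?>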

There’s loads more you can do to protect your forms, but in general implementing these ideas will give you good protection against nasty data.

UPDATE: I’ve recently used, on a high-profile site that’s had problems with internet nasties submitting duff data through forms, an additional system for form validation. The field names are randomly generated each time the page is requested, and passed through to the receiving page in an obfuscated manner. Obfuscation is the act of making something mixed-up, confused and non-obvious, but in a way that lets you get the real data back. It’s like encryption without encryption.

The receiving page then de-obfuscates the field names and uses those to get the data that the user typed in. That way the field names that will be accepted exist only once, for one individual user. That makes it much harder for either a spam robot or a nasty person to infiltrate the form. When it comes to web security there is no such thing as a perfect system; the aim is to make things as difficult as possible for someone who is trying to break/break into your website.
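
For what it’s worth, here is a sketch of how that sort of scheme could hang together in PHP. This isn’t the exact code from the site in question, just the general idea, using the session to remember which random alias maps to which real field:

    <?php
    session_start();

    // On the page that builds the form: give each real field a random
    // alias and remember the mapping in the session.
    $fields = array('name', 'email', 'message');
    $aliases = array();
    foreach ($fields as $field) {
        $aliases[$field] = 'f' . md5(uniqid(rand(), true));
    }
    $_SESSION['field_aliases'] = $aliases;
    // Each input is then written out with its random alias as the name
    // attribute, instead of its real name.

    // On the receiving page: translate the aliases back before doing any
    // of the other validation described above.
    $data = array();
    foreach ($_SESSION['field_aliases'] as $field => $alias) {
        $data[$field] = isset($_POST[$alias]) ? $_POST[$alias] : '';
    }
    unset($_SESSION['field_aliases']); // each set of names only works once
    // $data['name'], $data['email'] and so on now hold the submitted values.
    ?>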