Tue May 24
I recently wrote this article for a website I’m involved in and thought I might put it up here. I did have the idea of starting a series of articles on my website of which this would be the first one. But then I thought that the blog does the same job. So here it is:
RSS stands for Really Simple Syndication, and is a lovely, wondrous thing that is transforming the web. Simply put, it means that the content (text, images, articles, poems, blog entries etc) from a site can all be added to one handy file on the website that is freely accessible by anyone. This handy file uses the miracle computer language known as XML, which has revolutionised the storage of information, particularly on the web.
The file (an example can be seen at http://www.wiblog.com/geektimes/wiblog.xml) is in a format that, although it looks a bit confusing to the human eye, machines can read easily. Not only that, but they can format the text in it. So this means that using a handy RSS program (known as a “feed reader”; RSS files themselves are known as “feeds”) you can view the latest information put on your favourite websites without going to them at all. And the titles, text, images, links etc can all be laid out in pretty much any way you want.
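For the curious, here’s roughly how a program might read one of these handy files. The feed below is made up for illustration (the titles and links are invented, though it follows the same general shape as the Wiblog one), and the parsing is a minimal Python sketch using the standard library:

```python
import xml.etree.ElementTree as ET

# A minimal, made-up RSS 2.0 feed. The channel and item details here
# are invented for illustration only.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Geektimes</title>
    <link>http://www.wiblog.com/geektimes/</link>
    <item>
      <title>RSS explained</title>
      <link>http://www.wiblog.com/geektimes/rss-explained</link>
      <description>What RSS is and why it matters.</description>
    </item>
  </channel>
</rss>"""

def read_feed(xml_text):
    """Pull the channel title and the item headlines out of an RSS feed."""
    root = ET.fromstring(xml_text)
    channel = root.find("channel")
    items = [(item.findtext("title"), item.findtext("link"))
             for item in channel.findall("item")]
    return channel.findtext("title"), items

title, items = read_feed(FEED)
print(title)  # Geektimes
for headline, link in items:
    print(headline, "->", link)
```

That’s all a feed reader is doing under the bonnet: fetching this file and turning the titles and links into something readable.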
The feed reader I prefer is called Bloglines, and is actually a website at www.bloglines.com. I like it because it keeps track of what I’ve read and what I haven’t, so I don’t have two lists of feeds I read at work and at home. I can, if I want, start reading an article at work and finish it at home, and Bloglines will remember what bits I’ve read.
Using Bloglines, or indeed any other feed reader, you “subscribe” to the feeds you want to read, and the software takes care of going to the website to see if any new information has been posted. So for news sites I see headlines with links to the articles within minutes of them being posted, delivered right to my desktop. For Amazon I can keep check on the most popular selling items in any category. And of course I can keep up to date with dozens of weblogs very easily.
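The “going to the website to see if any new information has been posted” part is simple in principle: fetch the feed again and compare the items against the ones seen last time. A rough sketch, with the actual fetching left out and hypothetical data standing in for two fetches of the same feed:

```python
def new_items(previous_links, current_items):
    """Return the items whose links were not seen in the last fetch."""
    seen = set(previous_links)
    return [item for item in current_items if item["link"] not in seen]

# Hypothetical fetches of the same feed, a few minutes apart.
first_fetch = [{"title": "Monday post", "link": "http://example.com/1"}]
second_fetch = [
    {"title": "Monday post", "link": "http://example.com/1"},
    {"title": "Tuesday post", "link": "http://example.com/2"},
]

fresh = new_items([i["link"] for i in first_fetch], second_fetch)
print([i["title"] for i in fresh])  # ['Tuesday post']
```

Run that on a timer against your subscribed feeds and you have, in essence, a feed reader.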
Wiblog.com has the power of RSS under its belt – there’s a link on each menu (near the bottom) to the RSS feed so people know the address to subscribe to. In fact the blog at stillbreathing.co.uk is powered by the RSS feed of my Geektimes blog on Wiblog.com, so I only have to write my articles once and they appear in both places.
In case someone says “RSS, ugh!” to you, there are several different formats of feeds available. We think RSS is the best (which is why Rhys kindly collected information about it for me a couple of years ago), but there are also other formats such as RDF and Atom. Different blog websites use different, or sometimes multiple, feed formats. It doesn’t matter – most feed readers are capable of reading pretty much anything. That’s one of the great advantages of using the XML language, it’s universal.
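How do feed readers cope with all these formats? One common trick is simply to look at the root element of the feed before deciding how to parse it. A rough Python sketch (real readers do rather more careful sniffing than this):

```python
import xml.etree.ElementTree as ET

def feed_format(xml_text):
    """Guess the syndication format from the feed's root element."""
    root = ET.fromstring(xml_text)
    tag = root.tag.split("}")[-1]  # strip any XML namespace prefix
    if tag == "rss":
        return "RSS"
    if tag == "feed":
        return "Atom"
    if tag == "RDF":
        return "RDF"
    return "unknown"

print(feed_format('<rss version="2.0"><channel/></rss>'))  # RSS
print(feed_format('<feed xmlns="http://purl.org/atom/ns#"/>'))  # Atom
```

Once the format is known, the reader just picks the right parser – which is why, as a subscriber, you never need to care which format a site uses.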
For the curious: XML.com has a good introduction to the syndication phenomenon.
For the technical: Here is the specification for the RSS format.
For webby types: This article will, if you let it, blow your mind as to the possibilities of data sharing on the internet.
Information syndication is fast becoming indispensable for many people. Just think – it’s like having pay-per-view on the web, but without paying. Fantastic.
Fri May 20
Just like Garrett I don’t think I’m much of a designer. I wish I was, or rather I wish I had better graphic design skills to use in conjunction with my technical skills. But that isn’t to be, I fear. And anyone who can do both sides of the web design/development coin well should be slapped. Hard. Otherwise life would not be able to continue with that level of imbalance in the universe.
So articles like this can be very helpful. They aren’t a magic bullet, but they do give some pointers. And I’d agree with the points he made, especially that “content is king”. Gah, how many times have I found myself waiting too long for a client to give me some text – any text! – so I can build a site for them. I find it very strange. After all, you wouldn’t hire a plasterer and decorator to redecorate your house then refuse to let them in, would you? Gah, again. Rant over. For now.
So, good article. Although I think that Andy has a good point. But he must work on bigger projects than I do ;0)
Fri May 20
Earlier today I read this article at Digital Web. My initial reaction was “What is this guy on?!”, but the author, Dirk Knemeyer, is well respected and so his ideas deserve being taken seriously.
I agree with the sentiment that the web is broken, at least from the point of view that, in general, user experiences (shudder, that phrase makes me go cold) are somewhat lacking. Several very high profile sites – which, incidentally, thousands of people still use on a daily basis – are actually pretty naff when it comes to providing an intuitive user interface. Maybe this web generation is pretty forgiving when it comes to slightly off-key site design.
However, the fact that people will use rubbish if they are given it doesn’t mean we should settle for rubbish. Too many times I’ve heard a newbie to the web complain that it’s all too complicated, all these usernames and passwords, URLs, buttons, links, text, adverts etc. And they’re right, it is too complex. Even experienced webbies get confused sometimes because – in essence – the real problem with the web in my view is that there is too much information, and it’s not easy to find the bit you want. That’s not helped by the proliferation of spam that continues to roll in like an unstoppable tide.
Knemeyer’s point is that the web, certainly in its current form, can’t support the kind of rich applications that people want. And there are certainly technical constraints – not just with download times, but response times in general for web servers can vary greatly. And eventually everyone will get annoyed with a website that works well one day and badly the next. Or will they? I hear mutterings from colleagues all round me most days about how their computer has “got it in for me today”. If we expected the same standard of reliability from our computers that we expect from our microwaves, fridge-freezers or DVD players (Ed: well done for not using ‘cars’ as an example in that list) then there would be a lot more very busy computer support people. The fact is that most people expect computer technology to be flaky, prone to its ups and downs. And this, unfortunately, is true when it comes to desktop software.
In the article the following list appears:
- Web applications only have one advantage over desktop applications: universal access and no need for a local installation.
- Desktop applications have many advantages over Web applications, including: more powerful, faster, denser information displays; more robust interaction models; lusher presentation environments; easier natural integration into customized information and personal data collection.
- Given the ubiquity of connectivity (the ability to be online almost anywhere, at any time, on any digital device), the one advantage the Web has is reduced to a software issue. A client-side application can leverage the interactive powers of the Web just as easily as a server-side application.
While I agree with portions of that, I think Dirk is very dismissive of web applications. For one thing they have several other advantages in addition to the one (two?) he mentioned. Firstly, as they require no installation, they require no upgrading or patching. The application exists in one place, and is maintained in just that one place. Any changes are automatically sent to the client. Surely that is a massive advantage over having, potentially, thousands of different versions of a piece of software on desktops all over the world.
And because the demand on system resources from a web browser can be much smaller than from a desktop app, web apps can be run on slower machines, ones without oodles of RAM and a large hard drive. Try running pretty much any modern desktop software, such as Microsoft Office, on a Pentium 233 and see how fast it feels to use. Web apps can also provide many of the features that are found in desktop apps, such as dense information displays (although doesn’t that go against the goal to make the web more useable?) and lush presentation environments. In fact I would say that web apps inherently have a better presentation framework when using CSS to its full advantage. (That reminds me, I have a web app GUI stylesheet that I was going to make available. I must put that online soon). It’s certainly easier to modify the graphics, fonts, layout and menu system of a web app than a desktop app, even with clever use of XML configuration files. That very problem has recently been causing headaches for the .net software developers I work with.
One thing I’m not sure about at all is this:
Instead of designing, creating and deploying a site at the business level, content and specifications can be prepared and pushed forward, converted by the browser or application into the interactive form that each individual customer has specified is preferable.
Is Dirk suggesting that all data, from all websites, can be formatted in a sensible manner by client-side systems? Surely this would be a challenge – for one thing there is a wealth of data out there, much of which can be formatted in similar ways (that’s why RSS is able to do what it does), but there is also much that has to be handled in a very specific, customised way. Relational databases are successful in storing many different types of data because they are so flexible: they understand enough to know that they don’t understand what data they might be called upon to store, so they leave their options open. Would a client-side content presentation system be as flexible when it may not understand the data it is receiving? If so then great; if not then we’re asking for trouble. Most users, I would guess, don’t want to spend ages setting up their “content reader” to format blogs one way, technical manuals another way, e-books another way and product price lists another way. And even if they did, wouldn’t it be better for the producer of that content to provide both – a pre-formatted version and direct access to that data for the more tech-savvy users to manipulate how they wish? Like a well-presented web page with an optional XML feed of some kind.
Of course, that’s what most blogs and RSS aggregators do at the moment: take raw content and style it. With varying degrees of success, I might add. I happen to use Bloglines, but I see the constraints that are found there – not least of which is a clunky and old-fashioned interface. Maybe we should be drawing together all these disparate threads and creating a standard list of microformats to handle different types of data – each data type would then have a default presentation style which could be modified by users. That system could be encapsulated in a series of standards for web apps, so people could subscribe to lots of different data sources, stick with the default data presentation format or format data in the way they want, and travel around having the same view of their data anywhere they go. Maybe there are already specifications like that for some data types. Just as there are specifications for blog feeds (RSS/RDF/Atom), there could be specs for product data feeds put together by a working group consisting of representatives from many industries. Or specs for technical documentation, help files, presentations and any type of data you want. And that data need not all be textual, either. Interesting thoughts.
However, and this is where I backtrack a bit, desktop software has a lot still to say. The new raft of great little tools that plug into your browser – be they search, syndication tools, page customising etc – are only the beginning. Using the power of open file standards and simple protocols, data can be shared very effectively, and the distinction between web and desktop blurred much more than is currently the case. I experimented a while ago with writing my own browser (based, I am ashamed to say, on Internet Explorer) with the aim of loading extra buttons and functions in from a customisable XML file. The idea was to run web applications in a custom browser framework with some user-customisable options. It was never finished, but the idea is possibly one of the ways in which desktop apps and the web can be combined. And then there’s this new GreaseMonkey thing, which already is very, very interesting.
So, is the web really broken? I don’t think so, although we have a lot of challenges ahead of us to make sure it meets the needs of people.
Finally may I apologise for the number of buzzwords in this article. I think I must have leveraged them to death.
Wed May 18
Well, this last week has been extremely busy. As you may be able to tell from the title of this entry (Ed: get on with it, foolish gibberer). In no particular order I have:
- Worked on two intranet systems non-stop
- Looked round 9 houses
- Shortlisted 2 of the said 9 houses as potential new Maisons de Chris
- Had an offer on the house we are selling
- Provisionally accepted said offer
- Cleared out the wash-house
- Driven to my parents to deliver a tree
- Moved offices
It might not sound much, but I feel shattered. And it’s only Wednesday. I did want to talk about a really interesting cartoon explaining micropayment models that I saw a few days ago, but I can’t find it. If anyone knows where it is I’d like to see it again.
In the meantime I will leave you with this.