Software design


17
Jun 10

Digital simulacra and the iPad human interface guidelines

This was originally posted as a comment on an article in UX Magazine about the iPad human interface guidelines. I was reminded of it today by this blog post at Ben.geek.nz about the forthcoming Windows Phone 7 UI design. While I haven’t seen a WP7 device in the flesh, it looks as if it may come closer to the spirit of innovative digital design I invoke below. It remains to be seen, and as always, god is in the details.

This conversation would be funny if it weren’t so depressing.

So here we have what is supposedly one of the world’s leading technology companies launching what it calls a “magical and revolutionary” product. And what does it do? It goes and encourages developers to build twee simulacra of physical objects. How unmagical. How unrevolutionary. How dull. Apple have seriously employed top-flight designers and developers to build digital representations of address books and books and goodness knows what else that computers are designed to get rid of. And by “get rid of” I mean “eliminate as a concept” not “replace with a digital lookalike”. Now they want everyone else to do the same. No thanks. This is 2010 not 1910.

Continue reading →


17
Dec 09

What’s the point of a tweeting mobile library?

@SutMobLib Twitter screenshot

Last week I launched @SutMobLib, a Twitter account that tweets the location of Sutton’s mobile library in real time. No, I’m not sitting here all day sending messages. A program does that automatically. Every time the library gets to a new stop it posts up its location.

Continue reading →


15
Mar 09

Building a local news mashup with Twitter, TwitterFeed, Delicious, Yahoo! Pipes, Ruby and RSS

sutton-local-news-mashup

(Click on the image to download the PDF, 19KB, opens in new window/tab.)

Like this? Follow me on Twitter: http://twitter.com/adrianshort

I’m a self-confessed and unashamed news junkie and this is how I’m starting to mash up news in my local area. For those that aren’t local, Sutton is a London borough with a population of approximately 180,000. Stonecot Hill is a neighbourhood within Sutton with a population of a few thousand.

Here’s how it all works.

Sources (green boxes)

I write Stonecot Hill News which is a local news blog running as a standalone WordPress installation on its own server. It produces an RSS 2.0 feed which here is treated as an outbound API.

Paul Burstow is the local member of parliament (constituency: Sutton & Cheam). Paul posts news regularly to his website and for many years that site has been serving an RSS 1.0 (RDF) feed. Whether he realises it or not, Paul laid one of the first foundations for news mashability in the borough.

The Sutton Guardian is the local newspaper, published by Newsquest. Together with its sister titles in other areas, they publish several dozen RSS 2.0 feeds for a wide variety of content.

Sutton Council is the local authority for the borough. Despite a recent £270,000 revamp of their website, they haven’t yet managed to step into the twenty-first century and produce any RSS feeds. However, they do publish a variety of content regularly on their website, including their press releases.

APIs (grey boxes)

For the non-technical: API stands for Application Programming Interface, but that doesn’t tell you very much. Think of an API as a connector or adapter that allows one program to plug into another, in the same way that our household appliances can all connect to the electrical network because they share common plugs and sockets.

An API may be inbound (allowing data to be put into an application), outbound (allowing data to be extracted) or both.

As we can see in the diagram, applications which use APIs can be daisy-chained together, with the output of one application being fed into another.

RSS and Atom feeds are also APIs in that they provide a structured way for a program to get data out of an application. These feed formats are simple to implement (many applications produce them automatically) and are the first thing to consider when implementing a simple outbound API for an application.
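To show how little is involved, here’s a minimal sketch in Ruby that builds an RSS 2.0 feed by hand from invented items (the titles, links and dates are all made up for illustration):

```ruby
require 'cgi'   # for escaping text into XML safely
require 'time'  # adds Time#rfc822, the date format RSS expects

# Hypothetical items, as a blog engine might hold them internally.
items = [
  { title: 'Roadworks on Stonecot Hill', link: 'http://example.com/1',
    date: Time.utc(2009, 3, 14) },
]

# Build a minimal (but valid) RSS 2.0 document by hand.
rss = %Q{<?xml version="1.0"?>\n<rss version="2.0"><channel>}
rss << '<title>Stonecot Hill News</title>'
rss << '<link>http://example.com/</link>'
rss << '<description>Local news</description>'
items.each do |i|
  rss << '<item>'
  rss << "<title>#{CGI.escapeHTML(i[:title])}</title>"
  rss << "<link>#{CGI.escapeHTML(i[:link])}</link>"
  rss << "<pubDate>#{i[:date].rfc822}</pubDate>"
  rss << '</item>'
end
rss << '</channel></rss>'

puts rss
```

In practice a blog engine generates this for you, which is exactly why feeds are such a cheap outbound API to offer.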

Mashers (pink boxes)

Mashers are small programs that connect otherwise incompatible inbound and outbound APIs together. TwitterFeed is a simple example. Say you want to automatically post the new items from your blog to your Twitter account. Your blog serves an RSS feed but Twitter, while it has an inbound API, cannot accept RSS directly as input. TwitterFeed links the two, allowing the user to define any number of RSS feeds as inputs and any number of Twitter accounts as outputs, via the Twitter API. In this way, TwitterFeed plugs blogs into Twitter.
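A toy version of that masher logic might look like this in Ruby. Here post_tweet is a stand-in for a real call to the Twitter API, and the feed items are invented:

```ruby
# A toy masher in the spirit of TwitterFeed: take feed items,
# skip anything already posted, and push the rest to an output.
TWEETS = []  # collects output so the example is self-contained

def post_tweet(text)
  TWEETS << text  # a real masher would call the Twitter API here
end

seen = {}  # GUIDs we've already posted, keyed for fast lookup

feed_items = [
  { guid: 'a1', title: 'Mobile library at Green Lane', link: 'http://example.com/a1' },
  { guid: 'a2', title: 'New post on Stonecot Hill News', link: 'http://example.com/a2' },
]

feed_items.each do |item|
  next if seen[item[:guid]]          # don't tweet the same item twice
  post_tweet("#{item[:title]} #{item[:link]}")
  seen[item[:guid]] = true
end

puts TWEETS
```

The remembered-GUIDs check is the crucial bit: a masher that polls a feed every few minutes must not repost items it has already sent downstream.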

Yahoo! Pipes is a much more sophisticated and flexible masher. It can take inputs from a variety of sources (RSS, Atom, CSV, Flickr API, Google Base or even raw web pages), sort, filter and combine them in every conceivable way, and output the results as a single stream in various formats (RSS, JSON, and KML, the geo-format used by Google Earth). For my mashup I created this pipe to filter Paul Burstow’s, the Sutton Guardian’s and Sutton Council’s news and only pass through items containing the word “stonecot” to the stream that eventually ends in the @stonecothill Twitter feed, which is just for Stonecot Hill residents. The number of items coming through these sources about Stonecot Hill is very low, but when something appears residents will want to see it. (By way of example, only a single press release from Sutton Council in the last 227 concerns the Stonecot Hill area specifically.)
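Stripped of the plumbing, the pipe’s core logic is just a keyword filter. A sketch in Ruby, over invented items:

```ruby
# The heart of the pipe: keep only items mentioning the keyword,
# matching case-insensitively against title and description.
KEYWORD = /stonecot/i

items = [
  { title: 'Council tax to rise', description: 'Borough-wide increase' },
  { title: 'Pelican crossing approved', description: 'Work starts on Stonecot Hill in May' },
]

matches = items.select do |item|
  "#{item[:title]} #{item[:description]}" =~ KEYWORD
end

matches.each { |item| puts item[:title] }
```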

As mentioned above, Sutton Council doesn’t provide an RSS feed or any other kind of outbound API for its press releases. I wrote a screen scraper in Ruby (using Hpricot) that grabs the press releases directly from the council website, dumps them into a MySQL database and pushes new items into the Delicious API. I’ve used Delicious here for two reasons. Firstly, it generates an RSS feed automatically from all the items posted to it, so I can easily connect this output to other mashers and APIs further downstream without having to generate and host an RSS feed myself. Secondly, Delicious provides a useful search facility on its website, allowing me to easily search just the press releases from Sutton Council. This isn’t possible with the council’s own website, where searches are scoped to the entire site.
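The “push new items into the Delicious API” step boils down to one authenticated request per press release against the old del.icio.us v1 posts/add endpoint. A sketch that only builds the request URI (the release data here is invented, and the actual request, which needs HTTP Basic auth with a real account, is left out):

```ruby
require 'uri'

# One new press release, as the scraper might extract it (invented).
release = {
  url:   'http://www.sutton.gov.uk/press/123',
  title: 'Council opens new recycling centre',
}

# The del.icio.us v1 API adds a bookmark via /v1/posts/add.
# Here we only build the request URI; sending it is left to the reader.
query = URI.encode_www_form(
  'url'         => release[:url],
  'description' => release[:title],
  'tags'        => 'sutton pressrelease'
)
request_uri = URI("https://api.del.icio.us/v1/posts/add?#{query}")

puts request_uri
```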

Destinations (orange boxes)

In my diagram, the destinations are sites and services which represent new ways of consuming information coming from the original sources. Don’t want to read Sutton Council’s press releases on their own website? You can follow them in Delicious or on Twitter. Want to keep up with the latest news about Stonecot Hill? Again, the @stonecothill Twitter account can find this for you from various sources. I also add my own items to @stonecothill, making it a unique mashup of original and syndicated content that’s highly targeted and very local.

The information stream doesn’t need to end with these destinations. Any destination that provides an outbound API can simply be another link in the chain to downstream services. In my diagram, the RSS feed from Delicious is used to do just that, pushing all its content on to the @suttonboro Twitter account, and just the Stonecot Hill-related content on to the @stonecothill account via the Yahoo! Pipes filter. Twitter has its own specific outbound API and also serves RSS feeds. There’s nothing to stop anyone else building on these destinations by combining and filtering them with other sources to produce their own unique, relevant information streams that they find useful.

What next?

If you run a website, it’s time to start thinking of mashability with the same degree of seriousness as you treat human visitors. Your website needs to serve up feeds and APIs so that other programs can connect to your content and deliver it to people in ways and contexts that they find useful. Some of these may have an audience of thousands or even millions. Others may have an audience of one. Regardless, by providing an API to your content you enable others to build things that you haven’t imagined, don’t have the resources or desire to build yourself, and won’t have to maintain. Businesses like newspapers that survive by selling their content (or selling advertising around their content) are thinking very carefully about the challenges and opportunities for the future of their industries. For government and voluntary organisations, it’s time to start thinking more like evangelists than economists. Spread the word like the free Bibles in hotel bedrooms and take every opportunity to get your message out there.

Sutton Council have been encouraged in various ways to implement feeds on their own website and the song will remain the same until they do. I don’t want to maintain my scraper for ever and I certainly don’t want to build any more of them.

The whole API and mashability agenda is far bigger than simple web feed formats like RSS and Atom. It’s time for technologists to stop flogging the line that “RSS is an easy way for people who follow lots of websites to read all their news in one place”. Direct human consumption of RSS feeds is never going to hit the mainstream in that way. If you’re reading this, you’re far more likely than the average web user to use an RSS reader (I’ve got 86 feeds in my Google Reader right now). The average web user has barely heard of the concept and most definitely doesn’t do it. I suspect they never will. But it’s likely they’re already benefiting from syndicated content through the sites and applications they use. If they never have to see or care about the underlying technology, that’s really no more of a problem than worrying that the average web user doesn’t understand HTTP or DNS. It’s just plumbing that can stay out of sight and out of mind as long as it works.

For the minority that do use personal RSS readers, I’d like to see more of them with built-in filtering features. Setting a simple keyword filter on a feed makes RSS reading considerably more powerful.

For those serving up feeds, I’d like to see Atom more widely used. Without wanting to open a can of Wineresque worms, RSS 2.0 fudges a number of important issues around content semantics and provides no support whatsoever for correctly attributing items in feeds mashed from several sources. Atom was designed to solve these problems and it does. Let’s use it.
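For instance, an Atom entry republished in an aggregated feed can carry its origin in a source element, something RSS 2.0 has no standard equivalent for (the URLs and dates below are placeholders):

```xml
<!-- An aggregated Atom entry keeps metadata about the feed it
     originally came from in a <source> element (RFC 4287). -->
<entry>
  <title>Pelican crossing approved</title>
  <id>http://example.com/posts/42</id>
  <updated>2009-03-15T12:00:00Z</updated>
  <source>
    <title>Stonecot Hill News</title>
    <id>http://example.com/</id>
    <updated>2009-03-15T12:00:00Z</updated>
  </source>
</entry>
```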

Lastly, mashability is about every conceivable kind of content and content type. It’s not just about news and text. Every stream of information should have its own machine-readable feed. Every system that can accept data from human input should implement an inbound API so that programs can submit data too. To take one example, FixMyStreet is a website for people to report street faults to local authorities and currently takes around 1,000 reports a week. It even has its own iPhone application so people can report faults, complete with GPS locations and photos, directly from the street. Only a single local authority in over 400 has implemented an inbound API to receive these reports. The rest get them by email, which must be manually copied into their own databases, with all the effort, expense, possibility for error and opportunity costs that represents. Third parties building extensions to other people’s systems is no longer unusual, so organisations need to embrace the possibilities rather than fight them or stand around looking bemused.

It’s time to open the doors and windows and get the web joined up, mashed up and moving.


2
Mar 09

My thermometer has got an API

Stonecot Hill thermometer

Meet my thermometer.

It’s an old-school analogue device, probably at least 50 years old. I don’t expect it’s very accurate, certainly not by scientific standards. It hangs outside my door and every now and then I take a look at it and record its reading.

But this is no ordinary thermometer. It’s equipped with a capability that would probably have been inconceivable to the people who made it: it has its own API, so that programs can “read” it, automatically, across the Internet.

Before explaining how you can use the API I should explain why I’ve bothered to do this.

By any sensible standard, this project in itself is almost completely useless. I haven’t created any kind of clever link between the thermometer and the database in which I store its readings. I just read it when I feel like it. Some days I might get five or six readings. (You can get more sophisticated computer-connected thermometers that record readings automatically, but I don’t have one.) Then I might go for five or six days without reading it at all. Combined with what I assume is a fairly low level of accuracy (a result of the device itself, its positioning and the possibility of human error in reading and recording its value), the data produced seems almost worthless.

There is a crucial difference between almost worthless and completely worthless.

This project is my first contribution towards the Internet of Things. The thermometer is a very simple sensor, producing, as I’ve explained, very low quality data. The API is incredibly basic. There is only one query: read the thermometer. And the result is the following hash:

temperature_c: latest (not current) temperature, in centigrade
ts: Unix timestamp of the latest reading
lat: approximate latitude of the thermometer
lng: approximate longitude of the thermometer

This thermometer is a very simple device with a very basic interface. If you’re relying on it to provide up-to-date, accurate temperature data, you’re a fool. But imagine if a dozen people in my neighbourhood did exactly the same thing — all reading their not-particularly-accurate thermometers whenever they felt like it and updating their databases. As the number of thermometers increases the chance of being able to find a sufficiently-recent reading from one of them improves. And as the data from all these thermometers is being read by a program (or programs), it’s quite possible to use statistical techniques to smooth out any apparent errors.
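One simple smoothing technique is to take the median of the available recent readings, which shrugs off a single wild value. A sketch in Ruby with invented readings:

```ruby
# Readings from several neighbourhood thermometers, in degrees C.
# One of them (18.0) is clearly off; the median ignores it.
readings = [7.5, 8.0, 18.0, 7.0, 8.5]  # invented values

sorted = readings.sort
mid = sorted.length / 2
median = if sorted.length.odd?
           sorted[mid]                           # middle value
         else
           (sorted[mid - 1] + sorted[mid]) / 2.0 # mean of middle pair
         end

puts median
```

A mean of these readings would come out at 9.8°C, dragged up by the faulty device; the median reports a far more plausible 8.0°C.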

This is the opportunity afforded by the Internet of Things (sometimes known more generally as ubiquitous computing or everyware). The model is almost completely opposite to current ideas in server and desktop computing, which are focused on a relatively small number of devices with ever-increasing computing power, storage capacity and sophistication. IoT takes a very large number of fairly stupid devices that have, as a minimum, the ability to network with other things and serve up just one piece of useful information, and then mashes them up into something that’s infinitely more powerful than the sum of its parts.

Imagine a world in which most objects know who they are, where they are and can serve up very basic reports on their status to other objects. Such a “system” would be massively redundant. Like the cells in a body, no one object would be necessarily particularly important in the grand scheme of things. It’s the power of being able to combine these things together — and of them being able to combine themselves — that opens up a wealth of new directions. A city in which every lamppost could transmit its precise location would likely be more accurate than GPS. Lampposts don’t move and they don’t get obscured by clouds.

Making this happen won’t require any fundamentally new technological advances, just small improvements to the cost of existing ones. Platforms like Pachube already exist to enable people to network these kinds of devices and build complex, emergent systems on top of them.

Grand schemes aside, using the thermometer’s API is very simple. Just send an HTTP GET request to:

http://www.stonecothillnews.co.uk/thermometer.php

and you’ll get back a JSON-encoded hash as described above. This is the latest reading that has been taken on the thermometer. It could have been a minute or a month ago.

Here’s how you do it in PHP:

<?php
// Fetch the latest thermometer reading and decode the JSON response.
define('ENDPOINT', 'http://www.stonecothillnews.co.uk/thermometer.php');
$contents = file_get_contents(ENDPOINT); // reads the whole response, however long
$data = json_decode($contents);          // decodes to a stdClass object
print_r($data);
?>

and here’s a Ruby example:


require 'open-uri' # adds URI.open, a URI-aware wrapper for open
require 'json'     # stdlib JSON parsing (no ActiveSupport dependency needed)
require 'pp'       # pretty printer

ENDPOINT = 'http://www.stonecothillnews.co.uk/thermometer.php'

# The block form closes the handle automatically when done.
output = URI.open(ENDPOINT) { |handle| handle.read }
data = JSON.parse(output)
pp data
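Whichever language you use, it’s worth checking the ts field before trusting the reading, since it could be arbitrarily stale. A sketch in Ruby using a made-up response rather than a live request:

```ruby
require 'json'

# A made-up response in the documented format; a real client would
# fetch this from the thermometer endpoint over HTTP.
body = '{"temperature_c": 7.5, "ts": 1236074400, "lat": 51.37, "lng": -0.21}'

data = JSON.parse(body)
age_seconds = Time.now.to_i - data['ts']  # how old is the reading?

if age_seconds > 3600
  puts "Stale: last reading #{data['temperature_c']}C, #{age_seconds / 3600} hours ago"
else
  puts "Recent: #{data['temperature_c']}C"
end
```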

I don’t imagine many readers will have a great deal of use for irregularly-updated temperature data from my neighbourhood. But imagine what could be possible with the contributions of millions of devices like this.


9
Sep 08

Estimated date of birth — an interaction design pattern

Context

You want to collect the dates of birth of a group of people so that you can analyse and segment the group by age, but asking for a date of birth isn’t necessary for any specific reason and many people in the group may balk at giving you this private information.

Continue reading →


16
Aug 08

Hack your world

First came the guerilla gardeners, sowing seeds and planting plants in public places without permission.

Then there were the guerilla benchers, installing street seats where the local authority had been too poor or too mean to do it themselves.

On the web, a growing community of civic hackers has been building sites on top of public information to mash it up in new ways that the publishers hadn’t imagined or didn’t have the means or motive to build.

Continue reading →


21
Nov 07

Too much information

A jack has been plugged in

You’d have to get up pretty early in the morning to put one over on the system management software that comes with the Acer Aspire 9300.

A jack has been plugged in!

A jack has been unplugged!

Do you think I don’t realise already? Who’s the one doing the plugging and unplugging?

An important usability principle is to conserve the user’s attention. Let them focus on what matters most. Emphasise the main event, quieten the minor details and remove everything that simply doesn’t need to be shown.

For pity’s sake, don’t pop up a balloon just because I’ve plugged my headphones in.