Blogging on blogging

I remember the early days of blogging – back in the mid/late 90s when I used to read Dave Winer’s Scripting News and Jorn Barger’s Robot Wisdom regularly. It was actually a bit rubbish: most blogs spent a huge amount of time discussing the value of blogging.

It’s not really changed. If you could somehow get information about such a meta-question out of, say, Technorati, I’d bet a pint that you’d find that the runaway most-blogged-about topic is … blogging. There are nuances in every field: no doubt the numismatic blogoverse discusses subtly different issues to the furry community. But it’s basically the same argument, over and over. Don’t you hate that?

I certainly do … and yet here I am doing just that on my own blog, in response to Martin Weller asking Is Blogging A Good Use Of Time?

He discusses some of the benefits and comes – unsurprisingly! – to the conclusion that those benefits do justify the use of time.

I agree with all his benefits, but my take is slightly different. I’ve had a personal blog for ages, but I put off (work) blogging for as long as possible for the ‘time sink’ reason.

I started this blog because it was becoming more and more indefensible to be doing my job without blogging.

Part of my job is to track new technology and see how it can be harnessed to support OU teaching. To do that, I need to be part of that technology world. And in that world, if you don’t blog, you don’t exist. Simple as that.

Principles 2.0

As part of the ‘BBC 2.0’ project, the BBC have come up with Fifteen Web Principles, according to Tom Loosemore’s blog. John Naughton observes that “Like all great ideas, they’re pretty obvious — once someone else has thought of them.”

They’re good stuff, and fairly obvious, but not, of course, out of the blue – it looks like a hybrid of previously-stated usability principles and Web 2.0 ones. Which is of course what you’d want.

There’s one I don’t entirely buy:

7. Any website is only as good as its worst page: Ensure best practice editorial processes are adopted and adhered to.

That looks much more old-school BBC than Web 2.0 to me. This is, of course, entirely true of content that is centrally managed and presented. But when you’re going for something more open and user-created, you have to live with the fact that some of the stuff there is going to be, frankly, rubbish. Indeed, I’d bet that you’ll have a power-law distribution of quality: a small number of outstanding items, and a very long tail of what looks like dross. Except, of course, a small number of people might find some value in a given individual item … and if your infrastructure works right, you multiply this small value by the much larger number of items and get greater value there than at the other end of the distribution.
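The long-tail arithmetic is easy to sketch. A minimal illustration (my own toy numbers, not anything from the BBC principles): if item value falls off as a power law, say 1/rank, the total value in the tail can exceed the total value in the head, because the harmonic sum grows like ln(n).

```python
# Toy illustration of the long-tail value argument (assumed 1/rank
# power law; the specific numbers are hypothetical).
def item_value(rank):
    """Value of the item at a given quality rank."""
    return 1.0 / rank

n_items = 100_000
head = sum(item_value(k) for k in range(1, 101))            # top 100 items
tail = sum(item_value(k) for k in range(101, n_items + 1))  # everything else

# The harmonic series grows like ln(n), so the tail's total
# (~ ln(100000) - ln(100) ≈ 6.9) beats the head's (~ ln(100) + 0.58 ≈ 5.2).
print(f"head: {head:.2f}, tail: {tail:.2f}")
```

The point isn’t the exact exponent, just that lots of individually near-worthless items can add up to more than the few stars.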

One of my current projects is revamping the OU’s Knowledge Network. This is a knowledge-sharing site for expertise about teaching and learning. We invented it in 1999-2000, and if we’d done it later we’d just have picked up a lot of web 2.0 stuff. We pretty much re-invented social tagging before it was widespread, for instance, but our implementation was too crufty to catch on (and we didn’t start right). The reason we haven’t just dumped it in favour of open tools is that the ability to give fine-grained control over access to the information – integrated with the OU’s existing security infrastructure – is a key feature.

We’re going live with a facelifted version today, and then we get our teeth stuck into a major change to produce ‘KN 2.0’. We have our own principles for this, which look similar:

  1. As open as possible
    The Web 2.0 philosophy of radical user empowerment is very much in tune with the direction of the original KN. Anyone can publish, and there are no gatekeepers.
  2. As secure as necessary
    Radical openness is not appropriate for some of the information in the KN. Fine control over access enables free discussion of information that needs to remain confidential.
  3. Expertise exists in people, not computers
    Try to enhance existing person-to-person links for knowledge exchange, not replace them.
  4. Don’t duplicate effort
    Take advantage of other systems wherever possible. Don’t try to do what other services do better (e.g. quality-assured document repositories and gold-standard archival).
  5. Play well with others
    Make it easy for the KN to work with other systems and processes, e.g. by open-sourcing (entire system or new components), creating/using open APIs and standards.

It’ll be obvious to anyone that principle 2 is the ringer in Web 2.0 terms. Just like the BBC, we’ve got our own departure from the Web 2.0 philosophy. That doesn’t worry me. In fact, if we had no difference, we really should be just using what’s already out there – or, of course, joining in with developing it, like we’re doing with Moodle.

Location, location, location

Have just been to one of our regular Technology Coffee Mornings, where people take turns to explain/demo some technology. I did one myself a while ago about RSS. Today’s was by Patrick McAndrew, and his topic was Geocaching.

He made it look easier than I thought – although he was at pains to explain that the technology is all still very flaky, doesn’t work a lot of the time, and needs a lot of technologist glue to get it working well. He’d even written his own Google Maps mash-up to help out with getting data from a PC to his GPS-equipped HP PDA. His mash-up makes it easy to capture an image from Google Maps with two positions marked, and export the locations of those positions in a format the PDA software understood. Transfer those two to the PDA, match the marks on the image with the locations, and bingo – the image is synchronised with the GPS data and he can get a live position on a real map. (Later: He’s already blogged in more detail about how it works.)
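The two-reference-point trick is worth spelling out. A rough sketch of the idea (my own reconstruction, not Patrick’s actual code): once you know both the pixel position and the real lat/long of two marked points, any other pixel on the image can be converted by linear interpolation, assuming the map is north-up and not rotated.

```python
# Sketch of two-point georeferencing: map image pixel coords to lat/lon
# given two known reference points. Assumes a north-up, unrotated map
# with a locally linear projection; all coordinates here are made up.
def make_pixel_to_latlon(p1_px, p1_ll, p2_px, p2_ll):
    (x1, y1), (lat1, lon1) = p1_px, p1_ll
    (x2, y2), (lat2, lon2) = p2_px, p2_ll
    # Degrees per pixel along each axis (the two marks must differ
    # in both x and y for this to work).
    lon_per_px = (lon2 - lon1) / (x2 - x1)
    lat_per_px = (lat2 - lat1) / (y2 - y1)

    def convert(x, y):
        return (lat1 + (y - y1) * lat_per_px,
                lon1 + (x - x1) * lon_per_px)

    return convert

# Hypothetical example: two marked corners of a map image.
to_latlon = make_pixel_to_latlon((10, 10), (52.06, -0.78),
                                 (410, 310), (52.02, -0.70))
lat, lon = to_latlon(210, 160)  # pixel midway between the two marks
print(lat, lon)
```

A pixel halfway between the marks comes out halfway between the two lat/longs, which is all the PDA software needs to overlay a live GPS fix on the image.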

He also mentioned that more and more phones have a GPS chip in them, even if they don’t yet have any software to make use of it.

It’s not a new observation, but I was very struck that location-based stuff is going to be very, very big in the next five years. If you have a GPS chip on your phone, you can know where you are. Connecting that up to a central database, you can know what interesting things are nearby – for whatever your current value of “interesting” is. And if your mates also have a GPS chip on their phone, you can know where your mates are. I predict that the social networking possibilities that affords will take off massively. (Unless the networks kill it with silly walled-garden approaches, or absurdly expensive offerings, which they might. But that should only delay it a few years until the chips become even cheaper and widely available as a standalone device.)

There are services that sort-of do this already, but you have to all be signed up to the same service – none yet have anything like critical mass – and uploading where you are now isn’t as automatic as it’d need to be. Twitter’s success at SXSW was – I reckon – an example of this. (Of course, it didn’t have the precise GPS data, but it worked fine as a physical social networking tool there because it was a restricted domain so you could easily and unambiguously specify where you were within a tiny bit of human-generated text.)

But where’s the learning?!

Harder to answer. On MOBIlearn – a large EU-funded mobile learning project I worked on a short while ago – we found “lazy planning” of learning activities one of the things you could do with the technology that you couldn’t do otherwise. (Traditionally, you schedule a learning activity in advance and let everyone know the time and place it’s happening; with lazy planning, you text/phone them at some arbitrary time and get them together that way. It’s just a learning version of what people do these days when going out – instead of agreeing “7.30 in the Red Lion” in advance, at around 7.30 you text each other saying “I’m in Red Lion where RU?”.) This sort of location-based stuff would sit very well with that.

It might also link up with the new university model/’Open Universities’/skunkworks idea that Martin Weller’s been working on. He’s blogged about it being a very long-tail operation: a way of getting the small number of people with very niche learning interests together. I think the location-based stuff could also help with more popular learning interests: you’d be able to get the small number of people with the popular learning interest who happen to be nearby together.

Thinking about it, that’s pretty much what we do as the OU with our tutorial system. For a small-population course, we might have a handful of tutorial groups spread all over the place. (E.g. our online MA, where a tutor group might have students in Thurso, Margate, Lille, Abu Dhabi and Wellington.) For a large-population course, we can arrange many more tutorial groups so that for most students there’s one in their part of the city or the nearest market town.

But that’s very slow turnover stuff: the groups form for the whole course, which is typically 9 months. I’d imagine the location-based social networking stuff would be more about groups forming for an hour or two.

I’m now starting to think about how Reed’s Law of group-forming networks reckons that the value of a group-forming network grows more like O(2^N) than the O(N^2) that Metcalfe’s Law says a telecoms network does, and how enhancing the group-forming aspect of a given network – say of OU students – will therefore dramatically enhance its value … but this post is already too long and I need to head off to the next thing.
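The gap between those two growth laws is startling even at small scale. A quick numeric comparison (just the textbook formulas, nothing OU-specific):

```python
# Metcalfe's Law values a network of N members at ~N^2 (pairwise links);
# Reed's Law values a group-forming network at ~2^N (possible subgroups,
# i.e. subsets of the membership). Even modest N shows the difference.
for n in (10, 20, 30):
    metcalfe = n ** 2   # number of potential pairwise connections, roughly
    reed = 2 ** n       # number of possible subsets of n members
    print(f"N={n}: Metcalfe ~{metcalfe:>4}, Reed ~{reed}")
```

At N=30 the pairwise value is in the hundreds while the group-forming value is over a billion, which is why making group-forming easier looks like such a lever.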

(And I’ve not put in the links here, sorry – but it’s a real content post!)

Hello world!

First post! And the big question for any new blog: is it a one-post wonder, a month-long marvel, or an ongoing project? Find out here first!

There are all sorts of tricky issues with a work-related blog. How much detail to make public? Will the employer approve? All of which are excuses I’ve used not to get started. But this is a start.