Blog

Content Battles

Martin has another interesting post, arguing that “Digital content wants to be free, and will seek the path to maximum access.”

He makes a good case based on some examples from photos, broadcast and music. I’ve two points of departure.

Firstly, I think ‘photos, broadcast and music’ are old-media concepts that don’t have a guaranteed right to exist in the new-media world. Online, these map – in a complex way – on to images, audio, video and combinations of those. (FWIW I don’t think ‘streaming audio/video’ is a stable, separate category into the future either – it’s a workaround for limited bandwidth.) It’s a tribute to how embedded that way of thinking is that even an analyst of Martin’s stature and experience paints the world in those terms.

Secondly, the analysis is incomplete without acknowledging that digital content also wants to be expensive. The original information-wants-to-be-free quote was from Stewart Brand back in 1984, and is worth restating in full:

On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.

And that’s what’s been going on with the music/audio industry.

And that’s what’s just starting to go on in the video industries. We’ve got YouTube playing the Napster role and any number of consumer-hostile walled-garden DRM solutions from bone-headed unimaginative existing market incumbents.

These include, alas, a lot of people who should be in a position to Do The Right Thing, but sadly aren’t, such as the BBC and Google. The BBC have made what I reckon is their worst decision of recent years by going for a DRM-ed offering (tied to Microsoft), despite overwhelming public opposition. For stuff the licence-fee payers have already paid for! And Google Video is another disaster: Google is shutting down its video service, and punters who signed up in good faith and bought DRMed video from them now face being unable to play those videos.

The battle in audio is far from over. The battle in video hasn’t really got started.

Leitch Review of Skills

Have just been to a meeting where we discussed, inter alia, the Leitch Review of Skills and its potential impact on the OU.

For those of you who’ve not had a chance to read it cover to cover, the general gist is – surprise, surprise – that the UK needs a lot more skills.  At all levels.

How this is to be achieved varies by level.  The Review urges shifting much more Government resource into basic and intermediate level skills.  It also says there should be far more provision at degree level and above, but that the expansion here should be funded by employers and individuals.  The Review also says that offerings from HE providers must be much more “demand-led”.

The OU’s Council – our ultimate governing body – looked at all this, and I’ve seen the briefing paper they had and indirectly heard their response.  It seems pretty smart.  As I understand it, it goes:

a) The OU is pretty well connected with employers already – though of course we can do better;
b) Don’t for one moment assume that there will be a sudden huge flood of new money into HE from employers – there won’t; and
c) Note that the Government has yet to set out a timetable for implementing the Review – assuming it decides to do so.

There is a lot of potential for exciting stuff post-Leitch, but there are a lot of problems too.  (I’m particularly sceptical of the role the Review envisages for Sector Skills Councils, for one thing, although at least it’s not recommending a whole new machinery for doing that job.)  I think we’ll need to wait and see before anything dramatic arrives.

Joining things up in my head, I think that the Leitch push to be more demand-led, more bespoke, and more cost-effective (all at the same time!) cries out for a Web 2.0-style mass customisation operation.  How we do that at scale, though, is a huge challenge.

Neologism corner: Twittorial

My strong suspicion is that the educational impact of Twitter will, largely, be like text messaging: learners might use it a lot as part of what TRAC would categorise as administration in support of teaching, but it’s not likely to be a major new pedagogical medium. Sure, it’s many-to-many, but it’s many-to-many broadcast, not many-to-many interactive. Great for learning about what your mates are up to at the moment but not so helpful for learning about tensor fields. But I’ve been wrong before.

So, I hereby coin the word Twittorial – or perhaps less trademark-challengingly – twittorial, to mean an educational experience mediated or strongly influenced by microblogging. Not quite sure what an effective twittorial would look like, although I suppose you could turn the concept slightly on its head and use it to describe a Twitter-related HOWTO.

If the word ever takes off, you saw it here first. I can’t find a single mention in search engines. I offer it up freely under a Creative Commons attribution license (the new name for academic good manners): do what you want with it but make it clear where you got it from.

Death of Peter Knight

My boss, colleague, and friend, Professor Peter Knight, Director of the Institute of Educational Technology, died suddenly and unexpectedly last weekend. It was a pleasure and a privilege to work with him. He gave me huge amounts of support and encouragement. His public management style and mine were somewhat different, to say the least, but we worked very well and effectively together as complements. He taught me so much, and there was so much I had yet to learn from him that I never will now. I will miss him profoundly.

It’s hard to come to terms with, and we’re still somewhat in shock. A lot of my time this week has been spent managing the situation, as part of the senior management team in the Institute, and I expect it’ll stay that way for a while yet. I’ve been very struck by how supportive, professional and capable my colleagues are.

Blogging on blogging

I remember the early days of blogging – back in the mid/late 90s when I used to read Dave Winer’s Scripting News and Jorn Barger’s Robot Wisdom regularly. It was actually a bit rubbish: most blogs spent a huge amount of time discussing the value of blogging.

It’s not really changed. If you could somehow get information about such a meta-question out of, say, Technorati, I’d bet a pint that you’d find that the runaway most-blogged-about topic is … blogging. There are nuances in every field: no doubt the numismatic blogoverse discusses subtly different issues to the furry community. But it’s basically the same argument, over and over. Don’t you hate that?

I certainly do … and yet here I am doing just that on my own blog, in response to Martin Weller asking Is Blogging A Good Use Of Time?

He discusses some of the benefits and comes – unsurprisingly! – to the conclusion that those benefits do justify the use of time.

I agree with all his benefits, but my take is slightly different. I’ve had a personal blog for ages, but I put off (work) blogging for as long as possible for the ‘time sink’ reason.

I started this blog because it was becoming more and more indefensible to be doing my job without blogging.

Part of my job is to track new technology and see how it can be harnessed to support OU teaching. To do that, I need to be part of that technology world. And in that world, if you don’t blog, you don’t exist. Simple as that.

Principles 2.0

As part of the ‘BBC 2.0’ project, the BBC have come up with Fifteen Web Principles, according to Tom Loosemore’s blog. John Naughton observes that “Like all great ideas, they’re pretty obvious — once someone else has thought of them.”

They’re good stuff, and fairly obvious, but not, of course, out of the blue – it looks like a hybrid of previously-stated usability principles and Web 2.0 ones. Which is of course what you’d want.

There’s one I don’t entirely buy:

7. Any website is only as good as its worst page: Ensure best practice editorial processes are adopted and adhered to.

That looks much more old-school BBC than Web 2.0 to me. This is, of course, entirely true of content that is centrally managed and presented. But when you’re going for something more open and user-created, you have to live with the fact that some of the stuff there is going to be, frankly, rubbish. Indeed, I’d bet that you’ll have a power-law distribution of quality: a small number of outstanding items, and a very long tail of what looks like dross. Except, of course, a small number of people might find some value in a given individual item … and if your infrastructure works right, you multiply this small value by the much larger number of items and get greater value there than up the other end of the distribution.
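
To put some toy numbers on that (every figure here is invented, purely for illustration): if the value of the i-th most popular item falls off as a power law, the tail’s aggregate value can beat the head’s, even though each tail item is nearly worthless on its own. A quick Python sketch:

    # Toy long-tail value model - every number here is made up for illustration.
    # Assume the value of the item at popularity rank i falls off as i^(-alpha).

    def item_value(rank: int, alpha: float = 1.0) -> float:
        """Value delivered by the item at a given popularity rank."""
        return rank ** -alpha

    n_items = 100_000
    head = sum(item_value(i) for i in range(1, 101))            # top 100 items
    tail = sum(item_value(i) for i in range(101, n_items + 1))  # everything else

    print(f"head value: {head:.1f}, tail value: {tail:.1f}")
    # With alpha = 1, the tail's total (~6.9) beats the head's (~5.2):
    # lots of tiny values really do add up.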

One of my current projects is revamping the OU’s Knowledge Network. This is a knowledge-sharing site for expertise about teaching and learning. We invented it in 1999-2000, and if we’d done it later we’d just have picked up a lot of Web 2.0 stuff. We pretty much re-invented social tagging before it was widespread, for instance, but our implementation was too crufty to catch on (and we didn’t get it right from the start). The reason we haven’t just dumped it in favour of open tools is that the ability to give fine-grained control over access to the information – integrated with the OU’s existing security infrastructure – is a key feature.
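
To give a flavour of what fine-grained access control means at the data level, here’s a toy sketch. This is entirely my own hypothetical structure for illustration – the real KN ties into the OU’s security infrastructure, which isn’t reproduced here.

    # Hypothetical sketch of per-item access control on a tagging site.
    # The real Knowledge Network integrates with the OU's existing security
    # infrastructure; this just illustrates the core visibility check.
    from dataclasses import dataclass, field

    @dataclass
    class Item:
        title: str
        tags: set = field(default_factory=set)
        allowed_groups: set = field(default_factory=set)  # empty = open to everyone

    def visible_items(items, user_groups):
        """Items this user may see: open ones, or ones sharing a group with them."""
        return [i for i in items
                if not i.allowed_groups or i.allowed_groups & user_groups]

    items = [
        Item("Podcasting pilot report", {"audio", "teaching"}),
        Item("Draft course budget", {"planning"}, {"course-team-x"}),
    ]
    print([i.title for i in visible_items(items, {"tutors"})])
    # -> ['Podcasting pilot report']; the budget needs 'course-team-x' membership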

We’re going live with a facelifted version today, and then we get our teeth stuck into a major change to produce ‘KN 2.0’. We have our own principles for this, which look similar:

  1. As open as possible
    The Web 2.0 philosophy of radical user empowerment is very much in tune with the direction of the original KN. Anyone can publish, and there are no gatekeepers.
  2. As secure as necessary
    Radical openness is not appropriate for some of the information in the KN. Fine control over access enables free discussion of information that needs to remain confidential.
  3. Expertise exists in people, not computers
    Try to enhance existing person-to-person links for knowledge exchange, not replace them.
  4. Don’t duplicate effort
    Take advantage of other systems wherever possible. Don’t try to do what other services do better (e.g. quality-assured document repositories and gold-standard archival).
  5. Play well with others
    Make it easy for the KN to work with other systems and processes, e.g. by open-sourcing (the entire system or new components) and creating/using open APIs and standards.

It’ll be obvious to anyone that principle 2 is the ringer in Web 2.0 terms. Just like the BBC, we’ve got our own departure from the Web 2.0 philosophy. That doesn’t worry me. In fact, if we had no difference, we really should be just using what’s already out there – or, of course, joining in with developing it, like we’re doing with Moodle.

Location, location, location

Have just been to one of our regular Technology Coffee Mornings, where people take turns to explain/demo some technology. I did one myself a while ago about RSS. Today’s was by Patrick McAndrew, and his topic was Geocaching.

He made it look easier than I thought – although he was at pains to explain that the technology is all still very flaky, doesn’t work a lot of the time, and needs a lot of technologist glue to get it working well. He’d even written his own Google Maps mash-up to help out with getting data from a PC to his GPS-equipped HP PDA. His mash-up makes it easy to capture an image from Google Maps with two positions marked, and export the locations of those positions in a format the PDA software understands. Transfer the image and the two locations to the PDA, match the marks on the image with the locations, and bingo – the image is synchronised with the GPS data and he can get a live position on a real map. (Later: He’s already blogged in more detail about how it works.)
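
For the curious, the two-point calibration trick is just linear interpolation in each axis. A minimal sketch of the idea (my own reconstruction, not Patrick’s code, and it assumes an unrotated, north-up map image over a small area):

    # Two-point calibration of a map image (a reconstruction of the idea,
    # not Patrick's actual code). Assumes a north-up, unrotated image, so
    # lat/lon vary linearly with pixel y/x over a small area.

    def make_pixel_to_latlon(ref1, ref2):
        """Each ref is ((x, y) pixel, (lat, lon)) for a marked position;
        returns a function mapping any pixel to an approximate lat/lon."""
        (x1, y1), (lat1, lon1) = ref1
        (x2, y2), (lat2, lon2) = ref2

        def pixel_to_latlon(x, y):
            lon = lon1 + (x - x1) * (lon2 - lon1) / (x2 - x1)
            lat = lat1 + (y - y1) * (lat2 - lat1) / (y2 - y1)
            return lat, lon

        return pixel_to_latlon

    # Two marks on the image with known GPS positions (invented numbers):
    to_latlon = make_pixel_to_latlon(
        ((120, 80), (52.03, -0.71)),
        ((560, 430), (52.01, -0.69)),
    )
    print(to_latlon(340, 255))  # the midpoint pixel -> roughly (52.02, -0.7)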

He also mentioned that more and more phones have a GPS chip on them, even if they don’t yet have any software to make use of it.

It’s not a new observation, but I was very struck that location-based stuff is going to be very, very big in the next five years. If you have a GPS chip on your phone, you can know where you are. Connecting that up to a central database, you can know what interesting things are nearby – for whatever your current value of “interesting” is. And if your mates also have a GPS chip on their phone, you can know where your mates are. I predict that the social networking possibilities that affords will take off massively. (Unless the networks kill it with silly walled-garden approaches, or absurdly expensive offerings, which they might. But that should only delay it a few years until the chips become even cheaper and widely available as a standalone device.)
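
The “what interesting things are nearby” half of that is essentially a radius query over a table of points of interest. A toy sketch (invented data and a brute-force scan; a real service would use a spatial index rather than checking every point):

    # Toy "what's interesting nearby?" lookup. Data and names are invented.
    from math import asin, cos, radians, sin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometres."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    points_of_interest = [
        ("Friend: Alice", 52.025, -0.710),
        ("Geocache: canal bridge", 52.040, -0.760),
        ("Tutorial group meet-up", 51.990, -0.700),
    ]

    def nearby(lat, lon, radius_km=5.0):
        hits = []
        for name, plat, plon in points_of_interest:
            d = haversine_km(lat, lon, plat, plon)
            if d <= radius_km:
                hits.append((name, round(d, 2)))
        return hits

    print(nearby(52.02, -0.71))  # everything within 5 km of 'me'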

There are services that sort-of do this already, but you have to all be signed up to the same service – none yet have anything like critical mass – and uploading where you are now isn’t as automatic as it’d need to be. Twitter’s success at SXSW was – I reckon – an example of this. (Of course, it didn’t have the precise GPS data, but it worked fine as a physical social networking tool there because it was a restricted domain so you could easily and unambiguously specify where you were within a tiny bit of human-generated text.)

But where’s the learning?!

Harder to answer. On MOBIlearn – a large EU-funded mobile learning project I worked on a short while ago – we found that “lazy planning” of learning activities was one of the things you could do with the technology that you couldn’t do otherwise. (Traditionally, you schedule a learning activity in advance and let everyone know the time and place it’s happening; with lazy planning, you text/phone them at some arbitrary time and get them together that way. It’s just a learning version of what people do these days when going out – instead of agreeing “7.30 in the Red Lion” in advance, at around 7.30 you text each other saying “I’m in Red Lion where RU?”.) This sort of location-based stuff would sit very well with that.

It might also link up with the new university model/’Open Universities’/skunkworks idea that Martin Weller’s been working on. He’s blogged about it being a very long-tail operation: a way of getting the small number of people with very niche learning interests together. I think the location-based stuff could also help with more popular learning interests: you’d be able to get the small number of people with the popular learning interest who happen to be nearby together.

Thinking about it, that’s pretty much what we do as the OU with our tutorial system. For a small-population course, we might have a handful of tutorial groups spread all over the place. (E.g. our online MA, where a tutor group might have students in Thurso, Margate, Lille, Abu Dhabi and Wellington.) For a large-population course, we can arrange many more tutorial groups so that for most students there’s one in their part of the city or the nearest market town.

But that’s very slow turnover stuff: the groups form for the whole course, which is typically 9 months. I’d imagine the location-based social networking stuff would be more about groups forming for an hour or two.

I’m now starting to think about how Reed’s Law of group-forming networks reckons that the value of a group-forming network grows more like O(2^N) than the O(N^2) that Metcalfe’s Law says a telecoms network does, and how enhancing the group-forming aspect of a given network – say of OU students – will therefore dramatically enhance its value … but this post is already too long and I need to head off to the next thing.
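
For the record, the counting behind those big-O claims: Metcalfe’s Law counts the pairwise links, N(N-1)/2, while Reed’s Law counts the possible subgroups of two or more people, 2^N - N - 1. The two diverge astonishingly fast:

    # Metcalfe vs Reed: pairwise links grow as O(N^2), but the number of
    # possible subgroups (size >= 2) grows as O(2^N).

    def metcalfe(n: int) -> int:
        return n * (n - 1) // 2

    def reed(n: int) -> int:
        return 2 ** n - n - 1

    for n in (10, 20, 30):
        print(f"N={n}: pairs={metcalfe(n):,}  possible groups={reed(n):,}")
    # N=10: pairs=45  possible groups=1,013
    # N=20: pairs=190  possible groups=1,048,555
    # N=30: pairs=435  possible groups=1,073,741,793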

(And I’ve not put in the links here, sorry – but it’s a real content post!)

Hello world!

First post! And the big question for any new blog: is it a one-post wonder, a month-long marvel, or an ongoing project? Find out here first!

There are all sorts of tricky issues with a work-related blog. How much detail to make public? Will the employer approve? All of which are excuses I’ve used not to get started. But this is a start.