A web generation, and the marginal cost of publication has fallen to nearly zero

Martin Weller has got a bit irked by someone likening open access journals to stealing bread, and makes the obvious-to-us points:

Open access is not like stealing bread from Tesco for two VERY SIMPLE reasons:

i) If I take a loaf of bread from Tesco then it is no longer there. If I read an online article it is still there for everyone else. We knew this back in 1998 when we talked about non-rivalrous goods. Can we stop using this misguided analogy now? Pleeeease?

ii) You could make the argument that each download of pirated music equals a sale lost (although that is a flawed argument I think). So maybe the argument has some validity for products you want to sell (big maybe for me). But for journal articles it really has no validity – the whole point of research publications is that you want as many people to read it as possible.

I agree, although there’s some interesting stuff buried in point (ii): for academics, researchers and the General Public Good, the whole point of research publications is for as many people to read them as possible – or more precisely, for everyone to whom they are relevant to read them. But for a for-profit company, the whole point of any activity is to make money for the shareholders. The happy part of the history of capitalism is that generally, the most sustainable way of making money for shareholders is to do things that people value. Without getting too far into the whole critique-of-capitalism argument, if restricting access makes more money for a commercial publisher, then the company has a legal duty to do that. The interests of academics, researchers, the General Public Good and those of commercial publishers were very happily aligned – or at least, very usefully overlapping – for perhaps a couple of hundred years. I think that this is ceasing to be the case, now that the marginal cost of publication has fallen to nearly zero.

That last bit warrants a little unpacking: the marginal cost of publication has fallen to nearly zero. The marginal cost of publication is the cost to produce one additional copy. That’s not the same as the average cost, or the cost of producing the first copy. And the cost has fallen to nearly zero – not actually zero. There is a difference, and it can be a very important one in some circumstances.
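
To make the distinction concrete, here’s a toy back-of-envelope sketch – the numbers are entirely invented, purely to illustrate marginal versus average cost:

```python
# Toy illustration of marginal vs average cost, with invented figures.
# first_copy_cost: everything spent before copy one exists (editing,
# typesetting, review administration, servers, and so on).
# marginal_cost: what one *additional* copy costs to deliver.

def average_cost(first_copy_cost, marginal_cost, copies):
    """Total cost spread across all copies produced."""
    return (first_copy_cost + marginal_cost * copies) / copies

# Print journal: every extra copy costs real money to print and post.
print(average_cost(first_copy_cost=10_000, marginal_cost=5.0, copies=1_000))       # 15.0
# Web publication: every extra download costs a fraction of a penny.
print(average_cost(first_copy_cost=10_000, marginal_cost=0.001, copies=1_000))     # ~10.0
print(average_cost(first_copy_cost=10_000, marginal_cost=0.001, copies=1_000_000)) # ~0.011

# The marginal cost is nearly zero, but the average cost never quite gets
# there while the first-copy costs still have to be paid by someone.
```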

The difference between “the cost of publication is zero” and “the marginal cost of publication is nearly zero” can sometimes be negligible – but sometimes it can be profound. And, of course, the price and who pays it is another matter altogether, or at least can be. This deserves further exposition in the context of scholarly publishing, but this margin of my time is too small to contain it.

Anyway.  Martin concludes:

“It’s not as if digital content is new now is it?”

Indeed not. I was just thinking that this morning. I think of myself as working on new technology in teaching, and when I came to the OU (1998) you could use “web-based teaching” fairly interchangeably with “new technology in teaching”. It’s largely the web which has driven the marginal cost of publication down to nearly zero, and the changes were well afoot even then.

But the web isn’t really very new these days. The web got going in April 1993, when CERN said the WWW would be free to use and Mosaic was released. People born then are sitting their AS levels right now, and hoping to go to university next year.

I like to use 1993 as the start date for the web because (a) it’s when it started to catch on, and (b) it’s when I heard about it, which is obviously a key moment in any technology’s lifecycle (!). (As I like to point out before issuing any technological prognostications, my verdict at that point was that it was rubbish and would never catch on. I have since changed my mind.)

If you want to be purist, it’s even more stark. You could date the birth of the web to when Sir Tim had the first complete set of web tools working (browser, editor, server, pages): Christmas 1990. People born then are going to graduation ceremonies right now.

It’s easy to overstate and oversimplify the case for a Net Generation – as my colleague Chris Jones and his team will tell you. But I don’t think it’s going too far to say that it’s hard to call a technology entirely new if it’s been around for a generation.

Pointless babble or social grooming?

There’s a kerfuffle on Twitter at the moment about a study of Twitter that found that “Twitter tweets are 40% [pointless] babble”.  I know, I know, another new media self-referential navel-gazing situation.

But it serves as a good example of a really important point about teaching with new technology.  Teachers have to be immersed in digital technologies if they’re going to make good use of them in learning.

There’s lots to criticise about this particular study in terms of its methodology, and I’m sure other blogs will oblige if you really care.  (I personally was quite amused at the great examples of way-too-precise percentages and of trawling findings out of obviously too-small datasets that appear in the ‘full’ PDF report – and also mildly entertained at the way they pad out the report with cut-and-paste stuff, including the silly ‘Teens don’t Tweet’ story that danah boyd demolished.)

But these fish-in-a-barrel shortcomings don’t really matter: in broad-brush terms, it’s probably roughly right that traffic on Twitter is about 10% self-promotion and spam, 10% news broadly considered, 40% conversation, and 40% ‘pointless babble’.

Fundamentally, though, this study (almost) entirely misses the point of what people on Twitter experience.  It sampled the Twitter public stream, which is the total assemblage of what everyone using the service is producing.

But what looks like ‘pointless babble’ isn’t pointless, if it’s from people you know or care about.  It’s social grooming, it’s keeping in touch.  It’s what most human conversation is about.  If you think this stuff is pointless babble, you’re really not going to enjoy parties.  Or indeed be likely to maintain fulfilling personal relationships.  On Twitter, you get to choose whose ‘pointless babble’ you want to follow.  Almost nobody who actually uses Twitter uses it by reading the public stream.

If you learn about Twitter by reading these sorts of reports, you’ll get a bizarre view that really tells you very little about what the service is actually like to use.

And this brings me to the general point about teaching with new technology: you can do the most methodologically sound research about Twitter you like, but without a decent appreciation of what it is to use the service, you’re going to struggle hard to make sensible use of it in teaching.

Now I’m certainly not arguing that research into new technologies is not valuable to teachers: it’s hugely important.  (And I would say that – it’s a large part of my job!) But without a practical perspective – and I suspect that means personal experience – it’s all but impossible to use research to devise good learning experiences for actual learners.

It’s like the old caricature of book larnin’ that has someone teaching themselves to swim by reading a book, without ever setting foot in the water.  It’s self-evidently ridiculous. Cutting-edge research into swimming is helping to create swimming costumes that dramatically improve swimmers’ speed. Would-be swimming coaches who stay abreast of that research might think it would be a good idea to get their hapless learners to wear those for their half-hour learning sessions … failing utterly to appreciate that the gains only come to elite swimmers, and that it can easily take up to half an hour to struggle into the high-tech suits.

This is very similar to Martin Weller’s Pathetic Sharks argument (see p21/22): if we don’t dive into these technologies, we run the risk of being like Viz magazine’s Pathetic Sharks, who looked scary, but were too scared to actually go into the water.

And it’s what I was getting at in my all-time number one hit blog post, We Have A Mountain To Climb.  The surface issue there wasn’t Twitter, but me being the only laptop user in a lecture, and thus annoying everyone else in the room when I banged away on my keyboard.  (Ironically, the subject of the lecture – given by the OU’s then-VC – was an eloquent argument that ‘scholarship in this university, in this century, has to be irrevocably tied to the technology and knowledge media‘.)

The deeper issue is identical: Most teachers in higher education are not getting practical experience of digital technologies.  It’s just not part of their daily practice.  They’re not immersed in it; many have barely dipped their toes in.  Even if we could get the very best ed tech research to their fingertips (hard enough), they’re never going to make great use of new technologies in their teaching without that practical experience.

Changing the everyday practice of educators is going to be hard. But we have to do it.

Higher Education is in the early stages of a transformation that’ll be at least as profound as the upheavals that digital technologies are bringing to the music and newspaper industries.  There’s a huge opportunity – and a huge challenge! – for us at the OU and in other universities to lead innovation here.

If we don’t, we’re in real trouble. But if we can ride the wave instead of letting it crash over us, it’s going to be an extremely exciting time for teachers and learners.  And to do that, we – as a community – have to be practitioners in the space we’re trying to innovate in.

Update: danah boyd is riffing on the same silly Twitter study, effectively as ever.

Digital residents and digital tourists

I think we should stop talking about “digital natives” and “digital immigrants” altogether. It’s unhelpful and unclear. A better distinction might be between “digital residents” and “digital tourists”.

I’ve never liked the terms “digital native” and “digital immigrant”, as introduced/popularised by Prensky, and the “born digital” idea as applied to people (rather than, say, media artefacts) is profoundly problematic. I’m not the first or only person to raise this – lots of people have criticised it. (And with very flaky Internet access at the moment, I can’t link to or cite them … which is a bit annoying but saves me the bother – good job this isn’t a proper academic paper.)

Firstly, there are important moral issues in appropriating language about indigenous people and human migration. I really don’t think the parallels are helpful or instructive here.

Secondly, there’s the fact that the categories are not fixed in generational terms: as is widely attested, there are plenty of retired-age people who have great facility with digital technologies, and spend large amounts of time online, and plenty of teenagers who struggle with them and find them overwhelming and alienating. (And the application to students starting at university is particularly problematic: the proportion of mature students is not negligible and is rising.)

Thirdly, it attributes an inherent unchangeability to one’s approach to and use of technology: on this view, one cannot aspire or attempt to become a digital native – one either is or one isn’t. But there are plenty of people who come to digital fluency at a later stage in life than infancy.

Fourthly, it unhelpfully sets up an insurmountable barrier of incomprehension between teachers (by definition digital immigrants) and learners (by definition digital natives).

I do buy, however, that there are important qualitative differences between people who are familiar with digital technologies and can use them with fluency, facility and creativity, and those who can’t.

So a much better metaphor, I think, is to contrast “digital residents” with “digital tourists” – or perhaps “digital visitors”.

Digital residents are familiar and comfortable with digital technologies, use them as part of their everyday lives, and therefore – to a greater or lesser extent – tend to take them for granted.

Digital tourists, however, are not familiar with digital technologies, and struggle to make good use of them. Some are enthusiastic, gushing admirers; at the other end of the spectrum, some loathe every moment of their visit and leave quickly, vowing never to return.

Often the things the digital tourists find compelling are very different to the things that digital residents do – partly because of the effect of novelty, and partly because of the amount of time spent there. And as a result, they tend to behave very differently in what’s superficially the same context.

Balancing the needs of tourists and residents is a well-known social problem in the physical world. It’s easy for tourists to be unaware of the huge impact they can have on the residents, and it’s also easy for residents to be unwelcoming as a result. But it’s entirely possible for the two communities to co-exist very happily in the same space, recognising that they each contribute something valuable. And frequent tourists might, over time, find that they have more-or-less settled in the place they originally came to as visitors and have come to know and love. Correspondingly, a longstanding digital resident might decide to leave – or at least take a holiday. (Plenty of very-online people take a break away from the net for a while now and then.)

The potential tension between tourist and resident is likely to be much less contentious and intractable in the digital world. One of the fantastic things about the digital world, as opposed to the physical one, is that in many ways that matter, more people being there tends to make things better. That’s true in some contexts in the physical world, but not all. If you want to settle and build a house, you have to find somewhere to put it: physical land is (often) a very limited resource, and is what economists call a ‘rivalrous good’ – if I own and build a house on a piece of land, you can’t. But digital “land” is (often) not a limited resource in the same way: me having this blog in no way stops you or anyone else setting up a blog.

There are, of course, plenty of people who would dearly love to visit the digital world and perhaps settle there, but lack the opportunity. And we shouldn’t forget the people who are perfectly happy with their non-digital lives and just get on with them. For completeness and entertainment value, we could also include digital xenophobes, who’ve never actually spent any time in the digital world, but still bang on about how awful (they assume) things are there – often spouting ill-informed and hostile speculation.

There are still problems with this metaphor. It’s still dichotomising (either one thing or the other), when I’m pretty sure it’s much more of a spectrum. But I think it’s a lot more helpful and accurate.

Edit: (with more connectivity) Juliette points out in the comments that plenty of people have already proposed digital residents/digital visitors (as a quick search confirms). There are fewer mentions of digital tourists in this context, although I did stumble on this guide to being a digital citizen, not a digital tourist.   I don’t think one should necessarily aspire to being a digital citizen – tourism is perfectly legitimate, so long as it’s done sensitively.  And the perspicacious and legendary John Naughton (and I’m not just saying that because he’s agreeing with me here!) draws a helpful parallel with his experience as a tourist in Provence.

The murky issue of licensing

Over on the excellent new-look Cloudworks, there’s a debate going on about what to do about licensing of content on the site.  There are two questions: one is what licence to choose, and the other is what to do about the stuff that’s already on the site (!).  The latter I’m going to discuss over on that article, since it really only applies to Cloudworks, but which licence is the Right One is a bigger and more general question.

This is far from a settled issue in the educational community. There’s reasonable consensus that Creative Commons licences are Good, rather than more restrictive ones, but as you probably already know, there are multiple Creative Commons licences.   The details are set out nicely on the Creative Commons Licenses page.  If you’re releasing material, there are basically four conditions you can choose to apply:

  • Attribution (BY – must give credit)
  • Share Alike (SA – derivative works allowed, but only if licensed under the same or a compatible licence)
  • Non-Commercial (NC – only if not for commercial purposes)
  • No Derivatives (ND – can only distribute verbatim copies, no derivative works)

You can combine (some of) these to create six distinct licences – Attribution is part of all of them, and Share Alike and No Derivatives can’t be combined:

  • Attribution (CC:BY)
  • Attribution Share Alike (CC:BY-SA)
  • Attribution No Derivatives (CC:BY-ND)
  • Attribution Non-Commercial (CC:BY-NC)
  • Attribution Non-Commercial Share Alike (CC:BY-NC-SA)
  • Attribution Non-Commercial No-Derivatives (CC:BY-NC-ND)
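
If you want to convince yourself that those four conditions really do boil down to exactly six licences, here’s a minimal sketch of the enumeration (my own illustration, not anything official from Creative Commons; the only rules it encodes are that Attribution is always included and that Share Alike and No Derivatives are mutually exclusive):

```python
from itertools import combinations

# The three optional conditions that can sit on top of Attribution (BY).
OPTIONAL = ["NC", "ND", "SA"]

def valid(conditions):
    # ND forbids derivative works; SA only governs how derivatives must be
    # licensed -- so the two can't sensibly be combined.
    return not ("SA" in conditions and "ND" in conditions)

licences = []
for r in range(len(OPTIONAL) + 1):
    for combo in combinations(OPTIONAL, r):
        if valid(combo):
            licences.append("CC:BY" + "".join("-" + c for c in combo))

print(licences)
# ['CC:BY', 'CC:BY-NC', 'CC:BY-ND', 'CC:BY-SA', 'CC:BY-NC-ND', 'CC:BY-NC-SA']
```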

There’s also a newish licence called CC0, which is intended as a way of unambiguously releasing a work into the public domain, free of any restraint.

So – assuming your aim is to promote the widest possible access to the material – which licence should you choose?

There is a big, fundamental argument going on in the Open Educational Resources (OER) and wider online educational community about this, with Stephen Downes and David Wiley perhaps the most articulate/notable exponents of two different positions.  To caricature horribly (and my apologies to both), Stephen Downes’ position is that the most effective licence is CC:BY-NC-SA, and David Wiley’s is that simple CC:BY is better (or even CC0).  This is overlaid (or perhaps underpinned) by a difference of approach: a strongly principled one versus a pragmatic one.  (If you’re at all interested, I really do recommend digging into their ongoing conversation about this.  A good starting place might be this post by Stephen Downes, and this one by David Wiley, mostly on NC – or these lists of the responses of one to the other.  If you’re going to Open Ed in Vancouver, they’re planning to debate each other face-to-face, which should be very illuminating.  One recent contribution to the debate is Wikipedia’s move to CC:BY-SA.)

The argument for minimal licensing (CC:BY or less) is in essence that the other conditions create unnecessary barriers to reuse and distribution.  So, for instance, insisting on Non-Commercial would stop a company distributing printed copies of the work for profit, which might make it more available than it would otherwise be.  The arguments for more restrictive licensing include a fear that commercial interests will crowd out the free sources, using their greater marketing leverage, and that requiring Share-Alike keeps the ‘openness’ attached to the work.

There are obvious parallels with the Free/Open Source Software debate: there, the ideologically-pure line (what you might call the Richard Stallman view) has not been anything like as widely-adopted as the more flexible one (what you might call the Linux view).  Being widely-used, of course, does not mean that the approach is right.

For educational resources, my current personal view is that CC:BY is the licence of choice, where possible.

It’s the least restrictive licence and presents the lowest barrier to sharing. All the other CC licences create speed bumps (or worse) for people who want to use or remix material.

We know that re-use is not widespread default practice in the educational community, and adding in extra barriers seems entirely the wrong tack to me.  If you want to use something, the extra conditions create headaches that make it – for most practical purposes – at least look easier and quicker to just re-create your own stuff.  It’s hard enough persuading colleagues that it’s a good idea to re-use material where possible rather than re-creating it, never mind if they also need a long course in Intellectual Property Rights to understand what they can and can’t do with it.  Each of the qualifications to a simple CC:BY adds extra questions that the potential reuser needs to think through.

We can dismiss ‘No-derivatives’ fairly easily: it’s an explicit barrier to remixing or editing.   As a potential user, you have to think about things like how much you’re allowed to quote/reuse as fair use/comment.  And if you are prepared to simply copy it verbatim, what constitutes verbatim?  What if you change the font?  Or print it out from an online version?  Put a page number, heading or links to other parts of your course at the top or bottom?  Can you fix a glaring and misleading typo?

‘Non-commercial’ is also full of tricky questions.  Most universities are not commercial for these purposes … except that not all university activities are covered.  What about using it on a website with ads on it?  Like, say, your personal academic blog that’s hosted for free in exchange for unobtrusive text adverts?   What about a little ‘hosted for free by Company X’ at the bottom?  A credit-bearing course where all the students are funded by the State is clearly not commercial in this sense … but what about one where (in the UK context) they’re all full fee-paying foreign students?  Or a CPD-type course where there’s no degree-level credit and the learners all pay fees?

‘Share-alike’ means you have to worry about whether the system you’re wanting to use the material on allows you to use a CC licence or not.  Does, say, your institutional VLE have a blanket licence that isn’t CC-SA compatible?  And what if you want to, say, produce a print version with a publisher who (as most do) demands a fairly draconian licence?

For any given set of circumstances, there are ‘correct’ answers to most of these questions.  (And they’re certainly not all ‘No you can’t use it’ in many situations that obtain in universities.)  But you need to be pretty savvy about IP law to know what they are.  And even then, a lot of it hasn’t been tested in the UK courts yet, so you can’t be certain. Worse, what you want to do with the stuff when you’re reusing it may change in future – you might start off making a free online course, but then it might take off and you want to produce a book … but you can’t because some of the stuff you used had NC attached.  Or you might want to transfer your free non-assessed online course to a more formal for-credit version in your University on the institutional VLE … but you can’t because some of the material had SA attached.

You can be a lot more confident about future flexibility if you stick to CC:BY material, and there’s a lot less worry about whether you’re doing it right.  So my view is that if you want to release material to be re-used as widely as possible, CC:BY makes your potential audience’s life much easier.

Complete public domain release would – on this argument – be even better, except that as an academic, I see attribution as crucial and fundamental, so I can’t let go of that!

I’m not overwhelmingly ideologically committed to this position: it’s very much a pragmatic view of what is most likely to get the best outcome.  I certainly don’t dismiss the counter-arguments about the dangers of commercial, closed pressures: they are real.  But I think on the balance of probabilities that the ease-of-reuse argument outweighs those, and CC:BY is usually the licence of choice.

Farewell Vista

The IT news today is full of reports that most purchasers of Windows PCs will from now on be able to upgrade their system from Windows Vista to Windows 7, for little or no money, when it becomes available in October.  This – along with Windows 7’s ‘XP Mode’ – is indeed probably the death knell of Windows Vista.  Which will probably be unlamented by many.

That was such an appalling vista that every sensible person would say, ‘It cannot be right that these actions should go any further.’

That’s not about Windows: it’s actually Lord Denning’s fatuous reasoning for dismissing the Birmingham Six’s application for leave to appeal in 1980, on the startling grounds that if they succeeded in overturning their conviction for the pub bombings, it’d make it clear to everyone that there had been the most shocking and extensive fit-up. Which, of course, there had been.  ‘Appalling vista’ became a bit of a buzzphrase among people campaigning for the Birmingham Six’s eventual release.  The phrase has been coming to mind again recently.

It remains to be seen, though, whether Microsoft’s loss of traction with Vista – coupled with the explosion of platforms that aren’t conventional desktop PCs – is a recoverable blip, as Windows ME was, or a clear turning point in the history of IT.

I wouldn’t bet against Microsoft’s ability to sell software at scale – they are very good at it. Writing off a company that huge with that large a cash pile and that many smart people would be daft.

But I am sure, as I said in my Babel post, that multiple platforms are here to stay, and the times when you could assume that nearly everyone using a computer had Microsoft Windows are long gone.

(Though as people have pointed out in comments and directly to me, they never really existed anyway.)

A new Babel

There’s an explosion of platforms to develop applications on at the moment, which is exciting in many ways – lots of new environments and possibilities to explore.  But it makes life harder for everyone – people who are making things, and people who are choosing things.

Back in the mid to late 90s, it was pretty much a PC world.  If you wanted a computer, you knew that if you had a PC, then (apart from a few vertical niche markets), you’d have access to pretty much any new software or tool that came out.  People who made things could develop for the PC and know that nearly everyone (who had a computer) could use their stuff, apart from the small minority of people who’d deliberately chosen a computer that didn’t use what everybody else was using.

And then from the late 90s to the mid 00s, it was pretty much a web world.  For the most part, if you had a computer and an Internet connection, you’d have access to pretty much any new tools that came out.  People who made things could develop on the web and (with a bit of futzing around with browser-specific stuff), pretty much everyone (who had a computer and an Internet connection) could use their stuff.

But now there’s not just PCs, Macs and Linux computers, there’s not just Internet Explorer, Firefox and Safari, there’s also the iPhone, Android (G1 – HTC Dream etc), Windows Mobile, Symbian/S60  (e.g. Nokia N97 and N86, out today), and the entirely new environment (webOS) for the Palm Pre (due any minute).  All of these are separate environments to use and to make things for.

It’s a nightmare.  As a user, or a developer, how do you choose?  How do you juggle all the different environments and still get stuff done?

Because juggling multiple environments is where things are now.

This is all part of an ongoing transition.  When computers first arrived, there were lots of people for every computer.  Microsoft started out with the then-bold ambition of “a computer on every desk and in every home, running Microsoft software” – a computer for every person.  Now we’re well into the territory of lots of computers for every person.

This makes for harder work for everyone – to get the best out of things as a user or developer,  you need to be polyglot, able to move between platforms, learning new tools routinely.

It’s also, though, a hugely exciting range of opportunities and possibilities.   We are very much still in the middle of a golden age of information technology.

@stephenfry 20x better than BBC News

As I mentioned, we launched the Evolution Megalab in the UK yesterday.  It was on the Today programme (weekly audience around 6.5m), on the BBC News website (weekly audience around 14m), and on various regional broadcasts including BBC Scotland. We were hoping for a bigger splash on the BBC – we promised them an exclusive in return – but they cancelled the larger-scale broadcasts at the last minute.  No matter – it was a big trad-media splash, and kept at least four-and-a-half people at the OU busy most of the day (Kath the media contact, Jonathan and Jenny the media faces, Richard the programmer, plus me and others spending some time).

We got about 2400 unique visitors as a result.   Which is by far our busiest day … yet.

(We got about 1000 when we launched similarly on German national TV last month.)

Devolve Me is a related project – which I had nothing to do with – and is a bit of silliness that lets you morph a photo of yourself (or a loved one, or indeed a hated one) to look like an early hominin.  The site was pootling around nicely at about 1500 hits a day, and then a certain Stephen Fry tweeted:

Indebted to @iRobC once more: See how you’d have looked as an early human – OU site http://tinyurl.com/9tacnt #devolve Coolissimo

… and it got 52,500 hits. As our press release about this points out,

The spike in traffic to the OU website illustrates the growing influence that social media is having in today’s communications, with people increasingly sharing links and sourcing their news feeds online.

A single tweet by a single power user on a single social network gets you more than twenty times more exposure than mass broadcasts to tens of millions.

Cut another way, of the order of 1 in 1000 people who heard about the Evolution Megalab via the BBC visited the site, but about 1 in 7 people who saw Stephen Fry’s tweet visited that site. (He has over 360,000 followers at the time of writing.)
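
For what it’s worth, here’s the back-of-envelope arithmetic behind those ratios, using only the rounded figures quoted in this post:

```python
# Quick sanity check on the ratios above, using the rounded figures from
# this post (so order-of-magnitude only).

fry_hits      = 52_500    # Devolve Me hits after the tweet
fry_followers = 360_000   # @stephenfry followers at the time
bbc_visitors  = 2_400     # Evolution Megalab unique visitors after the BBC coverage

print(f"Tweet conversion: about 1 in {round(fry_followers / fry_hits)}")  # about 1 in 7
print(f"Hits ratio: about {fry_hits / bbc_visitors:.0f}x the BBC's")      # about 22x

# The BBC conversion rate is fuzzier: 2,400 visitors out of however many of
# the ~6.5m Today listeners and ~14m BBC News website readers actually
# caught the item -- hence 'of the order of 1 in 1000'.
```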

It is, of course, all about audience and targeting. I’d bet the majority of people following Stephen Fry on Twitter would be mildly interested in a cool website about evolution; and I’d bet that the overwhelming majority of the Today audience isn’t.

I suppose I shouldn’t be surprised, but I am – at least at the scale of the difference.  The old ways of getting messages out are being superseded while we watch. Sometimes dramatically.

Wild woods vs walled gardens

On the way back from the discussion of the Future of the Internet earlier today, I walked through campus and saw yet again two contrasting ways of growing plants:
[Image: Wild wood (OU campus)]

The wild wood is a fairly heavily-wooded area in the middle of campus, with trees above an understorey of largely annual shrubs, herbs and grasses.
[Image: Walled garden (OU campus)]

This walled garden is just outside Walton Hall, the Georgian manor house from which the campus takes its name.

I’m not the first one to note that these two contrasting approaches seem to have strong parallels with different ways of organising technical or educational projects (Martin Weller is fond of an ecosystem/succession model of HE institutional use of technologies, for instance). Or indeed with any human endeavour.

But it’s on my mind, and a good excuse to be outside in the Spring sunshine, and to (implicitly) sing the praises of lightly-managed endeavours like woods, meadows and hugely successful applications of generative technologies.

With a walled garden, you plan the layout of the garden, and spend intensive effort making sure that everything is neat and tidy and fits with the plan you’ve made.  It’s largely about control.  With a wild wood, things are a lot less tidy and constrained.

However, it’s a serious mistake to think that the two approaches are entirely at odds.

A walled garden needs regular attention, but with good planning that can be minimised, and if you imagine that you can control precisely what the plants do there … you obviously haven’t been gardening very long. You have to work with what actually grows, not your ideal of what should grow and how.

A wild wood also needs management and attention – it may not look it but it does – even when mature.  Certainly, over-management is generally a bigger danger than prolonged periods of what the medics call ‘expectant management’.  But if you don’t promptly remove invasive species you can easily end up with a stand of nothing but Japanese knotweed.  And you probably need some mechanism to keep the woody plants from over-dominating:  either natural grazing – a problem if you’ve fenced out all grazing animals in an effort to protect your wood – or something else. (This particular wood gets an annual cut of the meadow-like herbaceous lower layer – meadows are another of my favourite habitats.)

Creating a wood is a hard and long job.  You can do some things to speed up the process – though not much if you’re going for semi-natural woodland – but you’re looking at many decades, not a few years, before it looks anything like mature.  It can require processes of planting and weeding – at least in the early years – as intensive as any walled garden’s.  It certainly won’t look precisely like you imagined it at the start, if you’re even around to see it.

And all that is assuming that the local environment supports the sort of woodland you’re wanting. If it doesn’t, you’re looking at tremendously intensive inputs, if you can achieve it at all.

Walled gardens can be spectacularly beautiful and peaceful places.  But I much prefer woods.

Video is rubbish

I’ve had an idea in the back of my head for ages for a post on how fundamentally rubbish video is as a medium on the Internet. (Rough outline: it’s not the quality/bandwidth/storage capacity issue – that’s still a problem, but will fade. The problem is fundamental to the nature of high-intensity visual media. You can’t skim. Reading speed is way higher than spoken speed. Audio suffers similarly, but you are usually doing something else while you listen. In an attention economy, video is far and away the most expensive format. Lauren Weinstein, writing in RISKS earlier this year, makes the contrary case that you need video to capture subtleties of expression.)

I was prodded again by Martin’s recent discussion about David After Dentist, the latest viral video. (Outline notes: I think that what this shows is that viral stuff, especially video, is the antithesis of what higher education is about – very surface, very little consideration or thought required.)

But now I’ve been prodded by some solid proof that video is terrible – just take a look at these videos of me on YouTube.

Here’s me talking about my talk last week on Scholarly Publishing 2.0:


Here’s me talking about the Biodiversity Observatory late last year:

And here’s me saying Learning Design is going to be big, five years ago at a Lab Group meeting:

On the plus side, these are all good ideas. (Latching on to LD five years ago looks prescient, though I hadn’t grokked that what would get it to scale was relaxing the insistence on stringent standards.)

On the negative side: it takes an awfully long time to grasp these ideas from the videos. That’s partly an artefact of my dreadful presentational style (I’m aware of my propensity to um and ah, but these make it painfully obvious – and note to self: look at the camera, dum-dum, not shiftily all over the place – and letting that experimental beard be captured for posterity was a terrible mistake). But it’s largely because video is such a slow medium.

Getting away from screens

After yesterday’s session on multi-touch surfaces, I saw that Rhodri Thomas tweeted:

v interesting demo earlier on use of ‘Surface’-like multitouch table – but are we ever going to get away from interacting with screens?

Which got me thinking about the degree to which we already interact with computers without screens.  I was also reminded of a rather staggering (but believable on exploration) claim I heard on the radio last week from a guy from Intel, who reckoned that more microprocessors would be manufactured in the next year or two than currently exist in the world.  The overwhelming majority of these are not in computers-as-we-know-them: they’re buried away in embedded applications.  So this morning I thought I’d try to note all the microprocessors I’d interacted with other than by traditional screens, from getting up to sitting down at my first traditional computer screen to type this.  Some of these are slight cheats since they do have displays (e.g. the central heating timeswitch), but they’re not the sort we usually think of.  (We do need some leeway here: if by ‘display’ you mean any way in which a processor can make its state known to a human and/or vice versa, then interaction without one is by definition impossible.) Anyway – a rough quick list:

  • central heating system – the timeswitch programmer to turn it on, and more processors in the boiler itself to run the system
  • bedside clock
  • fridge/freezer (for milk) – thermostatic and frost-free control working away
  • microwave
  • kettle – not certain since older models are purely electro-mechanical, but this one’s brand new and I strongly suspect there’s at least one processor in there managing overheating/boil dry and possibly actively optimising the heating process
  • radio
  • umpteen electronic toys used by the kids
  • electric shower – controlling the flow and heating rates

Then I left home and got in the car:

  • car – engine management system, and possibly other subsystems I don’t really know about, oh, and another radio
  • streetlights – some were still on suggesting they’re individually controlled (time? light?) rather than centrally switched – must have passed hundreds of these or more
  • SID – Speed Indicating Device – measured my speed, flashed it up on a display, then a smiley face to say it was under the limit
  • Pelican crossing with lights
  • level crossing with lights

And then I got to campus and towards my building:

  • More lighting
  • Security barriers
  • CCTV cameras
  • RFID security card entry system
  • automatic doors
  • heating blower behind the door
  • building management system controlling temperature and ventilation – this does have a traditional screen view but I don’t interact with it that way
  • lighting controllers
  • coffee machine

… a pretty large haul, and that’s not taking into account any of the processors helping deliver utilities I used (gas, electricity, water).  It rather swamps the number of traditional screens I’ll be interacting with today: phone, iPod touch, laptop, desktop.  And of course those themselves rely on a large number of less visible processors running the network and power systems, and the hundreds of computers (or more) I’ll interact with more directly online today.