Technology of memory

I wrote a while ago about the science of memory – I’ve just come across an interesting Wired article about Piotr Wozniak and the whole-life project he has built around an idea about how to remember more stuff, via his software SuperMemo.  The website and software seem terribly clunky, but the idea has some appeal.  It’s at least one, if not two, levels of description up from neurotransmitters, which makes implementation look more convincing.

The finding – allegedly supported by lab research, though I’ve not chased that down (yet?) – is that recall tails off exponentially, that the rate of fall-off reduces with each subsequent reminder, and that there is therefore an optimum pattern of reminder times for learning something, with the gaps stretching longer each time.
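
A minimal sketch of that expanding-interval idea – assuming a simple exponential forgetting curve and a fixed growth factor, which is my own toy model rather than SuperMemo’s actual algorithm – might look like this:

```python
import math

# A toy spaced-repetition scheduler: a sketch of the idea above, not
# SuperMemo's published algorithm. Recall is modelled as decaying like
# exp(-t / stability); each successful review multiplies the stability
# (so the fall-off slows), and the next reminder is due when recall
# would sag to a chosen threshold.

def reminder_times(stability_days=1.0, growth=2.5, threshold=0.9, reviews=6):
    """Return the days, counted from first learning, at which to review."""
    times, elapsed = [], 0.0
    for _ in range(reviews):
        # Solve exp(-t / stability) = threshold for t: how long until
        # recall drops to the threshold.
        wait = -stability_days * math.log(threshold)
        elapsed += wait
        times.append(round(elapsed, 1))
        stability_days *= growth  # the fall-off slows after each review
    return times

print(reminder_times())  # the gaps between reminders stretch out each time
```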

Now, I’m generally suspicious of the notion that really good learning requires a lot of memorisation, for all sorts of reasons (not least that it’s so often used as an excuse to teach only rote memorisation and not do more fundamental stuff).  And I’m also suspicious of the idea of learning facts in isolation.  (I’m also not entirely sold on the idea that there exist unproblematic ‘facts’ that are unproblematically available to educators to deploy.)

But I do buy the argument that a certain amount of memorisation is needed in some areas.  And a structured remembering system for some things – e.g. tasks – seems like a really good idea.  It might be a better plan than the Remember The Milk/Getting Things Done approach too.  (Considered thoughts on which are for another post, but I’m deliberately putting off getting into organisational systems since – for me – it would be a terrible – and ultimately fruitless – distraction from actually getting things done.)

I wouldn’t go as far as Wozniak has and turn one’s entire life into a rational project, but I do believe that it is often possible to change the way one is in quite powerful ways.

BIL and TED’s Excellent Conference Adventures

TED offers “inspired talks by the world’s greatest thinkers and doers” … with registration – sorry, membership – starting at US$6,000.  (You can see some old talks online, though, and some of those are excellent.)

Not exactly inclusive.  So in response, BIL has been created.  BIL is …

an open, self-organizing, emergent, and anarchic science and technology conference.

Nobody is in charge.

If you want to come, just show up.

If you have an idea to spread, start talking.

If someone is saying something interesting, stop and listen.

Unconferences like this are a really interesting idea.  I’ve been to plenty of academic ones, and this sounds well worth doing as an alternative.  (Via BoingBoing.)

When I had a management hat on, I was responsible for running an internal conference for a bunch of about 30 academics and researchers.  It was the one forum where (almost) everyone in the group could talk to (almost) everyone, for two days in the year. I was always keen to minimise the amount of preparatory effort required of the group, while maximising the opportunities for cross-fertilisation of ideas and group bonding.  We tried out all sorts of formats – traditional academic papers, works in progress, panel discussions, workshops, technology fun sessions, and even a great everyone-talks-for-five-minutes day.  (At least, I thought it was great: the feedback was variable.) Of course, the best networking was over coffee, lunch and tea, so allowing plenty of time for those was a fundamental part of the plan.

It was almost but not quite an unconference.  The two main differences were (a) the attendees were a tightly restricted group and (b) we discussed the format in advance as a group, but I had the final say on who talked, when and on what.  I think it’d be fun to try more of an unconference approach the next time we do something similar in our new grouping.

… although I’m wary of saying ‘fun’ because one thing I did learn was that it’s really hard to call something ‘fun’ as a manager and not thereby rob it of all excitement and enjoyment.

(And should Grainne, Patrick and Martin be listening, no, that’s not me volunteering to organise it.)

‘Intellectual property’ is a silly euphemism

As an antidote to the gloom about the Blackboard patent, read Cory Doctorow’s explanation in The Guardian last week (a couple of days before the verdict) of why ‘intellectual property’ is a silly euphemism:

Most of all, it [‘intellectual property’] is not inherently “exclusive”. If you trespass on my flat, I can throw you out (exclude you from my home). If you steal my car, I can take it back (exclude you from my car). But once you know my song, once you read my book, once you see my movie, it leaves my control. Short of a round of electroconvulsive therapy, I can’t get you to un-know the sentences you’ve just read here.

He concludes that

it’s time to set property aside, time to start recognising that knowledge – valuable, precious, expensive knowledge – isn’t owned. Can’t be owned. The state should regulate our relative interests in the ephemeral realm of thought, but that regulation must be about knowledge, not a clumsy remake of the property system.

I do hope we can get there sooner rather than later.  Academics – and in particular, academics engaged strongly with new technologies – can and should be in the vanguard here.

Micro-location data visualisation

So someone mapped their own movements – and those of their two young kids and the cat – around their living room over the space of an hour, and produced a very lovely graphic of the traces.

The method seems terribly laborious to me:

I used a marked-out equally-spaced grid in masking tape and filmed them moving via video across the grid for an hour. I then reviewed the video and plotted their movements on each minute of the video’s timecode onto a ‘room map’ with corresponding grid.

A real labour of love, and a beautiful and fascinating result. It reminds me of the maps you get from eye-tracking studies of websites.

This is exactly the sort of thing I want us to be able to do – without the heavyweight manual data processing – in our shiny new labs, open in April! We can set up the big lab as – say – a living room, and log what goes on when a bunch of users interact with some new technology or other to do a task.  Out of the box we’ve only got video capture, but it’s designed specifically to allow it to be kitted out to do this sort of thing – there’s any number of technologies we could use for the tracking.
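
As a rough sketch of what we might do with that kind of log once it’s captured – assuming nothing fancier than per-minute (x, y) grid positions for each person (the data below is made up) and matplotlib for the drawing:

```python
import matplotlib.pyplot as plt

# Hypothetical per-minute grid positions for each mover, in the spirit
# of the living-room map above; real data would come from the video
# review or from whatever tracking technology the lab ends up using.
traces = {
    "adult": [(0, 0), (1, 0), (2, 1), (2, 2), (3, 2)],
    "child": [(4, 4), (3, 4), (3, 3), (2, 3), (2, 2)],
    "cat":   [(5, 0), (5, 1), (4, 1), (4, 2), (5, 2)],
}

fig, ax = plt.subplots(figsize=(5, 5))
for name, points in traces.items():
    xs, ys = zip(*points)
    ax.plot(xs, ys, marker="o", label=name)  # one trace per mover
ax.set_xlabel("grid x")
ax.set_ylabel("grid y")
ax.set_title("Movement traces, one point per minute")
ax.legend()
plt.show()
```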

(Via Kevin Kelly)

The Computer knows what you’re thinking

Direct brain input could be with us sooner than I thought – this nifty brain-wave reading headset will allegedly be ready for mass sale next Christmas. (via Engadget)

It’s not a new idea at all, but a usable, widespread instantiation could change the way we interact with computers profoundly.  And raise all sorts of exciting new issues of privacy and openness.  I predict a health scare at some point.

Social:Learn breaks cover!

My colleague Martin Weller has at long last blogged about Social:Learn, the OU project formerly known as Skunkworks – an attempt to explore what fully embracing the Web 2.0 world could mean for university learning:

It is born of the recognition that the OU (and higher education in general) needs to find ways of embracing the whole web 2.0, social networking world, and that the only way to understand this stuff is to do it.

It’s hugely exciting.  To my mind it’s at least as big a deal as OpenLearn – and if it works at all, even bigger.  Watch Martin’s blog for more as it comes.  I’m still not sure I understand what it is, but my guess is that this is true for the people more closely involved as well.  The journey is well worth setting out on, regardless of whether we reach the destination, whatever that might be.

(As an aside, the previous secrecy, followed by this semi-official leak, and a public announcement to come later, is another great example of how Web 2.0 openness isn’t total.  It’s more than before, but it’s still partial – and that is very important.)

Trying Ubuntu

I decided to try out Ubuntu so I can live in the Linux world a bit. I have a semi-aged tablet PC (an Acer TravelMate C110) lying around. I used to use it heavily as a totable laptop (ignoring the tablet features). It’s had negligible use since I got a shinier notebook (Samsung Q40), so it was ripe for a low-demand, try-it-out OS installation. I was hoping to do better than my colleague Patrick, who tried out Fedora Core on an old laptop … which then melted. (Oops.)

Summary: It was really much, much easier than you think if you’re technically competent and are at all familiar with Linux. I had far more trouble trying to burn the installation CD (under Windows XP) than I did actually installing Ubuntu on the tablet. So all your mates who tell you Ubuntu is very little bother to install are probably right – if you are fairly technically savvy and have come across Unix at some point as a user. If you’re not, you will probably get bewildered at some point, if not many.

It’s a nice operating system so far. It’s noticeably faster booting and browsing than the old Windows XP system was on the same hardware. I’ve not tried doing anything too clever yet, but for basics it’s great. It is lovely having a shiny GUI but with the gubbins easily accessible under the hood. (And there is a *lot* of gubbins.)

Ubuntu has fantastically simplified the whole process (my previous encounters were with RedHat and SUSE years and years ago) … although even the shiny, user-friendly stuff suffers from open source’s unnecessary forking. Do I want Ubuntu, Kubuntu, Edubuntu … or one of the unofficial versions? Most people don’t know, don’t care, and don’t want to spend precious time finding out.

Next post will be a more detailed install log for those of you who care about such things. (Both of you.)

Physiological basis of memory

Stephen Downes has a fascinating post about the science behind memory, summarising a paper by Nobel prize-winner Eric Kandel on Genes, synapses and memory storage [PDF] – and exploring the implications for learning. It really is excellent and you should read both his post and the original paper.

From studies of Aplysia (the sea slug, one of those classic over-researched model species, like E. coli, Arabidopsis, Drosophila, lab rats and mice, Rhesus monkeys, and Psychology students) Kandel draws out two forms of memory:

  • Short-term storage for implicit memory involves functional changes in the strength of pre-existing synaptic connections.
  • Long-term storage for implicit memory involves the synthesis of new protein and the growth of new connections.

Stephen takes Kandel’s distinction – that ‘Learning refers to the acquisition of new information about the world and memory refers to the retention of that information over time’ – to mean that:

  • Learning is a semantic process. It is about things. It has meaning.
  • Memory is a syntactic process. It is a set of mechanisms. It may or may not have meaning.

As he says, this is a difficult distinction, and I’m really not sure I agree with it in principle. From the biochemistry we know that learning (almost by definition, actually) takes place in relation to one or more stimuli. That doesn’t, to my mind, require that the learning is meaningful. The associations can be entirely arbitrary. Stephen puts it well when he says that “learning is associative, not propositional”. So is memory.

Learning is certainly related to something, but the transduction of external stimuli into synaptic changes in the brain is far from direct, and when you get into associative learning it’s even more complex than that.

I think Stephen may be arguing that only learning can be meaningful, in the sense of referring accurately to the external world. Since the transfer to memory is a separate process, there is a potential loss of accuracy, and hence meaning.

I see two problems with that. Firstly, one can imagine that meaning could arise from the combination of separate learning experiences. It’s only after many encounters with fluffy objects that a baby can understand the difference between a soft toy (that can be safely squeezed or bitten) and a cat (which will hiss and scratch if mistreated). The individual observations make a lot more sense when related to each other. And note that this understanding could be wrong – for instance, the heuristic the child uses to distinguish the two may only work for a limited subset of cats, toys and locations they are found in.

Secondly, and more fundamentally, I think the very concepts of ‘meaning’ and ‘sense-making’ are not compatible with the level of description we’re dealing with here. Meaning is a complex, socially-mediated thing. Membrane depolarisation, glutamate release and protein synthesis are much less so. (As an aside, this is related to my deep lack of faith in the larger claims of the Semantic Web project.)

We’re making huge progress in linking that hard-science base to the more directly socially useful stuff about learning, as the scientific understanding expands. Things like the increasing ubiquity of fMRI apparatus are transforming our understanding of what’s going on physiologically when learning happens. But I don’t think it will ever be possible to straightforwardly and easily move from synapses to semantics, from neurons to meaning.

I’ve an argument brewing for why it’s actually impossible, not just difficult and complex … but that’s for later.

They’re not talking to us

… and while I’m picking nits off Martin’s last post, he says of Bertrand Russell:

But, the whole 2.0, user generated content world would delight him I think.

This reminded me of something I read the other day from the excellent Clay Shirky, arguing that the concept of “user-generated content” isn’t that helpful:

We misinterpret these seemingly inane posts, because we’re so unused to seeing material in public that isn’t for the public. The people posting messages to one another, on social networking services and weblogs and media sharing sites, are creating a different kind of material, and doing a different kind of communicating, than the publishers of newspapers and magazines are.

Most user-generated material is actually personal communication in a public forum. Because of this personal address, it makes no more sense to label this content than it would to call a phone call with your mother “family-generated content.” A good deal of user-generated content isn’t actually “content” at all, at least not in the sense of material designed for an audience.

Why would people post this drivel then?

It’s simple. They’re not talking to us.

Which, I think, we educators could do with bearing in mind more often. Especially as we tread into areas that students think are their space.

Russell on idleness

I’ve long been an admirer of Bertrand Russell – I find him one of the more lucid writers on philosophy. I even carted a battered paperback edition of his History of Western Philosophy around with me as reading matter on cycling holidays years ago – the interesting ideas:weight ratio was excellent.

So I was interested to see my colleague Martin pick up John Naughton’s take on Bertrand Russell’s essay In Praise of Idleness. Martin wonders what he would have made of the modern world:

Russell would I think be shocked to see that when given leisure time a lot of us spend it slumped in front of the TV drinking Pinot Grigio and watching other people on reality shows. But, the whole 2.0, user generated content world would delight him I think. For his painter who wants to paint without starving read Photographer who shares with the world via Flickr. And then there are all the bloggers, wiki writers, YouTube creators, podcasters who create material of mind-bendingly variable quality, but they are engaged in being creative, and that is fulfilling.

I’m sure Russell would’ve been a huge enthusiast for things like web 2.0, gift economies and the rest of it. But I really don’t think he would have entirely despaired at the vision of millions of people slumped on sofas watching reality TV for hours on end – at least they are not busy with pointless make-work.

I think it’s important to think about Russell’s distinction between the two kinds of work, quoted by John in his post:

Work is of two kinds: first, altering the position of matter at or near the earth’s surface relatively to other such matter; second, telling other people to do so. The first kind is unpleasant and ill paid; the second is pleasant and highly paid. The second kind is capable of indefinite extension: there are not only those who give orders, but those who give advice as to what orders should be given.

Since the 1930s, we have seen a huge reduction in the physical difficulty of work of the first kind, a huge increase in the intricacy of it, and a quite staggering extension of work of the second kind, in a way that changes the whole dichotomy. Low-paid service industries as mass employers didn’t really exist back then.

Anyway: I think Russell would probably rightly focus his wrath on the education system that still deprives people of an appreciation of highbrow tastes. I don’t entirely buy that highbrow equals better. But I do strongly believe that all people should be offered opportunities to learn about things that they want to. Our education system is a long, long way from that.