Physiological basis of memory

Stephen Downes has a fascinating post about the science behind memory, summarising a paper by Nobel prize-winner Eric Kandel on Genes, synapses and memory storage [PDF] – and exploring the implications for learning. It really is excellent and you should read both his post and the original paper.

From studies of Aplysia (the sea slug, one of those classic over-researched model species, like E. coli, Arabidopsis, Drosophila, lab rats and mice, Rhesus monkeys, and Psychology students) Kandel draws out two forms of memory:

  • Short-term storage for implicit memory involves functional changes in the strength
    of pre-existing synaptic connections.
  • Long-term storage for implicit memory involves the synthesis of new protein and
    the growth of new connections.

Stephen takes Kandel’s distinction – that ‘Learning refers to the acquisition of new information about the world and memory refers to the retention of that information over time’ – to mean that:

  • Learning is a semantic process. It is about things. It has meaning.
  • Memory is a syntactic process. It is a set of mechanisms. It may or may not have meaning.

As he says, this is a difficult distinction, and I’m really not sure I agree with it in principle. From the biochemistry we know that learning (almost by definition, actually) takes place in relation to one or more stimuli. That doesn’t, to my mind, require that the learning is meaningful. The associations can be entirely arbitrary. Stephen puts it well when he says that “learning is associative, not propositional”. So is memory.

Learning is certainly related to something, but the transduction of external stimuli into synaptic changes in the brain is far from direct, and when you get into associative learning it’s even more complex than that.

I think Stephen may be arguing that only learning can be meaningful, in the sense of referring accurately to the external world. Since the transfer to memory is a separate process, there is a potential loss of accuracy, and hence meaning.

I see two problems with that. Firstly, one can imagine that meaning could arise from the combination of separate learning experiences. It’s only after many encounters with fluffy objects that a baby can understand the difference between a soft toy (that can be safely squeezed or bitten) and a cat (which will hiss and scratch if mistreated). The individual observations make a lot more sense when related to each other. And note that this understanding could be wrong – for instance, the heuristic the child uses to distinguish the two may only work for a limited subset of cats, toys and locations they are found in.

Secondly, and more fundamentally, I think the very concepts of ‘meaning’ and ‘sense-making’ are not compatible with the level of description we’re dealing with here. Meaning is a complex, socially-mediated thing. Membrane depolarisation, glutamate release and protein synthesis are much less so. (As an aside, this is related to my deep lack of faith in the larger claims of the Semantic Web project.)

We’re making great progress in linking that hard-science base to the more directly socially-useful understanding of learning, as the underlying science expands. The increasing ubiquity of fMRI apparatus, for instance, is transforming our understanding of what’s going on physiologically when learning happens. But I don’t think it will ever be possible to move straightforwardly and easily from synapses to semantics, from neurons to meaning.

I’ve an argument brewing for why it’s actually impossible, not just difficult and complex … but that’s for later.

Blog comments

Oh no, another blog post about blogging …

John Naughton discusses whether blogs need to have comments, picking up James Cridland’s piece on the topic. John Naughton doesn’t have comments on his blog – and neither do many other big-noise bloggers, including most famously, Dave Winer.

All of them make good points: with posts rather than comments, you get a single voice in a single place; there’s more space (and links) in a post; readers of either blog can see the conversation develop; and even very light-touch moderation of a busy comments section is a major task in itself. (As you can see on any high-traffic blog site that enables comments.)

It seems to me that working through posts plays the medium to its strengths. Comments are fine, but they are fundamentally a different medium to blog posts, even though they are usually attached. Using media to their respective strengths is one of those fundamental good ideas to have been developed by the OU – originally articulated and developed by my own Institute of Educational Technology. So for me that’s a pretty strong argument.

I’ve been getting more interested in economics recently, and one of the central good ideas of that field is that it’s worth paying attention to the incentives for individuals in any system. So I think James Cridland put his finger on something important when he noted in passing that

The extra addition of Google Juice, etc, also is a good thing for both of us.

That’s the thing. If James and John have a discussion in the comments on one post, they might help develop an interesting thought further. However, if they do it through posts, they also both gain Google Juice, Technorati authority, and so on. As well as all the other benefits above.

They’re not talking to us

… and while I’m picking nits off Martin’s last post, he says of Bertrand Russell:

But, the whole 2.0, user generated content world would delight him I think.

This reminded me of something I read the other day from the excellent Clay Shirky, arguing that the concept of “user-generated content” isn’t that helpful:

We misinterpret these seemingly inane posts, because we’re so unused to seeing material in public that isn’t for the public. The people posting messages to one another, on social networking services and weblogs and media sharing sites, are creating a different kind of material, and doing a different kind of communicating, than the publishers of newspapers and magazines are.

Most user-generated material is actually personal communication in a public forum. Because of this personal address, it makes no more sense to label this content than it would to call a phone call with your mother “family-generated content.” A good deal of user-generated content isn’t actually “content” at all, at least not in the sense of material designed for an audience.

Why would people post this drivel then?

It’s simple. They’re not talking to us.

Which, I think, we educators could do with bearing in mind more often. Especially as we tread into areas that students think are their space.

Russell on idleness

I’ve long been an admirer of Bertrand Russell – I find him one of the more lucid writers on philosophy. I even carted a battered paperback edition of his History of Western Philosophy around with me as reading matter on cycling holidays years ago – the interesting ideas:weight ratio was excellent.
So I was interested to see my colleague Martin pick up John Naughton’s take on Bertrand Russell’s essay In Praise of Idleness. Martin wonders what he would have made of the modern world:

Russell would I think be shocked to see that when given leisure time a lot of us spend it slumped in front of the TV drinking Pinot Grigio and watching other people on reality shows. But, the whole 2.0, user generated content world would delight him I think. For his painter who wants to paint without starving read Photographer who shares with the world via Flickr. And then there are all the bloggers, wiki writers, YouTube creators, podcasters who create material of mind-bendingly variable quality, but they are engaged in being creative, and that is fulfilling.

I’m sure Russell would’ve been a huge enthusiast for things like web 2.0, gift economies and the rest of it. But I really don’t think he would have entirely despaired at the vision of millions of people slumped on sofas watching reality TV for hours on end – at least they are not busy with pointless make-work.

I think it’s important to think about Russell’s distinction in the types of work, quoted by John in his post:

Work is of two kinds: first, altering the position of matter at or near the earth’s surface relatively to other such matter; second, telling other people to do so. The first kind is unpleasant and ill paid; the second is pleasant and highly paid. The second kind is capable of indefinite extension: there are not only those who give orders, but those who give advice as to what orders should be given.

Since the 1930s, we have seen a huge reduction in the physical difficulty of work of the first kind, a huge increase in the intricacy of it, and a quite staggering extension of work of the second kind, in a way that changes the whole dichotomy. Low-paid service industries as mass employers didn’t really exist back then.

Anyway: I think Russell would probably rightly focus his wrath on the education system that still deprives people of an appreciation of highbrow tastes. I don’t entirely buy that highbrow equals better. But I do strongly believe that all people should be offered opportunities to learn about things that they want to. Our education system is a long, long way from that.

Online references management

Another to-do is to get a decent academic references database sorted out, since I’m going to be doing a lot more papers and bids in the immediate future.

Years ago I had a system that worked beautifully and Got It Right (BibTeX) … but that doesn’t play well if you’re not writing LaTeX documents.  EndNote with its Word plug-in was the only game in town after that, and it was so appallingly annoying that it’s easier and quicker (IMO) to do it all by hand.
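For anyone who hasn’t used it: the beauty of the BibTeX workflow is that references live in a plain-text database and documents cite them by key, with formatting handled automatically. A minimal sketch – the entry key, field values and filenames here are invented for illustration:

```latex
% refs.bib — one entry per reference, identified by a citation key
@article{kandel2001memory,
  author  = {Kandel, Eric R.},
  title   = {The Molecular Biology of Memory Storage},
  journal = {Science},
  year    = {2001}
}

% paper.tex — cite by key; BibTeX generates the bibliography
\documentclass{article}
\begin{document}
Memory storage involves synaptic change \cite{kandel2001memory}.
\bibliographystyle{plain}   % choose a citation style
\bibliography{refs}         % pull entries from refs.bib
\end{document}
```

The catch, as above, is that this only works inside the TeX toolchain (typically latex, then bibtex, then latex twice more to resolve citations) – which is exactly why it doesn’t help if you’re writing in Word.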

But I think I need something better now, and the field has changed profoundly.  So – I’m in the market for a new system.  Planning to explore RefWorks, Zotero, CiteULike, Connotea, HotReference, and anything else I can find quickly.  RefWorks gains a lot of points out of the gate for being supported by the OU Library, with handy linkages from their search results pages.

Any other suggestions?  Recommendations?


Dropping print journals

Have just been to a meeting about the OU Library’s services.  One provocative suggestion we discussed was whether we should drop print copies of journals entirely where electronic versions are available.

After an initial boggle, I came down firmly on the ‘yes’ side.  I can’t remember the last time I looked at a print copy of a journal myself.  All the stuff I want access to is online anyway – in my area, if it’s not online, it may as well not exist.  I’ll print stuff off if I want to read it properly (terribly wasteful, of course) – but most stuff I only want to skim anyway.

There are good reasons to be careful, though, which came up in the discussion.  The impact is profoundly different in different disciplines (of course).  In my area, science, and technology, there’s probably less of an issue.  But, for instance, in Art History, the quality of the reproduction in electronic journals is rubbish, and you really need the print copy.  There are some journals where electronic copies lag print by a year or more (mad but true).  There’s the (perceived?) risk of being held hostage over ongoing service fees when you shift from a product to a service model.  There’s the loss of the facility for serendipitous physical browsing (which is different – and arguably more effective and efficient – than electronic).  There’s the loss of access to journals for physical visitors who aren’t members of the university.

And there is the aesthetic aspect that made me pause at first.  There is something secure, comforting and inspiring about printed media, and particularly large collections of it.  But that may be becoming a luxury we can’t afford any more.

On the other hand, our students and ALs simply can’t access the physical stuff.  At least, not most of them.  Resource diverted from electronic access to physical copies is effectively taking resource away from serving them.

(The Library also have some nice stuff going on with journal searching, and were talking about setting up a ‘service quality’ version of Tony Hirst’s OU Library Traveller … but perhaps a post for later.)

Microsoft acts as if Microsoft is doomed

Microsoft is trying to buy Yahoo – or, as El Reg puts it in inimitable form, Microsoft! bids! $44.6bn! for! Yahoo!

That’s … surprising.  Given the current credit crunch, the conventional wisdom was that mergers and acquisitions activity would be negligible, particularly at this monster scale. Of course, Microsoft has a huge cash pile.  But it was never $44.6bn huge even at its peak, and it must be a lot less than that by now.

It always bothers me when the normal rules of economics don’t seem to apply to high-tech companies.  They’re different, for sure – for short or even medium periods you can see growth rates that you just can’t see in most other industries. But you can also go from massive to bust much faster.  I don’t think it’s possible to defy financial gravity entirely.  So I’m getting that “but but but” feeling that I had round the dot-com bubble.

That time I didn’t have the courage of my convictions.  I imagined that all these bright people saying that the usual rules didn’t apply were right.  But they were wrong.  So this time I’ll be a bit more forthright with a prediction: no good will come of this for Microsoft.  Yahoo shareholders would be mad to say no.

(Worth noting in passing that this would entail Microsoft owning del.icio.us, since Yahoo bought it at the end of 2005.  I think del.icio.us’ traditional userbase might find that … interesting.)