Ubuntu install log

Target machine: Acer TravelMate C110, Pentium M, 500MB RAM, 40GB disk.

Also has a D-Link DWL-G630 card (gives 802.11g rather than b), which works natively in Edgy (and presumably later), according to the (confusing) hardware support list.

Early Feb 2008: Downloaded the CD image for 7.10 Gutsy Gibbon from Ubuntu onto my work desktop. Downloaded and installed WinMD5Sum to check the download is Ok. It is. Save for a rainy day.

19 Feb 2008 – Not actually raining today but I need to tidy up, so an install while I do that seems a good plan.

11.15 – Downloaded and installed InfraRecorder to burn the CD image

11.30 – Started InfraRecorder burn – failed – blank CD duff? Not clear from the error message: “Input/Output error. write_g1: scsi sendcmd: no error. CDB: [string of hex digits] status: 0”. Didn’t think this machine was SCSI inside anyway – though InfraRecorder is a front-end to cdrtools, which reports errors in SCSI terms even for IDE/ATAPI drives, so presumably that explains it. Also realised I don’t have the power supply cord for the target machine, so abandoning the project for now.

13.55 Got another blank CD – started writing, seems Ok.

14.00 Stopped with an error (as before). It certainly hasn’t worked. I suspect the CD writer.

14.10 Ok, try writing on the target machine (have found the power supply); downloading the image to it.

14.25 Got image. Installed WinMD5Sum, but it crashes. Machine wants to reboot (background Windows Update perhaps to blame) and I need to plug in the CD drive anyway, so off it goes for reboot #1.

14.35 Oh for heaven’s sake. Still crashes. Downloading MD5 from Fourmilab instead. That works and hash checks out fine.
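(For the record, on the Linux side the same check is a one-liner – a minimal sketch, assuming the standard Gutsy desktop image filename:)

    # Compute the MD5 hash of the downloaded image and compare it by eye
    # against the hash published alongside the image on the Ubuntu site.
    # The filename here is an assumption – use whatever you downloaded.
    md5sum ubuntu-7.10-desktop-i386.iso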

14.40 Writing the image using the CD writing s/w already on the tablet (NTI CD-Maker 2000 Plus – very ‘now’ name).

14.50 Done, CD Ok. Right, trying to boot from it! Oops, missed F2 first time round, tried again, got it – Ubuntu booted! Installing.

14.55 Oh dear. Screen has gone blank and a couple of broken snatches of cheesy music played before stopping. Drive is still spinning so leave it to think … oh, Ok, seems to have installed. Oh, no it hasn’t, it’s just booted – I’m guessing, from the desktop with ‘Examples’ and ‘Install’ icons on it. Maybe ‘Install’ is what I want. Aha – bet this is actually Ubuntu Live (the version you boot and run from CD). Install!

15.00 Note to self: WiFi/Bluetooth light is flashing red – not right. Remember to fix.

15.00 Nice installer so far. Lovely touch to have a text box to type in to check your keyboard. Astonishingly few config options – just select location, keyboard and initial user account data. (Ghods I hate passwords – far too many to remember. And I particularly hate password expiry, which my main work password has. I have a good memory for arbitrary strings, but not if I have to keep changing them. Bruce Schneier might say to write them down and keep them in your wallet, but a legible written list wouldn’t fit in my pocket.) Right, left it doing the install proper.

15.25 It’s done! Rebooting to my shiny new operating system.

15.30 Ah – much slicker cheesy music after the login screen. The boot is terrifyingly blank, I must say. It’s recommending restricted drivers … for the software modem? I don’t need that. Ignore. Start Firefox!

15.40 Bother, can get to websites on-site, but not beyond. Tried tracert to explore, got a helpful report telling me it wasn’t installed and to try ‘sudo apt-get install traceroute’ … which then told me ‘Package traceroute is not available, but is referred to by another package.’ Ah well. What’s with traceroute? Trying to configure the proxy by hand, if I can remember/dig out the settings.
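If you’re doing this from a terminal rather than the GUI network settings, something like the following should work – a sketch only; proxy.example.com:8080 is a placeholder for your site’s real proxy.

    # Proxy for programs run from the current shell session
    # (host and port are placeholders – substitute your own)
    export http_proxy=http://proxy.example.com:8080
    export https_proxy=http://proxy.example.com:8080

    # sudo strips environment variables by default, so give apt its own
    # explicit setting in /etc/apt/apt.conf as a belt-and-braces fix
    echo 'Acquire::http::Proxy "http://proxy.example.com:8080";' | sudo tee -a /etc/apt/apt.conf

Firefox keeps its own proxy settings in its preferences, so that needs setting separately.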

15.45 Yay! Done! I’m updating this from my laptop running Ubuntu. Job done, basically. That was easy. Oh – still need to: check it works on WiFi (PCMCIA card), get the tablet features working, and sort the WiFi/Bluetooth notification (don’t want it turned on by default – and how do I turn it on/off?).

16.10 Oh. Hibernate doesn’t work – it crashes when you turn the machine back on again. Perl is there and working, but there’s something not instantly right with C – it can’t find stdio.h (easy config tweak surely – see the to-do list below). Nice to be playing with Unixy stuff again – it comes back quickly, thankfully. WiFi seems to work fine on the card, cool. Can’t be doing with the tablet features, actually. Oh – check whether I can use an external monitor? (No, not yet – need to play with the X config, ouch.)

16.30 Ah. Found the Update Manager. >250MB of updates. Ah well – left those downloading.

17.00 libc6 update failed because it couldn’t create /etc/ld.so.cache (read-only file system). Also ‘could not create log directory /root/.synaptic/log/’ for the same reason. Hmm. And that’s left Update Manager crashed. Took a bit of hunting to remember I wanted ps -e to get the PID to kill it. Ah – and even once I’d done that I couldn’t start it again. Or Synaptic. Bother. I also keep getting warnings that the application ‘nm-applet’ attempted to change an aspect of my configuration that my system administrator doesn’t allow. Ah well. No time now. And since Hibernate isn’t working anyway it’ll get a reboot next time I get to it.
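For my own future reference, the hunt-and-kill dance goes roughly like this – a sketch, and the process name to grep for is a guess at what Update Manager shows up as:

    # List all processes and pick out the stuck one
    # (update-manager is assumed – check what ps actually shows)
    ps -e | grep update-manager

    # Kill it, using the PID from the first column of the output
    # (1234 is a placeholder for the real PID)
    kill 1234

    # If it ignores the polite version, force it
    kill -9 1234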

Things to do: C and C++ need build-essential installed (see the sketch below). Also explore PHP. Optimise the kernel – find the appropriate linux-image package (Intel Pentium M). Try http://stefan.dnsalias.net/howto/c110.html for useful bits, especially to get the Bluetooth/WiFi turned off.
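The first couple of items are quick wins, at least in sketch form – package names assumed to be the standard Gutsy ones:

    # gcc, g++, make and the libc development headers –
    # this is what provides stdio.h and friends
    sudo apt-get install build-essential

    # Quick smoke test that the C toolchain now works
    printf '#include <stdio.h>\nint main(void){ printf("hello, gutsy\\n"); return 0; }\n' > hello.c
    gcc hello.c -o hello && ./hello

    # PHP for command-line experiments
    sudo apt-get install php5-cli

    # See what kernel image flavours are on offer before picking one
    apt-cache search linux-image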

Trying Ubuntu

I decided to try out Ubuntu so I can live in the Linux world a bit. I have a semi-aged tablet PC (an Acer TravelMate C110) lying around. I used to use it heavily as a totable laptop (ignoring the tablet features). It’s had negligible use since I got a shinier notebook (Samsung Q40), so it was ripe for a low-demand, try-it-out OS installation. I was hoping to do better than my colleague Patrick, who tried out Fedora Core on an old laptop … which then melted. (Oops.)

Summary: it was really much, much easier than you might think, if you’re technically competent and at all familiar with Linux. I had far more trouble trying to burn the installation CD (under Windows XP) than I did actually installing Ubuntu on the tablet. So all your mates who tell you Ubuntu is very little bother to install are probably right – if you are fairly technically savvy and have come across Unix at some point as a user. If you’re not, you will probably get bewildered at some point, if not at many.

It’s a nice operating system so far. It’s noticeably faster booting and browsing than the old Windows XP system was on the same hardware. I’ve not tried doing anything too clever yet, but for basics it’s great. It is lovely having a shiny GUI but with the gubbins easily accessible under the hood. (And there is a *lot* of gubbins.)

Ubuntu has fantastically simplified the whole process (my previous encounters were with RedHat and SUSE years and years ago) … although even the shiny, user-friendly stuff suffers from the open-source tendency to unnecessary forking. Do I want Ubuntu, Kubuntu, Edubuntu … or one of the unofficial versions? Most people don’t know, don’t care, and don’t want to spend precious time finding out.

Next post will be a more detailed install log for those of you who care about such things. (Both of you.)

Physiological basis of memory

Stephen Downes has a fascinating post about the science behind memory, summarising a paper by Nobel prize-winner Eric Kandel on Genes, synapses and memory storage [PDF] – and exploring the implications for learning. It really is excellent and you should read both his post and the original paper.

From studies of Aplysia (the sea slug, one of those classic over-researched model species, like E. coli, Arabidopsis, Drosophila, lab rats and mice, Rhesus monkeys, and Psychology students) Kandel draws out two forms of memory:

  • Short-term storage for implicit memory involves functional changes in the strength of pre-existing synaptic connections.
  • Long-term storage for implicit memory involves the synthesis of new protein and the growth of new connections.

Stephen takes Kandel’s distinction – that ‘Learning refers to the acquisition of new information about the world and memory refers to the retention of that information over time’ – to mean that:

  • Learning is a semantic process. It is about things. It has meaning.
  • Memory is a syntactic process. It is a set of mechanisms. It may or may not have meaning.

As he says, this is a difficult distinction, and I’m really not sure I agree with it in principle. From the biochemistry we know that learning (almost by definition, actually) takes place in relation to one or more stimuli. That doesn’t, to my mind, require that the learning is meaningful. The associations can be entirely arbitrary. Stephen puts it well when he says that “learning is associative, not propositional”. So is memory.

Learning is certainly related to something, but the transduction of external stimuli into synaptic changes in the brain is far from direct, and when you get into associative learning it’s even more complex than that.

I think Stephen may be arguing that only learning can be meaningful, in the sense of referring accurately to the external world. Since the transfer to memory is a separate process, there is a potential loss of accuracy, and hence meaning.

I see two problems with that. Firstly, one can imagine that meaning could arise from the combination of separate learning experiences. It’s only after many encounters with fluffy objects that a baby can understand the difference between a soft toy (that can be safely squeezed or bitten) and a cat (which will hiss and scratch if mistreated). The individual observations make a lot more sense when related to each other. And note that this understanding could be wrong – for instance, the heuristic the child uses to distinguish the two may only work for a limited subset of cats, toys and locations they are found in.

Secondly, and more fundamentally, I think the very concepts of ‘meaning’ and ‘sense-making’ are not compatible with the level of description we’re dealing with here. Meaning is a complex, socially mediated thing. Membrane depolarisation, glutamate release and protein synthesis are much less so. (As an aside, this is related to my deep lack of faith in the larger claims of the Semantic Web project.)

We’re making huge progress in linking that hard-science base to the more directly socially useful stuff about learning, as the scientific understanding expands. The increasing ubiquity of fMRI apparatus, for example, is transforming our understanding of what’s going on physiologically when learning happens. But I don’t think it will ever be possible to move straightforwardly and easily from synapses to semantics, from neurons to meaning.

I’ve an argument brewing for why it’s actually impossible, not just difficult and complex … but that’s for later.

Blog comments

Oh no, another blog post about blogging …

John Naughton discusses whether blogs need to have comments, picking up James Cridland’s piece on the topic. Naughton doesn’t have comments on his blog – and neither do many other big-noise bloggers, including, most famously, Dave Winer.

All of them make good points: with posts rather than comments, you get a single voice in a single place; there’s more space (and links) in a post; readers of either blog can see the conversation develop; and even very light-touch moderation of a busy comments section is a major task in itself. (As you can see on any high-traffic blog site that enables comments.)

It seems to me that working through posts plays the medium to its strengths. Comments are fine, but they are fundamentally a different medium to blog posts, even though they are usually attached. Using media to their respective strengths is one of those fundamental good ideas developed by the OU – originally articulated and worked out through and by my own Institute of Educational Technology. So for me that’s a pretty strong argument.

I’ve been getting more interested in economics recently, and one of the central good ideas of that field is that it’s worth paying attention to the incentives for individuals in any system. So I think James Cridland put his finger on something important when he noted in passing that

The extra addition of Google Juice, etc, also is a good thing for both of us.

That’s the thing. If James and John have a discussion in the comments on one post, they might help develop an interesting thought further. However, if they do it through posts, they also both gain Google Juice, Technorati authority, and so on. As well as all the other benefits above.

They’re not talking to us

… and while I’m picking nits off Martin’s last post, he says of Bertrand Russell:

But, the whole 2.0, user generated content world would delight him I think.

This reminded me of something I read the other day from the excellent Clay Shirky, arguing that the concept of “user-generated content” isn’t that helpful:

We misinterpret these seemingly inane posts, because we’re so unused to seeing material in public that isn’t for the public. The people posting messages to one another, on social networking services and weblogs and media sharing sites, are creating a different kind of material, and doing a different kind of communicating, than the publishers of newspapers and magazines are.

Most user-generated material is actually personal communication in a public forum. Because of this personal address, it makes no more sense to label this content than it would to call a phone call with your mother “family-generated content.” A good deal of user-generated content isn’t actually “content” at all, at least not in the sense of material designed for an audience.

Why would people post this drivel then?

It’s simple. They’re not talking to us.

Which, I think, we educators could do with bearing in mind more often. Especially as we tread into areas that students think of as their space.

Russell on idleness

I’ve long been an admirer of Bertrand Russell – I find him one of the more lucid writers on philosophy. I even carted a battered paperback edition of his History of Western Philosophy around with me as reading matter on cycling holidays years ago – the interesting ideas:weight ratio was excellent.
So I was interested to see my colleague Martin pick up John Naughton’s take on Bertrand Russell’s essay In Praise of Idleness. Martin wonders what he would have made of the modern world:

Russell would I think be shocked to see that when given leisure time a lot of us spend it slumped in front of the TV drinking Pinot Grigio and watching other people on reality shows. But, the whole 2.0, user generated content world would delight him I think. For his painter who wants to paint without starving read Photographer who shares with the world via Flickr. And then there are all the bloggers, wiki writers, YouTube creators, podcasters who create material of mind-bendingly variable quality, but they are engaged in being creative, and that is fulfilling.

I’m sure Russell would’ve been a huge enthusiast for things like web 2.0, gift economies and the rest of it. But I really don’t think he would have entirely despaired at the vision of millions of people slumped on sofas watching reality TV for hours on end – at least they are not busy with pointless make-work.

I think it’s important to consider Russell’s distinction between the two kinds of work, quoted by John in his post:

Work is of two kinds: first, altering the position of matter at or near the earth’s surface relatively to other such matter; second, telling other people to do so. The first kind is unpleasant and ill paid; the second is pleasant and highly paid. The second kind is capable of indefinite extension: there are not only those who give orders, but those who give advice as to what orders should be given.

Since the 1930s, we have seen a huge reduction in the physical difficulty of work of the first kind, a huge increase in the intricacy of it, and a quite staggering extension of work of the second kind, in a way that changes the whole dichotomy. Low-paid service industries as mass employers didn’t really exist back then.

Anyway: I think Russell would probably rightly focus his wrath on the education system that still deprives people of an appreciation of highbrow tastes. I don’t entirely buy that highbrow equals better. But I do strongly believe that all people should be offered opportunities to learn about things that they want to. Our education system is a long, long way from that.

Online references management

Another to-do is to get a decent academic references database sorted out, since I’m going to be doing a lot more papers and bids in the immediate future.

Years ago I had a system that worked beautifully and Got It Right (BibTeX) … but that doesn’t play well if you’re not writing (La)TeX documents.  EndNote with its Word plug-in was the only game in town after that, and it was so appallingly annoying that it’s easier and quicker (IMO) to do it all by hand.

But I think I need something better now, and the field has changed profoundly.  So – I’m in the market for a new system.  Planning to explore RefWorks, Zotero, CiteULike, Connotea, HotReference, and anything else I can find quickly.  RefWorks gains a lot of points out of the gate for being supported by the OU Library, with handy linkages from their search results pages.

Any other suggestions?  Recommendations?

eJournals

Have just been to a meeting about the OU Library’s services.  One provocative suggestion we discussed was whether we should drop print copies of journals entirely where electronic versions are available.

After an initial boggle, I came down firmly on the ‘yes’ side.  I can’t remember the last time I looked at a print copy of a journal myself.  All the stuff I want access to is online anyway – in my area, if it’s not online, it may as well not exist.  I’ll print stuff off if I want to read it properly (terribly wasteful, of course) – but most stuff I only want to skim anyway.

There are good reasons to be careful, though, which came up in the discussion.  The impact is profoundly different in different disciplines (of course).  In my areas, science and technology, there’s probably less of an issue.  But, for instance, in Art History, the quality of the reproduction in electronic journals is rubbish, and you really need the print copy.  There are some journals where electronic copies lag print by a year or more (mad but true).  There’s the (perceived?) risk of being held hostage over ongoing service fees when you shift from a product to a service model.  There’s the loss of serendipitous physical browsing (which is different from – and arguably more effective and efficient than – electronic browsing).  There’s the loss of access to journals for physical visitors who aren’t members of the university.

And there is the aesthetic aspect that made me pause at first.  There is something secure, comforting and inspiring about printed media, and particularly large collections of it.  But that may be becoming a luxury we can’t afford any more.

On the other hand, our students and ALs (Associate Lecturers) simply can’t access the physical stuff.  At least, most of them can’t.  Resource diverted from electronic access to physical copies is effectively resource taken away from serving them.

(The Library also have some nice stuff going on with journal searching, and they were also talking about setting up a ‘service quality’ version of Tony Hirst’s OU Library Traveller … but perhaps that’s a post for later.)