Blog

Future of the Net

Liveblog from a seminar on The Future Of The Net (Jonathan Zittrain’s book, The Future of the Internet and How to Stop It), 20 March 2009, by John Naughton.

Update: Listen to the MP3 and see the useful concept map from John Naughton himself.

Audience small but quite high-powered (eight, including Tony Walton, Paul Clark, Andy Lane). OU Strategy Unit trying to reach out to academic units and others.

(Image: train tracks with points set to go off a cliff.)

John  lost his physical copy … but rightly guessed it’d be available online as Creative Commons-licensed text.

Jonathan Zittrain was employed sight-unseen as a Unix sysadmin at 13, then by some process (probably involving Larry Lessig) became a lawyer.

Part of an emerging canon – Lessig’s Code 2.0, Benkler’s Wealth of Networks – heavyweight academic stuff. Two sorts of people – trailblazers and roadbuilders; Lessig is the first. Our role in OU (including Relevant Knowledge Programme) is to follow and be roadbuilders, which is an honorable activity.

Core argument of book: Internet’s generative characteristics primed it for success, and now position it for failure. Response to failure will most likely be sterile tethered appliances.

Transformation of the Internet in the blink of an eye from thinking it’s just “CB radio de nos jours” to taken-for-granted. John’s message is: don’t take this for granted.

Three parts: 1 rise & stall of generative network, 2 after the stall (including a long and good analysis of Wikipedia), 3 solutions.

Conjunction of open PC and open Internet created the explosion of creativity, but contains within it the seeds of its own destruction. Parallel with T171 You, Your Computer and the Net (Martin did the PC, John did the net) – but didn’t study what happens when you put them together, which Zittrain does here. Not about proprietary versus open source – PC was an open device, if you could write code you could program the device.

John says people don’t understand what we’ve got in the current Net. Knowing the history helps. Design problem (Vint Cerf, IETF etc) – design for apps that haven’t yet been dreamed of, given distributed ownership. If you’re designing for the future, you don’t optimise for the present. Architectural solution has two key points: anyone can join (permissiveness); dumb network, clever apps (end-to-end principle). The openness is a feature, not a bug. Contrast with the case of the Hush-a-Phone.

Zittrain equation: Open PC + surprise generator = generative system

Thought experiments from James Boyle – gave two talks recently, at the RSA and John’s Cambridge programme. Almost everybody has a bias against openness: when something free and unconstrained is proposed, we see the downsides. (Because you can imagine those, whereas you by definition can’t imagine what hasn’t been invented yet.) Imagine it’s 1992 and you have to choose between: approved sites with terminals at the end (like teletext/Minitel); a dumb, unfiltered, permissive network (the Internet) with general-purpose computers at the end. Who would invest in the latter? Second question, still 1992: you have to design an encyclopedia better than Britannica – broader coverage, more current. Options: 1 – strong content, vast sums of money, strong editorial control, DRM. 2 – I’d like to put up a website and anyone can post stuff. Who’d pick the latter?

Posits tension – or indeed tradeoff – between generativity and security. Consumers will become so worried about this that they’ll (be encouraged to) favour tethered appliances and heavyweight regulation.

(I wonder if I can’t bring myself to believe in the Net being locked-down out of all recognition because I’ve always had it around in my adult life. It’s probably easier for people who really knew a world without it to imagine it going away.)

Part 2 explores our likely response to these problems, then Wikipedia. “With tethered appliances, the dangers of excess come not from rogue third-party code, but from […] interventions by regulators into the devices themselves.”

Criticism of book – it underestimates the impact of Governments on the problem. Remembering 9/11, like JFK assassination. (John was on the phone to a friend who was there at the time!). John wrote in his blog on that day that this was the end of civil liberties as we knew them, and in many ways was right. (My memory was that it was the first huge news story that I got almost entirely from the web.) But – one day the bad guys will get their act together and we’ll see a major incident. Dry-runs with what happened to Estonia. But there will be something huge and coordinated, and that’ll evoke the same sort of response.

Rise of tethered appliances significantly reduces the number and variety of people and institutions required to apply the state’s power on a mass scale. John thinks it’s like the contrast between Orwell and Huxley – likelihood of being destroyed by things we fear and hate, or things we know and love.

Dangers of Web 2.0, services in the cloud – software built on APIs that can be withdrawn is much more precarious than software built under the old PC model.  Mashups work (except they’re always breaking – see Tony Hirst’s stuff, just like links rot). Key move to watch: Lock down the device, and network censorship and control can be extraordinarily reinforced.

iPhone is the iconic thing: it puts you in Steve Jobs’ hands. It’s the first device that does all sorts of good things and could be open but isn’t. (What about other mobile phones?) Pew Internet & American Life survey – Future of the Internet III – predicted that the mobile device will be the primary connection tool to the internet for most people in the world in 2020. So this could be a big issue.

Wikipedia analysis in the book is extensive.  Looks at how it handles vandalism and disputes – best treatment John’s seen. How it happens is not widely understood. Discussion about whether Wikipedia or Linux is the more amazing phenomenon. (My argument is that Linux is in some ways less startling, because you have some semi-independent arbitration/qualification mechanism for agreeing who’s a competent contributor and which code works.)

Part 3 – solutions to preserve the benefits of generativity without their downsides. “This is easier said than done”. The way Wikipedia manages itself provides a model for what we might do. (I think not – I think Wikipedia works because it can afford to piss off and exclude perfectly good and competent contributors.) Create and demonstrate the tools and practices by which relevant people and institutions can help secure the Net themselves instead of waiting for someone else to do it – badwarebusters.org.

Barriers – failure to realise the problem; collective action problem; sense that system is supposed to work like any other consumer device.

Nate Anderson’s review in Ars Technica – three principles – IT ecosystem works best with generative tech; generativity instigates a pattern; ignore the downsides at your peril.

Criticisms: too focused on security issues and not on commercial pressures; not enough on control-freakery of governments; too Manichean – mixed economies; too pessimistic about frailties (and intelligence and adaptability) of human beings; over-estimates security ‘advantages’ of tethered appliances.

Discussion

Parallel with the introduction of metalled roads. Crucial to economic development: move people and stuff around as a productive system. Early days were a free-for-all – anyone could buy a car (if rich enough) and drive it, no need for a test. Then increased regulation and control. (Also true of cars themselves – originally fairly easy to tinker with, now not: proprietary engine management systems.) Issue about equity, as much as open/closedness.

Lessons of Wikipedia and the creators of malware. Malware creators only need to be small in number. To take down Wikipedia and make it undependable would take too much effort and coordination. (I disagree – a smart enough distributed bot attack would do it.)

I can’t imagine no Internet/generative/smart programmable devices because I’ve never not had them. Grew up on the ZX81 onwards, had the CPU pinout on the connector. Helps to have smart people around who have known the world before that.

South Korea got taken out by SQL Slammer, bounced back though – system is pretty resilient.

Manhattan Project perhaps a bad parallel for an effort to help here – it was the ultimate in top-down command-and-control project, with a clearly-defined outcome. And it was constrained and planned so tightly that it couldn’t actually work until people like Feynman loosened things up a bit to allow some degree of decentralisation.

How do you sign people up? People won’t do anything about e.g. climate change until their gas bills shoot up. Science and society stuff: well known that people only become engaged when it becomes real to them. A liberal is a conservative who’s been falsely arrested; a conservative is a liberal who’s been mugged.

Surveillance – the likelihood of major public outrage leading to a reaction is small; most people don’t realise their clickstream is monitored. It’s only if something happened that made people realise it that they’d say no. Hard to imagine the scale of community engagement happening.

Case a few months ago – Wikipedia vs Internet Watch Foundation. A readymade community leapt into action immediately. But less likely where you don’t have such an articulate and existing community. Also the photographer crackdown – they do have access to the media. Danger of the Niemöller scenario where they come for small groups one at a time.

It’s an argument about the mass of technology, not the small cadre of techies – the iPhone can be jailbroken if you know what you’re doing. And there are more, not fewer, opportunities for techies, and more techies than ever before. Most PC users in the 80s only used what they were given. In 1992 I could write an app for the PC and send it to anyone on the Internet. Except hardly anyone was on the Internet then, and even though most techies were among them, most Internet users then couldn’t write their own stuff – or even install software off the net. Techies are still a small proportion (even though bigger in number than before), so still vulnerable to this sort of attack.

Mobile devices are key here, consumerism. People just want stuff that works, generally.

Google as another example – they build very-attractive services, but on the basis of sucking up all our data.  Harness amoral self-interest of large corporations in this direction. Also (enlightened?) interest of Western Governments in promoting openness.

John uses the example of a bread mix and a recipe to illustrate open source. Parallels with the introduction of the car (wow, I can go anywhere); the PC (wow, I don’t have to ask people for more disk quota) and the Net (wow, I don’t have to ask for more mail quota). These things have an impact on society, can damage it. So for instance, if you have an open machine, you could damage other people’s computers, hence the need to regulate ownership and operation. With a car, an annual check that you have road tax, insurance, MOT; with a PC the surveillance needs to be continuous.

The 9/11 disaster scenario is instructive: why didn’t we have the same response to the Troubles? Because not transnational/non-State actors. The Provisional IRA had tangible, comprehensible political objectives that could be taken on. Whereas 9/11 terrorism is more vague. And malware is different. Wasn’t a problem when it had no business model … but now it has. Can we now take it on?

Is the Internet just (!) an extension of civil society and how you should regulate it, or is it something quite different? Motor traffic law introduced absolute offences (no mens rea – it’s an offence to drive over the speed limit regardless of whether you know you are going that fast or what the limit is) because it was quite a different threat. The Internet is at least as new, so likely to spur at least as revolutionary – and shocking – a change to our legal system. Ok, now I’m scared, so that’s a result.

But we’re only eighteen (nineteen?) years into the web. It’s idiotic for us to imagine we understand what its implications are. So the only honest answer is: we don’t know. John argues we’re not taking a long enough view. 1455, eighteen years after the introduction of the printing press. MORI pollster: do you think the invention of printing will undermine the authority of the Catholic Church, spur the Reformation, science, whole new classes, a change in the concept of childhood? The web is a complex and sophisticated space, so regulating it right can’t be done overnight. Tendency for people to make linear extrapolations from the last two years’ trends.

In the long run, this won’t look like such a huge deal in the history of humanity. It’ll be a bit like what happened with steam. It looks like the biggest deal ever to us only because we’re in the middle of it.

So what do you do when you know that on a 20-year horizon you’re blind?

My answer: get moving now, plan to change and update regularly.  Expect to have to fiddle with it, throw great chunks of things away because they’re no longer relevant. Challenge to OU course production model! (Actually, I’m wrong to say throw away – more expect that things will become eclipsed and superseded – old technologies die very hard.)

We’ve become more open/diverse in our offer to bring in enough people. Which is hard – costs and scale versus personalisation.

iSpot and taxonomy

Work on the Biodiversity Observatory – to be called iSpot to the public – is proceeding apace. One of the things we want to be able to offer to help people in getting scientific names is to be able to map between common names for things and the scientific names. Once you know the scientific name of a species, you can find much more information than if you only know the common name. We also want to be able to help people get scientific names right – it’s easy to get them wrong – so we want to be able to provide facilities like ‘did you mean X’ when people mistype a name.
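As a toy illustration only (not iSpot’s actual implementation – the name list and the similarity cutoff here are made up), a ‘did you mean X’ lookup is only a few lines of Python using the standard library’s difflib:

```python
import difflib

# A tiny, made-up list of preferred scientific names (the real NBN
# checklist runs to about 80,000 preferred names for species).
scientific_names = [
    "Cornu aspersum",   # common garden snail
    "Turdus merula",    # blackbird
    "Quercus robur",    # English oak
    "Fabaceae",         # the pea family
]

def did_you_mean(query, names, cutoff=0.8):
    """Suggest close matches for a possibly mistyped scientific name."""
    return difflib.get_close_matches(query, names, n=3, cutoff=cutoff)

print(did_you_mean("Cornu aspersa", scientific_names))  # ['Cornu aspersum']
```

A real version would of course run against the full checklist and probably want a smarter, taxonomy-aware distance measure, but the shape of the feature is this simple.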

To make that work, we need a database behind the scenes that has a list of correct scientific names, a mapping between common names, and information about the taxonomic tree: each Species is part of a Genus, which is part of a Family, which is part of an Order, which is part of a Class, which is part of a Phylum (or Division), which is part of a Kingdom, which is part of Life.
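Just to make the shape of that concrete, here’s a minimal sketch of such a backing database as an in-memory SQLite store with a single self-referencing taxon table. Every table name, rank and row here is an invented illustration (with some ranks skipped for brevity), not the real iSpot schema or NBN data:

```python
import sqlite3

# Illustrative schema: each taxon row points at its parent, so the whole
# tree (Species -> Genus -> ... -> Kingdom) is one self-referencing table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE taxon (
    id        INTEGER PRIMARY KEY,
    rank      TEXT NOT NULL,               -- 'Kingdom', 'Class', 'Species', ...
    sci_name  TEXT NOT NULL,
    parent_id INTEGER REFERENCES taxon(id) -- NULL at the root
);
CREATE TABLE common_name (
    name     TEXT NOT NULL,                -- e.g. 'garden snail'
    taxon_id INTEGER REFERENCES taxon(id)
);
""")
conn.executemany("INSERT INTO taxon VALUES (?,?,?,?)", [
    (1, "Kingdom", "Animalia", None),
    (2, "Class",   "Gastropoda", 1),
    (3, "Family",  "Helicidae", 2),
    (4, "Genus",   "Cornu", 3),
    (5, "Species", "Cornu aspersum", 4),   # one 'preferred' name for the snail
])
conn.execute("INSERT INTO common_name VALUES (?,?)", ("garden snail", 5))

def lineage(sci_name):
    """Walk from a scientific name up to the root, returning (rank, name) pairs."""
    row = conn.execute("SELECT rank, sci_name, parent_id FROM taxon "
                       "WHERE sci_name = ?", (sci_name,)).fetchone()
    chain = []
    while row is not None:
        chain.append((row[0], row[1]))
        row = (conn.execute("SELECT rank, sci_name, parent_id FROM taxon "
                            "WHERE id = ?", (row[2],)).fetchone()
               if row[2] is not None else None)
    return chain

print(lineage("Cornu aspersum"))
```

Common names hang off whatever point in the tree they name (a species, a family, anything), which is exactly why ‘Pea family’ can map to a Family while ‘garden snail’ maps to a Species.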

This gets reasonably complicated even if everybody agreed on what goes where.  There’s all sorts of messing around with sub-Families and supra-Classes and things like that on top of the basic tree structure. But of course people don’t agree. And even if everybody agreed now, new information about species’ relationships to each other is becoming available all the time – especially as genetic sequencing becomes cheaper and easier to do and cleverer ways of mining genetic information to reveal evolutionary history are devised. So as we learn more, species get renamed, merged, split, and relocated in the taxonomic tree. And it’s not just obscure species that most iSpot users will never see that get changed around like this – the common garden snail has now been given at least four different scientific names (Helix aspersa, Cryptomphalus aspersus, Cantareus aspersus, and Cornu aspersum) and which is ‘correct’ or ‘preferred’ has been the matter of debate, sometimes vigorous, over time.

We really don’t want to do this work ourselves. It’s a whole discipline in itself, and we can’t hope to duplicate or exceed it. And one of our central development principles is to build on or link to existing work, rather than duplicating effort.

Luckily, there are two important databases that have (some of!) the information we want.

The first is the National Biodiversity Network’s NBN Species Dictionary. This is as close as we can get to a complete, definitive list of species in the UK. Different parts of it (checklists) are maintained by different groups of specialists, and updated as those specialist groups decide. New versions are published roughly four times a year. (Although there’s a backlog of new information to check in for the latest update, so that’s somewhat delayed.) It includes a scientific name and an NBN species ID. This species ID can be used to access the lovely web services that NBN make available via an API.

It also has some mapping to common names for classification groups – it has a controlled list of about 160 names (e.g. ‘terrestrial mammals’, ‘higher plants’) that map on to the scientific names for points on the taxonomic tree but are (hopefully) comprehensible to ordinary people – or at least, ordinary people who start to get a little bit interested in nature. Even better, within each checklist there is definitive hierarchical information – what Order, Family, Genus etc each species belongs in – for each preferred scientific name. However, combining all these to give a single consensus tree is a huge amount of work. The Natural History Museum (who do a lot of the work looking after the Species Dictionary for the NBN) did this work once, but then gave up because maintaining it was so hard.

So the Species Dictionary can give us a definitive list of preferred scientific names (and also other scientific names and how they map on to preferred scientific names). It can also give us a broad-brush top-level classification for all of these. There is taxonomic hierarchy information, but not in an easily combinable form. (I want to have a look to see if we can use the checklist-level hierarchy data to support browsing at that level.) But the Species Dictionary doesn’t give us very comprehensive mappings between common names and scientific names. There’s some, but not a lot.

The second source of data is the Natural History Museum’s Nature Navigator. This is a lovely website for browsing the taxonomic tree, switching back and forth as you want between scientific and common names. Nature Navigator contains everything that has a common name. Some common names map on to scientific species names, but others map on to other parts of the tree – so ‘Pea family’ maps on to ‘Family Fabaceae’. It also contains complete and reasonably definitive hierarchical information (all keyed from the scientific names, rather than the common ones, but you can generate the common ones on the fly). This looks much more promising for our purposes, since it has so many more common names and complete, usable hierarchical data.

However (there has to be at least one ‘but’ in taxonomy, I’m learning): it only covers things which have a common name, and lots of things don’t – including things that people will want to spot on iSpot, such as insects and spiders. And it’s been frozen since the funding ran out in 2004, and things have changed since then. And the taxonomic data it uses differs from common UK usage in many important regards – for instance, the bird data is quite different to what most UK birders use.

In rough order-of-magnitude figures:

NBN Species Dictionary: contains 250,000 scientific names, which reduces to about 80,000 preferred scientific names for species when synonyms and so on are taken into account. Some patchy common name mappings. All classified into just over 100 ‘comprehensible’ taxonomic groupings. Updated regularly.

Nature Navigator: contains 140,000 common names, mapped on to appropriate preferred scientific names/points on the taxonomic tree.

Just to add to the fun, there is an international effort well underway to create a definitive list of all species across the world, called the Catalogue of Life, merging work by ITIS in the US and Species 2000 at the University of Reading. The aim is to create a globally-unique identifier for each species – a Life Sciences ID or LSID. Thankfully, though, we as a project can leave the coordination and mapping between that and the NBN Species Dictionary to others.

The Right Answer would be to include the Nature Navigator data as a checklist within the NBN Species Dictionary, which would fold all of the Nature Navigator common name data into the definitive Species Dictionary. That’s a fair amount of work, but may well be within the scope of what the Taxonomic support project within OPAL (Open Air Laboratories – the parent project of the Biodiversity Observatory) will do. We’ll be pressing them to do that.

Of course, that almost certainly won’t happen in time for the launch of iSpot in the Summer, so we’ll need a stopgap solution of some sort … somehow I think converting taxonomists, biologists and field studies experts to a loose, Web 2.0 folksonomy approach is going to be beyond the scope of this project!

Backup on XP (geeky)

I asked the crowd via Twitter (and thence Facebook) about backup solutions for Windows XP, and got several responses, plus a few requests to hear what I found out, so this is to summarise that.

The particular problem I want to solve is backing up on to a huge external hard disk. This post gets a bit long and techie, but the short answer is I went with NTBackup, the backup tool built in to XP.

As the canard goes, backing up is a bit like flossing, in that everybody knows you ought to do it regularly but most people don’t. Except people who’ve been burned in the past.

Luckily, my then-technophobic mother taught me that particular lesson at an early age, when she wiped my first ever full-scale program by accidentally knocking the power cable out of the back of the ZX Spectrum. (I was trying to get her to test how user-friendly I’d managed to make it, and so I also learned the valuable lesson that real users can create whole categories of problems you did not anticipate.)

Backup is one of those things that in my head is a known solved problem.  There are two interesting problems to solve – the main one is how to back up the minimum amount of stuff but still cover everything; a secondary one is how to structure the backups to make it easy to get things back.

The ‘back up the minimum amount of stuff’ problem is essentially the problem that the rsync algorithm solves: how to find the minimum amount of data to cover the changes between an original and an updated chunk of data.  So any GNU/Linux installation can use rsync as the basis for an automated (or any degree of semi-automated) backup system.

And Unix-like file systems have another property that makes the secondary problem easy: hard linking. This essentially means you have a single file on the disk, but appearing in more than one place in the directory tree (folder hierarchy, if you prefer).  This is really really useful for backup, because it means you can do a full backup – copying everything – in to one directory on your backup disk, and then subsequently do an incremental backup (just the stuff that has changed) to another directory, adding hard links to the full backup.  And you can keep doing incremental backups like that. The clever bit is that each time you do a backup, the directory looks like a complete copy of whatever you are backing up, but the extra disk space taken up is only the difference between that backup and the last one.  Even better, you can delete (unlink) arbitrary backups without losing any other data. So, for instance, you could create a backup every hour, and delete backups on a rota so you end up with backups every hour for the last day, every day for the last fortnight, every fortnight for the last few months, etc.

(If you don’t have this system, you have to keep everything between the last full backup and the last incremental backup, or you’ve effectively lost your backup.  This is very fiddly to get right, and is a common cause of problems restoring from backups.)
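Here’s a minimal sketch of that hard-link trick in Python – the same idea that rsync’s --link-dest option implements. It’s an illustration only, not a production backup tool (no error handling, no permissions care, and the function name and paths are mine):

```python
import filecmp
import os
import shutil

def snapshot(source, backup_root, name, previous=None):
    """Create backup_root/name as a complete-looking copy of source.

    Files unchanged since the `previous` snapshot are hard-linked to it
    (costing almost no extra disk space); changed or new files are copied.
    Deleting any one snapshot directory never breaks the others.
    """
    dest = os.path.join(backup_root, name)
    for dirpath, _dirnames, filenames in os.walk(source):
        rel = os.path.relpath(dirpath, source)
        os.makedirs(os.path.join(dest, rel), exist_ok=True)
        for fname in filenames:
            src = os.path.join(dirpath, fname)
            dst = os.path.join(dest, rel, fname)
            prev = (os.path.join(backup_root, previous, rel, fname)
                    if previous else None)
            if prev and os.path.exists(prev) and filecmp.cmp(src, prev, shallow=False):
                os.link(prev, dst)      # unchanged: hard link, no extra space
            else:
                shutil.copy2(src, dst)  # changed or new: a real copy
    return dest
```

Call it as snapshot(mydata, disk, 'monday'), then snapshot(mydata, disk, 'tuesday', previous='monday'): both directories look like full copies, but Tuesday’s only costs the disk space of what changed.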

If you’re a half-decent Linux geek, you can easily roll your own backup system with cron, rsync and a short shell script.  If you have a Linux box but that’s more fuss than you can be bothered with, there are umpteen Open Source graphical front ends to essentially the same system. These are of variable beauty and usability.

If you have a Mac, you get Time Machine, which has Apple’s beauty and usability built in to its interface, and the power/efficiency of the Unixy approach underneath. If you have an external drive to devote to it, it really is as simple as saying ‘Time Machine, do your thing on this drive’ and remembering to plug the drive in from time to time.  This is my dream backup system.

Alas, Windows XP doesn’t have this option. And my existing backup strategy (burn DVDs at pseudorandom times, keeping manual notes of what’s been backed up and what’s not) left a lot to be desired.

The problems run moderately deep, though. Windows doesn’t come with rsync (there are multiple ports, but you usually have to go half-way to a dual-boot system (Cygwin) to make them work properly), and it doesn’t really do hard links (actually, it can, but not in a way that’s simple and straightforward to the user, and so hardly any software does). It has its own system of flagging changed files (the archive attribute) which is fraught with problems.

So what’s to do?

The first solution suggested (thanks @andrew_x!) was to convert the Windows machine to a dual-boot system with Linux (e.g. Ubuntu), and use that to back up the Windows data. That has the mathematician’s appeal of reducing it to a known solved problem. If I wanted a dual-boot system anyway and planned to spend most of the time in Linux, it’d be the top choice. But I don’t (I have other machines with Linux on). Any backup regime that has ‘reboot into a different operating system’ as step one is unlikely to be pursued as rigorously and regularly as it should be.

The next set of solutions (thanks @elpuerco63, @hockeyshooter and others) is to buy some backup software. There are plenty, from EMC/Dantz Retrospect (which is aimed at people with several Windows boxes to back up) and similar server-based packages, to straight-up standard consumer packages like Symantec Norton Ghost, or Symantec Norton Save and Restore. (These two are the ones of the standard paid-for offline backup tools that Which? apparently rates as Best Buys.) All of these, however, cost actual money, which I am very keen not to spend – partly because I have very little spare cash at the moment, partly because it seems silly to spend money on something when there are good Free/Open Source Software solutions, and partly because it’d mean I couldn’t get a backup done this weekend.

There’s a plethora of back-it-up-to-the-cloud solutions. I wasn’t interested in any of those because I have:

  • a) 120GB to back up and a capped Internet connection,
  • b) some nervousness about sending every last drop of my personal data in to the network,
  • c) a degree of skepticism about the reliability of such services, and
  • d) a vague, woolly echo of Richard Stallman’s political objection to cloud computing – though this is usually balanced by a similarly vague, woolly echo of David Brin’s argument that a transparent society would be a good thing, and utterly outgunned by the siren call of Convenience.

Plus they cost real money for more than a few GB, and my first two objections apply. (If you do only have a few GB of files to back up, I can heartily recommend Dropbox – free for <2GB, syncs multiple machines and platforms easily.)

You also often get simple backup software bundled in with other things: Nero (the CD-burning package) apparently has a backup feature, and many external hard drives come with some toy backup software thrown in. Mine didn’t.

What I did manage to put my hands on, though, was NTBackup, the backup tool built in to Windows XP.  (In XP Home, it’s not installed by default – you need to get your original media and find and run NTBackup.msi in \Valueadd\Msft\Ntbackup.) It lives in Start | Accessories | System Tools.

It’s not world-class stuff: you can tell it was written for the original Windows NT 3.51.  Charmingly it defaults to writing the backup to A:\BACKUP.BKF (off the top of my head, I make it that I’d need over 80,000 floppy disks to back up my data, which would be a little tedious to insert). And the interface is almost wilfully ugly.

But (a) it didn’t cost me any more money, (b) it was to hand, (c) it has a handy option for backing up the system state (including the Registry), (d) it groks Volume Shadow Copy so can copy in-use files, and (e) it worked.

Scholarship of Teaching

Liveblog notes from a research-based symposium on the Scholarship of Teaching and Learning, 23 February 2009.

John Richardson introduces.  Starting point is always Ernest Boyer’s Scholarship Reconsidered. Boyer’s aim was to get administrators off the professoriate’s back. First time teaching considered as an activity for scholarly inquiry – brief chapter but influential.

Sue Clegg (Leeds Met), What do we mean by ‘theory’ in debates about the scholarship of teaching and learning?

Wants to muddy the waters, and pose some questions rather than supplying answers. Focus on link and theoretical frameworks to/from your discipline-of-origin. (I’m some way from mine!)

Pat Hutchings & Mary Taylor Huber (paper in special issue of AHHE) – theory is “the elephant in the room”, question of quality, basis for legitimacy claims – not a neutral question. Mere descriptions of practice deprecated. Theory is at the higher level in disciplines and gets you the most credit (in sociology, at least). SoTL is highly democratic (in that all academics can do it), but researching it is becoming professionalised – journals are now just as competitive as any disciplinary ones. Usually a claim for superiority for one’s own version of theory – especially the approaches to learning. Graham Gibbs: “we’ve cracked the theory”, now just need to tell people; she doesn’t subscribe to that.

Theories are variable and not unitary/singular; tied to fundamental ideas about epistemology and ontology. Look to the work the theory is doing for us (which is a question that depends on what your epistemology is, of course).  The complexities of HE, students, etc, mean that it’s ‘highly unlikely any one form of theory will suffice’ because a singular theory limits the scope of our understanding.

(Tension between eclectic/multi-theoretical and depth/rigour.)

‘Trading zone’ metaphor; not judgemental relativism. ‘Approaches to learning’ lit illuminates questions but doesn’t exhaust them; people in that tradition never claim it does. Maryellen Weimer on reading the lit within disciplines – see general pattern and singularity. Though it is ‘more likely to produce insomnia than enlightenment.’ But we overestimate the difficulty of talking across disciplines.

Two arguments about disciplinary epistemologies and theories of SoTL. First about limits – philosophical point. Second socio-cultural about shape of disciplines.

Limits – humanities/social science easier to draw on for accounting for the messy human stuff of teaching and learning. Experimental natural sciences frustrated by this messiness, desire for evidence stronger. Methodologies and approaches ‘are designed for dealing with different sorts of stuff’, because of the nature of the things being inquired into. In (some) natural sciences you can actually achieve experimental closure – very rarely the case, except trivially, in the social world. So there are disciplinary limits to using disciplinary approaches for SoTL. More controversial because of Governmental drives for evidence-based policymaking, which usually means RCTs. (Has written a lot about it.)

Shape of disciplines – tend to move, break, split, emerge. New interdisciplinary areas in C19th/C20th; C21st ecology, globalisation, indigenous knowledges – the big challenges come from outside the Academy. Particularly in SoTL, challenges come from students. Discovery science is also in deep trouble too, though. Giddens etc on Mode 1 & 2 knowledge production. SoTL should aspire to be very broad in its approach.

Theory/practice links – Donald Schön on epistemology of practice. Positivism from industrial/military complex is useless for scholars, need to reinvent. Worth re-reading, pose different questions now. Gap between hands-on doing and abstract theory – law-like explanatory frameworks (‘approaches to learning’). Knowledge is created in the concrete practice and cannot be simply disconfirmed by abstract science since its knowing depends in large part on retroduction from practical experience. Andrew Collier, opera singing example – need to know about the mechanisms of voice production, but the act of singing is a visceral knowing.  Teaching (and other scholarly practices) are like that – you know when it works. (! interesting epistemological claim, verging on mysticism)

Tacit knowledge: SoTL has a tension in applying standards (peer review, evidence) as in the scholarship of discovery – gaps remain which are not resolved. Tricky questions about the variability of applying standards – action research, teacher research and SoTL traditions all differ. Other professions have had to wrestle with this too. The problem of the tacit, and whether and how it can be represented.

SoTL’s challenge to teachers is to improve through evidence – a parallel with discovery science. How to give scholars who teach the same status as those who research. There is evidence that practice can improve without articulation – teachers who don’t reflect can improve more; not everything has to go through the loop of reflection. A real problem here. If the purpose of SoTL is to improve teaching, scholarship is not the only route. So not sure that more theory will necessarily improve practice; it might not be very good theory.

Papers aren’t very good because they’re “only descriptive”, not theoretical. (Me: theory isn’t the sine qua non, but you do need analysis.) Concrete/abstract, masculine/feminine, dualisms present in the debate. We should tolerate a little theoretical promiscuity and generosity before we start theoretical turf wars. We are a very young field. Creativity will come from the gaps.

Discussion

For some people, reflection doesn’t improve practice. Should we worry that it’s a distraction from the thing you’re trying to improve? We know our students fake it – they simulate reflection. And we do it too – we give people what they want. Ask students to reflect ‘and it doesn’t have to be true’, to create space to write differently.

(Me: Socratic question of the unexamined life not being worth living – and my view that it’s more that you don’t know whether the unexamined life is worth living or not. And academia is fundamentally about knowing, so we have to examine life. Whether it makes practice better or not is in many ways immaterial.)

Genius: the born-or-made argument, with practice at the root. (Malcolm Gladwell’s 10,000-hours-to-be-an-expert idea.) We need to look at practice, at the tacit. Renewed interest in craft practices. Social critique of the devaluing of craft – culturally important too. If you ask students or colleagues, they will tend to agree, but fail to articulate why. In SoTL we reward not the good teacher but the one who can talk the increasingly hegemonic language of SoTL. For some colleagues it’s not popular; not that they’re dinosaurs, but a theory/practice gap.

Mick Healey, Exploring the Nature and Experience of the Scholarship of Teaching and Learning

Breslow et al 2004 – ‘One of the key ways to engage colleagues in their development as critical and reflective teachers […] is to stimulate their intellectual curiosity’ – appeal to professionality, not ‘it’ll make you a better teacher’.

Boyer’s four scholarships – discovery, integration, teaching (in the centre), application.

Activity – scan a list of statements about the scholarship of teaching, and rate them (from Healey M, 2003). Raises the whole issue of the distinction between SoTL, scholarly teaching, excellent teaching, and so on. Tried this exercise in several contexts with people who are in the field/interested. Generally not a clear consensus – but some clear trends. 90% liked Martin et al (1998), 75% liked Healey (2000a, b) and Cross & Steadman (1996). <50% on some others.

Levels of engagement in pedagogic investigation (Ashwin and Trigwell 2004 p122) – purpose, evidence process, results: 1 to inform self, 2 to inform group, 3 to inform a wider audience. All are SoTL, but researchers are at level 3, while a lot of SoTL is at levels 1 and 2. Not totally agreed with. Mick Healey reckons going public – coming out – is the key element. Rare to have a positive conversation about teaching, but beginning to change.

Disciplinarity – pragmatic about a discipline-based approach. (Healey 2000) If you can talk their language, they’ll listen. So in geography, the use of case studies worked well to engage colleagues. Now starts with a case study/example then introduces theory; more effective that way. (!) Start where the learner is. But the Anthropology Network was set up to re-capture SoTL in their field into their language, using their methods to investigate teaching.

Institutional cultures and reward systems vary, so SoTL is experienced differently. Also varies by nation – the literature cited on SoTL is different in North America (US) vs UK/Europe/Australasia. Large initiatives in the UK: FDTL, CETLs, NTFs, HE Academy.

Four ways to link teaching and research: 1 do research on learning; 2 learn about others’ research; 3 learn to do research; 4 get students to do research.

Diagram – based on Healey 2005 – Curriculum design and the research-teaching nexus –  Two axes: students as participants vs audience, emphasis on research content vs processes and problems. Differentiates four approaches research-tutored (Oxbridge model), research-based, research-led and research-oriented. Inclusive diagram, all teaching fits in there somewhere.

Graham Gibbs – the most significant of the processes for enhancing quality is the reward for teaching excellence (1995). Then last week’s THE, HE Academy (2009): 92% thought teaching should be important in promotion, 43% thought it was.

Another exercise – nine case studies. People’s experience of SoTL varies. University of Sydney, very top-down – Scholarship Index – distribute a 2-3% topslice on the basis of a points score, based on e.g. 10 pts (recurrent) for a qualification in university teaching, 2 points (one-off) for a refereed article. It is changing behaviour. Contrast with Liverpool Hope, where they got interested people together over lunch.
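(Aside: the Sydney scheme as described is essentially a proportional payout formula – earn points, then split a fixed pot pro rata. A minimal sketch, where the unit names, point totals and pot size are all illustrative rather than from the talk:)

```python
# Hypothetical sketch of a points-based "Scholarship Index" topslice,
# along the lines described for the University of Sydney scheme:
# units accumulate points (e.g. 10 recurrent points for a teaching
# qualification, 2 one-off points per refereed article) and a fixed
# pot is distributed in proportion to points scored.

def scholarship_allocation(points_by_unit, pot):
    """Split `pot` across units in proportion to their points."""
    total = sum(points_by_unit.values())
    if total == 0:
        return {unit: 0.0 for unit in points_by_unit}
    return {unit: pot * pts / total for unit, pts in points_by_unit.items()}

# Illustrative example: three departments sharing a 20k topslice.
points = {"History": 40, "Physics": 25, "Nursing": 35}
shares = scholarship_allocation(points, pot=20_000)
```

The incentive effect noted in the talk follows directly: every extra point shifts money towards the unit that earned it, which is why the scheme “is changing behaviour”.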

SoTL is contested and differently experienced. Need to be aware of that. For SoTL to be taken seriously it needs to be recognised in the promotions/reward processes.

Carolin Kreber (Edinburgh), Conceptions of SoTL: Envisaging a ‘Critical’ scholarship of teaching and learning

We tend to think of SoTL as the scholarship of discovery in the domain of teaching and learning, and that’s problematic.

Conceptualising SoTL – a socio-cultural model would predict that different disciplines would influence this. (Huber & Morreale 1997). Often SoTL conceptualised as pedagogical research; rarely as ‘learning about teaching’ and sharing what one has learned in less traditional ways.

Talk about teaching has increased, greater visibility. The dominant agenda is the valuing of research over teaching –  the structural problem is that this is not addressed because SoTL seen principally as research.

Lee Andresen (2000) on features of ‘scholarship’ – it’s what you’d apply to any proposition in the field of research or theory.

Aristotle’s three intellectual virtues – episteme/theoria (science, formal discovery of ‘truth’), techne (craft, what makes for best practice), phronesis (ethics). (Fiona Salomon 2003) Techne vs phronesis – techne aimed at establishing effective means to chosen ends; phronesis is discerning the desirability of ends.

Episteme – techne is the action-research/practice-research challenge, very hard. But maybe phronesis helps select the theory from episteme used in practice/techne.

‘Education is at heart a moral practice’ (David Carr 2000). Six ‘universal’ standards for scholarly performance (Glassick et al 2007) – but the goals themselves need to be examined too.

SoTL model: involves content, process and premise (critical) reflection on: teaching and assessment strategies; student learning; educational goals and purposes – with the aim of identifying and validating knowledge claims in these three domains. (From transformative learning theory: Mezirow 1991; Kreber and Cranton 2000.) Process reflection – instrumental learning (techne), communicative learning (linked to phronesis). Premise reflection – phronesis – leads to emancipatory learning.

The scholarship of teaching is concerned not so much with doing things better (‘techne’) but with doing better things (‘phronesis’). – Lewis Elton 2005.  (Very true in new technology context.) SoTL is/should be about questioning what we’re doing.

A ‘critical perspective’ – asking ‘Why do we do the things we do, this way? Is there a need to change?’. (Barnett and Coates 2005 on scholarship of curriculum … which seems separate from SoTL, shouldn’t be).

Implication for SoTL practice: address what students learn, and why they learn – as important as how they learn.

“In a time of global turmoil, what transcendent purposes will this ideal academy serve? In a time of great wrongs, what injustices will it right?” – David Orr 1990

Crucial question: at this time, in this context, what is it that deeply matters to us with regard to the role of the university in society and the education our students receive?

One might argue that what is ultimately in the interest of society (and learners) is the achievement of learners’ sense of authenticity, and a move towards greater authenticity. Students moving to find their own voice, critically engaging, and making it public to engage in critical dialogue with their peers.

Authenticity and motivation: To do what is rewarded (ext); To do what is personally rewarding (int); To do what is good (int).

Being authentic is a) to get clear about what one’s own deliberations lead one to believe, and b) to honestly and fully express this in public places. (Guignon 2006). Scholarship is (should be?) like this. Many ways of going public in SoTL other than refereed journal articles and conferences. E.g. critical engagement with colleagues.

John Nixon, Melanie Walker are doing this stuff but you don’t hear about it.

(Note to self: this makes SoTL an unabashedly political question, and a moral one. Don’t know much about politics as an academic discipline beyond intersection of history, philosophy and economics.)

Discussion

Doing what is in the interests of students – may require a teacher to go against the dominant culture of the department. Often as a teacher you are compromised. Is a problem; so need these more critical discussions. The postgrad teaching programmes can contribute to this.

Distinction between a good scholar and a good teacher? Value in both practice and theory. As soon as we talk about scholarship, not happy with the scholar having all knowledge based pre-/non-theoretically on experience. To qualify as scholarly/scholarship, it must be informed by what we’ve come to understand. We often don’t look at the literature on the purpose of HE – not consulted widely, but it is interesting. Also important to go public in some way; to engage in critical dialogue with peers where our knowledge can be contested. (Key point of epistemology in a general sense – James would be going on about Popper right about now.)

Challenge to us as a community to talk about the purposes of our institutions. Rather than claiming our authority from having happier students, happier administrators (more qualified students out), we need to engage in a moral activity, phronesis, which might not make them happy. There have been periods where people have argued very hard about curriculum – e.g. the impact of feminism – not taking on the institution but asking what we want to do with this group of students. These debates have gone on.

Contrast between access – here we have what we teach, we’ll make it possible for you to access it if you’re non-standard – and inclusive – where we reconsider what we teach to include more people, teach differently and different curriculum.  E.g. in South Africa, very sophisticated debate about this because of the situation.

Authenticity – problem where the teaching has two tiers (as in OU) – quite commonplace to have course team talking to each other and developing theories of what they’re doing, but less authenticity than perhaps those more constrained by actually meeting students and seeing how they respond (ALs). Institutional problem of authenticity for the OU.

(If SoTL is/should be about challenging curriculum … hard to have that embedded in reward processes. Resisting the co-option/appropriation of SoTL by management – e.g. points-based schemes – is bound to create resistance.)

Plenary discussion

Ongoing question – is SoTL research? For John Richardson, it’s research when I’m doing something I wouldn’t normally do. Students would know if I was doing teaching, and they’d know if I was doing something funny – e.g. giving out a questionnaire. And when it’s research, you have to consider the ethical dimension, standards and supervision. Distinction – what’s the purpose? For SoTL it’s to enhance the teaching. For research it’s to improve understanding, which may not make things better for students. Parallels with clinical research – benefit to future people. But the problem with that is it’s all about techne; equating the empirical with research. But agree that empirical research does need ethical analysis. There is probably more unanalysed SoTL data (qualitative particularly) than can ever be analysed.

So John is not a scholar of T&L, that’s what teachers do to enhance their own practice, and university teaching and learning as a whole.

Issue with the appropriation of SoTL by university management. Tension between recognition (double credit for SoTL journal articles) and the moral, political challenge to curricula, which isn’t going to get you recognition because you’ll ipso facto have to be taking on institutional power. And if the scholarship of teaching is just the scholarship of discovery you’ve lost a whole aspect. But done better in the US. (Cornell getting the NY Times ‘best college’ award for writing in the disciplines, overtly a teaching intervention – it was Ivy League, copied elsewhere; would like that to happen in the UK elite.) Over 200 colleges signed up to CASTL, many big ones. Conferences in the States – it’s not a badge of weakness to say you privilege teaching. Our problem (UK) is the single source of funding (overwhelmingly government). Envy of the Liberal Arts College tradition – they take teaching seriously.

Social media at the OU

Notes from OU eLearning Community event, 17 February 2009

Sarah Davies and Ingrid Nix are organising the events for the first part of this year.

New eLearning Community Ning site.

Social learning objects and Cloudworks – Chris Pegler

Juliette Culver is the developer of Cloudworks.

Chris draws a distinction between ‘social object’-oriented networks – delicious, Flickr etc, where there’s a (learning?) object – and more ‘ego-centric’ networks where it’s people connecting to people – e.g. Facebook, LinkedIn. Engeström claims that “social networks consist of people who are connected by a shared object”. Hugh McLeod: “The object comes first”. Martin Weller is along these lines too. You need something to talk about.

Cloudworks – supports finding, sharing and discussing learning and teaching ideas, experiences and issues. In alpha at the moment. Working well at conferences/events to use as a site for storing discussion and debate.

Wants to see  more social conversations around reusable learning objects (RLOs) – metadata.

The OU in Facebook – Stuart Brown and Sam Dick

Almost all of the room are on Facebook; fewer are fans of the OU page; only 3 or so have the OU Facebook app.

8.5m unique users (accounts) in the UK. Top or second-top site at the OU. About 5,000 studying at/graduated from the OU. Big report – New Media Consortium/Educause Horizon Report – “Students and faculty continue to view and experience technology very differently”.

Many motivations for OU in FB. Open University page.

Open University Library – set up a Facebook page. A lot of their Wall traffic (the biggest focus) is students looking for others on the same course. Is it a failure of our official web presence/support systems? Or is it understandable that they want a non-official/personal route? Survey of students – bimodal: some really keen on FB, some really hate it. The Forum gets traffic too, building up, started by students. Analytics (Facebook): 66% female, 34% male. (Meta-comment: Facebook does age segmentation 13-17, 18-24, 25-34, 35-44, 45+! Rather more youth-focused than many.)

Future plans: staff profiles, resources, helpdesk online chat, find/recommend resources. The OU Library already has an iGoogle gadget for searching the catalogue; want to embed it in Facebook.

OU profile page – (possibly) biggest UK university page, >15,600 fans.

OU Facebook apps: My OU Story (283 users). Course Profiles (6,222 users – something like 5% of current students, I’d guess). Course Profiles helps with the “who’s studying/has studied course X” issue – you can specify previous courses studied, current ones, and future plans. Each course gives you: course details, find a new study buddy, your friends on the course, recommend to a friend, OpenLearn content, a comments Wall. My OU Story – mood update; gives you a mood history graph too. Post a ‘Story’, which is a comment on how you’re doing.

Useful page showing all places where the OU wants to have a conversation with people – i.e. social networks with an OU presence: Platform, OU podcasts, iTunesU, Facebook, YouTube, OpenLearn, Twitter, Open2.net, Course Reviews.

Data from Facebook apps is available for analysis … Tony Hirst is custodian (of course).

OU online services have a coordinating set of pages.

Setting up a social community site (Ning and Twitter) – Sarah Davies

Again with the division of social networks: object-centric, ego-centric, white-label.

Object-centric: Flickr, delicious, StumbleUpon, digg, imdb, LibraryThing, Meetup, Second Life, World of Warcraft. Ego-centric: MySpace, Facebook, Bebo, LinkedIn. White-label: Ning, Elgg. But the categories are blurred.

Review of typical features of sites. Analysis of sites as communities of practice – Lave and Wenger – peripheral (lurker), inbound (novice), insider (regular), boundary (leader), outbound (elder).

Twitter overview. Tag tweets with #elcommunity to appear on eLC Ning site.

Ning overview. Demo of new eLearning Community Ning site. Originally set up for talk for ALs on Web 2.0 tools.

Work/social life mix. Intrusion/time intensity. Balance/tradeoff between VLE/OU-hosted stuff and external services.

Video is rubbish

I’ve had an idea in the back of my head for ages for a post on how fundamentally rubbish video is as a medium on the Internet. (Rough outline: it’s not the quality/bandwidth/storage capacity issue – that’s a problem still, but will fade. It’s fundamental to the nature of high-intensity visual media. You can’t skim. Reading speed is way higher than spoken speed. Audio suffers similarly but you are usually doing something else while you listen. In an attention economy, video is far and away the most expensive format. Lauren Weinstein, writing in RISKS earlier this year, makes the contrary case that you need video to capture subtleties of expression.)

I was prodded again by Martin’s recent discussion about David After Dentist, the latest viral video. (Outline notes: I think that what this shows is that viral stuff, especially videos, are the antithesis of what higher education is about – very surface, very little consideration or thought required.)

But now I’ve been prodded by some solid proof that video is terrible – just take a look at these videos of me on YouTube.

Here’s me talking about my talk last week on Scholarly Publishing 2.0:


Here’s me talking about the Biodiversity Observatory late last year:

And here’s me saying Learning Design is going to be big, five years ago at a Lab Group meeting:

On the plus side, these are all good ideas. (Latching on to LD five years ago looks prescient, though I hadn’t grokked that the thing that would get it to scale was to relax the stringent standards thing.)

On the negative side: it takes an awfully long time to grasp these ideas from the videos. That’s partly an artefact of my dreadful presentational style (I’m aware of my propensity to um and ah, but these make it painfully obvious – and note to self: look at the camera, dum-dum, not shiftily all over the place – and letting that experimental beard be captured for posterity was a terrible mistake). But it’s largely because video is such a slow medium.

Scholarly Publishing 2.0

I gave a short talk on the future of scholarly publishing at the OLnet/OU “Researcher 2.0” event last week, which I liveblogged in two parts (part 1, part 2.0).

You can see my slides:

You can watch a video of me talking about what I was talking about:


You can read Gráinne Conole’s liveblog of me giving the talk, which is part of the Cloudscape covering the entire event.

And … you can read this quick condensed text version: I argued that scholarly publishing is what scholars do when they make things public. I discussed some of the dramatic changes underway. I argued that they are quantitative (more and faster) rather than fundamental changes of type – but of course a quantitative shift on this scale is in itself qualitative. Determining what’s important and high-quality in the context of this information explosion is hard, but that is essentially what peer review – broadly considered – is there to do. The Open Access movement is hugely important in social justice terms, but in terms of enabling access for researchers at well-funded institutions it’s small beer. (Though it’s worth mentioning that there’s evidence that open-access material gets cited more, which is (a) a good thing, and (b) will get you REF points.)

Researcher 2.0 part 2.0

Further liveblog notes from the Researcher 2.0 event (see also notes on part 1).

(Interesting meta issue about blog vs Cloudworks. I don’t want my notes behind a login/search wall, I want them on Google! But Gráinne is doing an excellent job liveblogging there too. And maybe my notes aren’t so useful on a blog. Comments welcome! UPDATE: I’d got this wrong, it’s due to a bug. Cloudworks is *supposed to be* readable by everyone, indexed, the lot – you only need a login to post – but at the moment new Clouds/Cloudscapes come up login-only.)

(Another meta issue is the multiple channel management.  It seems I can do two, possibly three, but not four and definitely not all five – f2f, Elluminate, blog notes, Twitter, Cloudworks – and still stay sufficiently on top of things to follow it. Especially as Elluminate has the whiteboard, the audio stream, the chat, and the participant list all in one.)

Martyn Cooper – Research bids 2.0

Research bidding support – some is the same for experienced and novice bidders (process support, consortium negotiations, budgets, reviews of drafts, internal sign-off); novice bidders get extra (advice, confidence).

OU process based around the RED form.

Process – idea, workplan, consortium, bid, negotiate roles, set budget (often iteratively), final draft, sign off, submission.

The relationship is formed during the bid process; you will work with these people for years afterwards (if you succeed).

Communication types – peer to peer, document/spreadsheet exchange, negotiation, redrafting and commenting, electronic sign-off and submission.

Most researchers could get more successful bids and be able to run more projects if they had more and higher-quality administrative support. Web 2.0 technologies could have a role in providing that support. However to date we under-use them.

At what stage do you make bids open to the world? Is the web 2.0 attitude affecting this? Martyn very happy to do that – he always has ideas in his back pocket. Has seen ideas taken up by others, whether by coincidence or copying is hard to say. Commercial partners keener to protect foreground knowledge and IPR, so perhaps harder.  But would be happy to do whole process on a public wiki.

Shailey Minocha (Shailey Garfield in 2L) – Second Life research

3D virtual world – http://gallery.me.com/shailey.minocha#100016

Much more human environment than a 2D one; a real sense of being there. No story to them, it’s not a game; you can design it yourself.

Students found it difficult to critique/peer review each other’s work. Attributed to a lack of socialisation, lack of knowing each other well enough. So decided to get them to use 2L to provide opportunities for that.

Not much about how you should design learning environments in 2L.

2L to support research: meetings, virtual collaborations, seminars, conferences and shared resources

2L as a platform for conducting research: conducting interviews, observations, evaluate prototypes of concepts and designs, bringing in real data and developing simulations.

PhD supervision meetings and research interviews – runs regular meetings in 2L.  Real sense of visual presence and a sense of place. Large pool of participants. Also can keep transcript & audio – no need to do transcription.

Sense of realism in 2L which is hard to match in other environments – BUT steep learning curve (vs Skype, Elluminate, Flash Meeting), and demanding system requirements.

Question: are there extra issues in finding participants in 2L? Yes. Issues about the avatars; you don’t know who is behind them. Let the person fill out a form through a normal email process first.

Kim Issroff – Business models for OERs and Researching Web 2.0

Definitions

Business model – framework for creating value … or, it’s how you can generate revenue.

OSS business models: Chang, Mills & Newhouse, about how to make money. Stephen Downes’s models for sustainable Open Educational Resources – a distinction between free at the point of delivery and the cost to create/distribute. Models: endowment, membership, donations, conversion, contributor-pay, sponsorship, institutional, government, partnerships/exchanges. Clarke 2007 – “not naive gift economies”.

Intuitively, go for resources being free but charging for assessment.

Grant applications increasingly ask for business models/sustainability/how you carry on afterwards.

Implications – for design, for how to engage. Differences between OSS and OERs as models. What happens when we get to an OER saturation point? (I suspect it doesn’t exist – too much out there already, but still worth putting new stuff out.) Can we quantify the social value rather than the economic value?

Take a trainful of people, see what each person is doing in terms of access to technology, to get a handle on everyone, rather than a minority we over-research.

Two thoughts: how much difference does the business model make? Is a financial business model appropriate for an educational organisation?

(I see a strong link to Kevin Kelly’s Better Than Free essay: eight things that are ‘better than free’.)

Can free things end up more expensive in the end?

Robert Schuwer from OUNL: their experience of subscription models – paying for extra support, books and so on. Inspired by the mobile phone world: the hope is that once people have the monthly payment set up, they forget to unsubscribe and keep it up year on year – €25 a month.

Chris Pegler – OER beyond the OU

What OER offers: global opportunities, goodwill among researchers, IPR vanquished, unlimited reuse potential. Has highlighted Creative Commons – demolish IPR obstacles. Most funded repository projects flounder – or even fail – at some stage on IPR. But Creative Commons to the rescue!

Li Yuan’s CETIS whitepaper on OER is key. List of 18 current OER projects ‘out there’, from MIT OpenCourseWare, GLOBE (includes MERLOT and ARIADNE etc), JorumOpen, etc. These are not quite what you’d envisage – some are e.g. mainly research-focused.

Interesting HEFCE/HEA/JISC call on OERs: £5.7m pilot, possibly £10m yoy in the future. Chris has a £20k individual bid – making a 30-point course using web 2.0 tools around OERs. Also an NTFS bid on RLOs and how we embed them in the academic practice courses at three institutions.

Questions around metadata – especially automatic metadata.

Patrick

This was more presentation-centric than perhaps ideal; but much was captured on video, Twitter and Cloudworks. So next: small groups producing a quick pitch for a bid about Research 2.0.

Researcher 2.0

Liveblog notes from Researcher 2.0 event – sponsored by the Technology Enhanced Learning research cluster (part of CREET) at the Open University, and the OLnet project.

Patrick McAndrew – intro

True Researcher 2.0s – weather not a barrier; see what technology to employ. So multiple channels: Elluminate, Twitter, Cloudworks. Video and audio capture. And face to face in the room!

The Cloudworks site for it, and remote people coming in via Elluminate – http://learn.open.ac.uk/site/elluminate-trial/ (if you have an OU login; then follow the link Open Learning network trial) OR http://elive-manager.open.ac.uk/join_meeting.html?meetingId=1232970332920 (if you do not have an OU login). And Twitter using #olnet as a tag. Also professionals doing video, and amateurs with Flips and other videocams. Hope to learn from this for future workshops. Not fully planned out (but very 2.0/lazy planning stuff).

Patrick – Researcher 2.0: Research in an open world

Open world, many users – what does it mean? How does our technology link out to the many users? Came up for Patrick in the OER world, but true in many areas. Transforming to a world where there are many more options for what we can do.

How do we change to network with more people, network as researchers in a new way. Draw in people, use their willingness to co-operate. Gráinne opened up in a f2f workshop with a Twitter request for ideas to flow in, worked really well.

Also new ways to get data in – video, audio capture. But what to do with the data? Need to make it part of the routine. Who does the research? Distributed models.

Want to find out: What is Researcher 2.0, What are the big questions?

Researcher 2.0 – discussion about what it means.  Not a Microsoft product, like Web 2.0. Is snappy – new improved way of doing research, using better ways.

Discussion broke up, and went into Cloudworks en masse to add comments. Many new clouds and comments and so on. Managing multiple channels and new technologies is clearly a challenge, even for this roomful of fairly techie people.

Gráinne Conole – Exploring by doing: Being a researcher 2.0

Personal Digital Environment – like a PLE. Technologies used on a daily basis. Crosses the boundaries of learning, work and research. Increasingly, if it’s not available on Google, it doesn’t exist – so what’s the point in keeping it locked into print-only?

Mentioned 2800 people signing up for online Connectivism conference – of whom 200 really active. Very lively, multiple channels. George and Stephen contacted people casually and asked for an hour-long session.

Changing landscape: a step-change over the last few years.

Reports which encapsulate things:

  • NSF Fostering Learning in the Networked World.
  • The Collective Advancement of Education through Open Technology, Open Content, Open Knowledge (Iyoshi and Kumar)
  • EU review Learning 2.0 Practices (ipts)
  • The Horizon Reports annually

Changing content. What does it mean to be more open? Distributed dialogue makes it harder to attribute ideas. Especially group consensus. Will need to change.

Mediation: co-evolution: Oral, symbols, technology-mediation.

Thinking differently: OU Learning Design initiative, Compendium/CompendiumLD/Cohere, Cloudworks, Pedagogy schema, OLnet.

The vision underpinning OLnet: analysing the cycle of OER development, and who’s involved. What tools and schemas do (could?) people use to select, design, use and evaluate open educational resources?

Discussion: How do information resources fit in? Issues of quality? Need to develop new kinds of digital literacy and competency. Not just using Google, but how we use it. How do I make judgements about what I find? Share practices. Different in different disciplines? For computing, the ACM Digital Library is the information repository for that community; Google is merely a nice addition.

Challenge for the OU classic course-in-a-box; Tony Hirst’s uncourse model is right up the radical opposite end. Martin Weller noting that his journal publishing has gone down as his blogging has increased. There are major issues here about what we consider to be quality. How do blogs compare to articles? Depositing your articles in open access places increases citation count. Not just communicating with the public – it’s more becoming part of communities that are attentive to things you’re saying, which gets your name/reputation recognised. Concern that it’s transient – forget it. Have to foster the skills of discernment in our students, particularly.

Martin Weller – Digital Scholarship

YouTube video of Guitar90 kid playing guitar … got 55m views.  We are all broadcasters now.  A fundamental change in society in general, and education too.

You can’t predict what will be useful to people.

iCasting – new coinage – simple stuff you can do from your desktop, you don’t have to be an expert. Anyone can create YouTube movies, blogs, slidecasts on SlideShare. Blog is the hub of all this: aggregate your content and share it with other people.

What about quality? Caravan – you have a certain amount of money to spend on a holiday.  One holiday in the Caribbean is about the same cost as 30 holidays in a caravan – trading quality for quantity.

The power of sharing – getting views in from Twitter. Passed on ideas from one to the other – it's the sort of reuse we always wanted from learning objects.

What is the fundamental aim when you publish something? We’ve lost that aim and started thinking it’s about getting RAE credits. But ultimately it’s about sharing ideas. Martin’s experience is you get much more feedback and benefit from sharing through the blogosphere and other online routes than from locking stuff away in a printed journal. Blog gets 1000 views, lucky if a journal article gets 20 readers.

The cost of sharing has disappeared, but we act as if it hasn’t. Example of mixtapes: you had to buy physical tape, spend ages with the buttons recording each song, then had to give the tape away. Now to share music you can do it via iTunes, share URLs through lots of services. No more time, effort to share.

What to do? Find your inner geek. You don’t need to go on a training course to learn how to use Flickr or Slideshare, just use it. (I’m starting to not be so sure about that for people in general, based on evidence at this meeting).

Have fun! YouTube video from JimGroom pretending to be an Ed Tech survivalist.

And Just Share – RSS, OPML, etc. Make sharing your default mode. Currently writing a 10k article – instinct is to just post it on his blog to get more readers. But then no formal publisher will take it; and with REF credit in mind he wants it there. So a tension between sharing and getting cash.
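To make the "Just Share" point concrete: an OPML file is simply an XML outline of feed subscriptions, so sharing an entire reading list amounts to passing around one small file. A minimal sketch in Python using only the standard library (the feed titles and URLs are placeholders, not real endpoints):

```python
import xml.etree.ElementTree as ET

# A tiny OPML document: an outline of RSS subscriptions.
# The feed titles and URLs below are invented for illustration.
opml = """<?xml version="1.0"?>
<opml version="2.0">
  <head><title>My reading list</title></head>
  <body>
    <outline type="rss" text="Example blog"
             xmlUrl="http://example.org/feed.rss"/>
    <outline type="rss" text="Another blog"
             xmlUrl="http://example.net/rss.xml"/>
  </body>
</opml>"""

# Parse the outline and list the subscribed feeds.
root = ET.fromstring(opml)
feeds = [(o.get("text"), o.get("xmlUrl"))
         for o in root.iter("outline") if o.get("type") == "rss"]
for title, url in feeds:
    print(title, url)
```

Any feed reader that imports OPML can consume a file like this, which is what makes "sharing your default mode" cheap in practice.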

What can your university do for you? Provide support and guidance.

Danger of not doing it? Universities need to look relevant. Remember the Viz Pathetic Sharks, who couldn’t swim properly, were scared of water. Universities in danger of looking like that.

Current project: Year of Future Learning (on his blog) – a bottom-up way of trying to do distributed research. Anyone can join in. Multiple modes, multiple ways to contribute, support/facilitate discussions.

Is sharing the same as making public? Martin says share earlier in the process – at conceptual stage and then throughout, not just publishing at the end.

REF has implications for what we share as researchers, but also as teachers. What do we do? Easier when established; earlier in the career you need to play the game a bit more to advance. And easier if you're in the right domain (IET) where part of the day job is to explore this. Critique on blogs is similar to expert peer review, but also different. Issue of saving it for posterity – 25 years ago, a paper document. Failing to leave a reliable paper trail if everything's in blogs – not preserved in the same way. (!) Not saying burn all journals, but the peer review process 'is over-rated'. You can publish anything on your blog, but if you're trying to build up a serious reputation, you'll be taken to task for what you put up. 'Publication process is designed to remove anything interesting or engaging or challenging' (not universal agreement). Example given by Giddens at his Pavis Lecture – the Internet can be empowering and democratising versus trivialising.

Eileen Scanlon – Digital scholarship in science

Interest came up in the MSc in Science Studies, Communicating Science course. The gold-standard community is having a radical shift in how it behaves due to new tools. Main example of a transformatory tool is physicists' pre-print repositories.

Interesting perspectives on peer review – Nature did an experiment on open peer review. So not just small scale journals.

Many recent articles in the June 2008 issue of the Journal of Science Communication. Open Science. Eileen wrote a book with that title … which was about OU teaching practices, not this.

Recognition of e-science as a new way of doing things.

Zivkovic, science blogger – commentary piece. Predicted that the journal paper of the future will be a work in progress, with collaborative development. There are some very serious bloggers, based in major research institutes, discussing what's happening. Tola, science journalist – growth of blogging. Cozzini, e-scientist – massive investment in e-infrastructure (e.g. Grid computing), vast quantities of data for analysis. There are technical problems, and other challenges – but need some imagination to see new ways of working. This stuff is hard.

Proposal submitted to ESRC – understanding the changes in the communication and publication practices of academic researchers in HE. Christine Borgman's book on Scholarship in the Digital Age. Two case studies: one team in an e-science area. How has the landscape changed, what do people do? Now at a stage to see what people are actually doing, not looking at the rhetoric. Sub-questions about different forms of publication, how they relate to open peer review, how the i…

Doug Clow on Scholarly Publishing 2.0

No blog notes from me! But the slides are on Slideshare. One point from my talk: big barrier to going all-open is perceived esteem of publishing in particular named journals with particular named publishers. Big money at stake. Also change in who might sign up for OU courses, given that currently they get access to all our journals while they’re registered.

Learning and Teaching at the OU

Presentation by Denise Kirkpatrick and Niall Sclater.  Or is it a presentation? It’s organised as a Human Resources Development Course – it’s an Open Insights Expert Lecture – with sign up, sign in and all the details going on the internal staff Learning Management System.  And there are feedback sheets to complete too.  “The subjects covered were:  relevant to my present work, background interest only, possibly useful for future work, of no interest”.  If it’s not relevant to my present work then either I or the OU have a bit of a problem.

Being told it’s aimed at new staff … which is news to me; perhaps I misread the course information?  Networking opportunities over coffee later.

Denise Kirkpatrick – Learning @ the OU

Welcomes new staff. We take the quality of our teaching and our student experience extremely seriously, we do it well but always want to try to do it better. QAA audit coming in March.

(Tony Hirst would be pleased to see the RSS logo prominently on her Powerpoint title slide. And I also note that it’s not using the OU Powerpoint template.)

Hard to draw a line between technologies for learning and teaching and those for the rest of your life; the line is blurred. But focus here is on learning and teaching.

Sets out a generational view of technologies: Baby Boomers, GenX, NetGen/Millennials. Digital natives, who grew up using technology – it's not seen as something different. New generations approach technologies in a different way. We as staff don't come at the technologies in the same way as our (potential) students. A challenge. Attitudes and ways of working are also important; NetGen are team-based, they like to work like that. Caveat: they're broad categories, and there are exceptions.

Statistics – UK data – on tech use – from last year.  65% home internet (+7% on 07), 77% NetGen online daily, 91% NetGen use email (Wow – so 9% of them don’t?)  Childwise 2009 report – kids, much younger, are using techs a lot – 25% 5-8 year olds have net in their room, 13-16 almost all have mobiles.

We have mobiles, but we use them differently.  Some staff can’t work out why the hell you would want to deliver something to a device that’s so tiny.  But our students are so much more comfortable with mobiles. So we must investigate how to do it effectively.

Emerging themes in tech in ed: Blurring (f2f/online, in/formal); increased mobility; gaming; social networking; high-impact presentation/engagement techs; analytics, diagnostics and evidence-based ed; human touch; Learning 2.0?

Mobility – shows Google Trends on news about mobile learning.  iTunesU – new OU channel to deliver OU assets to students. (Interesting metaphor.)

Social networking – mentions social:learn, very exciting. Current and potential students are likely to use social networking in their daily life.

Mentions Twitter, virtual worlds – we have a big opportunity to create social communities for our students who wouldn't necessarily meet up.

Online learning gives us lots of data – we need to use that data, especially good with Quality hat on. (Big on analytics – again I can picture Tony Hirst smiling.)

Learning 2.0, don’t underestimate social aspect. Strongest determinant of students’ success is ability to form and partiipate in small groups (Light). ‘Learning to be’ supported by distributed communities of practice; productive inquiry; increasing connections & connectedness.

Has tech changed things? Leveraging the potential of social learning (esp. in distance ed); add community to content; access to experts; access to a peer review audience.

Examples: iTunesU, OpenLearn, VLE, Learning designs project (Gráinne Conole, Cloudworks) – making teaching community-based, sharing practice.

Our challenge: towards a pedagogy of technology enhanced learning; and a scholarship for a digital age (esp for academics). We have always used technologies, for the last 40 years, but need to move that forward.

Q: How does the technology match against our current student age profile? We have a lot of baby boomers.

A: We deliver to the here and now, but our profile does have GenY and is increasing. Also planning for the future. Many baby boomers are confident tech users. Also many of our students – regardless of age – are demanding it. If we have evidence it’ll improve the learning experience, we should do it.

Q (Martyn Cooper, IET): Is there a qualitative difference between GenY’s use of social networking, rather than a quantitative one?

A: I’m not going to answer that one. We might think our quality is far superior, but … it’s a fertile area for research.

Q: Demographics, social advantaged versus disadvantaged – do technologies favour the socially advantaged? Tension with OU’s principles of open access to all.

A: Really important question, currently researching. A lot of unpacking needs to be done into e.g. mobile phone ownership. A dilemma and a challenge; we have to keep tackling and pushing it. We put in resources to help our socially disadvantaged students have access to the net. How much wider would the gap become if we don't give people the opportunity to learn about that (tech) world? It could disempower them to give them a route without tech. We have a wide range; it is possible to still study with us and have an almost predominantly print-based experience. But need to reconsider what access means and what our responsibilities are.

Q (Robin Stenham): How explicit are we making the use of social networking tools for group learning in terms of accreditation? Building transferable skills in to the learning outcomes.

A: An area where we need to do more work. If we don't expect access to tech, we can't base assessment on it. There are examples where people are starting to build that in. But haven't done huge amounts of work; not widespread at this stage.

Niall Sclater

(presentation uses OU template)

Audience question: who brought a mobile? (nearly all)  Who ignored ‘turn off your mobile’? Two. (Including me.)  So please consider switching ON your mobile now.  (And lots of phone boot-up noises.) Impression given by ‘turn it off’ is the wrong one. Onus is on the presenter to make the presentation more interesting than the other competition for your attention (email on your laptop etc).

The focus of the VLE is to make the web the centre of the student experience. E.g. the old-school A3 print study calendar – contrast the A103 and AA100 VLE views showing you the resources. The spine of the course is on the internet.

Encouraging collaboration: tools to help. Elluminate – audio conferencing, increasingly video too. Shared whiteboard. Quite a traditional class way – teacher writing down equations; something about maths that is best taught that way. Online learning with maths this way – tutors have taken to it like ducks to water.

Maths Online (MOL) – eTutorial trial Feb 08 – 449 students, 136 staff. Most positive comments about interaction, tutor, convenience (being at home vs travel to tutorials), help. Least about preparation, software, good audio. Negative comments: mainly sound problems, but 50% nothing negative. Connection problems. (Niall has no broadband at home at the moment thanks to ISP problems.) Must bear in mind. Positive feedback comments – 'very close to the experience of a face-to-face tutorial'. Elluminate is not for a stand-up lecture with a passive audience; it has tools for feedback (instant votes, etc). Give talk, move to next slide, monitoring the IM chat backchannel and referring to it. Very skilled to do that; it's completely different to what we're used to. 'gave me a feeling of belonging to a group' – we couldn't do this in the past. If net gen are more collaborative (some evidence?) – this is likely to be more important to our students. Evidence for many years that group learning can help.

Community building: Second Life, virtual worlds. Virtual worlds project about to kick off. (Great slide of people sitting down lecture-style in Second Life – only funny bit is that one audience member has wings, and another is in fact a chicken.) Can try to replicate the lecture environment, everyone sitting in rows … or have something more interactive. Interesting how we transpose traditional models that aren't necessarily appropriate – e.g. building copies of physical campuses; no need to visit an empty reproduction. So use spaces more imaginatively.

Building your online identity: increasing student blogs. Tags – research, wisdom, travel, karate. Personalisation. Niall happy with LPs, cassettes, MP3s – a transition across groups. Young people build identity through Facebook etc., telling the world their interests, relationships and so on. Gives you a much better network of people; professional and social relations bring you closer together.

Making content interactive: e-assessment with feedback, based on your answer. Use internet for what it’s good for.
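The point about feedback "based on your answer" is that each distractor can get targeted advice rather than a generic "incorrect". A minimal sketch of that idea (the question and feedback text are invented for illustration, not taken from an OU quiz):

```python
# Answer-matched e-assessment feedback: each anticipated wrong answer
# gets diagnostic advice instead of a bare "wrong". Content is invented.
QUESTION = "What is 7 * 8?"
FEEDBACK = {
    "56": "Correct!",
    "54": "Close - that is 6 * 9. Re-check which factors you multiplied.",
    "15": "That is 7 + 8. The question asks for the product, not the sum.",
}

def mark(answer: str) -> str:
    """Return feedback tailored to the specific answer given."""
    return FEEDBACK.get(
        answer.strip(),
        "Not right - try breaking it down: 7 * 8 = 7 * (10 - 2).",
    )

print(mark("54"))
```

The same dictionary-of-diagnoses pattern scales to any question type where common misconceptions can be anticipated in advance.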

Ownership and sharing: MyStuff – eportfolio system. Share documents, store them for your benefit, tag them, share them with other students, tutors, a future employer. Compile into a larger collection. Problems with MyStuff – the user interface is confusing to students, and it is also very slow. Planning to replace it, but that will take a long time. Looking at e.g. Mahara (works with Moodle) and PebblePad, Google Apps for Education, Microsoft Live@edu. Google Docs – instant speed even though hosted in the US. We could use this for the content repository side easily.

Reflection: Templates for reflection on learning outcomes. (Glimpse of Niall’s browser toolbar – RSS feeds from Grainne, Tony, Martin, Alan Cann …)

Moodle grade book – rich data to tutors immediately after students have done test. Wiki report showing breakdown of activity/contributions – have some courses requiring use of wiki, this is one way of assessing.

Studying on the move – much hype, but we now have sophisticated platforms (iPhone, Android, etc). Can do so much more now. Many/most students will have a very sophisticated device that will browse the web, view course content, do a quiz, etc., from wherever.

VLE and other systems – must be like accessibility, think about it from the start, ensure accessible from mobile devices. Like BBC sites at present – all our systems need to be built like that.

learn.open.ac.uk/site/lio Learning Innovation Office site, under development. Niall’s blog at sclater.com.

Thanks to Ben Mestel, Maths Online Team, Rhodri Thomas.

Q (Martyn Cooper): Accessibility and mobile learning. EU4All content personalisation responding to accessibility profiles and device profiles – optimise content based on both of those. Who reviews this?

A: We have a big project underway, want to bring you (Martyn) in, LTS.

Q: Diversity of devices very important for accessibility.

A: Indeed.

Q: (Carol ?, LTS): Google Apps. Why do we develop custom things when there are good apps already out there? It’s disadvantaging our students, less transferable.

A: Key questions grappling with. (mobile phone sound … but can’t find the source. Oh dear.)

Q: Not rude to turn off phones, it’s setting aside time. Would be rude to take attention away.

A: Maybe this is a net generation thing. Conferences have people using devices constantly; don’t find it rude any more, my duty to get people interested. But understand that people find it offensive.  Alas, experiment has failed.

Back to in-house vs external – have had endless debates with Tony Hirst and Martin Weller on this. Can create a 'VLE' online out of many things – but that puts a big burden on students to remember/learn many sites. Can't assess accessibility. Can't guarantee service (but if it's ours we can do something).

Q: (Will Woods, IET): Students using Twitter, blogs, etc – staff stuck in email as main communication channel. Small clique at OU using Twitter. Can we improve internal channels? Cultural change?

A: Is an issue. Is a very email-based culture. Use it too much? Twitter … has its place, but can’t guarantee people are reading it. How do we move everyone on to new technologies? Should we try to? People understand internet is a bigger thing, less opposition to elearning. Thoughts in audience?

Q: Robin Stenham – Moodle gives us many different tools to communicate and share learning; forum tool vs Outlook. Moderating on a forum can be very useful. E.g. using email for 'send in your expenses' and everyone does reply-all. Misappropriating technologies. Gets 100 emails a day, of which 30-40 are streams/CC-ins to a discussion.

A: Yes, cognitive overload. Wiki a useful tool, putting some committee papers on wikis so don’t need them on the hard disk. (Denise) Points out that we’re encouraging people to use VLE tools themselves, so staff are experimenting with tools to understand how to use them with students. You can use VLE in your departments.

Q: Janet Churchill (HR Development): HR Development are trying to upskill staff in new technologies. Emailogic course from AACS to help people get the most out of email, not inappropriately copying people in. Development opportunities now extend beyond traditional training – now have a Second Life presence for feedback sessions. ILM courses have an online component. Plug: we have an induction process with an online induction tool, and are looking for people to put us in touch with external agencies to build an online induction tool that's more engaging.

Move to general questions.

Niall: Interesting to analyse what’s going on in conferences. E.g. people commenting on and sharing what you’re saying. Can’t assume people are ignoring you.  But our experiment (on mobiles) has failed.

DK: Experiment hasn’t failed, just hasn’t given you the result you wanted.

Giles Clark, LTS: eTexts. Took view not to enhance our e-texts wrt print. Should we stay like that? Keep electronic version exactly as in print? Or further develop – insert animations, collaborative activities – or is that for surrounding VLE?

Niall: Is potential to do more with our online PDFs. Can’t stay still and go for common denominator. Paper will long have a role. Some quite happy to read on phone/device, could be generational.

Denise: Lots of exciting opps in tech, but accompanied esp for us with challenges. We as OU have to be able to do it at scale.  Can do sexy experiments with e.g. 30 students in a classroom.  But doing it with thousands of distributed students very different, scale. We need to be more efficient and economic, tough times. Hard decisions: nice bespoke examples, or go for scale for all courses. Must explore opportunities, cost out, see scalability – then answer.

Thanks to all.