OERs, radical syndication and the Uncourse attitude

Liveblog from a technology coffee morning, 17 June 2009, by Tony Hirst.

Please ask Tony what he does – he looks at web technologies and sees what can be done with them, being “dazed and confused”, then communicates them to people through blogs and presentations.

Information and technology silos – information gets stuck in repositories, like the IET Knowledge Network.  They’re isolated from other stores.  They do have advantages, but crossing between them is hard. Tony wants to soften the barriers.  Technology silos likewise – using a particular technology may exclude other people.  Twitter is an example – if you’re in, a load of stuff is accessible; if not, then not. Another example is the no-derivatives option in CC licenses.

He’s also interested in representation and re-presentation of material.  Can be physical transformation of content – physical book, or on a mobile phone, could be the same stuff.

Also collage and consumption (mash up!) – lots of people use materials in different ways in different settings, in different media.

Useful abstraction (for Tony!) is content as DATA.  He’s not interested in what the content is.  Data is in the news in the US, with data.gov opening up Government stats.  Moves in the UK too – Government, the Guardian, and research communities trying to share information.  A presentation, ‘Save the Cows’, makes the point that data in a chart is “dead data” – it’s an end result, not reusable.  Shipping the finished product makes it harder to reuse.

[He’s using the JISC dev8d service SplashURL to give web refs in his presentation – so giving http://bit.ly/9C9uZ and a QR code on screen to give links for the presentation above.]

Data is a dish best served raw – http://eagereyes.org.  Text in PDFs is hard to get out.

Changing expectations – Tony’s video mashup about expectations, rights and ‘free’ content. Statement at the end says “no rights reserved” but amusingly is stored on blip.tv with default rights – i.e. All Rights Reserved!

If you can’t extract content, you can embed it in other spaces, let other people move your stuff around – even to closed document formats.

RSS!  Tony’s favourite. Syndication and feeds – offers some salvation.  It’s like an extensible messaging service.  It’s feeds that let you pass content from one place to another, packaged very simply – title, description (e.g. body of a blog post), link (often back to original source), annotations (if Atom – additional fields, e.g. geoRSS tags for latitude/longitude information), and payload (e.g. images).  If you package it right, other software can make it easy to aggregate and use these.
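The feed anatomy listed above (title, description, link, plus extensions like GeoRSS) can be sketched in a few lines. This is an illustrative toy, not how any real blog engine emits RSS – the titles, links and feed title are made up:

```python
# Minimal sketch of the RSS packaging described above: each item is just
# title + description + link, wrapped in a channel. Illustrative only.
import xml.etree.ElementTree as ET

def build_feed(items):
    """Build a bare-bones RSS 2.0 document from (title, description, link) tuples."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Example feed"
    for title, description, link in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "description").text = description
        ET.SubElement(item, "link").text = link
    return ET.tostring(rss, encoding="unicode")

xml = build_feed([("A post", "Body of the post", "http://example.com/a-post")])
```

Because the package is this simple and predictable, aggregators don’t need to know anything about where the content came from.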

We ignore RSS at our peril – examples of how to use RSS beyond just Google Reader.  Bit outdated but still useful.  RSS is a series of pipes/wiring.  (Silly aside: he’s almost saying that the Internet is a series of tubes! – Twitter comment from @louis_mallow: Get the slides and do a mashup with data from http://is.gd/14kDA.)

Jim Groom stuff on WordPressMU – a syndication bus – UMW blogs. Lots of feeds. Live workthrough of how to do it.

Scott Leslie – educator as DJ – educator searches, samples, sequences, records and then performs and shares what they find. Similar workthrough of how to do this stuff.

Problems: discovery (how people find stuff), disaggregation (how people sample/take out the bits they want), representation (how they stick it back together and get it out again).

Discovery: We work in a ‘zone of proximal discovery’ – we generally use Google, most of the time, using keywords we’re happy with and already know.  (“Have you done your SEO yet?”)  The OU Course Catalogue – with course descriptions – uses terminology you’d expect to learn by the time you finish the course.  How is a learner going to find that?  You search the web and can only find the courses you’ve already done.  Similarly an issue for OERs generally.

Disaggregation: is a pain. Embed codes, sampling clips from videos, and so on. Easier on YouTube, can deeplink in to a specific bit.  It’s painful, hard, which discourages you.  The technology you use makes a difference for others too – e.g. PDF, makes it hard to create derived works.

Open Learn – an example. It’s authentic OU content that he can fiddle around with in a way he can’t with other live courses, “this is a good thing”.  He loves the RSS feed for all the course units – and a host of other packaging formats. Can subscribe to a course using Google Reader – could use e.g. on an iPhone.  Feeds available: all units, units by topic, unit content – also OPML unit content feed bundles by topic. (OPML is another sort of feed – it lets you transport a bunch of RSS feeds around together.)

openlearnigg – built on coRank – imported all the content titles from OpenLearn, lets you comment on, vote on and promote course material.  Also daily feeds – which give you one item from an RSS feed every day, regardless of when the items were originally published.  A Grazr widget with an RSS feed for the whole course can be embedded in all sorts of other places.

Yale – open courses feedified – Yale Opencourseware has courses, which have contents, which have structured sections – all templated.  It’s not published as RSS, but Tony built a screenscraper (using Yahoo Pipes) that takes the regularly-formatted pages and turns them in to RSS feeds – repackaged.  Repackage in OPML (a collection of RSS feeds), plug in to the Grazr widget, and you can embed the content elsewhere.
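The screenscraping trick relies on the pages being templated: if every section sits in identical markup, one pattern can mine them all into feed items. Tony did this with Yahoo Pipes; here is the same idea as a plain-Python sketch against invented markup (the `div class="lecture"` structure and URLs are hypothetical, not Yale’s actual HTML):

```python
# Hypothetical sketch of scraping a regularly-templated page into feed items.
# The markup below is made up; a real scraper would fetch the live page.
import re

PAGE = """
<div class="lecture"><a href="/lecture1">Lecture 1: Introduction</a></div>
<div class="lecture"><a href="/lecture2">Lecture 2: Foundations</a></div>
"""

def scrape_items(html):
    """Turn each templated 'lecture' block into a {title, link} feed item."""
    pattern = r'<div class="lecture"><a href="([^"]+)">([^<]+)</a></div>'
    return [{"title": title, "link": link}
            for link, title in re.findall(pattern, html)]

items = scrape_items(PAGE)
```

The fragility Tony mentions with MIT follows directly: change the template and the pattern stops matching.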

Also did one for MIT, but they keep changing their website so the screenscraper keeps breaking.

WriteToReply.org – on the back of the Digital Britain Interim Report. (The Digital Britain Final Report is out today!)  Tony and Joss created a paragraph-commentable version of it, using WordPress/CommentPress.  At the moment they have to cut-and-paste the content in. Each page/post is a different section of the report. Each paragraph has a unique URL, and has comments associated with it.  And there are feeds for the comments too – you can represent them elsewhere (e.g. in PageFlakes).  People from the Cabinet Office had set up their own dashboard too, pulling those comment feeds in as well.

YouTube subtitles – grabbed Tweets from people with the hashtag for a presentation (Lord Carter talking about Digital Britain), along with the timestamp, then imported those in to YouTube. So then you can play back the live Twitter commentary alongside the presentation when you come back to it.
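The tweets-to-subtitles step amounts to converting (timestamp, text) pairs into a caption file that YouTube can take as an upload. A minimal sketch, assuming SubRip (.srt) format and a fixed on-screen duration per tweet – the tweet texts and timings below are invented:

```python
# Sketch: turn timestamped tweets into a SubRip caption file.
# Tweets and the 5-second display duration are illustrative assumptions.
def srt_time(seconds):
    """Format whole seconds as an SRT timestamp, HH:MM:SS,mmm."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d},000"

def tweets_to_srt(tweets, duration=5):
    """tweets: list of (offset_seconds, text); each caption shows for `duration` seconds."""
    blocks = []
    for i, (offset, text) in enumerate(tweets, start=1):
        blocks.append(f"{i}\n{srt_time(offset)} --> {srt_time(offset + duration)}\n{text}\n")
    return "\n".join(blocks)

srt = tweets_to_srt([(0, "@someone: here we go #digitalbritain"),
                     (65, "@other: interesting point about spectrum")])
```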

Daily feeds – aka serialised feeds – turned all OpenLearn courses in to blogs, which gives you feeds.  Can turn e.g. the Digital Britain report in to a daily feed – people can consume the content at their own pace.
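The serialisation idea is just re-dating: take the items in order and assign each one a new publication date, one per day from whenever the reader starts, ignoring the original dates. A sketch, with a made-up item shape:

```python
# Sketch of a serialised ('daily') feed: re-date items one per day from a
# chosen start date. A feed generator would then only emit items whose
# assigned date has arrived. Item structure here is illustrative.
from datetime import date, timedelta

def serialise(items, start):
    """Give item n the publication date start + n days."""
    return [dict(item, pubdate=start + timedelta(days=i))
            for i, item in enumerate(items)]

drip = serialise([{"title": "Unit 1"}, {"title": "Unit 2"}], date(2009, 6, 17))
```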

Feeds can also be live and real-time – XMPP is an instant messaging protocol, but you can use it as a semi-universal plug/connector tool.  WordPress has a realtime feed – you can see comments immediately, without the RSS polling delay.

Weapons of mass distraction – easy to read far too many things.

Another feed format is CSV – the simple comma-separated values format.  Google Spreadsheets gives you a URL for a CSV file, and you can also write queries which work like database queries – plug that in to e.g. Many Eyes Wikified and instantly get charts. “There’s no effort” … although “it’s not quite there in terms of usability”.  It’s about putting content in to a form that makes it easy for people to move it around and reassemble.
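Treating CSV as a feed looks like this on the consuming side: fetch the spreadsheet’s CSV URL and parse it into rows ready to chart or re-query. The fetch is faked with a string here, and the country/population data is invented for illustration:

```python
# Sketch of consuming a spreadsheet-as-CSV-feed. In practice you'd fetch
# the CSV export URL over HTTP; here the payload is inlined. Data is made up.
import csv
import io

CSV_TEXT = "country,population\nUK,61000000\nFrance,64000000\n"

def rows_from_csv(text):
    """Parse CSV text into a list of dicts keyed by the header row."""
    return list(csv.DictReader(io.StringIO(text)))

rows = rows_from_csv(CSV_TEXT)
```

Once the data is in this shape, any charting tool (or another spreadsheet) can pick it up – which is the whole point of serving it raw.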

Digital Worlds – ‘an uncourse’ – inspired by T184 Robotics and the Meaning of Life.  You could imagine it’s presented on a blog engine, because of how it looks. Also inspired by the way people split content up, don’t read things in order.  Hosted on WordPress.com, used that as the authoring environment. Wrote 4 or 5 posts a week. On the front page, published like a blog in standard reverse-chronological format.  All posts in categories (not very mutable, almost a controlled vocabulary) and tags (much looser) – gives you feeds for all of those – which lets you create lots of different course views.  So you could see e.g. videos, or the Friday Fun, or whatever. Each category or tag becomes a mini-course.  Also custom views – e.g. all the posts about particular games developed in Game Maker.

Also extra bits.  First, a Google Custom Search Engine (CSE).  On a search engine, can search one specific domain (e.g. add site:open.ac.uk to search just  OU pages – can work better than OU search engine).  The Digital Worlds CSE extracts any links to external sites posted in the course, and then lets you search across not just the course content but any sites that the course content linked to.  All done automatically.  Also did a video channel – using SplashCast.

As he was writing, he was informed by what he’d done before.  When he wrote a post with a link back to a previous post, a trackback link appeared on that original post.  So you can see on any given post which later posts refer to it – ‘emergent structure’.  He created graphs of how all the links worked within the course blog.  Could also see paths through the course beyond the fixed category structure.  ‘Uncourse structures can freely adapt and evolve as new content is written and old content is removed.’  They rely on the educator ceding control to the future and to their students.  We try not to do forward references when writing OU stuff … but in this environment, they are created automatically when you make a backward link.  Uncourses encourage the educator to learn through other people’s eyes.  Later comments prompt further discussion and posts, and so on.  It keeps things fresh.

Questions

“We call them students because we take their money”, as opposed to people, a general audience on the web.  More seriously, it’s engaging more as a peer process rather than a didactic one.

This stuff requires a lot of skill – how do we get those skills out to educators?  Tony is doing workshops with people, and writes recipes on his blog.  The problem is that when he publishes a recipe for a mashup, people tend to read it for what it is, or get hung up on the specific tools, rather than seeing the general technique or the underlying pattern.  (This is a well-worn problem in teaching!  Especially at the OU in traditional course design: trying to help people move from specific examples to general principles.  And when people are overwhelmed with new concepts, they tend to latch on to things that are familiar.  You have to very patiently build up from what they do know to where you are trying to get them.  Zone of proximal development stuff!)  A recent book called Mash-up Patterns does this without being too technical.  Tony is planning to do more specific stuff.

As an educator, posting comments and responses and so on – could you organise a group of students to do this collectively? How much would they need to know?  Example: Darrell Ince’s wikibook project – getting students to write a book, farming out particular topic questions in a very structured way – that works.  A less controlled version is the sort of thing Jim Groom is doing with student blogs, which are then aggregated.

‘Quick’ question: How do you get the university as a whole to buy in to this stuff?  Er, don’t know. One reason it might: after spending 15 weeks at half time preparing Digital Worlds stuff, then 4 weeks writing it, then an editor doing 2.5 weeks’ work on it – that’s not a huge input for a 10-week course.

Dynamic courses are hard in our context.

Mashing up the PLE (Tony Hirst)

Notes from a seminar (slides) by Tony Hirst.

PLE=Personal Learning Environments.

Gilbert Ryle – notion of category mistakes (in The Concept of Mind); happens when people talk about PLEs as things – they’re not, they’re environments: you can’t point at them.  Also figure/ground illusion (vase/faces) – edges are the key.

Contrast to VLE – which is a thing (e.g. Moodle).  A PLE is not (just!) the personal version of one – but there’s a figure/ground thing, the VLE could be part of it.  A PLE is the students’ bag of stuff: literal stuff (laptop, phone, bits).

PLE is open, controllable, public; VLE is closed, private, you-can’t-edit.  [But: control/privacy to enable experimentation for learning – safe to get it wrong.]

Edges between VLEs and PLEs. OpenLearn has made a big effort to make the content portable.  Materials are stuff in a learning environment, and have alternative formats: print (single HTML file); XML; RSS feed; OU XML; IMS Content Package; IMS Common Cartridge; plain ZIP of all the html files and media assets; Moodle Backup. This export bit is the edge – can do the figure/ground swap here.

Mashups – using Glue Logic (not actual glue).  Live demo of sucking content from OpenLearn – leaving a trail of bookmarks on Delicious as he goes, tagged ‘elcple’.  Copy the RSS link from an OpenLearn course/module.  Use it in places like PageFlakes, Netvibes, iGoogle.

Uncourses blog – trying to do in real time as a blogged course: ten weeks to study, so ten weeks to write. All done on WordPress at Digital Worlds. Category and tag feeds so it’s “self-disaggregating”. Link structure is emergent (in the sense that he didn’t plan it in advance).  Categories and tags are … basically confusing on WordPress.  Module coming to deliver posts (RSS items) as a drip-feed over time, starting when you want it.

(Flock and Firefox tip: can right-click on any search box on any site and ‘Add a keyword’ for that search.)

Mashups are not production systems, they’re flaky.  (Pageflakey.) – in response to having Yahoo Pipes problems in his PageFlakes setup.

“Box.net is like MyStuff that works” – can share files, make them droppable; clicking in a browser will ‘just work’.

Grazr as an RSS reader on turbo – can wrap RSS feeds together in to OPML files.
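OPML’s role as a ‘bag of feeds’ is easy to see in code: the file is just XML `outline` elements whose `xmlUrl` attributes point at RSS feeds, which is what readers like Grazr consume. A sketch with invented feed URLs:

```python
# Sketch of bundling RSS feeds into an OPML file. The feed URLs and bundle
# title are made up; a real bundle would list actual course feeds.
import xml.etree.ElementTree as ET

def bundle_to_opml(title, feed_urls):
    """Wrap a list of RSS feed URLs in a minimal OPML 2.0 document."""
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = title
    body = ET.SubElement(opml, "body")
    for url in feed_urls:
        ET.SubElement(body, "outline", type="rss", xmlUrl=url, text=url)
    return ET.tostring(opml, encoding="unicode")

opml = bundle_to_opml("Course feeds", ["http://example.com/unit1.rss",
                                       "http://example.com/unit2.rss"])
```

Because it is just another feed-of-feeds, a whole course’s worth of RSS can be passed around, embedded, or handed to a widget as a single URL.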

Glue Logic – lives here http://ouseful.open.ac.uk/xmltools/dwCommentFeedsOPML.php (aka http://tinyurl.com/4vq4nt) – takes parameters and produces OPML feeds out of, say, all comments on posts with a particular tag. “It’s easy to use” [But not documented anywhere?]

Microsoft Live Search – you can add search results as a feed by adding &format=rss to the search URL.  E.g. orange smarties.

Autodiscoverable feeds – your browser can subscribe to it.

Tony’s OPML dashboard as a way of messing around with RSS/OPML files.

StringLE – a String-and-Glue Learning Environment.  The sample site sort-of works but is suffering from linkrot somewhat.

Pipework – Yahoo Pipes.  Live demo of taking Wikipedia data on city populations and putting them via a Googledocs spreadsheet on to a map.