Farewell Vista

The IT news today is full of reports that most purchasers of Windows PCs will from now on be able to upgrade their system from Windows Vista to Windows 7, for little or no money, when it becomes available in October.  This – along with Windows 7’s ‘XP simulation’ mode – is indeed probably the death knell of Windows Vista.  Which will probably be unlamented by many.

That was such an appalling vista that every sensible person would say, ‘It cannot be right that these actions should go any further.’

That’s not about Windows, it’s actually Lord Denning’s fatuous reasoning for dismissing the  Birmingham Six’s application for leave to appeal in 1980, on the startling grounds that if they succeeded in overturning their conviction for pub bombings,  it’d make it clear to everyone that there had been the most shocking and extensive fit-up. Which, of course, there had been.  ‘Appalling vista’ became a bit of a buzzphrase among people campaigning for the Birmingham Six’s eventual release.  The phrase has been coming to mind again recently.

It remains to be seen, though, whether the loss of traction by Microsoft with Vista – coupled with the explosion of platforms that aren’t conventional desktop PCs – is a recoverable blip, as Windows ME was, or a clear turning point in the history of IT.

I wouldn’t bet against Microsoft’s ability to sell software at scale – they are very good at it. Writing off a company that huge with that large a cash pile and that many smart people would be daft.

But I am sure, as I said in my Babel post, that multiple platforms are here to stay, and the times when you could assume that nearly everyone using a computer had Microsoft Windows are long gone.

(Though as people have pointed out in comments and directly to me, they never really existed anyway.)

OERs, radical syndication and the Uncourse attitude

Liveblog from a technology coffee morning, 17 June 2009, by Tony Hirst.

Please ask Tony what he does – he looks at web technologies and sees what can be done with them, being “dazed and confused”, then communicates what he finds to people through blogs and presentations.

Information and technology silos – information gets stuck in repositories, e.g. the IET Knowledge Network.  They’re isolated from other stores.  They do have advantages, but crossing between them is hard. Tony wants to soften the barriers.  Technology silos likewise – using a particular technology may exclude other people.  Twitter is an example – if you’re in, a load of stuff is accessible; if not, then not. Another example is the no-derivatives option in CC licenses.

He’s also interested in representation and re-presentation of material.  Can be physical transformation of content – physical book, or on a mobile phone, could be the same stuff.

Also collage and consumption (mash up!) – lots of people use materials in different ways in different settings, in different media.

A useful abstraction (for Tony!) is content as DATA.  He’s not interested in what the content is.  Data is in the news in the US – data.gov aims to open up Government stats.  There are moves in the UK too: Government, the Guardian, and research communities trying to share information.  A presentation, ‘Save the Cows’, makes the point that data in a chart is “dead data” – it’s an end result, not reusable.  Shipping only the finished product makes it harder to reuse.

[He’s using the JISC dev8d service SplashURL to give web refs in his presentation – so giving http://bit.ly/9C9uZ and a QR code on screen to give links for the presentation above.]

Data is a dish best served raw – http://eagereyes.org.  Text in PDFs is hard to get out.

Changing expectations – Tony’s video mashup about expectations, rights and ‘free’ content. Statement at the end says “no rights reserved” but amusingly is stored on blip.tv with default rights – i.e. All Rights Reserved!

If you can’t extract content, you can embed it in other spaces, let other people move your stuff around – even to closed document formats.

RSS!  Tony’s favourite. Syndication and feeds offer some salvation.  It’s like an extensible messaging service.  Feeds let you pass content from one place to another, packaged very simply – title, description (e.g. the body of a blog post), link (often back to the original source), annotations (in Atom, additional fields, e.g. geoRSS tags for latitude/longitude information), and payload (e.g. images).  If you package it right, other software can easily aggregate and reuse all of this.
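[To make the feed anatomy concrete, here’s a minimal sketch in Python using the feedparser library – this is an illustration rather than anything shown in the talk, and the feed URL is a placeholder.]

```python
# Minimal sketch: read an RSS/Atom feed and pull out the standard parts.
# The URL is a placeholder, not a real feed.
import feedparser

feed = feedparser.parse("https://example.org/course/feed.rss")

for entry in feed.entries:
    print(entry.title)    # title
    print(entry.link)     # link, often back to the original source
    print(entry.summary)  # description, e.g. the body of a blog post
    # Extension fields (e.g. geoRSS latitude/longitude) appear as extra
    # attributes when present; exact names depend on the feed's namespaces.
```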

We ignore RSS at our peril – examples of how to use RSS beyond just Google Reader.  Bit outdated but still useful.  RSS is a series of pipes/wiring.  (Silly aside: he’s almost saying that the Internet is a series of tubes! – Twitter comment from @louis_mallow: Get the slides and do a mashup with data from http://is.gd/14kDA.)

Jim Groom’s stuff on WordPressMU – a syndication bus – UMW blogs. Lots of feeds. Live walkthrough of how to do it.

Scott Leslie – educator as DJ – the educator searches, samples, sequences, records and then performs and shares what they find. Similar walkthrough of how to do this stuff.

Problems: discovery (how people find stuff), disaggregation (how people sample/take out the bits they want), representation (how they stick it back together and get it out again).

Discovery: We work in a ‘Zone of proximal discovery’ – we generally use Google, most of the time, using keywords we’re happy with and already know.  (“Have you done your SEO yet?”)  The OU Course Catalogue – with course descriptions – uses terminology you’d expect to learn by the time you finish the course.  How is a learner going to find that?  You search the web and can only find the courses you’ve already done. It’s similarly an issue for OERs generally.

Disaggregation: is a pain. Embed codes, sampling clips from videos, and so on. Easier on YouTube, where you can deep-link in to a specific bit.  It’s painful and hard, which discourages you.  The technology you use makes a difference for others too – e.g. PDF makes it hard to create derived works.

OpenLearn – an example. It’s authentic OU content that he can fiddle around with in a way he can’t with other live courses – “this is a good thing”. He loves the RSS feed for all the course units – and a host of other packaging formats. Can subscribe to a course using Google Reader – could use it e.g. on an iPhone.  Feeds available: all units, units by topic, unit content – also OPML bundles of unit content feeds by topic. (OPML is another sort of feed – it lets you transport a bunch of RSS feeds around together.)
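[OPML is just XML, so bundling feeds is straightforward. A rough sketch – the unit feed URLs are placeholders, not the real OpenLearn ones:]

```python
# Sketch: bundle several RSS feed URLs into one OPML file, the format
# used to carry a collection of feeds around together. URLs are placeholders.
import xml.etree.ElementTree as ET

feeds = [
    ("Unit 1", "https://example.org/units/1/rss"),
    ("Unit 2", "https://example.org/units/2/rss"),
]

opml = ET.Element("opml", version="2.0")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "Course unit feeds"
body = ET.SubElement(opml, "body")
for title, url in feeds:
    ET.SubElement(body, "outline", type="rss", text=title, xmlUrl=url)

ET.ElementTree(opml).write("units.opml", encoding="utf-8", xml_declaration=True)
```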

openlearnigg – built on coRank – imported all the content titles from OpenLearn; lets you comment on, vote on and promote course material.  Also daily feeds – these give you one item from an RSS feed every day, regardless of when the items were originally published. And a Grazr widget with an RSS feed for the whole course, which can be embedded in all sorts of other places.

Yale – open courses feedified – Yale Opencourseware has courses, which have contents, which have structured sections – all templated.  It’s not published as RSS, but Tony built a screenscraper (using Yahoo Pipes) that takes the regularly-formatted pages and turns them in to RSS feeds – repackaged.  Repackage in OPML (a collection of RSS feeds), plug it in to the Grazr widget, and you can embed the content elsewhere.
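[Yahoo Pipes is point-and-click, but the underlying move is simple. Here’s a hypothetical sketch of the same scrape-to-feed idea in Python – the URL and the page structure are invented:]

```python
# Hypothetical scrape-to-feed sketch: turn regularly-templated course
# pages into feed items. The URL and CSS selectors are invented; a real
# scraper matches the actual template (and breaks when the site changes).
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.edu/courses/econ-101").text
soup = BeautifulSoup(html, "html.parser")

items = []
for section in soup.select("div.session"):  # say, one block per lecture
    items.append({
        "title": section.select_one("h2").get_text(strip=True),
        "link": section.select_one("a")["href"],
        "description": section.select_one("p").get_text(strip=True),
    })
# From here the items can be serialised as an RSS document and handed
# to a widget like Grazr.
```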

Also did one for MIT, but they keep changing their website so the screenscraper keeps breaking.

WriteToReply.org – on the back of the Digital Britain Interim Report. (The Digital Britain Final Report is out today!)  Tony and Joss created a paragraph-commentable version of it, using WordPress/CommentPress. At the moment they have to cut-and-paste the content in.  Each page/post is a different section of the report. Each paragraph has a unique URL, and has comments associated with it.  And there are feeds for the comments too – they can be represented elsewhere (e.g. in PageFlakes).  People from the Cabinet Office had set up their own dashboard too, and plugged the comment feeds in to that as well.

YouTube subtitles – grabbed Tweets from people with the hashtag for a presentation (Lord Carter talking about Digital Britain), along with the timestamp, then imported those in to YouTube. So then you can play back the live Twitter commentary alongside the presentation when you come back to it.
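[The conversion itself is mostly bookkeeping. A sketch of turning timestamped tweets into a SubRip (.srt) caption file, one of the formats YouTube can import – the tweets, offsets and five-second display window are all invented:]

```python
# Sketch: timestamped tweets -> SubRip (.srt) captions for upload to a
# video site. All tweets and timings here are invented.
tweets = [  # (seconds into the talk, tweet text)
    (12, "Carter is up, talking about universal broadband"),
    (75, "Universal service commitment confirmed #digitalbritain"),
]

def srt_time(seconds):
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{hours:02}:{minutes:02}:{secs:02},000"

with open("captions.srt", "w") as f:
    for i, (start, text) in enumerate(tweets, 1):
        # Each caption block: index, time range, text, blank line.
        f.write(f"{i}\n{srt_time(start)} --> {srt_time(start + 5)}\n{text}\n\n")
```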

Daily feeds – aka serialised feeds – turned all OpenLearn courses in to blogs, which gives you feeds.  Can turn e.g. the Digital Britain report in to a daily feed – people can consume the content at their own pace.
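[The trick behind a serialised feed is just re-dating: serve item N of the source on day N. A sketch, with a made-up start date and content:]

```python
# Sketch of the 'daily feed' idea: republish one item per day from a
# fixed start date, regardless of original publication dates.
from datetime import date

def todays_item(items, start, today=None):
    """Return the single item due today, or None once the serial ends."""
    today = today or date.today()
    index = (today - start).days
    return items[index] if 0 <= index < len(items) else None

units = ["Unit 1: What is a game?", "Unit 2: Genres", "Unit 3: Game Maker"]
print(todays_item(units, start=date(2009, 6, 1), today=date(2009, 6, 2)))
# -> 'Unit 2: Genres'
```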

Feeds can also be live and real-time – XMPP is an instant messaging protocol, but you can use it as a semi-universal plug/connector tool.  WordPress has a realtime feed – you can see comments in real time, immediately, without the RSS polling delay.
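[A sketch of what listening over XMPP looks like, using the Python slixmpp library – a later descendant of the XMPP libraries around at the time; the account details are placeholders:]

```python
# Sketch: an XMPP client that prints incoming messages as they arrive -
# push rather than poll. Account details are placeholders.
import slixmpp

class CommentListener(slixmpp.ClientXMPP):
    def __init__(self, jid, password):
        super().__init__(jid, password)
        self.add_event_handler("session_start", self.on_start)
        self.add_event_handler("message", self.on_message)

    async def on_start(self, event):
        self.send_presence()
        await self.get_roster()

    def on_message(self, msg):
        if msg["type"] in ("chat", "normal"):
            print("new comment:", msg["body"])

client = CommentListener("listener@example.org", "secret")
client.connect()
client.process(forever=True)
```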

Weapons of mass distraction – easy to read far too many things.

Another feed format is CSV – the simple comma-separated values format.  Google Spreadsheets gives you a URL for a CSV file; you can also write queries which work like database queries – plug the results in to e.g. manyeyes wikified and instantly get charts. “There’s no effort” … although “it’s not quite there in terms of usability”.  It’s about putting content in to a form that makes it easy for people to move it around and reassemble it.
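[A sketch of pulling a published spreadsheet as CSV with nothing but the standard library – the key is a placeholder, and the exact export URL shape depends on how the sheet is shared:]

```python
# Sketch: fetch a published Google Spreadsheet as CSV and parse it.
# The key is a placeholder; the export URL shape varies with sharing settings.
import csv
import io
import urllib.request

url = "https://docs.google.com/spreadsheets/d/PLACEHOLDER_KEY/export?format=csv"
with urllib.request.urlopen(url) as response:
    text = response.read().decode("utf-8")

for row in csv.reader(io.StringIO(text)):
    print(row)  # each row is a list of column values, ready to re-plot
```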

Digital Worlds – ‘an uncourse’ – inspired by T184 Robotics and the Meaning of Life.  You could imagine it’s presented on a blog engine, because of how it looks. Also inspired by the way people split content up, don’t read things in order.  Hosted on WordPress.com, used that as the authoring environment. Wrote 4 or 5 posts a week. On the front page, published like a blog in standard reverse-chronological format.  All posts in categories (not very mutable, almost a controlled vocabulary) and tags (much looser) – gives you feeds for all of those – which lets you create lots of different course views.  So you could see e.g. videos, or the Friday Fun, or whatever. Each category or tag becomes a mini-course.  Also custom views – e.g. all the posts about particular games developed in Game Maker.
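[WordPress gives every category and tag its own feed, which is what makes each one a mini-course. A sketch of merging a couple of them into a single custom view – the blog address and slugs are placeholders:]

```python
# Sketch: merge several category/tag feeds into one reverse-chronological
# 'course view'. Blog address and slugs are placeholders; this assumes
# each entry carries a parsed publication date.
import feedparser

views = [
    "https://example.wordpress.com/category/videos/feed/",
    "https://example.wordpress.com/tag/friday-fun/feed/",
]

entries = []
for url in views:
    entries.extend(feedparser.parse(url).entries)

entries.sort(key=lambda e: e.published_parsed, reverse=True)
for e in entries:
    print(e.published, e.title)
```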

Also extra bits.  First, a Google Custom Search Engine (CSE).  On a search engine you can search one specific domain (e.g. add site:open.ac.uk to search just OU pages – this can work better than the OU search engine).  The Digital Worlds CSE extracts any links to external sites posted in the course, and then lets you search across not just the course content but any sites that the course content linked to.  All done automatically.  Also did a video channel – using SplashCast.

As he was writing, he was informed by what he’d done before. When he did a post with a link back to a previous post, a trackback link appeared on that original post.  So you can see on any given post which later posts refer to it – ‘emergent structure’.  He created graphs of how all the links worked within the course blog.  Could also see paths through the course beyond the fixed category structure.  ‘Uncourse structures can freely adapt and evolve as new content is written and old content is removed.’  They rely on the educator ceding control to the future and to their students.  We try not to do forward references when writing OU stuff … but in this environment they are created automatically when you make a backward link.  Uncourses encourage the educator to learn through other people’s eyes.  Later comments prompt further discussion and posts, and so on.  It keeps things fresh.
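[The ‘emergent structure’ graph is easy to picture in code: posts are nodes, back-links – and hence trackbacks – are edges. A sketch using networkx, with invented post slugs:]

```python
# Sketch: posts as nodes, back-links as directed edges. Trackbacks are
# just the reverse view of the same graph. Post slugs are invented.
import networkx as nx

links = [  # (later post, earlier post it links back to)
    ("game-genres", "what-is-a-game"),
    ("game-maker-intro", "what-is-a-game"),
    ("platformer-tutorial", "game-maker-intro"),
]

graph = nx.DiGraph(links)

# Trackbacks on a post = the later posts that point at it:
print(list(graph.predecessors("what-is-a-game")))
# Paths through the course beyond the fixed category structure:
print(nx.shortest_path(graph, "platformer-tutorial", "what-is-a-game"))
```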

Questions

“We call them students because we take their money”, as opposed to people, a general audience on the web.  More seriously, it’s engaging more as a peer process than a didactic one.

This stuff requires a lot of skill – how do we get those skills out to educators?  Tony is doing workshops with people, and writes recipes on his blog.  The problem is that when he publishes a recipe for a mashup, people tend to read it for what it is, or get hung up on the specific tools, rather than seeing the general technique or the underlying pattern.  (This is a well-worn problem in teaching!  Especially at the OU in trad course design. Trying to help people move from the specific examples to the general principles. And when people are overwhelmed with new concepts, they tend to latch on to things that are familiar.  You have to very patiently build up from what they do know to where you are trying to get them.  Zone of proximal development stuff!) A recent book called Mash-up Patterns does this without being too technical.  Tony is planning to do more specific stuff.

As an educator, posting comments and responses and so on – could you organise a group of students to do this collectively? How much would they need to know?  An example is Darrell Ince’s wikibook project – getting students to write a book, farming out particular topic questions in a very structured way; that works.  A less controlled version is the sort of thing Jim Groom is doing with student blogs, which are then aggregated.

‘Quick’ question: How do you get the university as a whole to buy in to this stuff?  Er, don’t know. One reason it might: after spending 15 weeks at half time preparing the Digital Worlds stuff, then 4 weeks writing it, then an editor doing 2.5 weeks’ work on it – that’s not a huge input for a 10-week course.

Dynamic courses are hard in our context.

A new Babel

There’s an explosion of platforms to develop applications on at the moment, which is exciting in many ways – lots of new environments and possibilities to explore.  But it makes life harder for everyone – people who are making things, and people who are choosing things.

Back in the mid to late 90s, it was pretty much a PC world.  If you wanted a computer, you knew that if you had a PC, then (apart from a few vertical niche markets), you’d have access to pretty much any new software or tool that came out.  People who made things could develop for the PC and know that nearly everyone (who had a computer) could use their stuff, apart from the small minority of people who’d deliberately chosen a computer that didn’t use what everybody else was using.

And then from the late 90s to the mid 00s, it was pretty much a web world.  For the most part, if you had a computer and an Internet connection, you’d have access to pretty much any new tools that came out.  People who made things could develop on the web and (with a bit of futzing around with browser-specific stuff), pretty much everyone (who had a computer and an Internet connection) could use their stuff.

But now there’s not just PCs, Macs and Linux computers, there’s not just Internet Explorer, Firefox and Safari, there’s also the iPhone, Android (G1 – HTC Dream etc), Windows Mobile, Symbian/S60  (e.g. Nokia N97 and N86, out today), and the entirely new environment (webOS) for the Palm Pre (due any minute).  All of these are separate environments to use and to make things for.

It’s a nightmare.  As a user, or a developer, how do you choose?  How do you juggle all the different environments and still get stuff done?

Because juggling multiple environments is where things are.

This is all part of an ongoing transition.  When computers first arrived, there were lots of people for every computer.  Microsoft started out with the then-bold ambition “a computer on every desk and in every home, running Microsoft software” – a computer for every person.  Now we’re well in to the territory of lots of computers for every person.

This makes for harder work for everyone – to get the best out of things as a user or developer,  you need to be polyglot, able to move between platforms, learning new tools routinely.

It’s also, though, a hugely exciting range of opportunities and possibilities.   We are very much still in the middle of a golden age of information technology.

New ways of interacting: Lessons from non-standard games controllers

I gave another IET Technology Coffee Morning talk this morning, on non-standard games controllers.

Abstract

How do computers get information from you? The standard keyboard and mouse setup has been widely available since the mid-80s. Things are moving on. Other talks in this series have covered touch-sensitive surfaces, but there are other developments. Games consoles in particular are pioneering a mass market for new ways for people to interact with computers, including wireless sensors for motion, orientation, micro-myograms and encephalograms. In other words, the computer knows how you’re holding something, where you’re pointing it, how you’re standing, which muscles are twitching – and can even pick up your brain waves. Examples of all of these technologies are now retailing for £100 or less. In this session, Doug will provide a critical review of current consumer-grade HCI technologies. And then we might play some games. Er, I mean, there will follow an opportunity for participants themselves to critically evaluate some of these technologies in a direct experiential mode.

Slides

Further information

Here’s the Natal demo video that I showed – the “no controller required” play system from Microsoft announced yesterday at E3:

And here’s games legend Peter Molyneux talking about how wonderful Natal is for personal interaction experiences – more here of possible educational use than in the first video:

And if you’re interested in messing around with games controllers, have a look at Johnny Chung Lee’s blog – he’s famous for Wii remote hacks but apparently has recently been working with Xbox on Natal, “making sure this can transition from the E3 stage to your living room”.

And finally

I notice that I spotted the Emotiv EPOC being announced back in February 2008, “allegedly ready for mass sale next Christmas”.  The latest I can find on the Emotiv website is that you can reserve one for $299, and “We expect to be able to deliver the product to you in 2009”. We’ll see.