Web Squared

In the run-up to the Web 2.0 Summit later this month, Tim O’Reilly and John Battelle have been outlining their vision of what comes after Web 2.0.  Their answer: Web Squared.  They’ve set this out in a white paper (also available as a 1.3MB PDF), a webcast, and a Slideshare presentation.

They say:

Ever since we first introduced the term “Web 2.0,” people have been asking, “What’s next?” Assuming that Web 2.0 was meant to be a kind of software version number (rather than a statement about the second coming of the Web after the dotcom bust), we’re constantly asked about “Web 3.0.” Is it the semantic web? The sentient web? Is it the social web? The mobile web? Is it some form of virtual reality?

It is all of those, and more.

They set out a vision in some detail – it’s well worth a read if you’re interested in what the leading lights of Web 2.0 think happens next.  In a nutshell (as you’d expect from O’Reilly) it’s  ‘Web 2.0 meets the world’. The boundary between the web and the real, physical world is in some ways clear, but in other ways very blurred, and the transition across it is one I am fascinated by.

As with Web 2.0, of course, lots of the things they proclaim as part of Web Squared can be seen going on right now.  As William Gibson said, the future is already here, it’s just not evenly distributed.

There are smarter algorithms to infer collective intelligence from the ‘information shadow’ of real-world objects, cast in space and time by more sensors and more input routes; and smarter ways of visualising and exploring the outputs, and delivering them to people in more contexts and situations.  And all of this happening ever closer to real time.

The ‘information shadow’ and ‘new sensory inputs’ are exactly the potential that Speckled Computing is mining and looking into (and that I’m very interested in pursuing for learning).  And building collective intelligence from many individuals collaborating with low effort, via those increased sensors and input routes, is the sort of thing that iSpot is doing – using geolocations and photos from a wide range of individuals to build a bigger picture.

(As a bit of an aside, one ‘key takeaway’ is that ‘A key competency of the Web 2.0 era is discovering implied metadata, and then building a database to capture that metadata and/or foster an ecosystem around it.’ – I’m certainly convinced that’s a more scalable system than one where humans do the hard work of marking data up semantically by hand.)

The potential for the web to learn more and better about the world is huge – and as the web learns more, we too learn more.  As they say, we are meeting the Internet, and it is us. And we’re getting smarter.

The murky issue of licensing

Over on the excellent new-look Cloudworks, there’s a debate going on about what to do about licensing of content on the site.  There are two questions: one is what licence to choose, and the other is what to do about the stuff that’s already on the site (!).  The latter I’m going to discuss over on that article, since it really only applies to Cloudworks, but which licence is the Right One is a bigger and more general question.

This is far from a settled issue in the educational community. There’s reasonable consensus that Creative Commons licences are Good, rather than more restrictive ones, but as you probably already know, there are multiple versions of Creative Commons licences.  The details are set out nicely on the Creative Commons Licenses page.  As someone releasing material, there are basically four conditions you can choose to apply:

  • Attribution (BY – must give credit)
  • Share Alike (SA – can make derivative works but only if licensed under a similar licence)
  • Non-Commercial (NC – only if not for commercial purposes)
  • No Derivatives (ND – can only distribute verbatim copies, no derivative works)

You can combine (some of) these to create six distinct licences – Attribution is part of all of them, and Share Alike and No Derivatives are mutually exclusive (see the sketch after this list):

  • Attribution (CC:BY)
  • Attribution Share Alike (CC:BY-SA)
  • Attribution No Derivatives (CC:BY-ND)
  • Attribution Non-Commercial (CC:BY-NC)
  • Attribution Non-Commercial Share Alike (CC:BY-NC-SA)
  • Attribution Non-Commercial No Derivatives (CC:BY-NC-ND)
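
To see why those four conditions give exactly six licences rather than sixteen combinations, here’s a minimal sketch (my own illustration in Python, not anything official from Creative Commons) that enumerates them, treating Attribution as always present and Share Alike and No Derivatives as incompatible:

```python
# A minimal sketch of the combinatorics: BY is common to every CC licence,
# and SA and ND can't be combined (you can't set licensing terms for
# derivative works while forbidding derivative works).
from itertools import combinations

OPTIONAL = ["NC", "SA", "ND"]  # conditions that may be added to BY

licences = []
for r in range(len(OPTIONAL) + 1):
    for combo in combinations(OPTIONAL, r):
        if "SA" in combo and "ND" in combo:
            continue  # mutually exclusive
        licences.append("CC:BY" + "".join(f"-{c}" for c in combo))

print(licences)
# ['CC:BY', 'CC:BY-NC', 'CC:BY-SA', 'CC:BY-ND',
#  'CC:BY-NC-SA', 'CC:BY-NC-ND'] – six in total
```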

There’s also a newish licence called CC0, which is intended as a way of unambiguously releasing a work into the public domain, free of any restraint.

So – assuming your aim is to promote the widest possible access to the material – what licence should you choose?

There is a big, ongoing and fundamental argument in the Open Educational Resources (OER) and wider online educational community about this, with Stephen Downes and David Wiley perhaps the most articulate/notable exponents of two different positions.  To caricature horribly (and my apologies to both), Stephen Downes’ position is that the most effective licence is CC:BY-NC-SA, and David Wiley’s is that simple CC:BY is better (or even CC0).  This is overlaid (or perhaps underpinned) by a difference of approach: one strongly principled, the other pragmatic.  (If you’re at all interested, I really do recommend digging into their ongoing conversation about this.  A good starting place might be this post by Stephen Downes, and this one by David Wiley, mostly on NC – or these lists of the responses of one to the other.  If you’re going to Open Ed in Vancouver, they’re planning to debate each other face-to-face, which should be very illuminating.  A recent contribution to the debate is Wikipedia’s move to CC:BY-SA.)

The argument for minimal licensing (CC:BY or less) is in essence that the other conditions create unnecessary barriers to reuse and distribution.  So, for instance, insisting on Non-Commercial would stop a company distributing printed copies of the work for profit, which might make it more available than it would otherwise be.  The arguments for more restrictive licensing include a fear that commercial interests, using their greater marketing leverage, will crowd out the free sources, and that requiring Share Alike keeps the ‘openness’ attached to the work.

There are obvious parallels with the Free/Open Source Software debate: there, the ideologically pure line (what you might call the Richard Stallman view) has not been anything like as widely adopted as the more flexible one (what you might call the Linux view).  Being widely used, of course, does not mean that the approach is right.

For educational resources, my own current personal view is that CC:BY is the licence of choice, where possible.

It’s the least restrictive licence and presents the lowest barrier to sharing. All the other CC licences create speedbumps (or worse) for people who want to use or remix material.

We know that re-use is not widespread default practice in the educational community, and adding extra barriers seems entirely the wrong tack to me.  If you want to use something, the extra conditions create headaches that make it – for most practical purposes – at least look easier and quicker just to re-create your own stuff.  It’s hard enough persuading colleagues that it’s a good idea to re-use material where possible rather than re-creating it, never mind if they also need a long course in Intellectual Property Rights to understand what they can and can’t do with it.  Each of the qualifications to a simple CC:BY adds extra questions that the potential reuser needs to think through.

We can dismiss ‘No Derivatives’ fairly easily: it’s an explicit barrier to remixing or editing.  As a potential user, you have to think about things like how much you’re allowed to quote/reuse as fair use/comment.  And if you are prepared to simply copy it verbatim, what constitutes verbatim?  What if you change the font?  Or print it out from an online version?  Put a page number, heading or links to other parts of your course at the top or bottom?  Can you fix a glaring and misleading typo?

‘Non-Commercial’ is also full of tricky questions.  Most universities are not commercial for these purposes … except not all university activities are covered.  What about using it on a website with ads on it?  Like, say, your personal academic blog that’s hosted for free in exchange for unobtrusive text adverts?  What about a little ‘hosted for free by Company X’ at the bottom?  A credit-bearing course where all the students are funded by the State is clearly not commercial in this sense … but what about one where (in the UK context) they’re all full fee-paying foreign students?  Or a CPD-type course where there’s no degree-level credit and the learners all pay fees?

‘Share Alike’ means you have to worry about whether the system on which you want to use the material allows you to apply a CC licence at all.  Does, say, your institutional VLE have a blanket licence that isn’t CC-SA compatible?  And what if you want to, say, produce a print version with a publisher who (as most do) demands a fairly draconian licence?

For any given set of circumstances, there are ‘correct’ answers to most of these questions.  (And they’re certainly not all ‘No you can’t use it’ in many situations that obtain in universities.)  But you need to be pretty savvy about IP law to know what they are.  And even then, a lot of it hasn’t been tested in the UK courts yet, so you can’t be certain. Worse, what you want to do with the stuff when you’re reusing it may change in future – you might start off making a free online course, but then it might take off and you want to produce a book … but you can’t because some of the stuff you used had NC attached.  Or you might want to transfer your free non-assessed online course to a more formal for-credit version in your University on the institutional VLE … but you can’t because some of the material had SA attached.
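
To make those headaches concrete, here’s a toy sketch (again my own illustration, not any official CC tooling): given the conditions attached to the sources you want to remix, which constraints carry over to your project?

```python
# Hypothetical helper: NC and SA conditions on any source propagate to the
# whole remix; ND on any source blocks remixing it entirely.
def remix_constraints(source_licences):
    """source_licences: one set of CC conditions per source, e.g. {'BY', 'NC'}."""
    constraints = set()
    for conds in source_licences:
        if "ND" in conds:
            return None  # No Derivatives: this source can't be remixed at all
        constraints |= conds & {"NC", "SA"}  # sticky conditions
    return constraints

sources = [{"BY"}, {"BY", "NC"}, {"BY", "SA"}]
print(remix_constraints(sources))  # {'NC', 'SA'} (a set, so order may vary)
# i.e. the remix can't be commercial and must be shared alike – and in fact
# BY-NC and BY-SA sources can't legally be combined at all, since SA requires
# the derivative to be BY-SA, which rather proves the point about headaches.
```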

You can be a lot more confident about future flexibility if you stick to CC:BY material, and there’s a lot less worry about whether you’re doing it right.  So my view is that if you want to release material to be re-used as widely as possible, CC:BY makes your potential audience’s life much easier.

Complete public domain release would – on this argument – be even better, except that as an academic, I see attribution as crucial and fundamental, so I can’t let go of that!

I’m not overwhelmingly ideologically committed to this position: it’s very much a pragmatic view of what is most likely to get the best outcome.  I certainly don’t dismiss the counter-arguments about the dangers of commercial, closed pressures: they are real.  But I think on the balance of probabilities that the ease-of-reuse argument outweighs those, and CC:BY is usually the licence of choice.

Future of the Net

Liveblog from a seminar on The Future Of The Net (Jonathan Zittrain’s book, The Future of the Internet and How to Stop It), 20 March 2009, given by John Naughton.

Update: Listen to the MP3 and see the useful concept map from John Naughton himself.

Audience small but quite high-powered (eight, including Tony Walton, Paul Clark, Andy Lane). OU Strategy Unit trying to reach out to academic units and others.

[Book cover image: train tracks with points set to go off a cliff]

John  lost his physical copy … but rightly guessed it’d be available online as Creative Commons-licensed text.

Jonathan Zittrain was employed sight-unseen as a Unix sysadmin at 13, then by some process (probably involving Larry Lessig) became a lawyer.

Part of an emerging canon – Lessig’s Code 2.0, Benkler’s The Wealth of Networks – heavyweight academic stuff. Two sorts of people – trailblazers and roadbuilders; Lessig is one of the first sort. Our role in the OU (including the Relevant Knowledge Programme) is to follow and be roadbuilders, which is an honourable activity.

Core argument of book: Internet’s generative characteristics primed it for success, and now position it for failure. Response to failure will most likely be sterile tethered appliances.

Transformation of the Internet in the blink of an eye from being thought of as just ‘CB de nos jours’ to taken-for-granted. John’s message: don’t take it for granted.

Three parts: (1) rise & stall of the generative network; (2) after the stall (including a long and good analysis of Wikipedia); (3) solutions.

Conjunction of the open PC and the open Internet created the explosion of creativity, but contains within it the seeds of its own destruction. Parallel with T171 You, Your Computer and the Net (Martin did the PC, John did the Net) – but that didn’t study what happens when you put them together, which Zittrain does here. Not about proprietary versus open source – the PC was an open device: if you could write code, you could program it.

John says people don’t understand what we’ve got in the current Net. Knowing the history helps. Design problem (Vint Cerf, IETF etc) – design for apps that haven’t yet been dreamed of, given distributed ownership. If you’re designing for the future, you don’t optimise for the present. Architectural solution has two key points: anyone can join (permissiveness); dumb network, clever apps (end-to-end principle). The openness is a feature, not a bug. Contrast with the case of the Hush-a-Phone.

Zittrain equation: Open PC + surprise generator = generative system

Thought experiments from James Boyle – he gave two talks recently, at the RSA and on John’s Cambridge programme. Almost everybody has a bias against openness: when something free and unconstrained is proposed, we see the downsides. (Because you can imagine those, whereas you by definition can’t imagine what hasn’t been invented yet.)  Imagine it’s 1992 and you have to choose between: approved sites with terminals at the end (like teletext/Minitel); or a dumb, unfiltered, permissive network (the Internet) with general-purpose computers at the end. Who would invest in the latter? Second question, still in 1992: you have to design an encyclopedia better than Britannica – broader coverage, more current. Options: 1 – strong content, vast sums of money, strong editorial control, DRM; 2 – I’d like to put up a website and anyone can post stuff. Who’d pick the latter?

Posits tension – or indeed tradeoff – between generativity and security. Consumers will become so worried about this that they’ll (be encouraged to) favour tethered appliances and heavyweight regulation.

(I wonder if I can’t bring myself to believe in the Net being locked-down out of all recognition because I’ve always had it around in my adult life. It’s probably easier for people who really knew a world without it to imagine it going away.)

Part 2 explores our likely response to these problems, then Wikipedia. “With tethered appliances, the dangers of excess come not from rogue third-party code, but from […] interventions by regulators into the devices themselves.”

Criticism of the book – it underestimates the impact of governments on the problem. Remembering 9/11, like the JFK assassination. (John was on the phone to a friend who was there at the time!) John wrote on his blog that day that this was the end of civil liberties as we knew them, and in many ways he was right. (My memory is that it was the first huge news story that I got almost entirely from the web.) But – one day the bad guys will get their act together and we’ll see a major incident. There have been dry-runs, with what happened to Estonia. But there will be something huge and coordinated, and that’ll evoke the same sort of response.

Rise of tethered appliances significantly reduces the number and variety of people and institutions required to apply the state’s power on a mass scale. John thinks it’s like the contrast between Orwell and Huxley – likelihood of being destroyed by things we fear and hate, or things we know and love.

Dangers of Web 2.0, services in the cloud – software built on APIs that can be withdrawn is much more precarious than software built under the old PC model.  Mashups work (except they’re always breaking – see Tony Hirst’s stuff, just like links rot). Key move to watch: Lock down the device, and network censorship and control can be extraordinarily reinforced.

iPhone is the iconic thing: it puts you in Steve Jobs’ hands. It’s the first device that does all sorts of good things and could be open but isn’t.  (What about other mobile phones?) Pew Internet & American Life survey – Future of the Internet III – predicted that the mobile device will be the primary connection tool to the internet for most people in the world in 2020. So this could be a big issue.

Wikipedia analysis in the book is extensive.  Looks at how it handles vandalism and disputes – best treatment John’s seen. How it happens is not widely understood. Discussion about whether Wikipedia or Linux is the more amazing phenomenon. (My argument is that Linux is in some ways less startling, because you have some semi-independent arbitration/qualification mechanism for agreeing who’s a competent contributor and which code works.)

Part 3 – solutions to preserve the benefits of generativity without the downsides. “This is easier said than done.” The way Wikipedia manages itself provides a model for what we might do. (I think not – I think Wikipedia works because it can afford to piss off and exclude perfectly good and competent contributors.) Create and demonstrate the tools and practices by which relevant people and institutions can help secure the Net themselves instead of waiting for someone else to do it – badwarebusters.org.

Barriers – failure to realise the problem; collective action problem; sense that system is supposed to work like any other consumer device.

Nate Anderson’s review in Ars Technica – three principles: the IT ecosystem works best with generative tech; generativity instigates a pattern; ignore the downsides at your peril.

Criticisms: too focused on security issues and not on commercial pressures; not enough on control-freakery of governments; too Manichean – mixed economies; too pessimistic about frailties (and intelligence and adaptability) of human beings; over-estimates security ‘advantages’ of tethered appliances.

Discussion

Parallel with introduction of metalled roads. Crucial to economic development, move people and stuff around as a productive system.  Early days were a free-for-all, anyone could buy a car (if rich enough) and drive it, no need for a test.  Then increased regulation and control.  (Also to cars – originally fairly easily tinkerable with, now not/proprietary engine management systems.)  Issue about equity, as much as open/closedness.

Lessons of Wikipedia and the creators of malware. Malware creators only need to be small in number. To take down Wikipedia and make it undependable would take too much effort and coordination. (I disagree – a smart enough distributed bot attack would do it.)

I can’t imagine no Internet/generative/smart programmable devices because I’ve never not had them. Grew up on the ZX81 onwards, which had the CPU pinout on the connector.  Helps to have smart people around who have known the world before that.

South Korea got taken out by SQL Slammer, bounced back though – system is pretty resilient.

Manhattan Project perhaps a bad parallel for an effort to help here – it was the ultimate in top-down command-and-control project, with a clearly-defined outcome. And it was constrained and planned so tightly that it couldn’t actually work until people like Feynman loosened things up a bit to allow some degree of decentralisation.

How do you sign people up? People won’t do anything about e.g. climate change – until their gas bills shoot up. Science and society stuff: well known that people only become engaged when it becomes real to them. A liberal is a conservative who’s been falsely arrested; a conservative is a liberal who’s been mugged.

Surveillance – the chance of major public outrage leading to a reaction is small, since most people don’t realise their clickstream is monitored. It’s only if something happened that made people realise it that they’d say no.  Hard to imagine that scale of community engagement happening.

Case a few months ago – Wikipedia vs the Internet Watch Foundation. A readymade community leapt into action immediately.  But that’s less likely where you don’t have such an articulate, existing community. Also the photographer crackdown – photographers do have access to the media. Danger of the Niemöller scenario, where they come for small groups one at a time.

It’s an argument about the mass of technology users, not the small cadre of techies – the iPhone can be jailbroken if you know what you’re doing. And there are more, not fewer, opportunities for techies, and more techies than ever before. But most PC users in the 80s only used what they were given. In 1992 I could write an app for the PC and send it to anyone on the Internet – except hardly anyone was on the Internet then, and even though most techies were among them, most Internet users then couldn’t write their own stuff, or even install software off the net.  Techies are still a small proportion (even though bigger in number than before), so we’re still vulnerable to this sort of attack.

Mobile devices are key here, consumerism. People just want stuff that works, generally.

Google as another example – they build very-attractive services, but on the basis of sucking up all our data.  Harness amoral self-interest of large corporations in this direction. Also (enlightened?) interest of Western Governments in promoting openness.

John uses the example of bread mix and a recipe to illustrate open source. Parallels with the introduction of the car (wow, I can go anywhere); the PC (wow, I don’t have to ask people for more disk quota) and the Net (wow, I don’t have to ask for more mail quota). These things have an impact on society, and can damage it. So for instance, if you have an open machine, you could damage other people’s computers, hence the need to regulate ownership and operation. With a car, there’s an annual check that you have road tax, insurance, MOT; with a PC the surveillance needs to be continuous.

The 9/11 disaster scenario is instructive: why didn’t we have the same response to the Troubles? Because they weren’t transnational/non-State actors. The Provisional IRA had tangible, comprehensible political objectives that could be taken on, whereas 9/11 terrorism is more vague.  And malware is different again. It wasn’t a problem when it had no business model … but now it has one. Can we now take it on?

Is the Internet just (!) an extension of civil society, with the question being how you should regulate it, or is it something quite different?  Motor traffic law introduced absolute offences (no mens rea – it’s an offence to drive over the speed limit regardless of whether you know you are going that fast or what the limit is) because it was a quite different threat.  The Internet is at least as new, so is likely to spur at least as revolutionary – and shocking – a change to our legal system.  OK, now I’m scared, so that’s a result.

But we’re only eighteen (nineteen?) years into the web.  It’s idiotic for us to imagine we understand what its implications are.  So the only honest answer is: we don’t know. John argues we’re not taking a long enough view. Imagine it’s 1455, eighteen years after the introduction of the printing press, and a MORI pollster asks: do you think the invention of printing will undermine the authority of the Catholic Church, spur the Reformation and science, create whole new classes, change the concept of childhood?  The web is a complex and sophisticated space, so regulating it right can’t be done overnight.  There’s a tendency for people to make linear extrapolations from the last two years’ trends.

In the long run, this won’t look like such a huge deal in the history of humanity. It’ll be a bit like what happened with steam. It looks like the biggest deal ever to us only because we’re in the middle of it.

So what do you do when you know that on a 20-year horizon you’re blind?

My answer: get moving now, plan to change and update regularly.  Expect to have to fiddle with it, throw great chunks of things away because they’re no longer relevant. Challenge to OU course production model! (Actually, I’m wrong to say throw away – more expect that things will become eclipsed and superseded – old technologies die very hard.)

We’ve become more open/diverse in our offer to bring in enough people. Which is hard – costs and scale versus personalisation.

Learning and Teaching at the OU

Presentation by Denise Kirkpatrick and Niall Sclater.  Or is it a presentation? It’s organised as a Human Resources Development Course – it’s an Open Insights Expert Lecture – with sign up, sign in and all the details going on the internal staff Learning Management System.  And there are feedback sheets to complete too.  “The subjects covered were:  relevant to my present work, background interest only, possibly useful for future work, of no interest”.  If it’s not relevant to my present work then either I or the OU have a bit of a problem.

Being told it’s aimed at new staff … which is news to me; perhaps I misread the course information?  Networking opportunities over coffee later.

Denise Kirkpatrick – Learning @ the OU

Welcomes new staff. We take the quality of our teaching and our student experience extremely seriously, we do it well but always want to try to do it better. QAA audit coming in March.

(Tony Hirst would be pleased to see the RSS logo prominently on her Powerpoint title slide. And I also note that it’s not using the OU Powerpoint template.)

Hard to draw a line between technologies for learning and teaching and those for the rest of your life; the line is blurred. But focus here is on learning and teaching.

Sets out a generational view of technologies: Baby Boomers, Gen X, NetGen/Millennials. Digital natives, who grew up using technology, don’t see it as something different.  New generations approach technologies in a different way.  We as staff don’t come at the technologies in the same way as our (potential) students. A challenge.  Attitudes and ways of working are also important; NetGen are team-based, they like to work like that.  Caveat: they’re broad categories, and there are exceptions.

Statistics – UK data – on tech use, from last year.  65% home internet (+7% on 2007), 77% of NetGen online daily, 91% of NetGen use email (wow – so 9% of them don’t?).  Childwise 2009 report – kids, much younger, are using tech a lot – 25% of 5-8 year olds have the net in their room; at 13-16 almost all have mobiles.

We have mobiles, but we use them differently.  Some staff can’t work out why the hell you would want to deliver something to a device that’s so tiny.  But our students are so much more comfortable with mobiles. So we must investigate how to do it effectively.

Emerging themes in tech in ed: Blurring (f2f/online, in/formal); increased mobility; gaming; social networking; high-impact presentation/engagement techs; analytics, diagnostics and evidence-based ed; human touch; Learning 2.0?

Mobility – shows Google Trends on news about mobile learning.  iTunesU – new OU channel to deliver OU assets to students. (Interesting metaphor.)

Social networking – mentions social:learn, very exciting. Current and potential students are likely to use social networking in their daily life.

Mentions Twitter, virtual worlds – we have a big opportunity to create social communities for our students, who wouldn’t necessarily meet up.

Online learning gives us lots of data – we need to use that data; it’s especially valuable with a Quality hat on. (Big on analytics – again I can picture Tony Hirst smiling.)

Learning 2.0: don’t underestimate the social aspect. The strongest determinant of students’ success is the ability to form and participate in small groups (Light). ‘Learning to be’ supported by distributed communities of practice; productive inquiry; increasing connections & connectedness.

Has tech changed things? Leveraging the potential of social learning (esp. in distance ed); adding community to content; access to experts; access to a peer review audience.

Examples: iTunesU, OpenLearn, the VLE, the Learning Designs project (Gráinne Conole, Cloudworks) – making teaching community-based, sharing practice.

Our challenge: towards a pedagogy of technology enhanced learning; and a scholarship for a digital age (esp for academics). We have always used technologies, for the last 40 years, but need to move that forward.

Q: How does the technology match against our current student age profile? We have a lot of baby boomers.

A: We deliver to the here and now, but our profile does include Gen Y, and that proportion is increasing. Also planning for the future. Many baby boomers are confident tech users. Also many of our students – regardless of age – are demanding it. If we have evidence it’ll improve the learning experience, we should do it.

Q (Martyn Cooper, IET): Is there a qualitative difference between GenY’s use of social networking, rather than a quantitative one?

A: I’m not going to answer that one. We might think our quality is far superior, but … it’s a fertile area for research.

Q: Demographics, social advantaged versus disadvantaged – do technologies favour the socially advantaged? Tension with OU’s principles of open access to all.

A: Really important question, currently being researched. A lot of unpacking needs to be done into e.g. mobile phone ownership. It’s a dilemma and a challenge; we have to keep tackling and pushing it. We put in resources to help our socially disadvantaged students have access to the net. How much wider would the gap become if we don’t give people the opportunity to learn about that (tech) world?  It could disempower them to give them a route without tech. We have a wide range; it is still possible to study with us and have an almost predominantly print-based experience. But we need to reconsider what access means and what our responsibilities are.

Q (Robin Stenham): How explicit are we making the use of social networking tools for group learning in terms of accreditation? Building transferable skills in to the learning outcomes.

A: An area where we need to do more work. If we don’t expect access to tech, we can’t base assessment on it. There are examples where people are starting to build that in. But we haven’t done huge amounts of work; it’s not widespread at this stage.

Niall Sclater

(presentation uses OU template)

Audience question: who brought a mobile? (Nearly all.)  Who ignored ‘turn off your mobile’? Two. (Including me.)  So please consider switching ON your mobile now.  (And lots of phone boot-up noises.) The impression given by ‘turn it off’ is the wrong one. The onus is on the presenter to make the presentation more interesting than the competition for your attention (email on your laptop etc).

Focus of the VLE is to make the web the focus of the student experience.  E.g. the old-school A3 print study calendar – contrast the A103 and AA100 VLE views showing you the resources. The spine of the course is on the internet.

Encouraging collaboration: tools to help. Elluminate – audio conferencing, increasingly video too. Shared whiteboard. Quite a traditional class way – teacher writing down equations; something about maths is best taught that way.  With online maths learning done this way, tutors have taken to it like ducks to water.

Maths Online (MOL) – eTutorial trial Feb 08 – 449 students, 136 staff. Most positive comments were about interaction, the tutor, convenience (being at home vs travel to tutorials), and help; fewest about preparation, software, and good audio. Negative comments: mainly sound problems, but 50% had nothing negative. Connection problems. (Niall has no broadband at home at the moment thanks to ISP problems.) Must bear that in mind.  Positive feedback comments – ‘very close to the experience of a face-to-face tutorial’. Elluminate is not for a stand-up lecture with a passive audience; it has tools for feedback (instant votes, etc). Give the talk, move to the next slide, monitor the IM chat backchannel and refer to it. Very skilled work; it’s completely different to what we’re used to. ‘Gave me a feeling of belonging to a group’ – we couldn’t do this in the past.  If the net gen are more collaborative (some evidence?), this is likely to be more important to our students. Evidence for many years that group learning can help.

Community building: Second Life, virtual worlds. Virtual worlds project about to kick off. (Great slide of people sitting down lecture-style in Second Life – the only funny bit is that one audience member has wings, and another is in fact a chicken.) Can try to replicate the traditional lecture environment, everyone sitting in rows … or have something more interactive. Interesting how we transpose traditional models that aren’t necessarily appropriate – e.g. building copies of physical campuses; there’s no need to visit an empty reproduction. So use the spaces more imaginatively.

Building your online identity: increasing numbers of student blogs. Tags – research, wisdom, travel, karate. Personalisation.  Niall happy with LPs, cassettes, MP3s – the transition across groups. Young people build identity through Facebook etc, telling the world their interests, relationships and so on. Gives you a much better network of people; professional and social relations bring you closer together.

Making content interactive: e-assessment with feedback, based on your answer. Use internet for what it’s good for.

Ownership and sharing: MyStuff – eportfolio system. Share documents, store them for your benefit, tag them, share them with other students, tutors, a future employer. Compile into a larger collection. Problems with MyStuff – the user interface is confusing to students, and it is also very slow. Planning to replace it, but that will take a long time. Looking at e.g. Mahara (works with Moodle) and PebblePad, Google Apps for Education, Microsoft Live@edu.  Google Docs – instant speed even though hosted in the US. We could use this for the content repository side easily.

Reflection: Templates for reflection on learning outcomes. (Glimpse of Niall’s browser toolbar – RSS feeds from Grainne, Tony, Martin, Alan Cann …)

Moodle grade book – rich data to tutors immediately after students have done test. Wiki report showing breakdown of activity/contributions – have some courses requiring use of wiki, this is one way of assessing.

Studying on the move – much hype, but we now have sophisticated platforms (iPhone, Android, etc). Can do so much more now. Many/most students will have a very sophisticated device that will browse the web, view course content, do a quiz, etc, from wherever.

VLE and other systems – must be like accessibility, think about it from the start, ensure accessible from mobile devices. Like BBC sites at present – all our systems need to be built like that.

learn.open.ac.uk/site/lio Learning Innovation Office site, under development. Niall’s blog at sclater.com.

Thanks to Ben Mestel, Maths Online Team, Rhodri Thomas.

Q (Martyn Cooper): Accessibility and mobile learning. EU4All content personalisation responding to accessibility profiles and device profiles – optimise content based on both of those. Who reviews this?

A: We have a big project underway, want to bring you (Martyn) in, LTS.

Q: Diversity of devices very important for accessibility.

A: Indeed.

Q (Carol ?, LTS): Google Apps. Why do we develop custom things when there are good apps already out there? It’s disadvantaging our students; it’s less transferable.

A: Key questions we’re grappling with. (Mobile phone sound … but can’t find the source. Oh dear.)

Q: Not rude to turn off phones, it’s setting aside time. Would be rude to take attention away.

A: Maybe this is a net generation thing. Conferences have people using devices constantly; don’t find it rude any more, my duty to get people interested. But understand that people find it offensive.  Alas, experiment has failed.

Back to in-house vs external – have had endless debates with Tony Hirst and Martin Weller on this. Can create a ‘VLE’ online out of many things – but putting big burden on students to remember/learn many sites. Can’t assess accessibility.  Can’t guarantee service (but if ours we can do something).

Q: (Will Woods, IET): Students using Twitter, blogs, etc – staff stuck in email as main communication channel. Small clique at OU using Twitter. Can we improve internal channels? Cultural change?

A: Is an issue. Is a very email-based culture. Use it too much? Twitter … has its place, but can’t guarantee people are reading it. How do we move everyone on to new technologies? Should we try to? People understand internet is a bigger thing, less opposition to elearning. Thoughts in audience?

Q (Robin Stenham): Moodle gives us many different tools to communicate and share learning; forum tool vs Outlook. Moderating a forum can be very useful. E.g. using email to ‘send in your expenses’ and everyone does reply-all. Misappropriating technologies. He gets 100 emails a day, of which 30-40 are streams/CCs in a discussion.

A: Yes, cognitive overload. Wiki a useful tool, putting some committee papers on wikis so don’t need them on the hard disk. (Denise) Points out that we’re encouraging people to use VLE tools themselves, so staff are experimenting with tools to understand how to use them with students. You can use VLE in your departments.

Q (Janet Churchill, HR Development): HR Development are trying to upskill staff in new technologies. Emailogic course from AACS to help people get the most out of email, not inappropriately copying people in. Development opportunities now extend beyond traditional training – there’s now a Second Life presence for feedback sessions. ILM courses have online elements. Plug – we have an induction process and an online induction tool, and are looking for people to put in touch with external agencies to build an online induction tool that’s more engaging.

Move to general questions.

Niall: Interesting to analyse what’s going on in conferences. E.g. people commenting on and sharing what you’re saying. Can’t assume people are ignoring you.  But our experiment (on mobiles) has failed.

DK: Experiment hasn’t failed, just hasn’t given you the result you wanted.

Giles Clark, LTS: eTexts. We took the view not to enhance our e-texts relative to print. Should we stay like that? Keep the electronic version exactly as in print? Or develop it further – insert animations, collaborative activities – or is that for the surrounding VLE?

Niall: There is potential to do more with our online PDFs. We can’t stay still and go for the lowest common denominator. Paper will have a role for a long time yet. Some are quite happy to read on a phone/device; could be generational.

Denise: Lots of exciting opportunities in tech, but accompanied, especially for us, by challenges. We as the OU have to be able to do it at scale.  You can do sexy experiments with e.g. 30 students in a classroom.  But doing it with thousands of distributed students is very different – scale. We need to be more efficient and economic in tough times. Hard decisions: nice bespoke examples, or go for scale across all courses. Must explore the opportunities, cost them out, assess scalability – then answer.

Thanks to all.