Web Squared

In the run-up to the Web 2.0 Summit later this month, Tim O’Reilly and John Battelle have been outlining their vision of what comes after Web 2.0.  Their answer: Web Squared.  They’ve set this out in a white paper (also available as a 1.3Mb PDF), a webcast, and a Slideshare presentation.

They say:

Ever since we first introduced the term “Web 2.0,” people have been asking, “What’s next?” Assuming that Web 2.0 was meant to be a kind of software version number (rather than a statement about the second coming of the Web after the dotcom bust), we’re constantly asked about “Web 3.0.” Is it the semantic web? The sentient web? Is it the social web? The mobile web? Is it some form of virtual reality?

It is all of those, and more.

They set out a vision in some detail – it’s well worth a read if you’re interested in what the leading lights of Web 2.0 think happens next.  In a nutshell (as you’d expect from O’Reilly) it’s  ‘Web 2.0 meets the world’. The boundary between the web and the real, physical world is in some ways clear, but in other ways very blurred, and the transition across it is one I am fascinated by.

As with Web 2.0, of course, lots of the things they proclaim as part of Web Squared can be seen going on right now.  As William Gibson said, the future is already here, it’s just not evenly distributed.

There’s smarter algorithms to infer collective intelligence from the ‘information shadow’ of real-world objects, cast in space and time by more sensors and more input routes; and smarter ways of visualising and exploring the outputs, and delivering them to people in more contexts and situations.  And all of this happening in ever-closer-to real time.

The ‘information shadow’ and ‘new sensory inputs’ are exactly the potential that Speckled Computing is mining and looking into (and that I’m very interested in pursuing for learning).  And building collective intelligence from more sensors and input routes – many individuals collaborating with low effort – is the sort of thing that iSpot is doing: using geolocations and photos from a wide range of individuals to build a bigger picture.

(As a bit of an aside, one ‘key takeaway’ is that ‘A key competency of the Web 2.0 era is discovering implied metadata, and then building a database to capture that metadata and/or foster an ecosystem around it.’ – I’m certainly convinced that’s a more scalable system than one where humans do the hard work of marking data up semantically by hand.)

The potential for the web to learn more and better about the world is huge – and as the web learns more, we too learn more.  As they say, we are meeting the Internet, and it is us. And we’re getting smarter.

Digital Scholarship Hackfest

A bunch of us got together yesterday and today at Bletchley Park for a Digital Scholarship Hackfest: Martin Weller, Will Woods, Kev McCleod, Juliette Culver, Nick Freear, Antony McEvoy (a designer newly joined from another part of the OU, spending a couple of days with us), and Lynda Davies.

[Photo: P1010242]

The setting is great for doing techie things, although I always feel slightly awed being at Bletchley Park for work.  When I’m just in tourist gawking mode it’s fine, but when I’m doing techie things I always feel a bit of an impostor. Alan Turing and the other wartime codebreakers were doing really really clever stuff, and here I am trying to do things in that line … it’s a tall order to live up to.  The place was open to visitors while we were working, and we fondly imagined that the tourists peering in through the window would think that we, hunched over our laptops (almost exclusively Macs), were busily engaged in modern day codebreaking.

The one major downside to the venue was the terrible wifi.  It was desperately slow, and not up to supporting a roomful of geeks.  I’m sure it’s been better at other meetings I’ve been to – but it may be that something special was laid on then.  It was just enough to keep us going, but I think we’d have been a lot more productive with more.

Digital Scholarship

The Digital Scholarship project is an internal one, led by Martin, with two aims:

  1. Get digital outputs recognised. (Martin’s posted about this sort of stuff already)
  2. Encourage academics to engage with new types of output and technologies.

There’ll be a programme of face to face events and sessions, but we want an online system to help support and encourage it all, and that’s what we’re here to do.

Principles:

  1. Based around social media principles
  2. Easily adaptable
  3. Incorporate feeds/third party content easily
  4. Not look and feel like a (traditional!) OU site

On that last one, we don’t want it to feel like a carefully honed, current-standard, usual-topbar OU site.  But we do want it to look like what the OU might become – what we’d like it to become – in the future.

The audience is OU academics (and related staff), but we (may?) want to make it more open to those outside later.

What we did

We spent the first day thrashing through what we meant by digital scholarship, and what the site might do for users, and what we could build easily and quickly.  We spent the second day getting down and dirty with building something very quickly.  I say ‘we’, but it was mostly the developers – Juliette and Nick – plus Antony, our designer.  Martin and I floated around annoying them and pointlessly trying to make the wifi work better.

Sadly, Martin rejected my offer to invent a contrived and silly acronym for the project, but my suggestion to call the site ‘DISCO’ (for DIgital SChOlarship) seemed to be reasonable enough to run with.  We had a bit of light relief thinking about visuals for the site – all browns and oranges and John Travolta pointing at the sky – but I suspect Antony was too sensible to take on our wackier suggestions, and the final site will not feature a Flash animation of a rotating glitter ball in the middle of the page sending multicoloured sparkles all over the screen.

Digital Scholarship Profiles

While we were beavering away, Tony Hirst (armed with a better net connection, no doubt) was musing on what we might be measuring in terms of metrics.  Well, here goes with how far we got.

One aspect of the project I worked on particularly was a Digital Scholarship Profile (a Discopro?).  The idea of this is some way of representing digital scholarship activity, and working towards some metrics/counts.

What we want to be able to do is to show – for each person – the range, quantity and quality of their digital scholarship outputs.

This would serve several purposes.  Firstly, it’s a stab at representing activity that isn’t captured in conventional scholarship assessments.  Secondly, by showing the possibilities, and letting you link through to see people who’ve done good stuff, you make it easier for people to develop in new ways.

We could show, for each area of digital scholarship output, what each person was doing, and how ‘good’ that was (more of which later).  On your DISCO profile the site would show your best stuff first, and as you went down the page you’d get to the less impressive things, and then (perhaps over a line, or shaded out) would be the areas where you had no activity yet.  For each area, we’d have links to:

  • suggestions for how to improve in that area (and on to e.g. Learn About guides)
  • links to the profiles of people with very ‘good’ activity in that area
  • information about how we do our sums

[Photo: P1010240]

Of course, metrics for online activity are hugely problematic.  They’re problematic enough for journal articles, but at least there you have some built in human quality checking: you can’t guarantee that a paper in Nature is better quality (for some arbitrary generic definition of ‘quality’) than one in the Provincial Journal of Any Old Crap, but it’s not a bad assumption.  And any refereed journal has a certain level of quality checking, which impedes the obvious ways of gaming the metrics by simply churning out nonsense papers. (Though I wouldn’t claim for a moment that there has been no gaming of research metrics along these lines.)

How do you measure, say, blog activity, or Slideshare?  You can get raw production numbers: total produced, average frequency of production, and so on.  However, there are only negligible barriers to publishing there, and any half-techie worth their salt could auto-generate semi-nonsense blog posts.

But this is relatively straightforward to measure, and nobody in academia is going to be so stupid as to simply equate quantity of stuff produced with quality, so I think we can do that without too much soul-searching.

How can we assess quality? One approach would be to take a principled stand and say that peer review is the only valid method.  This view would see any metrics for academic output as irretrievably problematic at best, and highly misleading at worst.  That stance is one that might appeal particularly in disciplines which outright rejected a metrics-based approach for the REF.  The downside, of course, is that peer review is hugely expensive – even for the most selective stuff academics do (journal articles), the peer review system is creaking at the seams.  There’s no way that we could build a peer review system for digital scholarship outputs.

There are, however, some – very crude – metrics for assessing (something that might be a proxy for) quality of online resources.  You can (often) get hold of statistics like how many times things have been read, number of comments made, number of web links made to the resource, and so on.  As with the production numbers, you can game these too.  It’s not entirely trivial to do more than a handful – but most academic blog posts (in the middle of the distribution) will be getting handfuls of pageviews anyway, so getting your mates to mechanically click on your posts would probably have a noticeable effect.  And these are proxy measures for quality at the very best.  The sort of stuff that’s likely to get you large amounts of online attention is not (necessarily) the sort of stuff that is of the highest academic quality.  I can guarantee that a blog post presenting a reasoned, in-depth exposition and exploration of some of the finer points of some abstruse discipline-specific theory will get far fewer views than, say, a blog post promising RED HOT PICS OF BR*TN*Y SP**RS N*K*D, for instance.  Less starkly, the short, simple stuff tends to get a lot more link love than long, heavyweight postings – which is, alas, an inverse correlation with academic rigour (though not a perfect one).

There’s also an issue that statistics in this domain will almost certainly be very highly unequal – you’ll get a classic Internet power-law distribution, where a small number of ‘stars’ get the overwhelming majority of the attention, and a long tail get next to nothing.  We can probably hide that to some degree by showing relative statistics – either a rank order (competitive!) or, perhaps less intensely, quintiles or deciles, with some nice graphic to illustrate it.  We mused about a glowing bar chart, or a dial that would lean over to ‘max’ as you got better.
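As a rough illustration of the quintile idea (the function and numbers here are made up for the sketch – this isn’t real site code), banding a resource’s raw view count by the proportion of other resources it beats hides just how skewed the underlying power-law numbers are:

```python
def quintile(value, all_values):
    """Return a band from 1 (bottom 20%) to 5 (top 20%) for `value`
    relative to `all_values`."""
    ranked = sorted(all_values)
    # Proportion of values strictly below this one
    below = sum(1 for v in ranked if v < value)
    proportion = below / len(ranked)
    return min(int(proportion * 5) + 1, 5)

# A typical long-tail spread of pageviews: one 'star', many near zero
views = [3, 5, 8, 12, 20, 40, 90, 250, 1200, 50000]

print(quintile(50000, views))  # the star lands in the top band
print(quintile(3, views))      # a long-tail post lands in the bottom band
```

The star’s 50,000 views and the runner-up’s 1,200 both end up in band 5, which is exactly the flattening effect we were after.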

[Photo: P1010231]

This is an experiment, and we want to explore what might work, so we don’t have to solve this problem.  And in a two-day hackfest to get something going, we’re going to shortcut all that careful consideration and just see what can be done quickly – knowing that we’ll need to tweak and develop it over time.  Or even throw it away entirely.

So what could we possibly measure, easily?

The model we’re running with is that there are several categories of things you might produce (research papers, blog posts, photos, etc), and for each category, there’ll be one or more service that you might use to host them – so, for instance, you might put presentations on Slideshare or on Prezi.com.  And then for each service, we can measure a whole range of statistics.

Here’s an outline of what we’re thinking:

[Photo: P1010230]

Categories:

  • Research paper repositories: Open Research Online and/or other institutional repository, subject-specific repository, and so on
  • Learning resources: repositories – e.g. OpenLearn, MERLOT, etc etc
  • Documents:  Scribd, Google Docs, OU Knowledge Network
  • Websites: Wikipedia (contributions – not your biography for fear of WP:BLP), resources, etc
  • Blogs: individual personal blogs, group blogs (could get feed for each one), etc
  • Presentations: Slideshare, Prezi, etc
  • Lifestream: Twitter, Tumblr, Posterous, FriendFeed, etc
  • Audio/video: podcasts, iTunesU, YouTube, Blip.tv, etc
  • Links/references: Delicious, Citeulike, Zotero, etc
  • Photos/images: Flickr, Picasa Web Albums, etc

The idea for these categories is that they’re a level at which it makes some sort of sense to aggregate statistics.  So, for instance, it makes some sense to add up the number of presentations you’ve put up on Slideshare and on Prezi … but it probably makes no sense at all to add up the number of photos you’ve posted to Flickr and the number of Tweets you’ve posted on Twitter.

Statistics – production statistics:

  • Count of number of resources produced
  • Frequency of resources produced (multiple ways of calculating!)

Statistics – impact/reception statistics:

  • Total reads/downloads/views of resources (sum of all we can find – direct, embed, etc) (also show average per resource)
  • Count of web links to resource (we generate? via Google API)
  • ‘Likes’/approval/star rating of resources (also show average per resource)
  • Count of comments on the resource (also show average per resource)

Statistics – Composite statistics

  • h-index (largest number h such that you have h resources that have achieved h reads? links? likes?)

I really quite like the idea of tracking the h-index: it takes a bit of understanding to suss how it’s calculated, so not everybody instantly understands it.  But it’s moderately robust and it’s a hybrid production/impact type statistic.  The impact component needs a little thought, and it might well vary from service to service.  There’s less symmetry in online statistics than there is in citations: if you get a few hundred citations for a paper, you’re doing really very well, but it’s not that hard to get a few hundred page views for an academic blog post.  A few hundred links, however, might be equivalently challenging.
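To make the calculation concrete, here’s a minimal sketch in Python of the standard h-index applied to per-resource impact counts (which count to use – reads, links or likes – is still the open question noted above):

```python
def h_index(counts):
    """Largest h such that at least h resources each have an impact
    count of at least h. `counts` is one impact figure per resource,
    e.g. inbound links per blog post."""
    ranked = sorted(counts, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([400, 38, 7, 6, 2]))  # -> 4: four resources with at least 4 each
print(h_index([1, 1, 1, 1, 1, 1]))  # -> 1: lots of output, little impact each
```

The second example shows why it’s a useful hybrid: churning out lots of low-impact resources barely moves it.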

We’re imagining some sort of abstraction layer for the profile, so we can plug in new services – and new categories – fairly easily.  One key point we want to get across is that we’re not endorsing a particular service or saying that people ought to use them: we’re trying to capture the activity that’s going on where it’s going on.
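A minimal sketch of how that plug-in layer might hang together – the class names, services and figures here are purely illustrative (the real version would call each service’s API rather than returning hard-coded numbers):

```python
class ServiceAdapter:
    """Base class: one subclass per hosting service (Slideshare, Prezi, ...)."""
    def fetch_stats(self, user):
        raise NotImplementedError

class SlideshareAdapter(ServiceAdapter):
    def fetch_stats(self, user):
        # Placeholder figures; a real adapter would query the service.
        return {"resources": 12, "views": 3400}

class PreziAdapter(ServiceAdapter):
    def fetch_stats(self, user):
        return {"resources": 3, "views": 150}

# A category is just the list of adapters registered under it, so new
# services (and new categories) can be plugged in without code changes elsewhere.
CATEGORIES = {"presentations": [SlideshareAdapter(), PreziAdapter()]}

def category_stats(category, user):
    """Sum each statistic across all services in a category."""
    totals = {}
    for adapter in CATEGORIES[category]:
        for key, value in adapter.fetch_stats(user).items():
            totals[key] = totals.get(key, 0) + value
    return totals

print(category_stats("presentations", "someone"))
# -> {'resources': 15, 'views': 3550}
```

This also encodes the aggregation point made earlier: it only makes sense to sum within a category (Slideshare plus Prezi presentations), never across unrelated ones.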

We’ll need to keep a history of the statistics, and also careful notes about our calculation methodologies and when they change (as they no doubt will).  Nice-to-have down-the-line features could then include graphs, charts, trends (trend alerts!) and so on.

There’s no way that we can get all of these things up and running in two days of hacking – highly skilled as our developers are.  So we’re going for a couple of example ones to get the idea across, and will add others later.

We want to produce feeds of all this stuff and/or expose the raw data as much as possible.  But again, that’s one for later rather than the proof-of-concept hack we’re putting together just now.

Sadly, the wifi connection at the venue was a bit flaky and slow, so we did the hacking on local machines rather than somewhere I can point you to right now – but expect a prototype service to be visible soon!  Unless, alas, you’re outside the OU … one design decision we made early was to keep it behind the OU firewall at least initially until the system is robust enough to stand Internet scrutiny – both in terms of malicious attacks, but also in terms of getting our ideas about what this should be thrashed through.

There’s the eternal online educational issue of open-ness versus security: making things more open generally makes them more available, and (with the right feedback loops in place) better quality; but on the other hand, people – especially people who don’t live in the digital world, like our target audience – often appreciate a more private space where they can be free to take faltering steps and make mistakes without the world seeing.  We’re trying more up the walled garden end to start with, but will revisit as soon as the site has had more than two academics look at it.

Next steps

We didn’t quite have a working site when we finished, but ended up with this list of things to do to get the site up and working:

  • order URL (disco.open.ac.uk?)
  • get Slideshare embeds working (problem with existing)
  • put on server – integrate design, site (Juliette), profile (Nick)
  • integrate with SAMS
  • finish coffee functionality – Juliette
  • finish barebones profile functionality – Nick
  • allow users to add link (in Resources)
  • check of site (and extra content added) by Martin
  • put ‘alpha’ on the site

And this list of longer term actions:

  • support
  • extended profile/statistics – API/feed/data exposure
  • more integration with OU services
  • further design work
  • tag clouds / data mining
  • review of statistics/profile
  • review the (lack of) open-ness
  • get more resource to do more with the site

For now, though, the best picture of the site I can give you is this:
[Photo: P1010238]

(There’s more photos of our flipcharts and the venue in this photoset on Flickr.)

Future of the Net

Liveblog from a seminar on The Future Of The Net (Jonathan Zittrain’s book – The Future of the Internet and How to Stop It.), 20 March 2009, by John Naughton.

Update: Listen to the MP3 and see the useful concept map from John Naughton himself.

Audience small but quite high-powered (eight, including Tony Walton, Paul Clark, Andy Lane).  OU Strategy Unit trying to reach out to academic units and others.

[Image: train tracks with points set to go off a cliff]

John  lost his physical copy … but rightly guessed it’d be available online as Creative Commons-licensed text.

Jonathan Zittrain was employed sight-unseen as a Unix sysadmin at 13, then by some process (probably involving Larry Lessig) became a lawyer.

Part of an emerging canon – Lessig’s Code 2.0, Benkler’s Wealth of Networks – heavyweight academic stuff. Two sorts of people – trailblazers and roadbuilders; Lessig is the first. Our role in OU (including Relevant Knowledge Programme) is to follow and be roadbuilders, which is an honorable activity.

Core argument of book: Internet’s generative characteristics primed it for success, and now position it for failure. Response to failure will most likely be sterile tethered appliances.

Transformation of the Internet in the blink of an eye from thinking it’s just “CB de nos jours” to taken-for-granted. John’s message is don’t take this for granted.

Three parts: 1 rise & stall of generative network, 2 after the stall (including a long and good analysis of Wikipedia), 3 solutions.

Conjunction of open PC and open Internet created the explosion of creativity, but contains within it the seeds of its own destruction. Parallel with T171 You, Your Computer and the Net (Martin did the PC, John did the net) – but didn’t study what happens when you put them together, which Zittrain does here. Not about proprietary versus open source – PC was an open device, if you could write code you could program the device.

John says people don’t understand what we’ve got in the current Net. Knowing the history helps. Design problem (Vint Cerf, IETF etc) – design for apps that haven’t yet been dreamed of, given distributed ownership. If you’re designing for the future, you don’t optimise for the present. Architectural solution has two key points: anyone can join (permissiveness); dumb network, clever apps (end-to-end principle). The openness is a feature, not a bug. Contrast with the case of the Hush-a-Phone.

Zittrain equation: Open PC + surprise generator = generative system

Thought experiments from James Boyle – gave two talks recently, at the RSA and John’s Cambridge programme. Almost everybody has a bias against openness: when something free and unconstrained is proposed, we see the downsides. (Because you can imagine those, whereas you by definition can’t imagine what hasn’t been invented yet.)  Imagine it’s 1992 and you have to choose between: approved sites with terminals at the end (like teletext/Minitel); dumb, unfiltered, permissive network (the Internet) with general-purpose computers at the end. Who would invest in the latter? Second question, still 1992, have to design an encyclopedia better than Britannica: broader coverage, currency. Options: 1 – strong content, vast sums of money, strong editorial control, DRM. 2 – I’d like to put up a website and anyone can post stuff. Who’d pick the latter?

Posits tension – or indeed tradeoff – between generativity and security. Consumers will become so worried about this that they’ll (be encouraged to) favour tethered appliances and heavyweight regulation.

(I wonder if I can’t bring myself to believe in the Net being locked-down out of all recognition because I’ve always had it around in my adult life. It’s probably easier for people who really knew a world without it to imagine it going away.)

Part 2 explores our likely response to these problems, then Wikipedia. “With tethered appliances, the dangers of excess come not from rogue third-party code, but from […] interventions by regulators into the devices themselves.”

Criticism of book – it underestimates the impact of Governments on the problem. Remembering 9/11, like JFK assassination. (John was on the phone to a friend who was there at the time!). John wrote in his blog on that day that this was the end of civil liberties as we knew them, and in many ways was right. (My memory was that it was the first huge news story that I got almost entirely from the web.) But – one day the bad guys will get their act together and we’ll see a major incident. Dry-runs with what happened to Estonia. But there will be something huge and coordinated, and that’ll evoke the same sort of response.

Rise of tethered appliances significantly reduces the number and variety of people and institutions required to apply the state’s power on a mass scale. John thinks it’s like the contrast between Orwell and Huxley – likelihood of being destroyed by things we fear and hate, or things we know and love.

Dangers of Web 2.0, services in the cloud – software built on APIs that can be withdrawn is much more precarious than software built under the old PC model.  Mashups work (except they’re always breaking – see Tony Hirst’s stuff, just like links rot). Key move to watch: Lock down the device, and network censorship and control can be extraordinarily reinforced.

iPhone is the iconic thing: it puts you in Steve Jobs’ hands. It’s the first device that does all sorts of good things and could be open but isn’t.  (What about other mobile phones?) Pew Internet & American Life survey – Future of the Internet III – predicted that the mobile device will be the primary connection tool to the internet for most people in the world in 2020. So this could be a big issue.

Wikipedia analysis in the book is extensive.  Looks at how it handles vandalism and disputes – best treatment John’s seen. How it happens is not widely understood. Discussion about whether Wikipedia or Linux is the more amazing phenomenon. (My argument is that Linux is in some ways less startling, because you have some semi-independent arbitration/qualification mechanism for agreeing who’s a competent contributor and which code works.)

Part 3 – solutions to preserve the benefits of generativity without their downsides. “This is easier said than done”. The way Wikipedia manages itself provides a model for what we might do. (I think not – I think Wikipedia works because it can afford to piss off and exclude perfectly good and competent contributors.) Create and demonstrate the tools and practices by which relevant people and institutions can help secure the Net themselves instead of waiting for someone else to do it – badwarebusters.org.

Barriers – failure to realise the problem; collective action problem; sense that system is supposed to work like any other consumer device.

Nate Anderson’s review in ArsTechnica – three principles – IT ecosystem works best with generative tech; generativity instigates a pattern; ignore the downsides at your peril.

Criticisms: too focused on security issues and not on commercial pressures; not enough on control-freakery of governments; too Manichean – mixed economies; too pessimistic about frailties (and intelligence and adaptability) of human beings; over-estimates security ‘advantages’ of tethered appliances.

Discussion

Parallel with introduction of metalled roads. Crucial to economic development, move people and stuff around as a productive system.  Early days were a free-for-all, anyone could buy a car (if rich enough) and drive it, no need for a test.  Then increased regulation and control.  (Also to cars – originally fairly easily tinkerable with, now not/proprietary engine management systems.)  Issue about equity, as much as open/closedness.

Lessons of Wikipedia and the creators of malware. Malware creators only need to be small in number. To take down Wikipedia and make it undependable would take too much effort and coordination. (I disagree – a smart enough distributed bot attack would do it.)

I can’t imagine no Internet/generative/smart programmable devices because never not had them. Grew up on ZX81 onwards, had the CPU pinout on the connector.  Helps to have smart people around who have known the world before that.

South Korea got taken out by SQL Slammer, bounced back though – system is pretty resilient.

Manhattan Project perhaps a bad parallel for an effort to help here – it was the ultimate in top-down command-and-control project, with a clearly-defined outcome. And it was constrained and planned so tightly that it couldn’t actually work until people like Feynman loosened things up a bit to allow some degree of decentralisation.

How do you sign people up? People won’t do anything about e.g. climate change until their gas bills shoot up. Science and society stuff: well known that people only become engaged when it becomes real to them. A liberal is a conservative who’s been falsely arrested; a conservative is a liberal who’s been mugged.

Surveillance – the likelihood of major public outrage leading to a reaction is small; most people don’t realise their clickstream is monitored. It’s only if something happened that made people realise it that they’d say no.  Hard to imagine the scale of community engagement happening.

Case a few months ago – Wikipedia vs Internet Watch Foundation. Readymade community leapt in to action immediately.  But less likely where you don’t have such an articulate and existing community. Also photographer crackdown – they do have access to the media. Danger of the Niemoller scenario where they come for small groups one at a time.

It’s an argument about the mass of technology, not the small cadre of techies – the iPhone can be jailbroken if you know what you’re doing. And there are more, not fewer, opportunities for techies, and more techies than ever before. Most PC users in the 80s only used what they were given. In 1992 I could write an app for the PC and send it to anyone on the Internet. Except hardly anyone was on the Internet then, and even though most techies were among them, most Internet users then couldn’t write their own stuff – or even install software off the net.  Techies are still a small proportion (even though bigger in number than before), so still vulnerable to this sort of attack.

Mobile devices are key here, consumerism. People just want stuff that works, generally.

Google as another example – they build very attractive services, but on the basis of sucking up all our data.  Harness the amoral self-interest of large corporations in this direction. Also the (enlightened?) interest of Western Governments in promoting openness.

John uses the example of bread mix and a recipe to illustrate open source. Parallels with the introduction of the car (wow, I can go anywhere); the PC (wow, I don’t have to ask people for more disk quota) and the Net (wow, I don’t have to ask for more mail quota). These things have an impact on society, and can damage it. So for instance, if you have an open machine, you could damage other people’s computers, hence the need to regulate ownership and operation. With a car, an annual check that you have road tax, insurance, MOT; with a PC the surveillance needs to be continuous.

The 9/11 disaster scenario is instructive: why didn’t we have the same response to the Troubles? Because not transnational/non-State actors. The Provisional IRA have tangible, comprehensible political objectives that could be taken on. Whereas 9/11 terrorism is more vague.  And malware is different. Wasn’t a problem when it had no business model … but now it has. Can now take it on?

Is the Internet just (!) an extension of civil society and how you should regulate it, or is it something quite different?  Motor traffic law introduced absolute offences (no mens rea – it’s an offence to drive over the speed limit regardless of whether you know you are going that fast or what the limit is) because it was a quite different threat.  The Internet is at least as new, so likely to spur at least as revolutionary – and shocking – a change to our legal system.  Ok, now I’m scared, so that’s a result.

But we’re only eighteen (nineteen?) years in to the web.  It’s idiotic for us to imagine we understand what its implications are.  So the only honest answer is we don’t know. John argues we’re not taking a long enough view. Imagine 1455, eighteen years after the introduction of the printing press, and a MORI pollster asking: do you think the invention of printing will undermine the authority of the Catholic Church, spur the Reformation, science, whole new classes, a change in the concept of childhood?  The web is a complex and sophisticated space, so regulating it right can’t be done overnight.  Tendency for people to make linear extrapolations from the last two years’ trends.

In the long run, this won’t look like such a huge deal in the history of humanity. It’ll be a bit like what happened with steam. It looks like the biggest deal ever to us only because we’re in the middle of it.

So what do you do when you know that on a 20-year horizon you’re blind?

My answer: get moving now, plan to change and update regularly.  Expect to have to fiddle with it, throw great chunks of things away because they’re no longer relevant. Challenge to OU course production model! (Actually, I’m wrong to say throw away – more expect that things will become eclipsed and superseded – old technologies die very hard.)

We’ve become more open/diverse in our offer to bring in enough people. Which is hard – costs and scale versus personalisation.

Social media at the OU

Notes from OU eLearning Community event, 17 February 2009

Sarah Davies and Ingrid Nix are organising the events for the first part of this year.

New eLearning Community Ning site.

Social learning objects and Cloudworks – Chris Pegler

Juliette Culver is the developer of Cloudworks.

Chris draws a distinction between ‘social object’-oriented networks – delicious, Flickr etc where there’s a (learning?) object and more ‘ego-centric’ networks where it’s people connecting to people – e.g. Facebook, LinkedIn, etc.  Engeström claims that “social networks consist of people who are connected by a shared object”. Hugh McLeod “The object comes first”.  Martin Weller along these lines too.  You need something to talk about.

Cloudworks – supports finding, sharing and discussing learning and teaching ideas, experiences and issues. In alpha at the moment. Working well at conferences/events to use as a site for storing discussion and debate.

Wants to see  more social conversations around reusable learning objects (RLOs) – metadata.

The OU in Facebook – Stuart Brown and Sam Dick

Almost all of the room are on Facebook; fewer are fans of the OU page, and only three or so have the OU Facebook app.

8.5m unique users (accounts) in the UK. Top or second-top site at the OU. About 5,000 studying at/graduated from the OU. Big report – New Media Consortium/Educause Horizon Report – “Students and faculty continue to view and experience technology very differently”.

Many motivations for OU in FB. Open University page.

Open University Library – set up a Facebook page. A lot of their Wall traffic (the biggest focus) is students looking for others on the same course. Is that a failure of our official web presence/support systems? Or is it understandable that they want a non-official/personal route?  Survey of students – bimodal: some really keen on FB, some really hate it.  The Forum gets traffic too, building up from threads started by students. Analytics (from Facebook): 66% female, 34% male. (Meta-comment: Facebook does age segmentation as 13-17, 18-24, 25-34, 35-44, 45+ – rather more youth-focused than many!)

Future plans: staff profiles, resources, helpdesk online chat, find/recommend resources. The OU Library already has an iGoogle gadget for searching the catalogue; they want to embed it in Facebook.

OU profile page – (possibly) biggest UK university page, >15,600 fans.

OU Facebook apps: My OU Story (283 users) and Course Profiles (6,222 users – something like 5% of current students, I’d guess).  Course Profiles helps with the “who’s studying/has studied course X” issue – you can specify previous courses studied, current ones, and future plans. Each course gives you: course details, find a new study buddy, your friends on the course, recommend to a friend, OpenLearn content, a comments Wall. My OU Story – mood updates, with a mood history graph too. Post a ‘Story’, which is a comment on how you’re doing.
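The mood-history graph is easy to picture as a small data-processing step: timestamped mood words mapped onto a numeric scale and rendered in date order. A minimal sketch in Python – the five-point scale, the mood words and the ASCII rendering are all my assumptions for illustration, not how the actual app works:

```python
from datetime import date

# Hypothetical five-point mood scale: the real My OU Story app's internal
# scoring isn't public, so this mapping is an assumption for illustration.
MOOD_SCORES = {"awful": 1, "struggling": 2, "ok": 3, "good": 4, "great": 5}

def mood_history(entries):
    """Turn (date, mood-word) updates into a chronological score series."""
    ordered = sorted(entries, key=lambda e: e[0])
    return [(d, MOOD_SCORES[m]) for d, m in ordered]

def ascii_graph(history):
    """Render each score as a simple bar - a stand-in for the app's graph."""
    return [f"{d.isoformat()} {'#' * score}" for d, score in history]

entries = [
    (date(2009, 2, 10), "ok"),
    (date(2009, 2, 3), "struggling"),
    (date(2009, 2, 17), "great"),
]
for line in ascii_graph(mood_history(entries)):
    print(line)
```

The interesting design point is that even a toy version like this yields an analysable time series per student – which is presumably why the data is worth having a custodian for.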

Useful page showing all places where the OU wants to have a conversation with people – i.e. social networks with an OU presence: Platform, OU podcasts, iTunesU, Facebook, YouTube, OpenLearn, Twitter, Open2.net, Course Reviews.

Data from Facebook apps is available for analysis … Tony Hirst is custodian (of course).

OU online services have a coordinating set of pages.

Setting up a social community site (Ning and Twitter) – Sarah Davies

Again with the division of social networks: object-centric, ego-centric, white-label.

Object-centric: Flickr, delicious, StumbleUpon, digg, imdb, LibraryThing, Meetup, SecondLife, World of Warcraft. Ego-centric: Myspace, Facebook, Bebo, LinkedIn. White-label: Ning, Elgg.  But the categories are blurred.

Review of typical features of sites.  Analysis of sites as communities of practice – Lave and Wenger – peripheral (lurker), inbound (novice), insider (regular), boundary (leader), outbound (elder).

Twitter overview. Tag tweets with #elcommunity to appear on eLC Ning site.
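The hashtag aggregation presumably works by searching the Twitter stream for the tag and syndicating matches into the Ning site. The core filtering step can be sketched in a few lines of Python (the sample tweets are invented; a real setup would pull from Twitter search results):

```python
def tagged(tweets, tag="#elcommunity"):
    """Keep only the tweets carrying the community hashtag (case-insensitive)."""
    return [t for t in tweets if tag.lower() in t.lower()]

sample = [
    "Great session on Ning today #elcommunity",
    "Lunch first, talks later.",
    "Slides are up #ELCommunity",
]
# Only the first and third sample tweets carry the tag.
print(tagged(sample))
```

A case-insensitive substring match is deliberately forgiving, since people capitalise hashtags inconsistently.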

Ning overview. Demo of new eLearning Community Ning site. Originally set up for talk for ALs on Web 2.0 tools.

Work/social life mix. Intrusion/time intensity. Balance/tradeoff between VLE/OU-hosted stuff and external services.

Scholarly Publishing 2.0

I gave a short talk on the future of scholarly publishing at the OLnet/OU “Researcher 2.0” event last week, which I liveblogged in two parts (part 1, part 2.0).

You can see my slides:

You can watch a video of me talking about what I was talking about:


You can read Gráinne Conole’s liveblog of me giving the talk, which is part of the Cloudscape covering the entire event.

And … you can read this quick condensed text version: I argued that scholarly publishing is what scholars do when they make things public. I discussed some of the dramatic changes underway, and argued that they are quantitative (more and faster) rather than fundamental changes of type – though of course a quantitative shift on this scale is in itself qualitative. Determining what’s important and high-quality amid this information explosion is hard, but that is essentially what peer review – broadly considered – is there to do. The Open Access movement is hugely important in social justice terms, but in terms of enabling access for researchers at well-funded institutions it’s small beer. (Though it’s worth mentioning that there’s evidence that open-access material gets cited more, which is (a) a good thing, and (b) will get you REF points.)

Researcher 2.0 part 2.0

Further liveblog notes from the Researcher 2.0 event (see also notes on part 1).

(Interesting meta issue about blog vs Cloudworks. I don’t want my notes behind a login/search wall, I want them on Google! But Gráinne is doing an excellent job liveblogging there too. And maybe my notes aren’t so useful on a blog. Comments welcome! UPDATE: I’d got this wrong – it’s due to a bug. Cloudworks is *supposed to be* readable by everyone, indexed, the lot – you only need a login to post. But at the moment new Clouds/Cloudscapes come up login-only.)

(Another meta issue is the multiple channel management.  It seems I can do two, possibly three, but not four and definitely not all five – f2f, Elluminate, blog notes, Twitter, Cloudworks – and still stay sufficiently on top of things to follow it. Especially as Elluminate has the whiteboard, the audio stream, the chat, and the participant list all in one.)

Martyn Cooper – Research bids 2.0

Research bidding support – some support is the same for experienced and novice bidders (process support, consortium negotiations, budgets, reviews of drafts, internal sign-off); novice bidders get extras (advice, confidence-building).

OU process based around the RED form.

Process – idea, workplan, consortium, bid, negotiate roles, set budget (often iteratively), final draft, sign off, submission.

A relationship is formed during the bid process; you will work with these people for years afterwards (if you succeed).

Communication types – peer to peer, document/spreadsheet exchange, negotiation, redrafting and commenting, electronic sign-off and submission.

Most researchers could win more bids and run more projects if they had more, and higher-quality, administrative support. Web 2.0 technologies could have a role in providing that support; however, to date we under-use them.

At what stage do you make bids open to the world? Is the web 2.0 attitude affecting this? Martyn very happy to do that – he always has ideas in his back pocket. Has seen ideas taken up by others, whether by coincidence or copying is hard to say. Commercial partners keener to protect foreground knowledge and IPR, so perhaps harder.  But would be happy to do whole process on a public wiki.

Shailey Minocha (Shailey Garfield in 2L) – Second Life research

3D virtual world – http://gallery.me.com/shailey.minocha#100016

A much more human environment than a 2D one; a real sense of being there. There’s no story to it, it’s not a game – you can design it yourself.

Students found it difficult to critique/peer review each other’s work. This was attributed to a lack of socialisation – not knowing each other well enough. So they decided to get students using 2L to provide opportunities for that.

Not much about how you should design learning environments in 2L.

2L to support research: meetings, virtual collaborations, seminars, conferences and shared resources

2L as a platform for conducting research: conducting interviews, observations, evaluate prototypes of concepts and designs, bringing in real data and developing simulations.

PhD supervision meetings and research interviews – she runs regular meetings in 2L.  A real sense of visual presence and a sense of place. Large pool of participants. You can also keep the transcript and audio – no need to do transcription.

Sense of realism in 2L which is hard to match in other environments – BUT steep learning curve (vs Skype, Elluminate, Flash Meeting), and demanding system requirements.

Question: are there extra issues in finding participants in 2L? Yes. Issues about the avatars; you don’t know who is behind them. She has each person fill out a form through the normal email process first.

Kim Issroff – Business models for OERs and Researching Web 2.0

Definitions

Business model – framework for creating value … or, it’s how you can generate revenue.

OSS business models: Chang, Mills & Newhouse, about how to make money. Stephen Downes’ models for sustainable Open Educational Resources – a distinction between free at the point of delivery and the cost to create/distribute. Models: endowment, membership, donations, conversion, contributor-pays, sponsorship, institutional, government, partnerships/exchanges. Clarke 2007 – “not naive gift economies”.

Intuitively, go for a model where resources are free but you charge for assessment.

Grant applications increasingly ask for business models/sustainability/how you carry on afterwards.

Implications – for design, and for how to engage. Differences between OSS and OERs as models. What happens when we reach OER saturation point? (I suspect it doesn’t exist – there’s too much out there already, but it’s still worth putting new stuff out.) Can we quantify the social value rather than the economic value?

Take a trainful of people, see what each person is doing in terms of access to technology, to get a handle on everyone, rather than a minority we over-research.

Two thoughts: how much difference does the business model make? Is a financial business model appropriate for an educational organisation?

(I see a strong link to Kevin Kelly’s Better Than Free essay: eight things that are ‘better than free’.)

Can free things end up more expensive in the end?

Robert Schuwer from OUNL: their experience of subscription models – paying for extra support, books and so on. Inspired by the mobile phone world: the hope is that once people have the monthly payment set up, they forget to unsubscribe and keep it up year on year – €25 a month.

Chris Pegler – OER beyond the OU

What OER offers: global opportunities, goodwill among researchers, IPR vanquished, unlimited reuse potential. Chris highlighted Creative Commons as demolishing IPR obstacles: most funded repository projects flounder – or even fail – at some stage on IPR, but Creative Commons comes to the rescue!

Li Yuan’s CETIS white paper on OER is key. List of 18 current OER projects ‘out there’: MIT OpenCourseWare, GLOBE (which includes MERLOT, ARIADNE etc.), JorumOpen, and so on. These are not quite what you’d envisage – some are, e.g., mainly research-focused.

Interesting HEFCE/HEA/JISC call on OERs – a £5.7m pilot, possibly £10m year-on-year in the future. Chris has a £20k individual bid – making a 30-point course using web 2.0 tools around OERs. Also an NTFS bid on RLOs and how we embed them in the academic practice courses at three institutions.

Questions around metadata – especially automatic metadata.

Patrick

Was more presentation-centric than perhaps ideal; but much was captured on video, Twitter and Cloudworks. Next: small groups producing a quick pitch for a bid about Research 2.0.

Researcher 2.0

Liveblog notes from Researcher 2.0 event – sponsored by the Technology Enhanced Learning research cluster (part of CREET) at the Open University, and the OLnet project.

Patrick McAndrew – intro

True Researcher 2.0s – the weather is not a barrier; see what technology to employ. So multiple channels: Elluminate, Twitter, Cloudworks. Video and audio capture. And face to face in the room!

The Cloudworks site for it, and remote people coming in via Elluminate – http://learn.open.ac.uk/site/elluminate-trial/ (if you have an OU login; then follow the link ‘Open Learning network trial’) or http://elive-manager.open.ac.uk/join_meeting.html?meetingId=1232970332920 (if you do not have an OU login). And Twitter, using #olnet as a tag. Also professionals doing video, and amateurs with Flips and other videocams.  Hope to learn from this for future workshops.  Not fully planned out (but very 2.0/lazy-planning stuff).

Patrick – Researcher 2.0: Research in an open world

Open world, many users – what does it mean? How does our technology link out to the many users? Came up for Patrick in the OER world, but true in many areas. Transforming to a world where there are many more options for what we can do.

How do we change to network with more people, to network as researchers in a new way? Draw in people, use their willingness to co-operate. Gráinne opened up a f2f workshop with a Twitter request for ideas to flow in – it worked really well.

Also new ways to get data in – video, audio capture. But what to do with the data? Need to make it part of the routine. Who does the research? Distributed models.

Want to find out: What is Researcher 2.0, What are the big questions?

Researcher 2.0 – discussion about what it means.  Not a Microsoft product, any more than Web 2.0 is. But it’s snappy – a new, improved way of doing research, using better ways.

Discussion broke up, and the room went into Cloudworks en masse to add comments. Many new clouds and comments and so on. Managing multiple channels and new technologies is clearly a challenge, even for this roomful of fairly techie people.

Gráinne Conole – Exploring by doing: Being a researcher 2.0

Personal Digital Environment – like a PLE. The technologies used on a daily basis, crossing the boundaries of learning, work and research. Increasingly, if it’s not available on Google, it doesn’t exist – so what’s the point in locking things into print-only?

Mentioned the 2,800 people signing up for the online Connectivism conference – of whom about 200 were really active. Very lively, multiple channels. George and Stephen contacted people casually and asked them for an hour-long session.

Changing landscape: a step-change over the last few years.

Reports which encapsulate things:

  • NSF Fostering Learning in the Networked World.
  • The Collective Advancement of Education through Open Technology, Open Content, Open Knowledge (Iyoshi and Kumar)
  • EU review Learning 2.0 Practices (ipts)
  • The Horizon Reports annually

Changing content. What does it mean to be more open? Distributed dialogue makes it harder to attribute ideas. Especially group consensus. Will need to change.

Mediation: co-evolution: Oral, symbols, technology-mediation.

Thinking differently: OU Learning Design initiative, Compendium/CompendiumLD/Cohere, Cloudworks, Pedagogy schema, OLnet.

The vision underpinning OLnet: analysing the cycle of OER development, and who’s involved. What tools and schemas do (could?) people use to select, design, use and evaluate open educational resources?

Discussion: How do information resources fit in? Issues of quality?  Need to develop new forms of digital literacy and competency. Not just using Google, but how we use it. How do I make judgements about what I find?  Share practices.  Different in different disciplines? For computing, the ACM Digital Library is the information repository for that community; Google is merely a nice addition.

Challenge for the OU classic course-in-a-box; Tony Hirst’s uncourse model is right up the radical opposite end. Martin Weller noted that his journal publishing has gone down as his blogging has increased. There are major issues here about what we consider to be quality. How do blogs compare to articles? Depositing your articles in open access places increases citation counts.  It’s not just communicating with the public – it’s more about becoming part of communities that are attentive to what you’re saying, which gets your name/reputation recognised. Concern that it’s transient – forgotten. We have to foster the skills of discernment, particularly in our students.

Martin Weller – Digital Scholarship

YouTube video of Guitar90 kid playing guitar … got 55m views.  We are all broadcasters now.  A fundamental change in society in general, and education too.

You can’t predict what will be useful to people.

iCasting – new coinage – simple stuff you can do from your desktop, you don’t have to be an expert. Anyone can create YouTube movies, blogs, slidecasts on SlideShare. Blog is the hub of all this: aggregate your content and share it with other people.

What about quality? The caravan analogy: you have a certain amount of money to spend on a holiday.  One holiday in the Caribbean costs about the same as 30 holidays in a caravan – trading quality for quantity.

The power of sharing – getting views in from Twitter.  Ideas passed on from one person to another – it’s the sort of reuse we always wanted from learning objects.

What is the fundamental aim when you publish something? We’ve lost that aim and started thinking it’s about getting RAE credits. But ultimately it’s about sharing ideas. Martin’s experience is that you get much more feedback and benefit from sharing through the blogosphere and other online routes than from locking stuff away in a printed journal. A blog post gets 1,000 views; you’re lucky if a journal article gets 20 readers.

The cost of sharing has disappeared, but we act as if it hasn’t. Example of mixtapes: you had to buy a physical tape, spend ages with the buttons recording each song, then give the tape away. Now you can share music via iTunes, or share URLs through lots of services. No more time and effort to share.

What to do? Find your inner geek. You don’t need to go on a training course to learn how to use Flickr or Slideshare – just use it. (I’m starting to be less sure about that for people in general, based on the evidence at this meeting.)

Have fun! YouTube video from JimGroom pretending to be an Ed Tech survivalist.

And Just Share – RSS, OPML, etc. Make sharing your default mode.  He’s currently writing a 10k-word article – his instinct is to just post it on his blog to get more readers. But then no formal publisher will take it; and with REF credits at stake he wants to get it there. So there’s a tension between sharing and getting cash.
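Martin’s “Just Share” point rests on feed plumbing: RSS for individual streams, and OPML for sharing a whole list of subscriptions at once. A minimal sketch of an OPML reading list – the feed titles and URLs here are illustrative placeholders, not real endpoints:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head><title>Shared reading list</title></head>
  <body>
    <!-- Feed titles and URLs are placeholders for illustration -->
    <outline type="rss" text="An ed-tech blog" xmlUrl="http://example.org/edtech/feed"/>
    <outline type="rss" text="A mashup blog" xmlUrl="http://example.org/mashups/feed"/>
  </body>
</opml>
```

A file like this can be exported from one feed reader and imported into another, which is what makes “share your whole subscription list” a one-click act rather than a chore.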

What can your university do for you? Provide support and guidance.

Danger of not doing it? Universities need to look relevant. Remember the Viz Pathetic Sharks, who couldn’t swim properly and were scared of water? Universities are in danger of looking like that.

Current project: Year of Future Learning (on his blog) – a bottom-up way of trying to do distributed research. Anyone can join in. Multiple modes, multiple ways to contribute, support/facilitate discussions.

Is sharing the same as making public? Martin says share earlier in the process – at the conceptual stage and then throughout, not just publishing at the end.

The REF has implications for what we share as researchers, but also as teachers. What do we do? Easier when established; earlier in your career you need to play the game a bit more to advance. And easier if you’re in the right domain (IET), where part of the day job is to explore this.  Critique on blogs is similar to expert peer review, but also different.  Issue of saving it for posterity – 25 years ago, a paper document. If everything’s in blogs we fail to leave a reliable paper trail – it’s not preserved in the same way. (!) Not saying burn all journals, but the peer review process ‘is over-rated’. You can publish anything on your blog, but if you’re trying to build a serious reputation, you’ll be taken to task for what you put up. ‘The publication process is designed to remove anything interesting or engaging or challenging’ (not universal agreement). Example given by Giddens at his Pavis Lecture – the Internet can be empowering and democratising versus trivialising.

Eileen Scanlon – Digital scholarship in science

Interest came up in the MSc in Science Studies, Communicating Science course.  A gold-standard community having a radical shift in how it behaves due to new tools. The main example of a transformatory tool is physicists’ pre-print repositories.

Interesting perspectives on peer review – Nature ran an experiment on open peer review. So this isn’t just small-scale journals.

Many recent articles in the June 2008 issue of the Journal of Science Communication, on Open Science.  Eileen wrote a book with that title … which was about OU teaching practices, not this.

Recognition of e-science as a new way of doing things.

Zivkovic – science blogger – commentary piece.  Predicted that the journal paper of the future will be a work in progress, with collaborative development.  There are some very serious bloggers, based in major research institutes, discussing what’s happening. Tola – science journalist – the growth of blogging. Cozzini – e-scientist – massive investment in e-infrastructure (e.g. Grid computing), vast quantities of data for analysis. There are technical problems, and other challenges – but we need some imagination to see new ways of working. This stuff is hard.

Proposal submitted to ESRC – understanding the changes in the communication and publication practices of academic researchers in HE.  Christine Borgman’s book on Scholarship in the Digital Age. Two case studies, one of a team in an e-science area. How has the landscape changed, what do people do? Now at a stage to see what people are actually doing, not looking at the rhetoric.  Sub-questions about different forms of publication, how they relate to open peer review, how the i

Doug Clow on Scholarly Publishing 2.0

No blog notes from me! But the slides are on Slideshare. One point from my talk: a big barrier to going all-open is the perceived esteem of publishing in particular named journals with particular named publishers. Big money is at stake. Also a change in who might sign up for OU courses, given that currently they get access to all our journals while they’re registered.