Digital scholarship: Advanced technologies for research (1)
First set of liveblog notes from Digital scholarship: Advanced technologies for research – a JISC-sponsored roadshow held on 10 March 2010 in the Ambient Technology Lab, Jennie Lee Building, The Open University.
Lots of examples of resources and projects doing new and exciting things with electronic resources supporting research from David Ferguson. Then presentation of a project at the OU researching and promoting digital scholarship from Nick Pearce and Martin Weller.
David Ferguson, National e-Science Centre, University of Edinburgh
National eScience Centre – open meeting place for researchers using computing; examples, projects, workshops, training. David has special responsibility for training and outreach. Was originally a protein biochemist.
Background to eResearch is a substantial socio-technical change – ubiquity, scale and power of digital technologies enabling a deluge of data, and community intelligence.
Google uses nearly half a million consumer-spec PCs, spread out across the world, to support its site, which enables it to ‘follow the sun’ by moving the main processing of queries to wherever people are sending them from. The night-time side of the world is switched to maintenance mode. Amazon also provides web services with grid-like environments (Werner Vogels) built out of fairly ordinary machines. eBay as well.
Online gaming platforms are also now a research environment – e.g. 2005 use of World of Warcraft to model the spread of disease.
British Trust for Ornithology – Bird Atlas 2007-11. Crowdsourcing, community intelligence – they’re not new. BTO have been compiling the central standard information about bird distribution and migration in the UK entirely by a large network of volunteers – for decades. Motivating, enthusing, focusing and directing the activity of non-professional scientists to create datasets you simply couldn’t collect by conventional science techniques.
Another example – Protein Data Bank (RCSB PDB) – started in 1973, community database of protein structures. Originally set up and held on punched cards. Sequence databases for DNA are now crucial to research in that area – you can’t publish a paper without including the accession number for the sequence in the database.
HealthMap – Global disease alert map. Tracking outbreaks of diseases e.g. malaria, West Nile virus.
At the other end of the scale – CERN’s Large Hadron Collider. Couldn’t physically get enough processing power to analyse it on site, and couldn’t politically get the money for it either. So huge, decentralised, multinational distributed virtual network for doing it.
Not just super computing but high throughput computing – lots of small jobs; democratising the process of analysis. Ecosystem of resources: research group owns clusters of PCs; department owns high performance clusters; institution owns higher performance clusters; then national resources of supercomputers. Harder to get access further up the chain – from no bother up to full peer-reviewed bid.
Another technology – AccessGrid – can work with people remotely, sharing slides and data, interactive activities and so on. Videoconferencing on steroids.
MyExperiment – adapting Facebook approach to scientific research context. Sharing research protocols as workflows.
NGS – National Grid Service – provides the glue that helps people get access to big-ticket facilities like the LHC or the Diamond Light Source. Also JISC-supported resources like EDINA and MIMAS, and other collaborative grids. EDINA provides maps and map data of GB to UK tertiary education. MIMAS has information to support teaching, learning and research – includes Jorum, JSTOR, CrossFire, Intute. UK Data Archive (UKDA) is the national centre that curates the largest collection of digital data in the social sciences and humanities in the UK.
Martin: Why are there so many sites? Nobody’s cracked it the way Facebook or Google have for non-research activities.
David: Because the activities are very diverse and complex. General solutions to all of them are perhaps not possible. All researchers and academics do different things.
Linda: Computing centre draws in from many disciplines – is it all of them? Arts and humanities has history of lone researchers.
David: Rate of change is different in different areas but is happening everywhere. Certainly examples in the arts and humanities. It’s not the bread-and-butter thing that everyone does the whole time the way it is in (some) sciences. This stuff is not always easy; doesn’t want to hype this. It takes a lot of work but the return can be great.
Martin Weller and Nick Pearce, IET, The Open University
Nick Pearce is on a dual mission – to try to define digital scholarship, and also to promote it inside the OU. He keeps a sceptical distance as well as being an evangelist – a critical cheerleader. Nick has a background in the sociology of technology.
Old technologies often persist – for good reasons. Like books.
Web 2.0 is a rapidly changing field – services come and go with great speed. But publication of journal articles is often so slow that things are very dated by the time they get out.
John Naughton: What’s the evidence that only a small percentage of users contribute?
Nick: E.g. Wikipedia has small number of active editors.
John: Many urban myths, but it varies from service to service.
Someone: Zipf (power law) distributions likely.
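The claim that participation follows a Zipf-like power law can be made concrete with a quick back-of-the-envelope sketch. This is purely illustrative – the user counts and the exponent are assumptions, not measurements from Wikipedia or any other service:

```python
# Hypothetical sketch: if contributions per user follow a Zipf (power-law)
# distribution, a tiny fraction of users accounts for most of the activity.
# n_users and the exponent s are illustrative assumptions, not real data.

def zipf_shares(n_users, s=1.0):
    """Contribution share of the user at each rank k, proportional to 1/k^s."""
    weights = [1 / (k ** s) for k in range(1, n_users + 1)]
    total = sum(weights)
    return [w / total for w in weights]

shares = zipf_shares(10_000)
top_1_percent = sum(shares[:100])  # share contributed by the top 1% of users
print(f"Top 1% of users contribute {top_1_percent:.0%} of activity")
```

Under these assumptions the top 1% of users account for roughly half of all contributions, which is the shape of the claim being debated – though, as John notes, the actual ratio varies from service to service.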
Matt Jukes (via Twitter): Forrester Social Technographics ladder might be useful here.
Definition of digital scholarship taken from Boyer (1990) – scholarship is what scholars do (rather than what they create) – four types: Discovery, Integration, Application, Teaching. Each leads to possibilities for digital scholarship.
Open Access publishing – open publishing. Open education – Open Courseware, iTunes U, etc.
Jan Parker: Mendeley for social referencing?
Colin – it’s like last.fm for research; people can see what you’re reading, and it’ll make suggestions for what you might like.
Nick: You have a playlist that’s your bibliography.
He wanted to be a digital scholar before he knew what that was. Gets more hits on his blog than ever read his journal articles in print, and has a conversation about them. It has changed the nature of his experience of attending conferences – much more community. So he’s interested in the impact on his practice as an academic. Becoming a blogger was the most significant academic decision he’s made.
Has internal funding from The Open University for a digital scholarship project – to promote digital scholarship (assuming it’s a Good Thing); working on recognition (promotion/reward); researching current practice (what is it, and is it a Good Thing?).
The OU is defining a new personality, which will be a digital personality. To be successful, academics must engage with that – so digital scholarship right at the heart of the new OU.
Has created a site, called DISCO (Digital Scholarship) – can register your online presence, build up your digital scholarship identity, and promote it.
Promotion criteria at the OU for this year’s round of promotions now explicitly recognise web 2.0 type output. Planning to develop metrics and case studies. We don’t yet have good understandings of what makes for a ‘good’ blog, but are building that up. Vision of a mega-metric that measures all of your stuff – your activity is very distributed, but DISCO will pull it together. Metrics are potentially dangerous – can be gamed/distort activity. But might help individual digital scholar to see what other people are doing that’s valuable.
Podstars project – to help people become podcasters or vloggers, but has interest from people wanting a cheaper, easier way of producing content for courses than the traditional OU production system.
Skills can be subtle – not just technical things; more things like finding the right voice for a blog – softer stuff as well.
John Naughton: The really interesting and hard question is what’s a good blog (in this context)? He has no idea what the answer is, but it’s a key issue. The risk is always that we go for brain-dead metrics. Thinking about what he values on the net: there’s much more interesting stuff on the net than off. Why? Because some people have the capacity to change the way you see something. Some are very well informed (e.g. Ed Felten at Princeton) – though that’s linked to academic credibility in the old system.
Martin: You can’t get away from peer review – people you respect say this is good.
John: The difficulty is choosing the peers!
Colin: Even in more traditional outputs – it’s not that obvious what a good journal article is. A highly-cited one isn’t necessarily a good one. E.g. the Wakefield MMR paper in the Lancet (now retracted) has been cited thousands of times, but mostly in order to rubbish it.
David: The Web 2.0 system is about reputation – the question is how to capture how that reputation has been established.
John: Stephen Fry is an interesting case. A celeb in the Britney Spears class, but also very knowledgeable. And he destroyed the Blackberry Storm with a single tweet – he’s rich enough to buy it, but very astute and informed. Mainly because of his reputation. John wrote about this in his column – Vodafone’s lawyers contacted the Observer to demand a retraction. John said write back asking if they could find a single tweet from someone who liked it – and they shut up. Rory Cellan-Jones interviewed Fry about that and his power.
This work by Doug Clow is copyright but licensed under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.