More student data, but later

We need to seriously consider doing a lot less with student data right now. Stopping data logging will reduce the impact on our systems and, more importantly, on our students.

As a longstanding learning analytics researcher, I don’t say this lightly.

Computer Problems
“Waiting for Moodle to render this page is taking longer than it takes to get to the lecture theatre on the other side of campus.”

The Covid-19 coronavirus crisis is profoundly changing society, including universities. There’s been a mad dash to online teaching, and a mad dash to online assessment close behind. Those of us who’ve been enthusiasts for online learning for a long time know that this may be a huge success in some places, but that it won’t go at all well in many others. It’s easy for our eyes to light up at the thought of all the interesting data that all that online activity could generate.

But hold up. In a crisis, we need to prioritise what’s most important. Frankly, the benefit to students of most of our data gathering is not sufficient to justify it getting priority in a crisis. And our evidence of what benefit there is has improved since I wrote a rather despairing paper with Rebecca Ferguson about it, but not hugely. I do believe it’s worth pursuing. But it absolutely can wait, so it should wait.

A big turn-off

What if we just turn all that data logging off for the duration of the crisis?

We’d reduce the impact on our systems. Online learning systems are under massive strain as IT staff and suppliers struggle valiantly to deal with a completely unprecedented spike in demand. With a well-designed and well-tuned system, data logging needn’t be a huge drain on front-facing server resources. But when you’re rushing to scale up, you don’t have time to tune it well and build a robust, separate data architecture. It will make the IT people’s lives much easier if we just drop those requirements for now. It would, at least, be one less thing for them to worry about. And it might well materially improve performance, particularly on hastily-deployed systems where there hasn’t been time to optimise them.

We’d also reduce the impact on our students. Most academics are only in the student data business to make things better for students – but there are other interests at play too. Students are quite reasonably concerned about how their data is being used at the best of times. There isn’t the time to do all the engagement around data privacy that good practice requires, and that you need to properly address understandable and quite reasonable concerns. We could just steamroller them into it. This seems to be happening a lot, and there’s even been some commentary from UK ministers about the GDPR that might be useful political cover for it. Or we could just … not do that, and give them a break. Deal with their worries about data privacy by sharply reducing the amount of data we collect. I think, given all that this cohort is putting up with, and is going to have to put up with in the near future, they badly need any break we can give them.

What can’t wait?

There will be some exceptions. Obviously, where you have a cognitive tutor setup, it would be nonsense to turn off the logging – and, not coincidentally, that’s where we have the best evidence of direct student benefit.

More widely, I’d argue for saving the last login information for each student so their tutor can see who’s been able to access the system and who hasn’t. I can’t instantly think of good papers showing this, but my strong hunch from practical experience with predictive modelling is that a huge chunk of the benefit that can come from such systems is increasing awareness among tutors of which of their students hasn’t been able to study for a while. We can do that directly with a lot less impact on students and servers.

And obviously, some data has to be recorded to operate an online learning system at all.

More later

For the avoidance of doubt, I am not for one minute arguing that learning analytics should close down and give up. I do still believe that there is huge potential from using students’ data to improve their learning, and that there’s more to be gained in future than has been done so far. I am arguing that we should be humble about what we can offer and prioritise the benefit to students. That is, after all, the whole point of learning analytics.

Learning analytics researchers and practitioners have never been in more demand in their organisations. We understand the practicalities of online learning in ways our more traditional colleagues don’t. It’s not like we’d be short of stuff to do if we spent the next months prioritising support for them and for students over our data-gathering projects.

We should do a lot more with student learning data … and we should do it later, when all this is over.

COVID-19 coronavirus and data

How does COVID-19 coronavirus illustrate timeless truths about data?

This is a version of two Twitter threads: one on how it illustrates timeless truths about data, and the other about authoritative information on the outbreak.

NB I do have some health and medical background but I am a data and learning professional, not a clinician, epidemiologist, or public health person.

Canarian Raven
“The virus is called COVID-19. No R. Call it CORVID one more time and I’ll peck your eyes out.”

Data science, BI and in fact any statistics all start with simple counting. That first bit is surprisingly hard, and getting it right is often most of the work. The current COVID-19 coronavirus outbreak shows this up nicely.

You can make as sophisticated a model as you like for an outbreak, to estimate things like the basic reproduction ratio (R_0, how many new cases each case is expected to cause, on average, which tells you how far it’s spreading), or the case fatality rate (how many people who catch it die of it), or the likely extent of the outbreak (how many people might catch it), and all sorts of stuff about how fast this is all happening or likely to happen.

But all that crucially depends on simple counting: how many have it at a given point, how many have died, etc. With the latest figures, we see how hard that is. The number of new cases in China jumped from 2,000 on Weds to 15,500 on Thurs – because the counting methodology was changed.

The numbers have changed retrospectively, too – I copied those numbers down yesterday, but today it looks like it was an increase of 400 on Weds and 15,100 on Thurs. This would be hard to get right even if there wasn’t a massive health crisis there.
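As a toy illustration of why this matters (the figures below are made up for the example, not real outbreak data), here’s how sensitive a naive case fatality rate is to exactly what gets counted as a case:

```python
def naive_cfr(deaths: int, confirmed_cases: int) -> float:
    """Deaths divided by confirmed cases: a crude running estimate,
    biased by reporting lags and by whatever counts as 'confirmed'."""
    return deaths / confirmed_cases

# Same outbreak, same deaths -- but a methodology change (say, adding
# clinically-diagnosed cases to lab-confirmed ones) inflates the denominator.
cfr_lab_only = naive_cfr(deaths=1_300, confirmed_cases=48_000)
cfr_with_clinical = naive_cfr(deaths=1_300, confirmed_cases=61_000)

print(f"lab-confirmed only: {cfr_lab_only:.2%}")    # 2.71%
print(f"clinical included:  {cfr_with_clinical:.2%}")  # 2.13%
```

Neither figure is “the” fatality rate; each simply inherits whatever the counting rules were on the day.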

Counting infections is always tricky, but surely deaths are easier? Turns out there are surprisingly difficult edge cases at the edge of life, but those don’t come up often, and almost everyone agrees about most deaths.

But even then, you’re probably getting data from multiple sources and combining them and that can lead to problems. Like today’s news that 108 deaths have been removed from the figures because they were double-counted.

It’s easy as a data scientist to say we need to invest in better data, and sometimes that’s right. But getting good basic counting data is hard, and expensive, and cannot be the absolute priority. The data you’re dealing with will always be messy to some degree.

Speaking of degrees, this crops up in education and learning. ‘How many learners do we have right now?’ is the basic question that is the denominator for pretty much any learning or teaching metric you care about.

And that is surprisingly hard to answer sometimes. There are late registrations, retrospective registrations, de-registrations, retrospective de-registrations, provisional versions of all those, and that’s just dealing with individuals.
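A toy sketch (hypothetical records, not real data) of why even “how many learners do we have right now?” depends on which states you count:

```python
from datetime import date

# Hypothetical registration records: (student, registered_on, deregistered_on)
records = [
    ("s1", date(2020, 1, 10), None),
    ("s2", date(2020, 1, 15), date(2020, 2, 1)),  # later de-registered
    ("s3", date(2020, 2, 20), None),              # late registration
    ("s4", date(2020, 1, 5), date(2020, 1, 6)),   # provisional, withdrawn
]

def active_on(day: date) -> int:
    """Students registered on `day` and not (yet) de-registered."""
    return sum(
        1 for _, start, end in records
        if start <= day and (end is None or end > day)
    )

print(active_on(date(2020, 1, 20)))  # 2 -- s1 and s2
print(active_on(date(2020, 3, 1)))   # 2 -- s1 and s3: same count, different people
print(len(records))                  # 4 ever registered
```

And retrospective changes make it worse: the answer for the same day can differ depending on when you ask the question.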

When you have organisations buying in learning, it gets even worse: how many are provisionally ordered, how many are finally ordered, how many are catered for, how many show up, and how many are invoiced for are all different, and not the same as how many learned anything.

Speaking of invoices, cash at least should be easy to count? It should be clear when a customer paid us, right? Oh, my sweet summer child. That sound you hear is the entire accounting profession sniggering.

Suffice to say that the same payment can legitimately have different dates for cashflow, annual accounting, VAT, other taxes, and who knows what other purposes. Organisations are incentivised to manipulate this data, and most organisations respond to incentives.

Summarising a wide ramble: Even the simplest of data, like ‘How many people have COVID-19?’ can be surprisingly hard to get authoritatively. Getting better data is rarely a business imperative. Be cautious about interpreting your advanced statistical models.

Also, be kind to people who are working hard to do really difficult jobs in really difficult circumstances. And don’t make it harder by spreading misinformation.

Check with authoritative sources before passing on information. It does not help to spread stuff you think might be dodgy or far-fetched ‘just in case’. Most people who pass on misinformation don’t mean to cause problems. Check it’s right first. You can help protect your friends and colleagues from this hazard.

So, having said that, how do you know what’s right? Check with authoritative sources, like Wikipedia always tells you. Wikipedia has excellent info, which is being updated rapidly as the situation changes:

In the UK the risk of infection is very low, as in most places outside Hubei.

In short: If you think you may have the virus, stay where you are and call 111.

Outside the UK, there are authoritative sources like the US CDC and the WHO.

For hard research info, there’s stuff on the WHO site (currently under ‘technical information’ and ‘global research’); many publishers have made research freely available, and some have free-access portals on the topic, e.g.

If you like statistics and numbers, here’s some good aggregation from Johns Hopkins University, with a few visualisations and a link to a well-maintained GitHub CSV.  

This data is pretty good, but don’t treat it – or any data! – as representing the objective truth.

This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission is needed to reuse or remix it (with attribution), but it’s nice to be notified if you do use it.

Project start checklist

What do you need to have thought through before you start a project?

In a large organisation, like the one I left last year, there’s often a lot of heavyweight project management overhead. The downside is filling out lots of paperwork, but the upside is that it does make sure that the project is in good shape at the start. If it’s good project management paperwork, of course.

Working in smaller contexts lets you be much more agile and swift. Now I’m an independent consultant, I can start a project the moment I think it’s a good idea.

But how do I know it’s a good idea, and how do I know it needs more work, or is better not done at all? Obviously, using my skill and judgement developed over many years of working on projects. And that tells me that if it’s more than a teeny tiny bit of work, it’s well worth spending some time up front, to systematically work through what needs to be in place at that stage.

Some people would be happy to simply do that in their head and be done. But I am a huge fan of checklists. This post isn’t about the wonders of checklists. Suffice to say, the more experience I get, the more I think they are wonderful. They help make sure that the obvious things actually do get done.

So I have found myself wanting a project start checklist: a list of things that need to be thought through before I start a new project. It’s here, at the bottom of this post.

The checklist is deliberately skewed towards freelancing and consultancy, rather than internal projects inside large organisations that already have more project management overhead than they perhaps need. So, for instance, it doesn’t have any of the “set up project board” and “identify project sponsor” things you’ll find on other project start checklists. If you need to do those things, you (should!) already have a formal project management process. This checklist is for when you don’t have that. It would also work for small-scale guerrilla projects inside a large organisation that don’t need official sign-off. It assumes the project is highly custom and not a standard one, which covers pretty much everything I do, but it is way more than you’d need if you are a freelancer doing well-established work – like, say, you’re a graphic designer simply doing another logo for a new client. (Although even then some of the elements will be useful.)

The way I use it is to go down the list, systematically, and make sure I have a good answer to each question. To work the checklist magic, it is important to do them in order and not to skip any of them. (Obviously, if you don’t think an item belongs on the checklist, ever, delete it and never worry about it again.) If I don’t have a good answer to a question, a good argument for why it doesn’t matter in this context will also do. If I can’t answer a question and have no good excuse, that’s a prompt to find out more before I start. Not knowing can be a red flag that this project is ill-conceived. Sometimes it’s the right decision to start a project before it’s all tied up and thought through. But better to do so with that potential risk in mind, and even better to mitigate the risk up front.

On to the list itself!


Is this project worth doing?

Ideally, it’s worth doing because I believe it will make the world a better place.

The underlying idea here is to avoid doing things that would be better not done. I like Peter Drucker’s distinction between “doing the right thing” and “doing things right”. Most of project planning and delivery is focused on “doing things right”. It is all too easy – and indeed necessary a lot of the time – to forget about the big picture, get your head down, and get it done and delivered. But the time before a project starts is a golden opportunity to pause and ask the big questions, like “What’s the point of this? Why bother?”.

Even if you think the project’s a terrible idea, it may be worth doing for other reasons. Not everyone can afford to be picky about what work they do, in which case, “it is worth doing because it will pay me money I need and I don’t have a better alternative” can absolutely make it a yes here. But if you’re doing that, it’s good to be aware that you’re in it for the cash (or the exposure, or the experience) not for the project itself. That way you’ll be in a better position to work to get more of what you want and minimise what you don’t.

Who actually wants this project to happen? Who directly benefits?

This is related to the first question, but different. I draw a distinction between someone who actually wants a project to happen because they themselves want the results, and someone who wants a project to happen so they can say that something like it has been done, for presentational or organisational reasons. So, for example, an organisation might want to be seen to be doing something about an issue, but they don’t actually care about it. They set up a little project, perhaps engaging an external consultant, so they can point to that as having done something, but they are not engaged in the issue. That doesn’t make it a no, but it is a different context to one where there are actual direct beneficiaries.

Fundamentally, it is often the case that people who are intent on doing something for someone else’s benefit have a different view of what constitutes a benefit from the person themselves. I might sincerely want to do good for you, but you will usually have better insight into what is good for you than I do.

This isn’t to say that I’m against building new things that people don’t yet know they want: far from it. (Think of the apocryphal story of Henry Ford saying that if he’d asked people what they wanted, they’d have said better horses.) But you need to know when you’re doing that, and how you will know that you have given them something that they do want.

Who are the other stakeholders and what should happen to them?

To many British people of a certain age, banging on about stakeholders makes you think about Tony Blair and the 1990s, or about frustrating project management paperwork, or both. It can be a little overworked, but I do think it’s worth thinking through who else will care, or ought to care, about a project you’re involved in, and what you’re going to do to them. Or with them. With them, totally.

Where is the value generated? Where is the money coming from?

This is again related, but may give different answers. For working in the commercial sector, it’s vital to understand where the business makes its money and how the project will impact on that. For other public and third-sector work, it’s vital to understand how they are funded and for what purposes, and how the project will impact on that. Grant funders – whether that’s the European Union, a research council, a foundation or a charity – will have a set of conditions on their funding, but there will also be a more implicit set of ideas about how projects they fund ought to work.

Will I get paid, and when?

This is, of course, the freelancer’s main question.

The obvious thing is to make sure the mechanics are in place. Is there an agreement to pay me? Does there need to be a purchase order, and if so, has it been raised? Have we agreed the invoicing pattern, and do I know who is responsible for paying the invoices? I don’t need much paperwork: I log every potential client, and I log every project. But the client and/or funder may have paperwork or processes to follow before the project can start. I can start work on my own initiative, but that may raise the risk that the project never officially happens, and I don’t get paid.

Underlying the mechanics (or overlaying them?!), there’s the question of whether they will pay when the invoices come in. What’s their cashflow situation? What’s their payment track record? This can crop up at all levels. A small company may have cashflow problems and have to defer paying you. A large company may have an imperative to juice this quarter’s financials and defer paying you. A university may defer paying because it is extremely bureaucratic and it’s nobody’s job to make sure you are paid in a timely manner.

Why do they need me?

Usually, this is because they need to get something done but don’t know how to do it, and so they’re bringing me in because I do. That will usually mean part of my role will be explaining what I’m doing and why. Sometimes it might be that they do have the skills but they don’t have the capacity, in which case there’s less need for explanation.

There are also projects with an aspect of management consultancy to them. Anything involving organisational change falls into this category, but so too does most work on training and development. Here, it’s very important to understand the political context within the organisation before starting.

Do we have a shared vision for the final outcome?

If they’re bringing me in because I have skills and knowledge they don’t, my experience will be very different to theirs, so it is almost certain that what I imagine will not be quite the same as what they imagine. We need to do the work to ensure we agree what we’ll have at the end.

I can be happy to go ahead without this, so long as the project plan has some way of bringing our visions together – although if that’s the case, I’ll usually prefer to have a break or review after the converging-vision phase.

Does the project plan make sense?

This is the bread-and-butter work of thinking through the project and planning it, or understanding the plan if someone else has produced it, and working through what my role will be. I need to work out what I’ll be doing, and how, and explore all aspects of the project iron triangle (quality/scope, time, resource). This also includes how it will dovetail with my other commitments.

This question is where most of the planning effort goes in, but it doesn’t need extensive reminders on a checklist.

What is out of scope?

Obviously, a complete list of things out of scope of any given project is going to be pretty large. However, I do like to explicitly write down the things that one might reasonably think were included, but are not. This can be really useful to clarify with the client or funder, particularly if I can get it in to the paperwork.

What will you end up doing anyway?

Sometimes I want to do something to a certain standard of work and the client doesn’t want me to spend all that time (and/or pay me for it), so they say don’t bother with that. In many cases that’s fine. It can be an important part of making sure we’re getting the best value out of the work. Not everything has to be done to world-class research standards, and outside academia, done quickly is usually more valuable than done perfectly.

However, there are some things I simply can’t shortcut. One example for me is preparing for a presentation, talk or speech. I will always put the work in to be prepared to my own standards, even if that means skipping things I really want to do or staying up absurdly late. And I have tried but failed to make something without checking what similar things other people have done already. I don’t need to do a full lit review before starting a project, but if I haven’t spent at least a few hours exploring what’s been done in the area recently … I know I will end up doing that anyway. And I am an incorrigible data nerd, so if I collect some data, whether quantitative or qualitative, I know I will spend a fair amount of time getting to know it, regardless of whether I’m being paid to.

It’s better if I know to expect this than have it bite me yet again.

What if things are harder than expected?

A bit of thought ahead of time can help a lot here, and again the iron triangle (quality/scope, time, resource) applies: What aspects of quality or scope could I cut? Where could I find extra time? How could I get extra resource? How would I communicate and renegotiate if I can’t address the issue myself?



What could go wrong?

I like to do a project pre-mortem. This post isn’t about the wonders of pre-mortems, but they are a very useful tool. The idea here is you imagine that the project has failed and you’re working out what went wrong. How did it happen? It’s a cognitive flip: instead of only thinking about how it will succeed, you assume that it has gone wrong, and come up with ideas for how that could have happened. This can be very useful for spotting things that you are half-deliberately hiding from yourself because you don’t want the project to fail.

I’ve done a bit of flying in light aircraft, and like many aviators, I read a lot of air accident investigation reports. Often, when you read these reports, you can see that bad judgement was present at the start. So a useful question to ask when preparing to fly is “How would this look in an accident report?”. That can keep you on the straight and narrow, and out of obvious, well-known mistakes.

So, in this context, how would you talk about this phase of the project if it later turned out to be a disaster? What were the red flags, the early warning signs, the classic blunders, the usual procedures skipped?

What would huge success look like?

This is a question I picked up from a cheesy talk some time ago. It’s not my usual style: I’m quite undramatic and practical. I think a lot of massive success is luck. But I do believe in making sure you’re keeping the door open to runaway success should it show up, and not closing it off as a possibility so it never does. This often leads to practical decisions like making things easily scalable, being open, and so on.


What are the Benefits, the Risks, the Alternatives, your Intuition, and what would happen if you do Nothing?

This checklist has already covered the benefits, but the risks and the alternatives need to be explored, as does the do-nothing option. This should also cover the opportunity cost of taking this project on. If I didn’t do this, what would I be doing instead?

And I always need a reminder to check what my intuition says. What does your gut say? What does your heart say? I am very much a brain sort of person, but instincts arise for a reason, and it’s worth paying attention if my analytical brain is saying this is a great idea but my emotional brain is reacting like it’s a terrible one.

(BRAIN is an acronym/method I have shamelessly stolen from decision-making around childbirth. It’s fair to say I have more experience of bringing new projects into the world than new people, but I have found this exercise to be a useful one when faced with any major decision.)
Brain Waves

What about personal information?

This is the GDPR question. What personal information will be generated, used, and managed in the project? And what needs to be done about that? This can be a very big question, and can roam well beyond a quick checklist, but it needs addressing on pretty much any project.

Luckily for me, this is one of my interests, so it’s not too hard for me to do. If you don’t have that background, it’s worth getting some advice if you’re not sure.

What about intellectual property?

The main IP in my projects is copyright. Almost everything I produce in the project – writing, code, interfaces, graphics, diagrams – will have associated intellectual property rights. What is going to happen to them? The client is paying for me to produce them, but what scope will there be for me and them to use them later?

I am a big fan of free and open source software and of Creative Commons licensing. As an idealistic youth, my first enthusiasm for them was about the value of increasing access to things. But as my experience has grown, my main enthusiasm now is about the immense value of an open license in making sure that everyone involved will be able to build on their previous projects in the future.

A project brings people together. If the products of that project are available only under a closed license, it can be difficult, if not impossible, to get the necessary paperwork together to prove that it is OK to build on those products to make something even better. But if it was licensed openly, there’s no such problem: the license says anyone can build on them – and anyone includes the original contributors!

However, I’m a pragmatist and I’m very much of the view that not all projects are suitable for release under an open license. If not, how are we going to manage the IP generated?

As well as the stuff generated by the project, it’s worth thinking through what is happening with pre-existing intellectual property: the stuff that I am bringing in, and stuff that others are bringing in. Do we need an explicit agreement about that?

In my line of work it’s less common for patents, designs, and trade marks to be involved than copyright, but it’s worth thinking through whether anything in that area is going to come up, and dealing with it up front.

Again, this can be a complex area, but luckily for me it is one of my particular interests. This does vary considerably between jurisdictions – for instance, I know the US has work-for-hire laws that set a very different context.

What will I learn?

One of the things I enjoy most is learning about an entirely new-to-me area of human endeavour, so if there’s an opportunity to do that and get paid for it, I’m going to be very keen.

But even when it’s well within areas I’ve worked in before, there’s almost always the opportunity to learn something new, pick up a new tool, get better at a particular task, or something in that line.

My hope is that using this checklist will help increase the chance of learning positively from a project, and decrease the chance of it being the old joke of “another bloody learning opportunity”.



As I said I would, I have left the Open University, and I’m now working as a self-employed consultant. I’ve had several interesting proposals and discussions, some of which turned out to be too interesting to turn down.

One of the really liberating things is the wide range of possibilities for interesting work that are coming up. I’ve always had a broad range of interests, and really enjoy finding out new things and understanding new organisations, processes, and systems. Which is very handy for consultancy work where that’s often the first step!

Mam Tor

I’m also very much enjoying escaping large organisational bureaucracy, although of course for large clients I still have some of that. Best of all, for me, is the sense of proper responsibility for finances. I’ve plenty of experience in budgeting and monitoring spend on large projects, but it’s always been in a constrained framework and with other people’s money. Now, if I think an expense is justified, I can simply spend the money. If I need to travel (and the contract doesn’t have expenses separately, which is how I prefer it), I can just book the travel and accommodation immediately, instead of having to book via a travel agent I’m not allowed to talk to directly, who insists, via the intermediary, that the flight I want doesn’t exist, until I send a screenshot of Expedia back, and even then says it’ll be an extra £100 on top. If it’s feasible to travel by train rather than by plane to save carbon emissions, I can do that, even if the train journey works out more expensive. Even better, if I find a way to save money, it doesn’t become a potentially problematic underspend, it goes straight in my pocket (after the tax people have their cut).

I would like to get back to some regular blogging on here, but obviously paid work gets priority.

I do have a little limited capacity at the moment, with more in the new year, so if you’re interested in engaging me for a consultancy, you can read a bit more about what I can do on Hire Doug Clow, or simply get in touch.

New directions

I’m looking for a new career, building on my skills and experience in working with people, technology, data science, and artificial intelligence. At the moment, I’m wide open to ideas for what that might be, and in what domain. As most people who know me may be aware, I have very broad interests (everything is interesting!), which makes narrowing things down a challenge. But a good one!

This means that I’ll be leaving the Open University at the end of May, after 20 years. My plan is to leave academia too, after an even longer time.

Arthur’s Seat, Edinburgh, viewed from Pollock Halls (CC)-BY-SA David Monniaux

Although it’s been nominally the same job, more or less, for two decades, it’s not felt like it. I’ve been lucky enough to do new and different things all the way through. I’ve worked on projects from tiny ones in the margins of my own time to huge multi-million pound international collaborations, and all sorts of things in between. I’ve worked in education, languages, health and social care, business, and all sorts of science. I’ve worked with so many wonderful people, including developers, statisticians, analysts, senior managers, technologists, project managers, event organisers, support staff, media specialists, editors, architects, building contractors, accountants, policymakers, librarians, and all sorts of researchers, scholars, teachers and learners. It’s been fantastic to learn from all of them.

I may write more later about the OU, but I will always have a special place in my heart for its mission, and for its amazing staff and students. I’m very sorry to leave all that – but it’s the right time for me to change direction. So when a voluntary severance scheme was announced, I leapt at the chance. Things have been quite rough for the OU over the last few years, but I do think things are starting to improve, and the future looks bright. I wish the Open University all the very best as I head off in a different direction.

Here’s to a new start!

CALRG40: Final session

Liveblog notes from the final session at CALRG’s 40th anniversary event, 19 October 2018.

Visions Reflections

Tim O’Shea and Mike Sharples

After a contribution by videolink from Mike Sharples in Shanghai, Tim launched into a series of entertaining anecdotes from the early days of CALRG and IET. Sadly I missed most of the start of this part.

The group has the MIT demo culture. Even if you ain’t got a theory, build something that works and see it. Build it and test it.

The group applied the course team model to research. Many projects with 4, 5, 6 people in them. That was imported from the course team model, an incredibly healthy thing. That makes it such high quality.

40y ago we were outlaws. A previous VC called me a flim-flam artist. And others tried to shut us down. Now there’s a message from the VC, and a PVC in the front row nodding. We were hiding in huts, trying to make sure the university didn’t remember we existed and had lots of money.

The Open University is too small for its claims and potential. In John Daniel’s first book, there were 10 as of his writing, all of which are a bit bigger, up to the Chinese TV universities. It’s too wee. It needs to scale, to bootstrap.

I was invited to Georgia Tech to a symposium on affordable learning at scale. Has an online masters in CS, 8,000 enrolled students, approaching 10,000. Illinois online MBA with a few thousand, Michigan, Arizona State, a lot of activity. The ladder from MOOCs to micromasters to masters. At Edinburgh, many MOOCs, not at scale yet. Look at what Harvard are doing, Arizona State has set up a global freshman academy. Online first year taking people who wouldn’t get in to an American university. Large numbers. At this symposium, it has to be a four-figure number. They’re inventing the OU course team. Bright-faced people talking about bringing a range of people, doing some QA – it made our courses better. It’s being reinvented to make online affordable learning. Target is generally a quarter of the regular fee. That wants looking at.

I really think you should look at computer games. They are at scale.

You should really look at teacher bots. Sian Bayne has done wonderful work on teacher bots, programs responsible for different parts of teaching. Edinburgh bots are frank and open and you can tell they’re not people. Nanotutors, Jill Watson. MSc in CS. Two of the tutors were not humans. They were programs. Georgia Tech, when there are FAQs or easy questions, the nanobot answers. If you’re a human, you pause to see if the nanobot will answer it for you. E.g. what is the pass grade, what is the best reading. Jill Watson was not only not identified. The students were told two were robots, but didn’t spot her. But one of the human tutors not only didn’t spot she was a robot but asked her out for a date. [argh!]

Learning analytics is very important. I take pride in having worked in the university that educated Thomas Bayes. LA would not work without Bayes. He wanted to improve his performance at gambling, despite being a Protestant minister. Once you have cohorts of thousands, you can apply Bayes’ rule in a way you can’t with 50.
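[Aside from me, not part of Tim’s talk: the sort of Bayes’ rule update he’s alluding to might look like this – all the numbers are invented for illustration, not from any real analytics system.]

```python
# Illustrative only: invented numbers, not real learning analytics data.
# Bayes' rule: P(pass | active) = P(active | pass) * P(pass) / P(active)

p_pass = 0.6                 # prior: base rate of passing the module
p_active_given_pass = 0.8    # of students who pass, fraction active in week 1
p_active_given_fail = 0.3    # of students who fail, fraction active in week 1

# Marginal probability of being active, summing over pass/fail
p_active = p_active_given_pass * p_pass + p_active_given_fail * (1 - p_pass)

# Posterior: probability of passing, given the student was active
p_pass_given_active = p_active_given_pass * p_pass / p_active

print(round(p_pass_given_active, 3))  # → 0.8
```

The point about cohort size is that with thousands of students you can estimate the conditional probabilities reliably; with 50 students the estimates are too noisy to be useful.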

If, ten years from now, the OU is 10x the size, then you will have been successful.

Richard Joiner – At the start, you said we were all rebels. Do you think the CAL group needs to be more rebellious?

Tim – It makes me nervous to get a message from the Vice Chancellor. The CAL group ought to be making the senior management uncomfortable. There should be an element of public bolshiness. A formative experience was being taken to meet the maths faculty at the OU. Like the gulag. A big man with a beard, said young man, what do you think your purpose is? I said, to improve the quality of maths learning. He raved that I was impudent, OU had the best set of maths teachers on the planet, there was no way that I could improve it. The existence of the group is pointing out it’s not perfect.

Mike – I want to ask Tim a question.

Tim – I recruited you twice, sunshine, so be careful!

Mike – I was at RMIT where our former VC is now. Martin Bean, was like Tim, we need to scale up universities. If there are some innovative tech-aware VCs, why aren’t they getting together? Why not more collaboration and coordination around the world?

Tim – UUK, the British VCs, a friend hated being there with men in suits telling each other lies. VCs are paid to boast for the universities. It’s incredibly hard for a VC to collaborate with another, it really doesn’t go with the job description.

Eileen – Thanks Tim and Mike for their interesting and rebellious comments.

Keynote – Neil Mercer

Should digital technology ‘transform pedagogy’?

Neil is at Oracy Cambridge, Hughes Hall, University of Cambridge.

I’ve worked at the OU more than anywhere else. I didn’t mean to leave, someone made me an offer I couldn’t refuse. I’m glad to be back here. Hearing things that’ll help the university into a better future. We all wish it well.

One strand of CAL research here is different to what’s been mainly talked about: the use of digital technology in classrooms. That’s what I’m going to talk about. You’ll get a different perspective.

I’m getting imposter syndrome, since I’m not a techie, I worked with people who were much more technologically adept than I am. Early work with Rupert Wegerif, Lyn Dawes, both had PhDs on computers in schools. I didn’t. In fact, when I completed my PhD at Manchester, there was one computer for the whole university and I had to book time on it in the middle of the night to do my statistics. I feel like the ordinary person on the team.

News story – ‘New technology to spearhead classroom revolution’, with Education Secretary talking about tech industry launching it. Responses from teachers: is it time to set up BECTA? “A wee reminder that teachers in other jurisdictions have a manageable workload through fewer working hours spent in the classroom teaching.” It felt like déjà vu. When will they ever learn? Go back to the Secretary of State for Education in 2004: tech has already revolutionised learning. Earlier still: it will revolutionise schools and teaching. Cuban (1986): teachers have acquired a reputation as Luddites. This notion recurs – there’s wonderful technology offered to teachers, who want to smash the looms. It’s a seriously misguided point of view.

What has research told us? Tech-led mode of introduction is very likely to create problems. Little research on how, if at all, they influence […] educational goals. (Electronic whiteboards example.)

We ought to be humble about what computers really can do, what they’re good at, and what they’re not good at. Excellent film he saw last night, mentioned Kurt Vonnegut: “a computer teaches a child what a computer can do. An educated human being teaches a child what a child can become”.

Schools and classrooms are different from reaching people through e.g. distance learning.

What should digital technology do for classroom education? Just give them a load of gear and it’ll revolutionise what they do. Should it? We’ve got excellent teachers, really good teachers, whose practice is not faultless, but is good. Should the tech help them do it more easily and effectively? Technology is a tool, we invent them to help us get jobs done more effectively.

Some software designers say, this software’s great, it’s got good activities in it. Software doesn’t have activities. What you’ve got in a computer is resources for designing an actual activity. You’ve got the basis, but the activity happens in the intersection of learner, learner, teacher and computer – not in the software.

Research in classroom-based education ought to address the affordances of any tech, teachers’ pedagogical practices, pupils’ learning requirements, what we know about effective classroom teaching. Often tech offered – especially hardware – that was second-hand, designed for others primarily. If teachers are good at teaching, tech should enable them to do it more efficiently, not change it. Should be focused on the learning requirements, not the content you can give them. We know a lot more about effective classroom teaching. Have just finished the largest ever classroom pedagogy study in Britain, looking at frequency of teacher activities and outcomes. We know a lot more about this, that should be the basis of designing electronic things for schools.

There are some generalities from those studies. One of them, the Education Endowment Foundation’s summary of useful things to know, one says – individualised learning [with technology] may not be as helpful as small group learning with technology or collaborative use of technology.

Here’s some of my projects that have tried to exemplify this approach. One of the CALRG ethics is researching with teachers, collaboratively. What do the best teachers do? How can the technology help them to do it? Bart mentioned this – we want to know what the best teachers do.

Showed a photo of an early project, in a school in Wolverton, old style large grey box computers.

Project with Sara Hennessy, Paul Warwick – Using interactive whiteboards (IWBs) to orchestrate classroom dialogue. Vygotskian approach to dialogue. We know that the ways teachers talk to students make a difference to learning outcomes, and how they talk back – it’s the whole dialogue that counts.

Example 1 – Using the IWB to link lessons. Give a sense of going somewhere, so they can perceive the journey. Teacher uses pictures of previous lesson as a resource. [Photo of teacher, showing classic current British classroom – whiteboard with projector on it, and walls absolutely covered with brightly coloured things – a very visually noisy environment.] We didn’t find negative features of this use.

Example 2 – Using block reveal – show a blank on the screen, e.g. three words blanked in a poem. Can encourage thinking, structure teaching, and maintain pace. Once the teacher’s designed it, it’s harder to change the structure of the lesson, content less flexible, and tends to be used to set up ‘closed questions’. Good teachers also ask open questions.

IWB enables teachers to provide engaging whole-class learning experience, integrate diverse resources, save and revisit resource to improve continuity and coherence – all more easily than without an IWB. [Interestingly, he says ‘IWB’ as the abbreviation – which is five syllables compared to six for saying ‘interactive whiteboard’.] But teaching is not ‘transformed’ by the IWB, it can dominate events or disrupt when it fails. The effective use of the IWB depends not only on tech skill, but also on teachers’ teaching skill.

Second project, with Ingvill Rasmussen and Paul Warwick, Digitalised Dialogues Across the Curriculum (DiDiAC). Based on the ‘Thinking Together’ approach. A browser-based microblogging platform, Talkwall. Small number of schools in Norway and the UK.

Asked teachers about group work, teachers asked students to consider purpose and value of talk for learning, class agreed ground rules for Exploratory Talk (this is known to be really good for getting them to talk productively), used Talkwall to record ideas and decisions. (Important to get them to agree what they’re saying and write it down, and not just talk.) Consensual social norms governed use, not just features of the tech.

Can share this between schools. The collective thoughts of the students become a common resource, the teacher can build on them, the students can refer to them. It’s helping teacher to improve the quality of groupwork.

Project not finished then. It does help. It’s not flash or innovative, but it helps a more group-focused collaborative learning environment. It improves the variety and quality of questioning and talk by the teacher and children.

General conclusions

Not controversial to you, but might be to a Secretary of State for Education.

Transformational, tech-led approach is misguided and dysfunctional. Digital tech should not be designed to replace teachers, but to assist them – good teachers doing their job well. Need to assess this using an educational, not technological perspective. It’s a toolkit for pursuing educational goals, not determining them. It’s particularly good for supporting collaborative learning. We need to assess the educational strengths and weaknesses of any new tech – especially when it arrives as 2nd-hand kit primarily designed for other users. If developed from this perspective, it has so much to offer. It’s great. That’s what we should be aiming for.


Stylianos – The context of this research. Compulsory education, characteristic there is the development of a community. Students spend a lot of time with the teacher in the same environment, facilitates these activities. Are there any lessons we can learn from your research for people who teach in HE, where the engagement is not at the same level? There’s a speed-dating type contact between tutors and students.

Neil – Not sure I can give a clever answer. I agree HE is a different context. In achieving the balance between authoritative presentations like this one, and true dialogue, it’s difficult. At Cambridge, we have supervisions – it’s not cheap but it does it.

Kim – Talked about using good teaching to design tech, but what about the converse, what teachers find difficult or can’t do?

Neil – They’re aware of the richness of multimedia resources. Most of them aren’t natural techies. They want things that’ll make that more possible. There are ways that can be better. Kids are missing out if they’re not making the most of those resources. Other things, like citizen science, they find it hard to coordinate children. The one Mike Sharples did with bird feeders, the technology enabled coordination of individual records across several schools, it was great with primary kids. They’d have found that impossible. It’s a good way to look at it.

Patrick – Back to the first question. I was looking at the list of general conclusions, it applies to what we’re trying to do in HE. I feel that a lot of the research we do, it’s a challenge we face when we work with schools, but it is transferable. Is there something we do in HE that has no lessons, that’s so different, or should we be looking at this across all of the contexts?

Neil – It should be research with potential to apply across contexts. But if you ignore the particularity of the contexts you might go wrong. The aim of a primary teacher isn’t to reach beyond the 26 students they have. That’s going to affect what works. The scaling Tim mentioned, I can see why that’s beneficial for this institution and the students, but that isn’t the issue here. You don’t want to scale up a primary classroom to 100 kids. You do want to scale up to help 100 teachers see how they can do better. It depends what your question is.

Patrick – The Development goals does go across all education, not just HE. The pedagogies are around sharing experiences, broadening beyond the classroom, even though you start there. You’re underplaying your research.

Tim – Reinforce and comment about the second-hand kit. Had the experience of visiting a large computer corporation about educational strategy. Had a sequence of this would work well for schools, things designed for commercial context, but no pedagogical intent at all. The problem education has, it’s a small underfunded market. If you compare education to the world finance industry, it’s very small. Cyclops was ahead of its time, BBC micro, Cicero, why are they no longer extant? They never reached the numbers the IBM PC or MS-DOS did. There isn’t an economic structure around to sustain them. If you buy a spreadsheet it’s going to be Excel, not one designed for learning. A problem the civil servants of education ministers have is not realising that education is small beer compared to defence or finance.

Neil – That’s plausible, except you see Pearson is phenomenally wealthy, and Cambridge University Press pays for half of what Cambridge does. There’s money somewhere in it. How come the books sell?

Tim – Books are easier, you just print ’em and flog ’em.

Andrew – Falling in to the trap of simplifying the problem. The issue with scaling is context. Learning occurs differently in different contexts. We can’t avoid the fact that context defines. Steve Draper’s “Niche-based success in CAL” – tech adopted when it solves a problem. Need to be more sophisticated. Scalability doesn’t happen because learning contexts are different – e.g. VLEs. I’ve been paying more attention to understanding the learning problem before we think about the solution. Learning is a complicated thing, we can’t just wade in with tech without disciplined analysis of the problem.

Neil – I agree entirely.

Wayne Holmes – I like your final conclusions, particularly number 2 (tech designed to assist teachers, not replace). Any examples where the attempt to introduce tech shows up problems?

Neil – Sara?

Sara – The whiteboards are one example. We weren’t unaware of the pedagogical issues. But when available worldwide, clear they’re not being used effectively, despite every single classroom having one. It takes a lot longer to address the problems [than to reveal them].

Bart – I’m not in the habit of quoting David Willetts, but he said one problem is most educationalists are amateurs, trying to play round the fiddle, use a very basic approach to the black box of learning. Do we need to fundamentally upskill the methodologies to show the complexities of learning?

Neil – No. I think it’s politicians who don’t understand learning. [laughter] We know a lot more about the process of teaching and learning.

Richard Joiner – Whiteboards, teachers not using them interactively, you need to change the pedagogy that teachers use?

Neil – What we do is find out what really good teachers do and distil it down to its essence. Some of the teachers said they did do things, others hadn’t thought about it. I’ve such respect for teachers, I don’t want to become a Gove knocking teachers. There are some who are not so good, need to be trained to do what we know is a good thing.

Sara – Introducing any tech, has to be preceded by support and professional development for the teachers. We’ve known this for decades. The whiteboard initiative was an example, the digital microscopes in secondary schools, they were just left in cupboards. We have lots of examples.

Poster competition winners

The poster competition winners were announced by Liz Fitzgerald, Wayne Holmes, and Doug Clow. I haven’t written them down here so as not to spoil the result.

This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.

CALRG40: Afternoon

Liveblog notes from the afternoon at CALRG’s 40th anniversary, 19 October 2018.

Vision 3 – Teams can successfully teach any number of students at a distance

Patrick McAndrew

(Rebecca Ferguson is sadly unable to attend and present)

Berrill is a challenge for presenting – you can see the people in the room, yawning, and you also know that people will come to you the next day.

The OU is one of the largest universities – in terms of students, in terms of full-time equivalents. And coming up, we are nearly into the top 20 of students studying full time!

Patrick shows a photo from Rebecca of me (Doug) at a statue showing someone pouring knowledge into their head and reading a book.

Teaching is when someone acts to help someone learn. “To make someone less inclined to do something” – as in “I’ll teach you to throw rocks at my windows”. Amusing. But here, teaching is causing someone to learn.

What underpins that are a series of beliefs, with a lot of evidence: distance learning works, and anyone can learn. A more open approach to being a university can be effective. Distance education works. That was a challenge for this university. The push to ensuring we have sufficient quality has driven us for many years. One was the formation of an Institute of Educational Technology. Being effective at teaching at a distance, and using technologies. Now synonymous with small devices and computers, but more broadly is how we approach how we teach. Our belief is anyone can learn. We are open to any student studying with us. Have put up work from the early 2000s with a model of pass rates, connects to Bart’s work, predictive models and machine learning. What can we predict about what the student body does? Jim Peard has done this work, annual update. Three groups of factors – student previous experience, student motivation, module difficulty/match. One key factor is whether they have previous experience of study. As soon as they’ve done one module with us, their previous experience drops down the list of predictors.
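[My aside, not Jim Peard’s actual model: the shape of a pass-rate predictor over those three factor groups might be sketched like this – the function, coefficients and feature scales are all invented for illustration.]

```python
import math

# Toy logistic model over the three factor groups mentioned:
# previous experience, motivation, module difficulty/match.
# All coefficients are invented for illustration only.
def p_pass(prev_ou_modules, motivation, module_difficulty):
    z = (-0.5
         + 1.2 * min(prev_ou_modules, 1)  # effect caps after the first module
         + 0.8 * motivation               # say, a 0..1 motivation score
         - 1.0 * module_difficulty)       # say, a 0..1 difficulty rating
    return 1 / (1 + math.exp(-z))

print(round(p_pass(0, 0.5, 0.5), 2))  # → 0.35  (new student)
print(round(p_pass(1, 0.5, 0.5), 2))  # → 0.65  (after one OU module)
```

The `min(prev_ou_modules, 1)` term is one crude way to capture the observation that once a student has completed a single module, further previous experience adds little predictive power.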

Foundations on Supported Open Learning. But also understanding how we design. Derek Rowntree’s work. A lot of work on communication. If you’re not familiar with Diana Laurillard’s conversational framework, get familiar with it.

Also, working at scale. As of 2011 (UNESCO work by John Daniel), there were 165m people in tertiary education, projected to peak at 263m in 2025. Would require more than four major campus universities (30,000 students) opening every week for the next fifteen years.
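[A quick back-of-the-envelope check of that claim, using the figures Patrick cited:]

```python
# Growth from 165m to 263m tertiary students over roughly 15 years
extra_students = (263 - 165) * 1_000_000
weeks = 15 * 52
university_size = 30_000  # a "major campus university"

universities_per_week = extra_students / weeks / university_size
print(round(universities_per_week, 1))  # comes out a little over four
```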

Support that scales. Several people have come to the OU to find out how we manage, the magic that enables the university to teach at the scale it does. The material design and the support. We have a resilient but not the cheapest answer. Building in support, 1:20 or 1:25, if your process can operate at roughly that level, it’s probably going to work. It’s hard to make a resilient system that operates above that ratio. Enables us to give support to our students. A major part of what makes a difference for our students is the support from their ALs. High satisfaction levels around student feedback. The student doesn’t necessarily meet the person who gives the feedback, but we get the highest ratings for the feedback. Bart’s delivered another punchline – what we design makes a difference. The learning design matters.

Success at (moderate) scale. Martin chaired one of the first really large OU courses – 15,000 students studying for credit. T171 You, Your Computer, and the Net. 55% of OU students in England come from disadvantaged backgrounds, only 29% of full-time students do. This was an era when things were able to expand more easily, we could grow supported by the systems around us.

This was also when we started to operate at a truly massive scale – OpenLearn, launched in 2006, hit the 60m visitors mark last week. Open University on YouTube and iTunesU. We have more and more routes to get content through to people. The design is important. We’ve taken away one of our pillars of success – the support – and that means we lose some of the resilience and certainty in what will happen.

I said I’d come back to some of the challenges. There’s the challenge to us: for PT entrants to study in England, there’s been a 61% drop since the introduction of the high cost student loan system. We are no longer as big a university as we were at the end of 2010. The other set of challenges is what happens to our students – from the number who start to register, then who confirm they are starting study, then actually starting, then paying their fee, then keeping going, then completing, then passing. Not everyone makes it to the end point of that journey [something like half by eye from his bar chart].

We are changing our approach. Work on the right start – if students find out in the first three weeks they’re on the wrong course, we haven’t necessarily supported them well. Targeted communication, bridging from one piece of study to the next, and different measures of success – a degree isn’t the right qualification for all our students – a Certificate in HE can be a major achievement.

There’s also being more radical. The OU was radical in the beginning, and made a radical move in 2012 setting up FutureLearn. Are universities the right structure? Changes in support models? Alternative pedagogies? Different attitudes? This is beyond our control.

Pedagogy at scale through MOOCs – led by Mike Sharples. And Rebecca Ferguson has led work looking at what research tells us, within the OU’s own FutureLearn provision. A lot of the lessons are the same: make use of effective distance learning pedagogies; offer well-designed assessment. The largest ever MOOC was the British Council English language course on FutureLearn, with (from memory) 500,000 students on a single presentation.

Rebecca’s also looked at how educators at other universities experience developing MOOCs. Developing expertise: develop educator teams; identify and share effective learning design. FutureLearn makes the communication element more visible. How do we make this part of the education ecosystem? People still want accreditation. We’re working with FL towards accreditation.

Other pieces of work. Beck Pitt has been working with BizMOOC, to produce a MOOC Book. Shi-Min Chua, how people communicate and how we can support discussion at scale. Francisco Iniesto, how accessible are MOOCs, looking at producer, and learner perspectives. Tina Papathoma looking at the educator perspective.

We are struggling with how to maintain our scale in the current environment. Reaching out to the world, carrying out good research to support it.


Tim – What’s the claim for superiority of FutureLearn compared to others based on?

Patrick – It is owned by the OU [applause and laughter] It has a different approach to pedagogy. More involving, can see it impacting. It’s the pedagogy, the ownership, and the partners.

Eileen – When we visited Coursera, they were interested in FL’s good design for mobile devices. Because of the timing, it was a year after Coursera, and understood that mobile devices were important. Also, the idea of pedagogy around social learning at all. Nobody else had thought that through. Mike has an interesting slide on the use of the forum equivalent, and the forums not being separate but in the design. Also, Diana says it’s the best one.

Diana – From experience.

Vision 4: Learners engage enthusiastically with science learning

Eileen Scanlon, Thea Herodotou

If you join an interdisciplinary group, your birth discipline can leak out. Long list of collaborators, and past projects. One was Collaborative Learning in Primary Science – originally turned down by a reviewer who couldn’t see the point of learning in groups. Conceptual Change in Science, Alternate Reality Kit, The Talk Factory, Personal Inquiry.

Many theses – from Eileen’s own (Modelling Physics Problem Solving) to Citizen Inquiry in informal settings. We’re digitising our theses, and are going to make them available online.

Some old photos from news in schools, an original Mac SE.

Considering a trajectory for new learners – informal to formal, passive to active, solitary to sharing, learners to teachers. One key step on informal to formal is the Open Science Lab. Originally, a mix of experiments to support formal teaching, but also informal science, an iSpot link. Use of technology to make such experiences available to others is important to us.

The PI project was a bit of bricolage. 2008/9 was just when mobiles were becoming available to help students connect experiences outside the classroom with those inside. Developed an approach to personal inquiry, could be an interest in aspects of yourself, of your environment, or of your community. Personalisation, and inquiry learning needs a lot of scaffolding. Outcome includes a good book, and exhibition at the Royal Society.

With the work for school kids, in the first instantiation of the OpenScience lab, we did some experiments, with Simon Kelley, around moon rocks, to see if the approach worked pedagogically in OU teaching. Just because you’ve worked with school children and technology doesn’t mean the approaches aren’t applicable in OU teaching. We, CALRG, always took a broad approach to which approaches would inspire us and apply to our work within the Open University.

Another experiment, also with Simon Kelley, is some experience with the Virtual Microscope. Thea and Maria Aristeidou (and Mike and Eileen) have done some evaluation. It aimed to explore how students use and engage with virtual microscopy, and what teaching approaches work better for students who study practical science. Contrasted a blended learning condition with a wholly online one. Mixed methods study, learning analytics, survey, interviews. Blended students were more satisfied with how the virtual microscope is integrated in the module, and reported greater perceived learning improvements (observation skills) than online students. There was a big difference in how it was used – in blended, it was introduced by a tutor and was complementary to a physical one. Online it was the only way to see the images; the learners felt a need for a tutor to complement the activities.

Several projects on science learning across settings – Citizen Inquiry. Between citizen science and inquiry learning. New book on Citizen Inquiry, recently published. Active engagement in science, citizens defining their own research agenda. Not just natural and physical sciences, can be social and applied sciences. There are 12 case studies of citizen inquiry.

Started with the nQuire-it project, funded by Nominet Trust. Website, and Sense-it application, can capture data using your phone. Whatever data you capture can be uploaded to the nQuire-it platform. A community developed around weather, which Maria Aristeidou led. Also worked with the Centre for Policing Research and Learning, to help police understand what the community needs. Latest collaboration was with BBC Tomorrow’s World, redesigned the platform. It has open and closed missions, and others. Presented at the Royal Society Summer Science Exhibition. An example – My Life, My Data, #MyTomorrow. Run with the BBC.

Latest work, collaborating with organisations across the world, NSF, Wellcome and ESRC. Three museums – London, SF, and LA, and UCD, OU, and Oxford. LEARN CitSci. Museum-led citizen science programme, coastal biodiversity, uploaded to iNaturalist.

The vision – learners engage enthusiastically with science learning. Open learning, inquiry learning, mobile learning, enabling the learner’s transitions between formal and informal settings and contexts can offer benefits for science learners.


Andrew Ravenscroft – the word ‘disadvantaged’, how do you define that?

Thea – Children or young people who don’t have opportunities to go to museums, or families not well educated. Museums have activities to go to those communities, to raise their science capital.

Kim Issroff – How science has changed over 40y, and how pedagogical approaches have changed.

Thea – Eileen?

Eileen – A good point, particularly in relation to the last points Thea made. Citizen Science as a credible activity. Muki Haklay was tweeting furiously because people were questioning the quality of cit sci data. The notion of being an active science learner has been around for 40y. One of the issues around practical work experience is engagement with practical science became difficult in the secondary curriculum. Noticed as we went round schools, we worked with geography rather than science teachers. They enthusiastically bit our hand off for pilot projects to their whole year groups rather than just 20 or so students. They had got into science investigation as a way to help young people develop their understanding of geography. They assumed students knew what a hypothesis was. In science teaching, a move away from data collection. There were enquiries, but the activity was managing data that had come from elsewhere. Interesting mismatch across disciplines about what we assume children will have learned in different contexts. Anyone should understand what a scientific investigation might deliver for them, whether a community group measuring pollution in a creek, or a group of kids deciding they wanted to look at how food rots if you leave it outside a fridge. Science as even more relevant to daily life. How we include people has changed over the intervening period.

Thea – Research shows people find STEM not interesting, or boring, or only for the gifted ones. What was happening was only a few were involved in science. It wasn’t accessible to the citizens. The idea developed that science was just for a few. We are trying to show there are ways to learn science, and to give the support to do so.

Tim – You’re too diffident. You can be stronger about the claims you make for nQuire. Look at Zooniverse. Citizen Science and nQuire work is an unambiguous success. The question is how do we get citizen mathematics, or citizen philosophy. The one that has the unambiguous tick is the collective study and engagement in science.

Eileen – We’ve managed to broaden the types of inquiry. Koula has used nQuire to do cultural heritage enquiries. There’s a lot of interest in our work from other museums than just science museums. That’s a strong direction the work might go in.

Diana – The Muki Haklay point – it was about data that is used as a research output to demonstrate things about London. There is an issue about how you assure the quality and comprehensiveness of the data you collect. You’ve been talking about citizen science as an educational process. But when it’s a process of data gathering, you do have to address data quality, but he didn’t.

Eileen – We do both with iSpot – it’s for learning, and for real science data gathering. The solution is to have seeded experts on your platform to help with identification, then by continuing participation on the platform, through a reputation management system, develop a more robust way of QAing the observations.

Janice – Before we launched iSpot, we engaged the expert biological recording community. A big job was signing them up, getting their expertise on. That’s the model we’ve used. Get the experts on board. They assist with the verification process. We badge them with the specific expertise they have.

Eileen – In 1975, as a science faculty course manager, we had measurement of SO2 pollution. People filled in a form, we sent them kits where they measured SO2 in their local area. They sent the forms to me, I checked the readings were at least within 100 of where they should be. We had a live BBC programme to report their feedback every year from 1975 onwards. Paper in New Scientist that credits the cohort of OU students.

Thea – With Zooniverse and iNat, partners on LEARN Cit Sci. On iNat, developed a machine learning technique that gives you ideas about the correct answer. Zooniverse, the identification is cross-checked across at least three volunteers.
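The cross-checking Thea describes can be sketched as a simple majority-vote aggregation over volunteer classifications. This is only an illustrative sketch – the function name and threshold are my own, not Zooniverse’s or iNaturalist’s actual code:

```python
from collections import Counter

def consensus(labels, min_votes=3):
    """Return a consensus label once at least `min_votes` volunteers
    agree on it, else None (keep collecting classifications).

    Illustrative only -- not Zooniverse's actual aggregation pipeline."""
    if not labels:
        return None
    label, n = Counter(labels).most_common(1)[0]
    return label if n >= min_votes else None

# Three volunteers agree, one dissents: consensus reached.
print(consensus(["fox", "fox", "cat", "fox"]))  # fox
# Only two agree so far: keep the task open.
print(consensus(["fox", "fox", "cat"]))  # None
```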

Diana – That’s a much better answer.

Allison – Do you think the fact that questions, or tasks in science can generally be broken down in to smaller ones, so many people can come together, makes it in some ways simpler to have citizen science rather than citizen philosophy? How do we move forward to bring this together?

Thea – We try to break this down through the design on the site. We structure the steps of the inquiry process. The challenging thing is what kind of questions they can put down – so they’re do-able and can be answered. So young people had ideas about going to space and doing things that couldn’t be done. It needs some scaffolding from experts to narrow them down to link to a conclusion.

This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.

CALRG40: Morning

The Open University’s Computers and Learning Research Group (CALRG) is 40 years old! To celebrate, we’re having a one-day conference, on Friday 19 October 2018, in the Berrill Lecture Theatre, The Open University, Walton Hall, Milton Keynes. These are my liveblog notes.


Welcome & Introductions

Eileen welcomes everyone and says she’s acting as air hostess, pointing out the exits and so on. She’s had many nice messages. When you look at the collection of old photographs outside, if you want to contribute to the yellow notebook, please do. The goody bags include 10p off coffee!

Mary Kellett – Acting Vice Chancellor, OU

Mary appears in a recorded video. She welcomes everyone. The CALRG has a special place in the history of the OU. In 1978, the OU was still young, and only in its 8th year. From the outset, the potential of the computer to enrich our students’ experience was recognised. [And then there was a technical hitch that made the video stutter and freeze.] The OU is the C20th’s greatest educational experiment, so an ideal testbed for the judicious and imaginative use of computers to help our students. A strong international reputation for excellence has persisted and flourished. A large cohort of PhD students is crucial, so a special welcome to current and past students who’ve come back today. The group has an enviable record of external funding, well-cited publications, and contributions to the success of the university in research assessment exercises. Their unique position comes from being in a university whose raison d’etre is around using technology in education. Congratulations and have a wonderful day!

[There was some informed discussion in the crowd about the nature of the problem, and the irony of this group being disrupted.]

Hazel Rymer – Pro-Vice-Chancellor, OU

Welcome everyone to this 40th year of CALRG activity! In the room and online. We’re celebrating a story of commitment, of sheer bloody-mindedness, in developing the reputation of the OU for world-leading educational technology. It rests on current and former members who have spread the word nationally and internationally. Welcome particularly our keynotes – Prof Diana Laurillard and Prof Neil Mercer. The group was started as a cross-university group of staff, led by IET. First director and founder of the group was Prof Sir Tim O’Shea. Fun stats: over 100 PhDs awarded, at least 10 former students hold professorships, including the OU’s only Regius Professor (and the UK’s only one in education). Excellent results in five research exercises. Not (just) about looking back, but forward. The programme is about visions that underlay the group’s work. Let’s get on with it!

Eileen Scanlon – Regius Professor of Open Education

Thanks for the kind words. Forty seemed quite special, so we were a little more ambitious. We’re looking at work that has persisted over 40 years. We have time for four visions today, but there are a lot more than that. I do love the quote from Voltaire – history is a pack of tricks you play on the dead. We’re playing a trick in trying to retrofit some group visions. You don’t have to agree with us about them, but it’s an attempt to link the past, present and future of the group. We’re hoping it’ll be interactive, and where you’ll disagree and be quite forceful. Tim O’Shea and Ann Jones are the only two from the founding group who are here. Tim, in his interview at the OU, was recruited to work with the Maths faculty, but was told not to do any of the educational computing he was famous for. Within a year we had CALRG. Hazel mentioned bloody-mindedness, it’s a great trait across the group. There’s also the sense of community, of sharing endeavours. Students are the key glue that helps a research group. One common acknowledgement in theses was thanking members of the research group, “past, present and future”.

Four visions, located within the OU.

  • Vision 1 – Learning is accessible for everyone. A strong line here. Not just for people with disabilities. Openness, inclusion.
  • Vision 2 – Adapting teaching.
  • Vision 3 – Learning at scale. The biggest course the OU had was about 15,000 students. So this notion of scale and reach is one we can explore.
  • Vision 4 – Science learning. Where Eileen herself started. “Teachers will bite your hand off, Eileen”.

What matters? Many varieties of openness. Transparency and collaboration are key indicators of the health of a group. This has been a cross-university collaboration, led by IET. We make a point of being open, transparent and collaborative.

It’s more than sheer bloody-mindedness. Background to the visions. In 2013, a cross-university group worked on “Beyond Prototypes”, an in-depth examination of the processes of innovation in technology-enhanced learning. What makes a difference that lasts? Many case studies, looking at interviews with researchers and industry.

Found three things, key factors: persistent intent working through successive projects, an understanding of the complexity of the infrastructure round Technology Enhanced Learning (the TEL complex), and the process of bricolage. Bricolage is my favourite, it’s something like tinkering, the work you do is influenced by the tools available, it’s of its time – we push the cutting edge – but some of the choices are to do with timeliness in terms of the technology. It sounds a bit theoretical, which I’m not famous for. But in an in-flight magazine I found a definition of bricolage, so thought maybe it’s OK to use.

We’re going to try to produce a book, using persistent intent and bricolage as a frame on which to hang the description of our work.

It’s been exciting, but complex. Demands collaboration. Bricolage sounds like tinkering, creating tools that are no use in the future – but no, one of the key things in this area, even when the product you produce is of its time, the fact that you made your teaching ideas explicit, inspectable and testable – whether RCTs or more qualitatively – that’s what makes the area exciting. Make your ideas concrete, and people can inspect them. There isn’t a distinctive methodological approach in the group, which can be a problem, but so can sticking to one. Studying in teams that are interdisciplinary is part of the challenge.

Finally, a plug for OpenTEL. The CAL group is great, but I have to thank another group, which extends a little further, a priority research area, called openTEL. Has a lot of the features and people of CALRG, but draws even more people in to our work. A link between research and practice – the synergy between TEL research, and the mission and strategic interests of the University. There’s also our OpenAI group, and others. The openTEL group has funded the reception at the end of the day, so thanks to them for that.

Thank you.

Eileen then welcomes Diana. Her contribution to the OU and HE and ed tech policy is huge. We’re delighted she’s back. We used to pack CALRG in to a room and have tea and cake every day.

Keynote – Diana Laurillard

Reflecting on the CALRG vision and its diaspora

Chair of Learning with Digital Technologies, UCL Knowledge Lab.

Thank you for inviting me back. It was such fun working here. Wishing you a happy birthday. 40 years! That was 1978!

An image to reflect on that, and what it was like, and it was this. A British Council visit to Yugoslavia, a conference on ed tech, I was to do a presentation on interactive graphics for computer simulations to teach science. I knew Tim and Eileen’s work. The idea was to take this to Belgrade. The technology hadn’t caught up with our ambitions. I took with me a microcomputer consisting of three enormous boxes. One was the computer, one was the screen, a gigantic TV with a CRT, and another with keyboard and other guff. The plane landed in Belgrade, I watched the luggage trolley with my boxes on top, and I worried it would fall off. Customs were suspicious, even more when I said it was a computer. They got more suspicious and fell on it with knives until someone from the BC explained what was going on. You couldn’t show this with an OHP. You had to be bloody-minded to get your point across. Shows how foresighted Tim and Eileen were in setting this up.

I’m not a good digital archaeologist, rooting through my old CDs, it was hard to find stuff to characterise CALRG. The papers for the CALRG conference, I can compare by looking at CALRG papers up to 1995. Two word clouds. 40 years ago, it was the subject areas that dominated – science, music, etc. But today they don’t appear at all. Learning and education dominate, students loom large, and MOOCs replace intelligent systems. They’ve become the focus of my research, although I was skeptical. There were stupid things said about MOOCs, cruel myths. But if we as researchers explore and challenge what we can do, we get something really interesting.

Reflecting on those four visions, and how they are worked on here, illustrated by some things from the UCL Knowledge Lab.

Learning is accessible to everyone.

This makes me think about the UNESCO vision of learning for all. This requires something different to what we’ve been doing. We have to use learning technology to develop high quality HE on the large scale. The sustainable development goals – SDG4 requires 68m teachers by 2030. But technology can help. Teaching skills for the digital age – government panic about AI taking people’s jobs, asking what can we do – one of the first things is to help the teachers who are helping the students, continual change and innovation.

More online models to reach more key audiences? Transnational education, that’s ok for a few hundred thousand students who are well off, but to get to millions it needs online learning. British Council sees that too. TNE demand is currently 4m, but not commensurate with the 200m global demand.

Need to understand relationships between open universities and MOOCs. OU is lamentably undercelebrated within government. Other countries have copied what we’ve done here. Proven for undergraduates. It’s not free, but you get personal tutor support, tutor discussion, feedback, assessment, and summative assessment. But MOOCs are not suitable for undergraduates – it’s free so no personal tutor support, relies on peer discussion, feedback, assessment. But it is highly suitable for professional development. Teachers value this very highly. They trust their peers, the discussions are vibrant because they know what they’re talking about. You don’t expect an exam and qualification from professional development. Researchers value the pathway to impact on end-users. MOOCs have a role to play.

It’s a two-step cascade model. In the development context, there’s a cascade model – run workshops for experts and teachers at national level, then a long cascade where what arrives at the end is a pale reflection. But from a MOOC with 10,000 professional participants, if each of those trains 25 local students or adults, that’s 250,000, via blended learning. That’s how we can get to millions.
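The arithmetic of the two-step cascade is simple enough to make explicit; the function name here is hypothetical, just restating the talk’s figures:

```python
def cascade_reach(mooc_participants, learners_per_professional):
    """Learners reached by a two-step cascade: professionals trained on
    a MOOC each go on to teach a local group via blended learning."""
    return mooc_participants * learners_per_professional

# The figures from the talk: 10,000 professionals, 25 learners each.
print(cascade_reach(10_000, 25))  # 250000
```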

Teaching is adapted to meet learner’s needs

We developed MOOCs on FutureLearn on blended learning essentials, funding from UFI charitable trust. Running for 2y, 25,000 teachers engage (double that registered). 60% in the UK. The most successful prof dev in the FE sector ever. Has brought in HE teachers too. A way to get to teachers, they get a window on others’ practice. Access to tools and resources, talk to each other – they’re really cooperative and supportive. We learn a lot too. We can take this further e.g. with Padlet, sharing practice. We can explore the real contexts of our learners, which is incredibly important for the research.

Digital competence, technology-enhanced teaching for teachers, a different project. Progressing Technology-Enhanced Teaching – MENTEP. To enhance the self-assessment tool, we created this MOOC, with a form to submit their learning design and how that matched to the competence framework. Again, teachers share what they’re doing, and contribute back. Teachers are isolated in their innovation, but on these platforms they can share. Collaborative knowledge-building, the entire workforce needs to be engaged.

Then in research contexts, we have a project looking at refugee settlements, in Lebanon. RELIEF project (Refugees, Education, Learning, IT and Enterprise for the Future). The project is using MOOCs to develop community researchers’ and teachers’ professional development. Work with them, what they really need. They said classroom management, so we’re starting with that. This looks like it will work in these challenging refugee camps. The wifi goes down for a few hours a day, but you learn to work around it.

The co-design approach means to engage, co-design, blend, embed it in how they already operate, and work it so it becomes self-sustaining, because we will fade away. That has to keep iterating.

So this is when teachers get support to develop the community knowledge of how best to use computers in learning.

Teams can successfully teach any number of students at a distance

Team of researchers create a MOOC with 10k local professionals, each of them works with 25, those 250,000 people engage – that enables working with them, not dropping stuff on them out of a helicopter.

Some tools to do this – one is the Course Resource Appraisal Modeller (CRAM). Helps people understand what they need to go online. The cost models are different – large initial outlay, but amortised over many years of teaching.

TPD@Scale Coalition – many moved. We hope there is much help for teachers out of this collaboration. Reports often say that teachers or teacher education should do this or that, but rarely much on how they will.

Learners engage enthusiastically with science learning

Yes they do! Not just the teachers biting your hand off, the learners do too. We’ll hear more about that through the day.

The CALRG diaspora

It must be huge by now. I want to end with a rather self-aggrandising claim. Two incontrovertible statements, and a conclusion.

There is no clearer force for good in the world than education.

There is no way to provide education for all except through digital technologies.

Therefore, educational technologists are doing the most important job in the world.


Rose Luckin – Two questions. A slide at the beginning, and near the end. The outside world that we have to cope with. The slide that showed how things had changed, from disciplines to learning. The curriculum doesn’t change, it’s still disciplinary. We’ve changed but it hasn’t. The second is to do with funding. So many initiatives that are really good, do engage in participatory design, still flounder in that transition to scale. Struggle to see how we impact more. How can we do more of that? Second is not related. At what point do we as researchers admit that we’re wrong. It’s hard to publish things that haven’t worked. It’s difficult to accept when we haven’t got things right. Is there a contradiction as researchers?

Diana – that was at least three questions. [laughter] Curriculum – I don’t agree, I think it has changed. [I was thinking of schools] In university it has changed. In schools politicians get in the way, we get inappropriate things demanded. How we affect those things is part of what we do in co-design, a stakeholder mapping. In change in education that’s always the education ministry. In Lebanon, we start with the guys at the top, and they are all guys, then we work down to the people who do things, we work with all of them. We need them in the MOOCs as well. The co-design works with all these different groups and sometimes mixes them up. When you hear the stories about what it takes to do anything in the camps, the people from the ministry learn something they don’t know. Sustainability is absolutely critical. With Blended Learning Essentials, we started on day 1 working with government and agencies to take it over; it wouldn’t cost much to keep it running. They came back saying it’s not in alignment with their strategic plan. Unbelievable! What can you do? I spent 3y in the Dept for Education and Skills. No point using government to make viable change. To make things work, had to go back to academe and work with the teachers. They know stuff, they’re committed. That’s the best I can offer.

Ray Ison – On the viability model, I’d challenge you, the reason it’s working is the FL platform provides innovative space because it’s not settled. Why aren’t we as the OU doing what you’re doing ourselves? That’s taking a design turn, absolutely the way we have to do it. Are we not being blinkered by our own adherence to the orthodoxy of named degrees and education meeting prescribed means from others.

Diana – that’s what universities have to be, the market creators of new knowledge, that’s why research is so important. The OU has to take credit for the existence of FutureLearn. How you use it is up to imagination. On co-design, the professionals know stuff we don’t. A perfect design cycle. With younger students it’s more difficult. Different with students than professionals.

Vision 1 – Learning is Accessible for Everyone

Martin Weller & Kate Lister

Kate starts. Four parts – Martin with a video on the Revenge of Open. Then Eileen on remote labs, me on inclusive practice, then all of you engaging. Four themes – making the inaccessible accessible.

Martin – Revenge of Open

Starts with a Star Wars crawl. We should embrace the woolliness in the ideas of open access – it enables us.

Different models of open access, illustrated with hats.

He starts with a horned helmet. The traditional model. Open entry, distance education, modular, part time. They’re radical. Ed tech startups seem to emulate it. Like the Vikings, hence the hat, it swept the world.

Next a white curly wig. Another interpretation is boosted by the Internet: Open Educational Resources. People can do new things with it, as well as accessing it. It’s a legal process (hence the wig) as much as anything else.

Then a yellow builder’s hat. MOOCs. The cost is seen as a barrier, but limitations on rights we didn’t see with the previous. The cost of removing the cost barrier is a loss of support. Constructing barriers.

Then a face mask with respirator. (Chemist) This is mixing everything all together.

Finally, (with no mask) open access takes many different forms. We shouldn’t think of open access as one thing, but as a group of related practices. We should constantly review these, always refine what it means to be educators and an open university.

Eileen – Remote Laboratories and Remote Access

We actually haven’t moved on from thinking about science. Past work – Practical Experimentation by Access to Remote Learning (PEARL) project, Enabling Remote Access, OpenSTEM Lab.

Acknowledgements – small list – many people involved.

How does the introduction of technology change science learning?

There’s a practical problem with practical work in science. Home experiment kits – lasers in the post – or, as the Daily Mail had it, Death Rays Through The Post.

Martyn Cooper, PEARL project, EU Framework V 2000-2003. Providing access at a distance. How to turn some of our introductory science courses in to something people can experience at home or elsewhere. Other partners did manufacturing engineering and electronics. But at the OU, took the physics experiment from our programme – spectroscopy – which previously you had to build yourself at summer school. If you think through how to deliver it at a distance, you improve it for everyone. Helps accessibility and inclusion. From a distance, using a robotic spectroscope set up here in a lab, students could do the work they would have done had they been able to attend summer school. This was a complex task.

That is a forerunner of a lot of the work very successfully developed by the Science Faculty, now the STEM Faculty, with a grant from the Wolfson Foundation. History – home experiment kits and broadcasts, residential lab-based classes, on-screen interactive experiments and instruments, livestream experiments, remotely operable analytical instruments.

Another project, led by Trevor Collins, trying to make geology fieldwork accessible. ERA – Enabling remote geology fieldwork by transient wireless networking. Opening practical science for everyone. Persistent intent. Ancient Mountains course (SXR339), came from a course team query about setting up an alternative experience for learners who couldn’t attend. Started with what geologists need to see. Developed the idea of the remote activities model. Students brought as close as they could be, communicating with others. Currently this is working in Access Anglesey. In 2017 there was an accessible field trip funded by the NSF; in 2014, a 3.5d field course on hydrology and meteorology in the field.

Out There and In Here – led by Anne Adams. Cross-university team. EPSRC funded project. Collaborative teams working in a mixed environment, some ‘in here’ working in a command centre, and others out in the field – ‘out there’ – who send and view images. Bringing lots of information together from different sources.

These are examples of the TEL Complex – you can look at what is going to be possible with near-future technology. Persistent intent, bricolage – but also interdisciplinarity. This was like that.

Kate Lister – Evolving inclusive practice

How we’re applying the lessons we’ve learned to our practice.

Another project – EU4All. Who’s heard of this? [most people] Large EU project, 13 partners. Wide stakeholder engagement. Two outputs are a model of professionalism in accessibility, and a learner-centred framework for personalisation of content and services. Three lessons: accessibility requires a multi-faceted, multi-stakeholder approach; accessibility continually evolves – persistent intent is required; understanding user needs, experience and preferences is key.

Three examples where we’re taking these lessons and applying them.

SeGA – who’s heard of this? [almost everyone] A small team, in LTI, with a goal of embedding accessibility. We connect people to a wider community, multi-stakeholder approach. Has been around for 8y, has revolutionised how accessibility is operationalised at the OU, innovative in the sector.

Second example – the IncSTEM project, led by Trevor Collins. Aim to scale up examples of inclusive practice to the HE sector. 8 OU case studies/mini projects. It’s a huge project. Will take the example of the online practical work, the OpenSTEM labs. The virtual microscope is one of the examples. Can look at an object (e.g. a Martian meteorite), in various orientations, and in the microscope. Working up practical previews – a chance for students to try the tools off-module. A professional demo, then hands on with expert support, time to ID and address accessibility issues, and time to ID and address tech/skills gaps.

Third – understanding students, ‘Our Journey’. Students have diverse backgrounds and goals, but hard to evidence the barriers at scale. Collaborative development. Represent student journeys from the student’s perspective. Shows how various aspects impact on their learning. Can be used online or face to face. Data at scale or guided reflection.

Themes covered: making the inaccessible accessible; making things better for everyone; designing for diversity; breaking down barriers.

The future

Take 3-4 minutes, talk to those around you, pick one of these themes, ID issues you want to see on the research agenda.


Tim – Everyone talked about scalability. That ought to be on the agenda. Eileen’s thing with the rocks, brings to mind many games where people are in virtual environments, or literally running round the streets of Dundee. One of the places to learn from is the big successful multi-user computer games.

Someone – broadening term of accessibility, to include English learners, people from diverse backgrounds, making learning content accessible to them.

Do feel free to communicate more ideas to us!


Vision 2 – Teaching is Adapted to Meet Learner’s Needs

Ann Jones & Bart Rienties

Ann starts. As one of the original members of the CAL group, I thought I’d go back to seek a framework looking at the early days, in a report from the first annual conference in 1981. Then Bart will talk about more recent work.

I’ll talk about three themes in the report – models of learning, methods for studying learning, and institutional research: evaluation.

A photo from 1979, with the CALRG sitting on the grass, in Cambridge, in the garden of APU.

Models of learning

Some were interested in developing production systems – for psychological modelling, computer programs consisting of a set of rules, ways of deciding between different rules, and an interpreter to run the system. A focus on collecting student protocol data, trying to understand what’s going on – but some of us didn’t progress to production systems. There was a strong relationship to supporting student learning, and a range of domains – maths, physics problem solving, and novices learning to program – my [Ann’s] interest.
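A production system of the kind described – rules, a way of deciding between rules, and an interpreter to run the system – can be sketched in a few lines. This is a minimal illustrative sketch, not the systems the group actually built; the rule contents are invented:

```python
# Minimal production system: working memory of facts, rules as
# (condition, action) pairs of fact sets, first-match conflict
# resolution, and an interpreter loop that fires rules until
# nothing new can be added (quiescence).
def run(facts, rules, max_cycles=100):
    facts = set(facts)
    for _ in range(max_cycles):
        # Conflict resolution: take the first rule whose condition
        # holds and whose action would add something new.
        for condition, action in rules:
            if condition <= facts and not action <= facts:
                facts |= action
                break
        else:
            break  # no rule fired
    return facts

# Toy model of a novice's reasoning about a program fragment.
rules = [
    ({"sees loop"}, {"expects repetition"}),
    ({"expects repetition", "sees counter"}, {"predicts termination"}),
]
print(run({"sees loop", "sees counter"}, rules))
```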

My thesis work focused on novices learning programming. She showed her report on ‘SOLO’. Participants came to the university, worked through the materials on their own, then recorded talking aloud about what they were doing.

She showed some transcript data from a student talking about how they had gone about a programming task. Some of the more interesting examples are when they have difficulty.

Feeding in to teaching – it fed in to how the design of instructional materials supported understanding. CAL was also used in courses. Tim developed a computer game, used in a mathematical course – developing mathematical thinking (EM235).

Some of our systems were pretty advanced. The CYCLOPs system developed at the OU could capture written input as well.

Institutional research – we evaluated 3 tutorial CAL systems in Science and Social Science courses. (in the early 80s). Some used at summer school. Using questionnaires, observations at summer schools, interviews, automatically recorded usage, and student-completed log books.

Case study – Evaluating tutorial CAL. MCQs answered at home, went to study centre to access a terminal, would get further questions to answer which were more interactive. Use was lower than hoped, and dropped over time. Questionnaires were built into the tutorial. Overall, there were barriers to negotiate. Students had little knowledge of computers – hard to get on system. Other students around, some found it an anxious experience. Tradeoff of perceived benefit and hassle.

Another case study – work with metallurgists – Canan Blake et al 1999. Evaluating another tutorial, this one in a game format, on interpretations of phase diagrams, used at residential school. It worked well, most students bought a copy. Designed for individual use, but tended to be used in pairs.

Reflections on then and now. The drivers – understanding student behaviour, improving instructional design – are very similar. But in our evaluations we were looking at one part of the course (CAL), because we could change that. Research students were crucial part of group’s work, as now.

Bart – back to the future

Technology portrayal can be positive or negative.

Bart does a quick poll of those in the room – “In comparison to 1979, teaching at the OU is now adapted to the needs of students”. Most are neutral, but just under half agreed or entirely agreed.

Another one: “With the affordances of analytics and AI, within five years teaching at the OU will be adapted to the needs of students” – 36% agree, more neutral, quite a few disagreeing.

We’ll do a pre-post test.

OU Learning Design framework. And how the way we teach has an impact on student satisfaction and retention. Communication is the best predictor for engagement, and for whether they pass the module. Since 2016, have done amazing work on identifying what teachers are actually designing. Quan Nguyen took this further, mapping what the students are actually doing (on the VLE) and how active they are, and how that links to the learning design. You see linkages, and disconnects. Also, how we teach courses at the OU varies widely. Some are peaks and troughs, some are steady all the way. 69% of what students are doing in a week is determined by us, teachers.

Can also see when our students are actually studying. Compare successful to failing students. The vast majority of students do not follow the course structure! [In the sense of what activity is done at what time.] Some very clear trajectories. Also analysing what students are saying, we can predict emotions – positive, negative, neutral, mixed – just on writing. Our tool is much better than the other tools available.

So, we can look at what effective teachers do, look at what the best paths for students may be, understand what students ‘think’ and feel, and provide personalised feedback on what to do next. But are we ready as the OU to do this?

Return to the questions. Who would change their answer to the 1979-to-now one? Not much change, perhaps a broader spread. Then: will it be so in five years?

Final question – what are you hoping that CALRG/technology will bring in 5 years to the OU?

Some suggestions: design for diversity, replicability, larger inclusion of student-centred design, happy and fulfilled students, robust meaningful analysis of big learning analytics data, getting research into practice, understanding students’ needs at scale, better evidence about our treasured theories of learning, personalised and more active learning for time-poor students, research-informed new regulations.


Stylianos – I see we have a better way of collecting big data, making meaningful assumptions about students’ experience of learning. We have gained a lot, but have we lost anything? Ann’s work on close scrutiny, with precise social science methods, compared to big data.

Bart – Yes. That’s why I like CALRG: we need interdisciplinary researchers. If the learning analytics people were in charge, you wouldn’t want to live in that society, because our view is so limited. You need that triangulation of perspectives. We’ve had 40 years of research, but lots of the problems were known but not empirically proven. Now we can show that 69% of what students do is determined by us as teachers. We should get our skates on.

Ann – We do still bring in students to try to get a fine-grained view of what they’re doing, interacting with systems, or students with disabilities. We haven’t lost that.

Tim – When Ann and I looked at students using the computer system Cicero, we were surprised. BCE – Bad Computer Experiences. e.g. the porter wouldn’t let you in, or you couldn’t get it to work. Another was emotional – feeling others were watching you, or that the computer would inform your tutor you didn’t really understand. That was helpful for the academic computing service of the time, as it explained why usage was dropping off. All of this is sunny uplands. Is there not a proportion of students for whom the modality is adverse? The assumption is that this is great for everybody. Is it?

Bart – You’re right. We didn’t show OU Analyse. We couldn’t understand it: some students never clicked on anything and still passed. How is that possible, when they’re not doing what we expect? But it turned out they were in secure environments and couldn’t click. The vast majority of students are not following the course structure, but we focus on it.

Tim – A methodological problem, still around. Students are instrumental. Ask them why they did something, and they’ll answer. Ask them why they did not do something, and they’ll say they were too busy. They won’t say it’s because they thought they might make a fool of themselves, etc. How do you get underneath that to the actual reason?

Ann – We have some students who do not want to engage online. Some are on machines all day; the last thing they want is to do the same in the evening. A challenge for us.

Andrew Ravenscroft – Following on from Tim. With LA, the problem is there may be crucial factors you can’t capture. We measure what we can easily find. Would you capture a moment of insight? Probably not. A cautionary frame round this. Also, a lot of work on LA is less advanced on how you intervene to make things better. We can see students who may fail; there’s too much attention on that and less on what we do to intervene. I have students who are aware of the difficulties, but getting them to improve their performance is really difficult. We need to focus on those interventions.

Bart – It has to be actionable feedback. Giving the feedback to teachers helps them give feedback. Giving feedback to students can help them if – and it’s a big if – they are able and willing. So how can we provide really well-designed activities where they’re willing to engage with our feedback? Maybe we should think about how we collect feedback, so we know before we give the feedback. We can do so much better at the OU than a one-size-fits-all solution. We have the power in the research community to help.

This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.