
COVID-19 coronavirus and data

How does COVID-19 coronavirus illustrate timeless truths about data?

This is a version of two Twitter threads: one on how the outbreak illustrates timeless truths about data https://twitter.com/dougclow/status/1228248715728740352 and the other about authoritative information on the outbreak https://twitter.com/dougclow/status/1228246890287968256.

NB I do have some health and medical background but I am a data and learning professional, not a clinician, epidemiologist, or public health person.

[Photo: Canarian Raven]
“The virus is called COVID-19. No R. Call it CORVID one more time and I’ll peck your eyes out.”

Data science, BI and in fact any statistics all start with simple counting. That first bit is surprisingly hard, and getting it right is often most of the work. The current COVID-19 coronavirus outbreak shows this up nicely.

You can make as sophisticated a model as you like for an outbreak. You can estimate things like the basic reproduction ratio (R_0: how many new cases each case is expected to cause, on average, which tells you how far it’s spreading), the case fatality rate (what proportion of the people who catch it die of it), or the likely extent of the outbreak (how many people might catch it), along with all sorts of stuff about how fast this is all happening or likely to happen.

But all that crucially depends on simple counting: how many have it at a given point, how many have died, etc. The latest figures show how hard that is. The number of new cases in China jumped from 2,000 on Weds to 15,500 on Thurs – because the counting methodology was changed.

The numbers have changed retrospectively, too – I copied those numbers down yesterday, but today it looks like it was a 400 increase on Weds and a 15,100 increase on Thurs. This would be hard to get right even if there weren’t a massive health crisis there.
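
To see how directly that sort of change feeds through into the headline quantities, here’s a minimal sketch in Python (illustrative numbers, not the real figures, and not an epidemiological model): the crude case fatality rate is just a ratio of two raw counts, so redefining what counts as ‘a case’ moves it immediately.

    def naive_case_fatality_rate(deaths, confirmed_cases):
        # Crude CFR: deaths so far divided by confirmed cases so far.
        # Both numerator and denominator depend entirely on how you count.
        return deaths / confirmed_cases

    # Two hypothetical snapshots differing only in what counts as 'a case'
    # (e.g. lab-confirmed only vs clinically diagnosed as well).
    print(naive_case_fatality_rate(1_000, 45_000))  # ~0.022
    print(naive_case_fatality_rate(1_000, 60_000))  # ~0.017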

Counting infections is always tricky, but surely deaths are easier? Turns out there are surprisingly difficult edge cases at the edge of life, but those don’t come up often, and almost everyone agrees about most deaths.

But even then, you’re probably getting data from multiple sources and combining them and that can lead to problems. Like today’s news that 108 deaths have been removed from the figures because they were double-counted.
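
As a toy illustration of how that kind of double-counting creeps in when sources are combined – hypothetical record IDs, nothing to do with the real reporting systems – here is a minimal sketch:

    import pandas as pd

    # Two hypothetical reporting sources covering overlapping areas.
    source_a = pd.DataFrame({"record_id": ["d001", "d002", "d003"]})
    source_b = pd.DataFrame({"record_id": ["d003", "d004"]})

    combined = pd.concat([source_a, source_b])
    print(len(combined))                               # 5 - the naive total counts d003 twice
    print(len(combined.drop_duplicates("record_id")))  # 4 - after de-duplication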

It’s easy as a data scientist to say we need to invest in better data, and sometimes that’s right. But getting good basic counting data is hard, and expensive, and cannot be the absolute priority. The data you’re dealing with will always be messy to some degree.

Speaking of degrees, this crops up in education and learning. ‘How many learners do we have right now?’ is the basic question that is the denominator for pretty much any learning or teaching metric you care about.

And that is surprisingly hard to answer sometimes. There are late registrations, retrospective registrations, de-registrations, retrospective de-registrations, provisional versions of all those, and that’s just dealing with individuals.

When you have organisations buying in learning, it gets even worse: how many are provisionally ordered, how many are finally ordered, how many are catered for, how many show up, and how many are invoiced for are all different, and not the same as how many learned anything.
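
A tiny sketch of why that denominator wobbles, with entirely made-up records and one counting rule picked arbitrarily from the several defensible ones:

    from datetime import date

    # Hypothetical registration records: (learner, registered_on, deregistered_on).
    records = [
        ("ann",   date(2020, 1, 10), None),
        ("bob",   date(2020, 1, 20), date(2020, 2, 1)),  # later de-registered
        ("carol", date(2020, 2, 15), None),              # late registration
    ]

    def learners_on(day):
        # 'Registered on this day' - one rule among many you could defend.
        return sum(1 for _, start, end in records
                   if start <= day and (end is None or end > day))

    print(learners_on(date(2020, 1, 25)))  # 2
    print(learners_on(date(2020, 2, 20)))  # 2 - same headline number, different people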

Speaking of invoices, cash at least should be easy to count? It should be clear when a customer paid us, right? Oh, my sweet summer child. That sound you hear is the entire accounting profession sniggering.

Suffice to say that the same payment can legitimately have different dates for cashflow, annual accounting, VAT, other taxes, and who knows what other purposes. Organisations are incentivised to manipulate this data, and most organisations respond to incentives.
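
To make the data-modelling point concrete: a single payment is better represented with several distinct date fields than with one ‘the date’ column. A hedged sketch with hypothetical field names, not accounting advice:

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class Payment:
        # One payment, several legitimate dates: pick the wrong one and your
        # cashflow view, annual accounts, and VAT return will all disagree.
        invoice_date: date             # when the invoice was raised
        tax_point: date                # the date that matters for VAT
        accounting_period_end: date    # which year's accounts it lands in
        cash_received: Optional[date]  # when the money actually arrived (maybe never)

    p = Payment(date(2020, 3, 25), date(2020, 3, 25), date(2020, 3, 31), date(2020, 5, 12))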

Summarising a wide ramble: Even the simplest of data, like ‘How many people have COVID-19?’ can be surprisingly hard to get authoritatively. Getting better data is rarely a business imperative. Be cautious about interpreting your advanced statistical models.

Also, be kind to people who are working hard to do really difficult jobs in really difficult circumstances. And don’t make it harder by spreading misinformation.

Check with authoritative sources before passing on information. It does not help to spread stuff you think might be dodgy or far-fetched ‘just in case’. Most people who pass on misinformation don’t mean to cause problems. Check it’s right first. You can help protect your friends and colleagues from this hazard.

So, having said that, how do you know what’s right? Check with authoritative sources, as Wikipedia always tells you. Wikipedia has excellent info, which is being updated rapidly as the situation changes: https://en.wikipedia.org/wiki/2019%E2%80%9320_Wuhan_coronavirus_outbreak

In the UK the risk of infection is very low, as in most places outside Hubei.

In short: If you think you may have the virus, stay where you are and call 111.

Outside the UK, there are authoritative sources like the US CDC https://www.cdc.gov/coronavirus/index.html and the WHO https://www.who.int/emergencies/diseases/novel-coronavirus-2019

For hard research info, there’s material on the WHO site (currently under ‘technical information’ and ‘global research’) https://www.who.int/emergencies/diseases/novel-coronavirus-2019, and many publishers have made research freely available, with some running free-access portals on the topic.

If you like statistics and numbers, here’s some good aggregation from Johns Hopkins University, with a few visualisations and a link to a well-maintained GitHub CSV. https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6  

This data is pretty good, but don’t treat it – or any data! – as representing the objective truth.
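
One practical habit that follows from all of the above: when you pull an aggregated feed like that into your own analysis, look for the fingerprints of the counting problems described earlier. A minimal sketch, assuming a local CSV copy with hypothetical column names rather than the actual file layout:

    import pandas as pd

    # Hypothetical local copy of an aggregated time series: one row per region per day.
    df = pd.read_csv("covid_cases.csv", parse_dates=["date"])

    daily_change = df.sort_values("date").groupby("region")["confirmed"].diff()

    # Negative day-on-day changes are a tell-tale of retrospective revisions or
    # de-duplication, like the corrections described above.
    print(df.loc[daily_change < 0, ["region", "date", "confirmed"]])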


This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission is needed to reuse or remix it (with attribution), but it’s nice to be notified if you do use it.

Project start checklist

What do you need to have thought through before you start a project?

In a large organisation, like the one I left last year, there’s often a lot of heavyweight project management overhead. The downside is filling out lots of paperwork, but the upside is that it does make sure that the project is in good shape at the start. If it’s good project management paperwork, of course.

Working in smaller contexts lets you be much more agile and swift. Now I’m an independent consultant, I can start a project the moment I think it’s a good idea.

But how do I know it’s a good idea, and how do I know it needs more work, or is better not done at all? Obviously, using my skill and judgement developed over many years of working on projects. And that tells me that if it’s more than a teeny tiny bit of work, it’s well worth spending some time up front, to systematically work through what needs to be in place at that stage.

Some people would be happy to simply do that in their head and be done. But I am a huge fan of checklists. This post isn’t about the wonders of checklists. Suffice to say, the more experience I get, the more I think they are wonderful. They help make sure that the obvious things actually do get done.

So I have found myself wanting a project start checklist: a list of things that need to be thought through before I start a new project. It’s here, at the bottom of this post.

The checklist is deliberately skewed towards freelancing and consultancy, rather than internal projects inside large organisations that already have more project management overhead than they perhaps need. So, for instance, it doesn’t have any of the “set up project board” and “identify project sponsor” things you’ll find on other project start checklists. If you need to do those things, you (should!) already have a formal project management process. This checklist is for when you don’t have that. It would also work for small-scale guerilla projects inside a large organisation that don’t need official sign-off. It assumes the project is highly custom and not a standard one, which covers pretty much everything I do, but it is way more than you’d need if you are a freelancer doing well-established work – like, say, you’re a graphic designer and simply doing another logo for a new client. (Although even then some of the elements will be useful.)

The way I use it is to go down the list, systematically, and make sure I have a good answer to each question. To work the checklist magic, it is important to do them in order and not to skip any of them. (Obviously, if you don’t think an item belongs on the checklist, ever, delete it and never worry about it again.) If I don’t have a good answer to a question, a good argument for why it doesn’t matter in this context will also do. If I can’t answer a question and don’t have a good excuse, that’s a prompt to find out more before I start. Not knowing can be a red flag that this project is ill-conceived. Sometimes it’s the right decision to start a project before it’s all tied up and thought through. But better to do so having that potential risk in mind, and even better to mitigate that risk before it happens.

On to the list itself!

[Photo: Takeoff]

Is this project worth doing?

Ideally, it’s worth doing because I believe it will make the world a better place.

The underlying idea here is to avoid doing things that would be better not done. I like Peter Drucker’s distinction between “doing the right thing” and “doing things right”. Most of project planning and delivery is focused on “doing things right”. It is all too easy – and indeed necessary a lot of the time – to forget about the big picture, get your head down, and get it done and delivered. But the time before a project starts is a golden opportunity to pause and ask the big questions, like “What’s the point of this? Why bother?”.

Even if you think the project’s a terrible idea, it may be worth doing for other reasons. Not everyone can afford to be picky about what work they do, in which case, “it is worth doing because it will pay me money I need and I don’t have a better alternative” can absolutely make it a yes here. But if you’re doing that, it’s good to be aware that you’re in it for the cash (or the exposure, or the experience) not for the project itself. That way you’ll be in a better position to work to get more of what you want and minimise what you don’t.

Who actually wants this project to happen? Who directly benefits?

This is related to the first question, but different. I draw a distinction between someone who actually wants a project to happen because they themselves want the results, and someone who wants a project to happen so they can say that something like it has been done, for presentational or organisational reasons. So, for example, an organisation might want to be seen to be doing something about an issue, but they don’t actually care about it. They set up a little project, perhaps engaging an external consultant, so they can point to that as having done something, but they are not engaged in the issue. That doesn’t make it a no, but it is a different context to one where there are actual direct beneficiaries.

Fundamentally, it is often the case that people who are intent on doing something for someone else’s benefit have a different view from that person of what would constitute a benefit. I might sincerely want to do good for you, but you will usually have a better insight into what is good for you than I do.

This isn’t to say that I’m against building new things that people don’t yet know they want: far from it. (Think of the apocryphal story of Henry Ford saying that if he’d asked people what they wanted, they’d have said better horses.) But you need to know when you’re doing that, and how you will know that you have given them something that they do want.

Who are the other stakeholders and what should happen to them?

To many British people of a certain age, banging on about stakeholders makes you think about Tony Blair and the 1990s, or about frustrating project management paperwork, or both. It can be a little overworked, but I do think it’s worth thinking through who else will care, or ought to care, about a project you’re involved in, and what you’re going to do to them. Or with them. With them, totally.

Where is the value generated? Where is the money coming from?

This is again related, but may give different answers. For working in the commercial sector, it’s vital to understand where the business makes its money and how the project will impact on that. For other public and third-sector work, it’s vital to understand how they are funded and for what purposes, and how the project will impact on that. Grant funders – whether that’s the European Union, a research council, a foundation or a charity – will have a set of conditions on their funding, but there will also be a more implicit set of ideas about how projects they fund ought to work.

Will I get paid, and when?

This is, of course, the freelancer’s main question.

The obvious thing is to make sure the mechanics are in place. Is there an agreement to pay me? Does there need to be a purchase order, and if so, has it been raised? Have we agreed the invoicing pattern, and do I know who is responsible for paying the invoices? I don’t need much paperwork: I log every potential client, and I log every project. But the client and/or funder may have paperwork or processes to follow before the project can start. I can start work on my own initiative, but that may raise the risk that the project never officially happens, and I don’t get paid.

Underlying the mechanics (or overlaying them?!), there’s the question of whether they will pay when the invoices come in. What’s their cashflow situation? What’s their payment track record? This can crop up at all levels. A small company may have cashflow problems and have to defer paying you. A large company may have an imperative to juice this quarter’s financials and defer paying you. A university may defer paying because it is extremely bureaucratic and it’s nobody’s job to make sure you are paid in a timely manner.

Why do they need me?

Usually, this is because they need to get something done but don’t know how to do it, and so they’re bringing me in because I do. That will usually mean part of my role will be explaining what I’m doing and why. Sometimes it might be that they do have the skills but they don’t have the capacity, in which case there’s less need for explanation.

There are also projects with an aspect of management consultancy to them. Anything involving organisational change falls in to this category, but so too does most work on training and development. Here, it’s very important to understand the political context within the organisation before starting.

Do we have a shared vision for the final outcome?

If they’re bringing me in because I have skills and knowledge they don’t, my experience will be very different to theirs, so it is almost certain that what I imagine will not be quite the same as what they imagine. We need to do the work to ensure we agree what we’ll have at the end.

I can be happy to go ahead without this, so long as the project plan has some way of bringing our visions together – although if that’s the case, I’ll usually prefer to have a break or review after the converging-vision phase.

Does the project plan make sense?

This is the bread-and-butter work of thinking through the project and planning it, or understanding the plan if someone else has produced it, and working out what my role will be. I need to work through what I’ll be doing, and how, and explore all aspects of the project iron triangle (quality/scope, time, resource). This also includes how it will dovetail with my other commitments.

This question is where most of the planning effort goes in, but it doesn’t need extensive reminders on a checklist.

What is out of scope?

Obviously, a complete list of things out of scope of any given project is going to be pretty large. However, I do like to explicitly write down the things that one might reasonably think were included, but are not. This can be really useful to clarify with the client or funder, particularly if I can get it in to the paperwork.

What will you end up doing anyway?

Sometimes I want to do something to a certain standard of work and the client doesn’t want me to spend all that time (and/or pay me for it), so they say don’t bother with that. In many cases that’s fine. It can be an important part of making sure we’re getting best value out of the work. Not everything has to be done to world-class research standards, and outside academia, done quickly is usually more valuable than done perfectly.

However, there are some things I simply can’t shortcut. One example for me is preparing for a presentation, talk or speech. I will always put the work in to be prepared to my own standards, even if that means skipping things I really want to do or staying up absurdly late. And I have tried but failed to make something without checking what similar things other people have done already. I don’t need to do a full lit review before starting a project, but if I haven’t spent at least a few hours exploring what’s been done in the area recently … I know I will end up doing that anyway. And I am an incorrigible data nerd, so if I collect some data, whether quantitative or qualitative, I know I will spend a fair amount of time getting to know it, regardless of whether I’m being paid to.

It’s better if I know to expect this than have it bite me yet again.

What if things are harder than expected?

A bit of thought ahead of time can help a lot here, and again the iron triangle (quality/scope, time, resource) applies: What aspects of quality or scope could I cut? Where could I find extra time? How could I get extra resource? How would I communicate and renegotiate if I can’t address the issue myself?

 

[Photo: Takeoff]

What could go wrong?

I like to do a project pre-mortem. This post isn’t about the wonders of pre-mortems, but they are a very useful tool. The idea here is you imagine that the project has failed and you’re working out what went wrong. How did it happen? It’s a cognitive flip: instead of only thinking about how it will succeed, you assume that it has gone wrong, and come up with ideas for how that could have happened. This can be very useful for spotting things that you are half-deliberately hiding from yourself because you don’t want the project to fail.

I’ve done a bit of flying in light aircraft, and like many aviators, I read a lot of air accident investigation reports. Often, when you read these reports, you can see that bad judgement was present at the start. So a useful question to ask when preparing to fly is “How would this look in an accident report?”. That can keep you on the straight and narrow, and out of obvious, well-known mistakes.

So, in this context, how would you talk about this phase of the project if it later turned out to be a disaster? What were the red flags, the early warning signs, the classic blunders, the usual procedures avoided?

What would huge success look like?

This is a question I picked up from a cheesy talk some time ago. It’s not my usual style: I’m quite undramatic and practical. I think a lot of massive success is luck. But I do believe in making sure you’re keeping the door open to runaway success should it show up, and not closing it off as a possibility so it never does. This often leads to practical decisions like making things easily scalable, being open, and so on.

B.R.A.I.N.

What are the Benefits, the Risks, the Alternatives, your Intuition, and what would happen if you do Nothing?

This checklist has already covered the benefits, but the risks and the alternatives need to be explored, as does the do-nothing option. This should also cover the opportunity cost of taking this project on. If I didn’t do this, what would I be doing instead?

And I always need a reminder to check what my intuition says. What does your gut say? What does your heart say? I am very much a brain sort of person, but instincts arise for a reason, and it’s worth paying attention if my analytical brain is saying this is a great idea but my emotional brain is reacting like it’s a terrible one.

(BRAIN is an acronym/method I have shamelessly stolen from decision-making around childbirth. It’s fair to say I have more experience of bringing new projects in to the world than new people, but I have found this exercise to be a useful one when faced with any major decision.)
[Photo: Brain Waves]

What about personal information?

This is the GDPR question. What personal information will be generated, used, and managed in the project? And what needs to be done about that? This can be a very big question, and can roam well beyond a quick checklist, but it needs addressing on pretty much any project.

Luckily for me, this is one of my interests, so it’s not too hard for me to do. If you don’t have that background, it’s worth getting some advice if you’re not sure.

What about intellectual property?

The main IP in my projects is copyright. Almost everything I produce in the project – writing, code, interfaces, graphics, diagrams – will have associated intellectual property rights. What is going to happen to them? The client is paying for me to produce them, but what scope will there be for me and them to use them later?

I am a big fan of free and open source software and of Creative Commons licensing. As an idealistic youth, my first enthusiasm for them was about the value of increasing access to things. But as my experience has grown, my main enthusiasm now is about the immense value of an open license in making sure that everyone involved will be able to build on their previous projects in the future.

A project brings people together. If the products of that project are available only under a closed license, it can be difficult, sometimes all but impossible, to get the necessary paperwork together to prove that it is OK to build on those products to make something even better. But if it was licensed openly, there’s no such problem: the license says anyone can build on them – and anyone includes the original contributors!

However, I’m a pragmatist and I’m very much of the view that not all projects are suitable for release under an open license. If this project isn’t, how are we going to manage the IP generated?

As well as the stuff generated by the project, it’s worth thinking through what is happening with pre-existing intellectual property: the stuff that I am bringing in, and stuff that others are bringing in. Do we need an explicit agreement about that?

In my line of work it’s less common for patents, designs, and trade marks to be involved than copyright, but it’s worth thinking through whether anything in that area is going to come up and dealing with it up front.

Again, this can be a complex area, but luckily for me it is one of my particular interests. This does vary considerably between jurisdictions – for instance, I know the US has work-for-hire laws that set a very different context.

What will I learn?

One of the things I enjoy most is learning about an entirely new-to-me area of human endeavour, so if there’s an opportunity to do that and get paid for it, I’m going to be very keen.

But even when it’s well within areas I’ve worked in before, there’s almost always the opportunity to learn something new, pick up a new tool, get better at a particular task, or something in that line.

My hope is that using this checklist will help increase the chance of learning positively from a project, and decrease the chance of it being the old joke of “another bloody learning opportunity”.


This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission is needed to reuse or remix it (with attribution), but it’s nice to be notified if you do use it.

Consultancy

As I said I would, I have left the Open University, and I’m working as a self-employed consultant. I’ve had several interesting proposals and discussions, some of which turned out to be too interesting to turn down.

One of the really liberating things is the wide range of possibilities for interesting work that are coming up. I’ve always had a broad range of interests, and really enjoy finding out new things and understanding new organisations, processes, and systems. Which is very handy for consultancy work where that’s often the first step!

[Photo: Mam Tor]

I’m also very much enjoying escaping large organisational bureaucracy, although of course for large clients I still have some of that. Best of all, for me, is the sense of proper responsibility for finances. I’ve plenty of experience in budgeting and monitoring spend on large projects, but it’s always been in a constrained framework and with other people’s money. Now, if I think an expense is justified, I can simply spend the money. If I need to travel (and the contract doesn’t have expenses separately, which is how I prefer it), I can just book the travel and accommodation immediately, instead of having to book via a travel agent I’m not allowed to talk to directly, who insists, via the intermediary, that the flight I want doesn’t exist, until I send a screenshot of Expedia back, and even then says it’ll be an extra £100 on top. If it’s feasible to travel by train rather than by plane to save carbon emissions, I can do that, even if the train journey works out more expensive. Even better, if I find a way to save money, it doesn’t become a potentially problematic underspend, it goes straight in my pocket (after the tax people have their cut).

I would like to get back to some regular blogging on here, but obviously paid work gets priority.

I do have a little limited capacity at the moment, with more in the new year, so if you’re interested in engaging me for a consultancy, you can read a bit more about what I can do on Hire Doug Clow, or simply get in touch.

New directions

I’m looking for a new career, building on my skills and experience in working with people, technology, data science, and artificial intelligence. At the moment, I’m wide open to ideas for what that might be, and in what domain. As most people who know me may be aware, I have very broad interests (everything is interesting!), which makes narrowing things down a challenge. But a good one!

This means that I’ll be leaving the Open University at the end of May, after 20 years. My plan is to leave academia too, after an even longer time.

[Photo: Arthur’s Seat, Edinburgh, viewed from Pollock Halls. (CC)-BY-SA David Monniaux]

Although it’s been nominally the same job, more or less, for two decades, it’s not felt like it. I’ve been lucky enough to do new and different things all the way through. I’ve worked on projects from tiny ones in the margins of my own time to huge multi-million pound international collaborations, and all sorts of things in between. I’ve worked in education, languages, health and social care, business, and all sorts of science. I’ve worked with so many wonderful people, including developers, statisticians, analysts, senior managers, technologists, project managers, event organisers, support staff, media specialists, editors, architects, building contractors, accountants, policymakers, librarians, and all sorts of researchers, scholars, teachers and learners. It’s been fantastic to learn from all of them.

I may write more later about the OU, but I will always have a special place in my heart for its mission, and for its amazing staff and students. I’m very sorry to leave all that – but it’s the right time for me to change direction. So when a voluntary severance scheme was announced, I leapt at the chance. Things have been quite rough for the OU over the last few years, but I do think things are starting to improve, and the future looks bright. I wish the Open University all the very best as I head off in a different direction.

Here’s to a new start!

What is learning? CALRG panel

What is learning? On Thursday 29 December, CALRG assembled a panel of four researchers to discuss this fundamental question: Bart Rienties, Liz Fitzgerald, Alice Peasgood, and Peter Twining. These are the liveblog notes. An audio recording was made and should be available soon.
[Photo: Bees, September '14]

CALRG40: Final session

Liveblog notes from the final session at CALRG’s 40th anniversary event, 19 October 2018.

[Photo: Fall Reflections]

Visions Reflections

Tim O’Shea and Mike Sharples

After a contribution by videolink from Mike Sharples in Shanghai, Tim launched in to a series of entertaining anecdotes from the early days of CALRG and IET. Sadly I missed most of the start of this part.

The group has the MIT demo culture. Even if you ain’t got a theory, build something that works and see it. Build it and test it.

The group applied the course team model in to research. Many projects with 4, 5, 6 people in them. That was imported from the course team model, an incredibly healthy thing. That makes it such high quality.

40y ago we were outlaws. A previous VC called me a flim-flam artist. And others tried to shut us down. Now there’s a message from the VC, and a PVC in the front row nodding. We were hiding in huts trying to make sure the university didn’t remember we existed and had lots of money.

The Open University is too small for its claims and potential. John Daniel’s first book: there were 10 [mega-universities] as of his writing, all of which are a bit bigger, up to the Chinese TV universities. It’s too wee. It needs to scale, to bootstrap.

I was invited to Georgia Tech to a symposium on affordable learning at scale. Has an online masters in CS, 8,000 enrolled students, approaching 10,000. Illinois online MBA with a few thousand, Michigan, Arizona State, a lot of activity. The ladder from MOOCs to micromasters to masters. At Edinburgh, many MOOCs, not at scale yet. Look at what Harvard are doing, Arizona State has set up a global freshman academy. Online first year taking people who wouldn’t get into an American university. Large numbers. At this symposium, it has to be a four-figure number. They’re inventing the OU course team. Bright-faced people talking about bringing a range of people, doing some QA, it made our courses better. It’s being reinvented to make online affordable learning. Target is generally a quarter of the regular fee. That wants looking at.

I really think you should look at computer games. They are at scale.

You should really look at teacher bots. Sian Bayne has done wonderful work on teacher bots, programs responsible for different parts of teaching. Edinburgh bots are frank and open and you can tell they’re not people. Nanotutors, Jill Watson. MSc in CS. Two of the tutors were not humans. They were programs. Georgia Tech, when there are FAQs or easy questions, the nanobot answers. If you’re a human, you pause to see if the nanobot will answer it for you. E.g. what is the pass grade, what is the best reading. Jill Watson was not only not identified. The students were told two were robots, but didn’t spot her. But one of the human tutors not only didn’t spot she was a robot but asked her out for a date. [argh!]

Learning analytics is very important. I take pride in having worked in the university that educated Thomas Bayes. LA would not work without Bayes. He was wanting to improve his performance at gambling, despite being a Protestant minister. Once you have cohorts of thousands, you can apply Bayes’ rule in a way you can’t with 50.
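
[For reference – my gloss, not Tim’s words: Bayes’ rule says P(hypothesis | data) = P(data | hypothesis) × P(hypothesis) / P(data), and with cohorts of thousands you can estimate those terms far more reliably than you can with 50.]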

If, ten years from now, the OU is 10x the size, then you will have been successful.

Richard Joiner – At the start, you said we were all rebels. Do you think the CAL group needs to be more rebellious?

Tim – It makes me nervous to get a message from the Vice Chancellor. The CAL group ought to be making the senior management uncomfortable. There should be an element of public bolshiness. A formative experience was being taken to meet the maths faculty at the OU. Like the gulag. A big man with a beard, said young man, what do you think your purpose is? I said, to improve the quality of maths learning. He raved that I was impudent, OU had the best set of maths teachers on the planet, there was no way that I could improve it. The existence of the group is pointing out it’s not perfect.

Mike – I want to ask Tim a question.

Tim – I recruited you twice, sunshine, so be careful!

Mike – I was at RMIT where our former VC is now. Martin Bean was, like Tim, saying we need to scale up universities. If there are some innovative tech-aware VCs, why aren’t they getting together? Why not more collaboration and coordination around the world?

Tim – UUK, the British VCs, a friend hated being there with men in suits telling each other lies. VCs are paid to boast for the universities. It’s incredibly hard for a VC to collaborate with another, it really doesn’t go with the job description.

Eileen – Thanks Tim and Mike for their interesting and rebellious comments.

Keynote – Neil Mercer

Should digital technology ‘transform pedagogy’?

Neil is at Oracy Cambridge, Hughes Hall, University of Cambridge.

I’ve worked at the OU more than anywhere else. I didn’t mean to leave, someone made me an offer I couldn’t refuse. I’m glad to be back here. Hearing things that’ll help the university into a better future. We all wish it well.

One strand of CAL research here is different to what’s been mainly talked about: the use of digital technology in classrooms. That’s what I’m going to talk about. You’ll get a different perspective.

I’m getting imposter syndrome, since I’m not a techie, I worked with people who were much more technologically adept than I am. Early work with Rupert Wegerif, Lyn Dawes, both had PhDs on computers in schools. I didn’t. In fact, when I completed my PhD at Manchester, there was one computer for the whole university and I had to book time on it in the middle of the night to do my statistics. I feel like the ordinary person on the team.

News story – ‘New technology to spearhead classroom revolution’, with the Education Secretary talking about the tech industry launching it. Responses from teachers: is it time to set up BECTA? “A wee reminder that teachers in other jurisdictions have a manageable workload through fewer working hours spent in the classroom teaching.” It felt like deja vu. When will they ever learn? Go back to the Secretary of State for Education in 2004: tech has already revolutionised learning. Earlier still: it will revolutionise schools and teaching. Cuban (1986): teachers have acquired a reputation as Luddites. This notion recurs, there’s wonderful technology offered to teachers who want to smash the looms. It’s a seriously misguided point of view.

What has research told us? Tech-led mode of introduction is very likely to create problems. Little research on how, if at all, they influence […] educational goals. (Electronic whiteboards example.)

We ought to be humble about what computers really can do, what they’re good at, and what they’re not good at. Excellent film he saw last night, mentioned Kurt Vonnegut: “a computer teaches a child what a computer can do. An educated human being teaches a child what a child can become”.

Schools and classrooms are a different context from reaching people through e.g. distance learning.

What should digital technology do for classroom education? Just give them a load of gear and it’ll revolutionise what they do. Should it? We’ve got excellent teachers, really good teachers, whose practice is not faultless, but is good. Should the tech help them do it more easily and effectively? Technology is a tool, we invent them to help us get jobs done more effectively.

Some software designers say, this software’s great, it’s got good activities in it. Software doesn’t have activities. What you’ve got in a computer is resources for designing an actual activity. You’ve got the basis, but the activity happens in the intersection of learner, learner, teacher and computer – not in the software.

Research in classroom-based education ought to address the affordances of any tech, teachers’ pedagogical practices, pupils’ learning requirements, what we know about effective classroom teaching. Often tech offered – especially hardware – that was second-hand, designed for others primarily. If teachers are good at teaching, tech should enable them to do it more efficiently, not change it. Should be focused on the learning requirements, not the content you can give them. We know a lot more about effective classroom teaching. Have just finished the largest ever classroom pedagogy study in Britain, looking at frequency of teacher activities and outcomes. We know a lot more about this, that should be the basis of designing electronic things for schools.

There are some generalities from those studies. One of them, the Education Endowment Foundation’s summary of useful things to know, one says – individualised learning [with technology] may not be as helpful as small group learning with technology or collaborative use of technology.

Here’s some of my projects that have tried to exemplify this approach. One of the CALRG ethics is researching with teachers, collaboratively. What do the best teachers do? How can the technology help them to do it? Bart mentioned this – we want to know what the best teachers do.

Showed a photo of an early project, in a school in Wolverton, old style large grey box computers.

Project with Sara Hennessy, Paul Warwick – Using interactive whiteboards (IWBs) to orchestrate classroom dialogue. Vygotskian approach to dialogue. We know that the ways teachers talk to students make a difference to learning outcomes, and how they talk back – it’s the whole dialogue that counts.

Example 1 – Using the IWB to link lessons. Give a sense of going somewhere, so they can perceive the journey. Teacher uses pictures of previous lesson as a resource. [Photo of teacher, showing classic current British classroom – whiteboard with projector on it, and walls absolutely covered with brightly coloured things – a very visually noisy environment.] We didn’t find negative features of this use.

Example 2 – Using block reveal – show a blank on the screen, e.g. three words blanked in a poem. Can encourage thinking, structure teaching, and maintain pace. Once the teacher’s designed it, it’s harder to change the structure of the lesson, content is less flexible, and it tends to be used to set up ‘closed questions’. Good teachers also ask open questions.

IWB enables teachers to provide engaging whole-class learning experience, integrate diverse resources, save and revisit resource to improve continuity and coherence – all more easily than without an IWB. [Interestingly, he says ‘IWB’ as the abbreviation – which is five syllables compared to six for saying ‘interactive whiteboard’.] But teaching is not ‘transformed’ by the IWB, it can dominate events or disrupt when it fails. The effective use of the IWB depends not only on tech skill, but also on teachers’ teaching skill.

Second project, Ingvill Rasmussen and Paul Warwick, Digitalised Dialogues Across the Curriculum (DiDiAC). Based on the ‘Thinking Together’ approach. A browser-based microblogging platform, Talkwall. Small number of schools in Norway and the UK.

Asked teachers about group work, teachers asked students to consider purpose and value of talk for learning, class agreed ground rules for Exploratory Talk (this is known to be really good for getting them to talk productively), used Talkwall to record ideas and decisions. (Important to get them to agree what they’re saying and write it down, and not just talk.) Consensual social norms governed use, not just features of the tech.

Can share this between schools. The collective thoughts of the students become a common resource, the teacher can build on them, the students can refer to them. It’s helping teachers to improve the quality of groupwork.

Project not finished then. It does help. It’s not flash or innovative, but it helps a more group-focused collaborative learning environment. It improves the variety and quality of questioning and talk by the teacher and children.

General conclusions

Not controversial to you, but might be to a Secretary of State for Education.

A transformational, tech-led approach is misguided and dysfunctional. Digital tech should not be designed to replace teachers, but to assist them – good teachers doing their job well. Need to assess this using an educational, not technological, perspective. It’s a toolkit for pursuing educational goals, not determining them. It’s particularly good for supporting collaborative learning. We need to assess the educational strengths and weaknesses of any new tech – especially when it arrives as 2nd hand kit primarily designed for other users. If developed from this perspective, it has so much to offer. It’s great. That’s what we should be aiming for.

Questions

Stylianos – The context of this research. Compulsory education, characteristic there is the development of a community. Students spend a lot of time with the teacher in the same environment, facilitates these activities. Are there any lessons we can learn from your research for people who teach in HE, where the engagement is not at the same level? There’s a speed-dating type contact between tutors and students.

Neil – Not sure I can give a clever answer. I agree HE is a different context. In achieving the balance between authoritative presentations like this one, and true dialogue, it’s difficult. At Cambridge, we have supervisions – it’s not cheap but it does it.

Kim – Talked about using good teaching to design tech, but what about the converse, what teachers find difficult or can’t do?

Neil – They’re aware of the richness of multimedia resources. Most of them aren’t natural techies. They want things that’ll make that more possible. There are ways that can be better. Kids are missing out if they’re not making the most of those resources. Other things, like citizen science, they find it hard to coordinate children. The one Mike Sharples did with bird feeders, the technology enabled coordination of individual records across several schools, it was great with primary kids. They’d have found that impossible. It’s a good way to look at it.

Patrick – Back to the first question. I was looking at the list of general conclusions, it applies to what we’re trying to do in HE. I feel that a lot of the research we do, it’s a challenge we face when we work with schools, but it is transferable. Is there something we do in HE that has no lessons, that’s so different, or should we be looking at this across all of the contexts?

Neil – It should be research with potential to apply across contexts. But if you ignore the particularity of the contexts you might go wrong. The aim of a primary teacher isn’t to reach beyond the 26 students they have. That’s going to affect what works. The scaling Tim mentioned, I can see why that’s beneficial for this institution and the students, but that isn’t the issue here. You don’t want to scale up a primary classroom to 100 kids. You do want to scale up to help 100 teachers see how they can do better. It depends what your question is.

Patrick – The Development goals do go across all education, not just HE. The pedagogies are around sharing experiences, broadening beyond the classroom, even though you start there. You’re underplaying your research.

Tim – To reinforce and comment on the second-hand kit point. Had the experience of visiting a large computer corporation about educational strategy. Had a sequence of ‘this would work well for schools’ – things designed for a commercial context, with no pedagogical intent at all. The problem education has is that it’s a small underfunded market. If you compare education to the world finance industry, it’s very small. Cyclops was ahead of its time, the BBC Micro, Cicero – why are they no longer extant? They never reached the numbers the IBM PC or MS-DOS did. There isn’t an economic structure around to sustain them. If you buy a spreadsheet it’s going to be Excel, not one designed for learning. A problem the civil servants of education ministers have is not realising that education is small beer compared to defence or finance.

Neil – That’s plausible, except you see Pearson is phenomenally wealthy, and Cambridge University Press pays for half of what Cambridge does. There’s money somewhere in it. How come the books sell?

Tim – Books are easier, you just print ’em and flog ’em.

Andrew – Falling in to the trap of simplifying the problem. The issue with scaling is context. Learning occurs differently in different contexts. We can’t avoid the fact that context defines. Steve Draper’s “Niche-based success in CAL” – tech adopted when it solves a problem. Need to be more sophisticated. Scalability doesn’t happen because learning contexts are different – e.g. VLEs. I’ve been paying more attention to understanding the learning problem before we think about the solution. Learning is a complicated thing, we can’t just wade in with tech without disciplined analysis of the problem.

Neil – I agree entirely.

Wayne Holmes – I like your final conclusions, particularly number 2 (tech designed to assist teachers, not replace). Any examples where the attempt to introduce tech shows up problems?

Neil – Sara?

Sara – The whiteboards is one example. We weren’t unaware of the pedagogical issues. But when available worldwide, clear they’re not being used effectively, despite every single classroom having one. It takes a lot longer to address the problems [than to reveal them]

Bart – I’m not in the habit of quoting David Willetts, but he said one problem is most educationalists are amateurs, trying to play round the fiddle, using a very basic approach to the black box of learning. Do we need to fundamentally upskill the methodologies to show the complexities of learning?

Neil – No. I think it’s politicians who don’t understand learning. [laughter] We know a lot more about the process of teaching and learning.

Richard Joiner – Whiteboards, teachers not using them interactively, you need to change the pedagogy that teachers use?

Neil – What we do is find out what really good teachers do and distil it down to its essence. Some of the teachers said they did do things, others hadn’t thought about it. I’ve such respect for teachers, I don’t want to become a Gove knocking teachers. There are some who are not so good, need to be trained to do what we know is a good thing.

Sara – Introducing any tech, has to be preceded by support and professional development for the teachers. We’ve known this for decades. The whiteboard initiative was an example, the digital microscopes in secondary schools, they were just left in cupboards. We have lots of examples.

[Photo: 27/365 Present - Regalo]

Poster competition winners

The poster competition winners were announced by Liz Fitzgerald, Wayne Holmes, and Doug Clow. I haven’t written them down here so as not to spoil the result.


This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.

CALRG40: Afternoon

Liveblog notes from the afternoon at CALRG’s 40th anniversary, 19 October 2018.
[Photo: Rays of Creusa]

Vision 3 – Teams can successfully teach any number of students at a distance

Patrick McAndrew

(Rebecca Ferguson is sadly unable to attend and present)

Berrill is a challenge for presenting – you can see the people in the room, yawning, and you also know that people will come to you the next day.

The OU is one of the largest universities – in terms of students, in terms of full-time equivalents. And coming up, we are nearly in to the top 20 of students studying full time!

Patrick shows a photo from Rebecca of me (Doug) at a statue showing someone pouring knowledge in to their head and reading a book.

Teaching is when someone acts to help someone learn. “To make someone less inclined to do something” – as in “I’ll teach you to throw rocks at my windows”. Amusing. But here, teaching is causing someone to learn.

What underpins that are a series of beliefs, with a lot of evidence: distance learning works, and anyone can learn. A more open approach to being a university can be effective. Distance education works. That was a challenge for this university. The push to ensuring we have sufficient quality has driven us for many years. One was the formation of an Institute of Educational Technology. Being effective at teaching at a distance, and using technologies. Now synonymous with small devices and computers, but more broadly is how we approach how we teach. Our belief is anyone can learn. We are open to any student studying with us. Have put up work from the early 2000s with a model of pass rates, connects to Bart’s work, predictive models and machine learning. What can we predict about what the student body does? Jim Peard has done this work, annual update. Three groups of factors – student previous experience, student motivation, module difficulty/match. One key factor is whether they have previous experience of study. As soon as they’ve done one module with us, their previous experience drops down the list of predictors.

Foundations on Supported Open Learning. But also understanding how we design. Derek Rowntree’s work. A lot of work on communication. If you’re not familiar with Diana Laurillard’s conversational framework, get familiar with it.

Also, working at scale. As of 2011 (UNESCO work by John Daniel), there were 165m people in tertiary education, projected to peak at 263m in 2025. That would require more than four major campus universities (30,000 students) opening every week for the next fifteen years.
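
[Quick arithmetic check on that: 263m − 165m ≈ 98m extra students over roughly fifteen years is about 125,000 a week, which is a bit over four new 30,000-student campuses every week.]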

Support that scales. Several people have come to the OU to find out how we manage, the magic that enables the university to teach at the scale it does. The material design and the support. We have a resilient, but not the cheapest, answer. Building in support, 1:20 or 1:25, if your process can operate at roughly that level, it’s probably going to work. It’s hard to make a resilient system that operates above that ratio. Enables us to give support to our students. A major part of what makes a difference for our students is the support from their ALs. High satisfaction levels around student feedback. The student doesn’t necessarily meet the person who gives the feedback, but we get the highest ratings for the feedback. Bart’s delivered another punchline – what we design makes a difference. The learning design matters.

Success at (moderate) scale. Martin chaired one of the first really large OU courses – 15,000 students studying for credit. T171 You, Your Computer, and the Net. 55% of OU students in England come from disadvantaged backgrounds, only 29% of full-time students do. This was an era when things were able to expand more easily, we could grow supported by the systems around us.

This was also when we started to operate at a truly massive scale – OpenLearn, launched in 2006, hit the 60m visitors mark last week. Open University on YouTube and iTunesU. We have more and more routes to get content through to people. The design is important. We’ve taken away one of our pillars of success – the support – and that means we lose some of the resilience and certainty in what will happen.

I said I’d come back to some of the challenges. There’s the challenge to us: for part-time entrants to study in England, there’s been a 61% drop since the introduction of the high-cost student loan system. We are no longer as big a university as we were at the end of 2010. The other set of challenges is what happens to our students – from the number who start to register, then who confirm they are starting study, then actually starting, then paying their fee, then keeping going, then completing, then passing. Not everyone makes it to the end point of that journey [something like half by eye from his bar chart].

We are changing our approach. Work on the right start: if students find out in the first three weeks they’re on the wrong course, we haven’t necessarily supported them well. Targeted communication, bridging from one piece of study to the next, and different measures of success – a degree isn’t the right qualification for all our students; a Certificate in HE can be a major achievement.

There’s also being more radical. The OU was radical in the beginning, and made a radical move in 2012 in setting up FutureLearn. Are universities the right structure? Changes in support models? Alternative pedagogies? Different attitudes? This is beyond our control.

Pedagogy at scale through MOOCs – led by Mike Sharples. And Rebecca Ferguson has led work looking at what research tells us, within the OU’s own FutureLearn provision. A lot of the lessons are the same: make use of effective distance learning pedagogies; offer well-designed assessment. The largest ever MOOC was the British Council English language course on FutureLearn, with (from memory) 500,000 students on a single presentation.

Rebecca’s also looked at how educators at other universities experience developing MOOCs. Developing expertise: develop educator teams; identify and share effective learning design. FutureLearn makes the communication element more visible. How do we make this part of the education ecosystem? People still want accreditation. We’re working with FL towards accreditation.

Other pieces of work. Beck Pitt has been working with BizMOOC, to produce a MOOC Book. Shi-Min Chua, how people communicate and how we can support discussion at scale. Francisco Iniesto, how accessible are MOOCs, looking at producer, and learner perspectives. Tina Papathoma looking at the educator perspective.

We are struggling with how to maintain our scale in the current environment. Reaching out to the world, carrying out good research to support it.

Questions

Tim – What’s the claim for superiority of FutureLearn compared to others based on?

Patrick – It is owned by the OU [applause and laughter] It has a different approach to pedagogy. More involving, can see it impacting. It’s the pedagogy, the ownership, and the partners.

Eileen – When we visited Coursera, they were interested in FL’s good design for mobile devices. Because of the timing, it was a year after Coursera, and understood that mobile devices were important. Also, the idea of pedagogy around social learning at all. Nobody else had thought that through. Mike has an interesting slide on the use of the forum equivalent, and the forums not being separate but in the design. Also, Diana says it’s the best one.

Diana – From experience.

[Photo: Science World Reflection]

Vision 4: Learners engage enthusiastically with science learning

Eileen Scanlon, Thea Herodotou

If you join an interdisciplinary group, your birth discipline can leak out. Long list of collaborators, and past projects. One was Collaborative Learning in Primary Science – originally turned down by a reviewer who couldn’t see the point of learning in groups. Conceptual Change in Science, Alternate Reality Kit, The Talk Factory, Personal Inquiry.

Many theses – from Eileen’s own (Modelling Physics Problem Solving) to Citizen Inquiry in informal settings. We’re digitising our theses, and are going to make them available online.

Some old photos from news in schools, an original Mac SE.

Considering a trajectory for new learners – informal to formal, passive to active, solitary to sharing, learners to teachers. One key step on informal to formal is the Open Science Lab. Originally, a mix of experiments to support formal teaching, but also informal science, an iSpot link. Use of technology to make such experiences available to others is important to us.

The PI project was a bit of bricolage. 2008/9 was just when mobiles were becoming available to help students connect experiences outside the classroom with those inside. Developed an approach to personal inquiry, could be an interest in aspects of yourself, of your environment, or of your community. Personalisation, and inquiry learning needs a lot of scaffolding. Outcome includes a good book, and exhibition at the Royal Society.

With the work for school kids, in the first instantiation of the OpenScience lab, we did some experiments, with Simon Kelley, around moon rocks, to see if the approach worked pedagogically in OU teaching. Just because you’ve worked with school children and technology, doesn’t mean they’re not applicable in OU teaching. We, CALRG, always took a broad approach to which approaches would inspire us and apply to our work within the Open University.

Another experiment, also with Simon Kelley, is some work with the Virtual Microscope. Thea and Maria Aristeidou (and Mike and Eileen) have done some evaluation. It aimed to explore how students use and engage with virtual microscopy, and what teaching approaches work better for students who study practical science. Contrasted a blended learning condition with a wholly online one. Mixed methods study: learning analytics, survey, interviews. Blended students were more satisfied with how the virtual microscope is integrated in the module and reported greater perceived learning improvements (observation skills) than online students. There was a big difference in how it was used – in blended, it was introduced by a tutor and was complementary to a physical one. Online it was the only way to see the images; the learners felt a need for a tutor to complement the activities.

Several projects on science learning across settings – Citizen Inquiry. Between citizen science and inquiry learning. New book on Citizen Inquiry, recently published. Active engagement in science, citizens defining their own research agenda. Not just natural and physical sciences, can be social and applied sciences. There are 12 case studies of citizen inquiry.

Started with the nQuire-it project, funded by Nominet Trust. A website, plus the Sense-it application, which can capture data using your phone. Whatever data you capture can be uploaded to the nQuire-it platform. A community developed around weather, led by Maria Aristeidou. Also worked with the Centre for Policing Research and Learning, to help police understand what the community needs. The latest collaboration was with BBC Tomorrow’s World, and the platform was redesigned. It has open missions, closed missions, and other types. Presented at the Royal Society Summer Science Exhibition. An example – My Life, My Data, #MyTomorrow. Run with the BBC.

Latest work: collaborating with organisations across the world – NSF, Wellcome and ESRC. Three museums – London, SF, and LA – plus UCD, the OU, and Oxford. LEARN CitSci: a museum-led citizen science programme on coastal biodiversity, with observations uploaded to iNaturalist.

The vision – learners engage enthusiastically with science learning. Open learning, inquiry learning, mobile learning, enabling the learner’s transitions between formal and informal settings and contexts can offer benefits for science learners.

Questions

Andrew Ravenscroft – the word ‘disadvantaged’, how do you define that?

Thea – Children or young people who don’t have opportunities to go to museums, or whose families are not well educated. Museums have activities to go to those communities, to raise their science capital.

Kim Issroff – How science has changed over 40y, and how pedagogical approaches have changed.

Thea – Eileen?

Eileen – A good point, particularly in relation to the last points Thea made. Citizen science as a credible activity: Muki Haklay was tweeting furiously because people were questioning the quality of citizen science data. The notion of being an active science learner has been around for 40y. One of the issues around practical work is that engagement with practical science became difficult in the secondary curriculum. We noticed as we went round schools that we worked with geography rather than science teachers. They enthusiastically bit our hand off to extend pilot projects to their whole year groups rather than just 20 or so students. They had got into science investigation as a way to help young people develop their understanding of geography. They assumed students knew what a hypothesis was. In science teaching, there was a move away from data collection: there were enquiries, but the activity was managing data that had come from elsewhere. An interesting mismatch across disciplines about what we assume children will have learned in different contexts. Anyone should understand what a scientific investigation might deliver for them, whether a community group measuring pollution in a creek, or a group of kids deciding they wanted to look at how food rots if you leave it outside a fridge. Science is even more relevant to daily life. How we include people has changed over the intervening period.

Thea – Research shows people find STEM not interesting, or boring, or only for the gifted. What was happening was that only a few were involved in science; it wasn’t accessible to citizens. The idea developed that science was just for a few. We are trying to show there are ways to learn science, and to give the support to do so.

Tim – You’re too diffident. You can be stronger about the claims you make for nQuire. Look at Zooniverse. Citizen Science and nQuire work is an unambiguous success. The question is how do we get citizen mathematics, or citizen philosophy. The one that has the unambiguous tick is the collective study and engagement in science.

Eileen – We’ve managed to broaden the types of inquiry. Koula has used nQuire to do cultural heritage enquiries. There’s a lot of interest in our work from museums other than just science museums. That’s a strong direction the work might go in.

Diana – The Muki Haklay point – it was about data that is used as a research output to demonstrate things about London. There is an issue about how you assure the quality and comprehensiveness of the data you collect. You’ve been talking about citizen science as an educational process. But when it’s a process of data gathering, you do have to address data quality – and he didn’t.

Eileen – We do both with iSpot – it’s for learning, and for real science data gathering. The solution is to have seeded experts on your platform to help with identification, then, through continuing participation and a reputation management system, develop a more robust way of QAing the observations.
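As an aside, here is a minimal sketch of how a reputation-weighted identification scheme of the kind Eileen describes might work – purely illustrative, not iSpot’s actual algorithm; the weights, default newcomer score and confidence threshold are all assumptions:

```python
# Illustrative sketch only – not iSpot's real algorithm. Combine the
# identifications proposed for one observation, weighting each by the
# identifier's reputation, and accept the leading identification once it
# clears a confidence threshold.
from collections import defaultdict

def best_identification(identifications, reputations, threshold=0.6):
    """identifications: list of (user, taxon) pairs for one observation.
    reputations: user -> weight (seeded experts start high; others earn
    weight through continued participation)."""
    scores = defaultdict(float)
    for user, taxon in identifications:
        scores[taxon] += reputations.get(user, 0.1)  # small default for newcomers
    total = sum(scores.values())
    if total == 0:
        return None, 0.0
    taxon, score = max(scores.items(), key=lambda kv: kv[1])
    confidence = score / total
    return (taxon if confidence >= threshold else None), confidence

# Example: one seeded expert outweighs two uncertain newcomers.
reputations = {"expert_botanist": 5.0, "newcomer_a": 0.2, "newcomer_b": 0.2}
observation = [("expert_botanist", "Quercus robur"),
               ("newcomer_a", "Quercus petraea"),
               ("newcomer_b", "Quercus petraea")]
print(best_identification(observation, reputations))  # ('Quercus robur', ~0.93)
```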

Janice – Before we launched iSpot, we engaged the expert biological recording community. A big job was signing them up and getting their expertise onto the platform. That’s the model we’ve used: get the experts on board. They assist with the verification process. We badge them with the specific expertise they have.

Eileen – In 1975, as a science faculty course manager, we had measurement of SO2 pollution. People filled in a form, we sent them kits where they measured SO2 in their local area. They sent the forms to me, I checked the readings were at least within 100 of where they should be. We had a live BBC programme to report their feedback every year from 1975 onwards. Paper in New Scientist that credits the cohort of OU students.

Thea – Zooniverse and iNaturalist are partners on LEARN CitSci. iNaturalist has developed a machine learning technique that suggests ideas about the correct answer. On Zooniverse, the identification is cross-checked across at least three volunteers.
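A hedged sketch of that kind of cross-checking – the three-volunteer minimum and majority rule here are assumptions for illustration, not the platforms’ real retirement logic:

```python
# Illustrative consensus check – an assumption, not Zooniverse's actual pipeline.
# A subject's classification is accepted only once at least three volunteers
# have classified it and a clear majority agree.
from collections import Counter

def consensus(classifications, min_volunteers=3, min_agreement=2/3):
    """classifications: labels submitted by different volunteers for one subject.
    Returns the agreed label, or None if there is not yet enough agreement."""
    if len(classifications) < min_volunteers:
        return None  # keep showing the subject to more volunteers
    label, count = Counter(classifications).most_common(1)[0]
    return label if count / len(classifications) >= min_agreement else None

print(consensus(["crab", "crab", "sea star"]))     # 'crab' – two of three agree
print(consensus(["crab", "sea star"]))             # None – too few volunteers
print(consensus(["crab", "sea star", "anemone"]))  # None – no majority
```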

Diana – That’s a much better answer.

Allison – Do you think the fact that questions or tasks in science can generally be broken down into smaller ones, so many people can come together, makes it in some ways simpler to have citizen science rather than citizen philosophy? How do we move forward to bring this together?

Thea – We try to break this down through the design of the site. We structure the steps of the inquiry process. The challenging thing is what kind of questions they can put down – so they’re doable and can be answered. Young people had ideas about going to space and doing things that couldn’t be done. It needs some scaffolding from experts to narrow them down so they can link to a conclusion.


This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.