Blog

All the best for an online autumn 2020

All the very best to all involved in UK universities as we hit last-minute decision time for the new term/semester. As of 1.18am today, Thursday 10 September 2020, we now have as much guidance as we’re going to get.

This is the blog post version of a long thread I posted on Twitter earlier today. In short, I advise going all-online now, but all the best anyway – you’re great.

Go all-online now,
but all the best anyway –
you’re great.

First, I want to apologise that some of my commentary has made it seem to some that I don’t appreciate or understand the scale of the work being done, and that I’m unsympathetic to staff working on plans that differ from what I’m arguing should happen. That’s not the case. I’ve also inadvertently contributed to a sense that those working on opening physical spaces don’t care about students. That’s simply not true. People who disagree with me on this are driven by a deep concern for students. And they’re working bloody hard on good stuff. So although I may disagree on some tactics, I want to strongly support and applaud the enormous efforts going on across the sector. These decisions are not easy, and much hard work is being done by people who don’t get much say but do get much stick, and that’s unfair.

This latest guidance is obviously not the last oddly-timed intervention from the Government we’ll get, so all those systems for rapidly appraising and acting on new developments will need to be on hot standby for the foreseeable future. The Government guidance sets out ‘tiers of restriction’ in response to local outbreaks. Most universities already have a range of scenarios mapped out. One of today’s urgent jobs will probably be to map those onto these new official response ‘tiers of restriction’.

My view remains that it would be better to choose now to teach online, except for those things that can only be done in person – i.e. starting at Tier 2 or 3, not Tier 1 (default position).

Despite what some say, that is not an easy decision. It’s a difficult balance. My two main reasons for deciding now to teach online are to reduce the spread of the virus, and to reduce the workload (and cognitive load) of staff and students in keeping multiple scenarios live – which will make the online learning better.

But this is not an easy option. Not least because the Government explicitly expects most universities to open their physical campuses, even now. And students will, quite reasonably, be furious to be moving all that way and taking out accommodation contracts etc if it’s all online anyway at late notice. For most courses in most subjects in most universities, teaching online now will not be as effective and engaging a learning experience as teaching in person. I’m ex-OU. I am utterly convinced you can do world-class university education online. But the Open University has had decades to prepare and has a completely different organisational setup and staff base. Staff across universities have worked enormously hard since the start of all this to move everything online, and the transformation in capability is truly astonishing. But realistically – with some very notable exceptions – most online provision will not be as good as in person. Yet! Although it’s important to note that Covid-secure measures mean in-person teaching is very constrained, and less good than it would’ve been in normal times.

You’re darned if you do,
but damned if you don’t.

So why on earth do I still say start with a more restrictive, online approach to university teaching? Two main reasons:

  1. take more responsibility to reduce the spread of a pandemic virus than the Government is requiring, and
  2. reduce the pressure on staff.

(1) Whether or not to take a stronger line on risk reduction than the Govt advises isn’t easy, and reasonable people can come to different views. I would give greater weight to advice from the Govt if it had been more effective in dealing with the pandemic to date. I’d also give it more weight if the advice on reopening buildings and campuses had more explicit acknowledgement of the risk to staff and the community. Universities are more than their students, and have responsibilities to their host communities.

There’s a maxim from public health on taking early, effective action: you’re darned if you do, but damned if you don’t. Effective action is unpopular and will inevitably look like too much, too soon – after all, there’s no major disease yet! If it works, it’ll appear unnecessary in hindsight. But failing to act to prevent wholly foreseeable disasters is not just unpopular, it’s massively condemned. Especially when there are avoidable deaths. SAGE says, in their report on managing transmission of the virus in universities, “Outbreaks in HE are very likely.” Physical opening is socially acceptable now, but may not be in retrospect.

(2) Going all/mostly-online now means staff can largely forget about the physical teaching scenarios and focus on the one or two that will have to be delivered. That saves effort and reduces uncertainty and worry. All of which would be hugely welcome. Running multiple possible scenarios comes at enormous cost. Everyone involved has to make multiple plans. There isn’t enough time to do everything properly, so that means the plans for each scenario are less good than if there were fewer to deal with – preferably just one.

There’s also the cost of switching from one scenario to another, in direct staff and student time in implementing the new arrangements, and also in cognitive load in rethinking how your routines work. Starting more restrictively means fewer scenario switches.

You absolutely can get better at teaching online. I would bet that teaching will more rapidly improve if staff are doing it than if they’re struggling along with massively complex hybrid arrangements they can barely cope with. Doing both simultaneously is really hard.

I want to salute and acknowledge the enormous, spectacular efforts being made across higher education.

But! I can see how others can come to a different view, and I may be wrong about how likely outbreaks are (I do hope so), and your university might well turn out to be one of the lucky ones despite – as Mr Gove infamously remarked – choosing to run things quite hot. And providing some in-person teaching will, at the moment, in many instances, result in a better experience for students. If you do have in-person time, do prioritise group-forming and community-building activities over curriculum delivery. You can build community online, but it’s much harder. You can teach more effectively online when there are good relationships between your learners already. Make good use of the limited time you may have. @ProfSallyBrown has some excellent ideas in this line in her post on Wonkhe.

Regardless of any difference of opinion about tactics, I want to salute and acknowledge the enormous, spectacular efforts being made across higher education. Academic staff, professional staff, and non-professional staff have all worked fantastically hard and worked wonders. Students and students’ unions have also done extraordinary things, and engaging with them is the best antidote to gloom about the current situation and the future. Even when they present with a diversity of strongly-held views. Perhaps particularly then!

Finally, keep an eye to the future: it’s hard to make headroom for long-range plans, but this won’t last forever. I’m sure that universities who make continuing good use of the expertise in online working developed in these hard times will thrive in the more distant future.

Good luck!

New service: Tutxoring!

I’m really excited to launch a new and unique service: tutxoring!

For most things you need to learn, there are many experts on the subject from whom you can learn, and a vast range of learning materials: courses, textbooks, videos, communities. For most topics, there’s an agreed curriculum, a set of things that most experts agree you need to know. But for the most difficult, the most challenging, and the most ground-breaking learning, there’s almost nothing to help you. That’s where tutxoring comes in.

The model comes from the later stages of PhD supervision, or some forms of an Oxbridge tutorial. While the supervisor or tutor usually starts with a better understanding than the student, good students will, by the end, be among the world experts in the specific area they’re working in, and the role of the supervisor or tutor becomes much more of a guide than a direct teacher.

This is just to whet your appetite: there are more details about what tutxoring is on my website, as well as an even longer discussion about tutxoring, how it works, and how I’m well placed to offer it.

Many thanks to everyone who provided early feedback on a sneak preview. You’ll see I’ve changed the name based on what you said. And a particular salute to the old guard who still have my blog in their RSS feed reader.

If you’re interested in tutxoring, want to find out more, or want to discuss how tutxoring can help you, do please get in touch.

And tell your friends!

More student data, but later

We need to seriously consider doing a lot less with student data right now. Stopping data logging will reduce the impact on our systems and, more importantly, on our students.

As a longstanding learning analytics researcher, I don’t say this lightly.

[Image: “Computer Problems” – “Waiting for Moodle to render this page is taking longer than it takes to get to the lecture theatre on the other side of campus.”]

The Covid-19 coronavirus crisis is profoundly changing society, including universities. There’s been a mad dash to online teaching, and a mad dash to online assessment close behind. Those of us who’ve been enthusiasts for online learning for a long time know that this may be a huge success in some places, but that it isn’t going to go terribly well in many others. It’s easy for our eyes to light up at the thought of all that interesting data that all that online activity could generate.

But hold up. In a crisis, we need to prioritise what’s most important. Frankly, the benefit to students of most of our data gathering is not sufficient to justify it getting priority in a crisis. And our evidence of what benefit there is has improved since I wrote a rather despairing paper with Rebecca Ferguson about it, but not hugely. I do believe it’s worth pursuing. But it absolutely can wait, so it should wait.

A big turn-off

What if we just turn all that data logging off for the duration of the crisis?

We’d reduce the impact on our systems. Online learning systems are under massive strain as IT staff and suppliers struggle valiantly to deal with a completely unprecedented spike in demand. With a well-designed and well-tuned system, data logging needn’t be a huge drain on front-facing server resources. But when you’re rushing to scale up, you don’t have the time to tune it well and build a robust and separate data architecture. It will make the IT people’s lives much easier if we just drop those requirements for now. It would, at least, be one less thing for them to worry about. And it might well materially improve performance, particularly on hastily-deployed systems where there hasn’t been time to optimise them.
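To make the suggestion concrete, here’s a minimal sketch – not any real VLE’s API, just my own illustrative names – of what ‘turning it off’ can mean in practice: detailed activity logging becomes a no-op behind a single flag, while operational logging carries on, and switching it back on later is a one-line change.

```python
# Illustrative sketch only: a single switch for detailed analytics logging.
# Function and logger names are hypothetical, not from any particular system.
import logging

ANALYTICS_LOGGING_ENABLED = False  # flip back to True when the crisis is over

operational_log = logging.getLogger("vle.operational")
analytics_log = logging.getLogger("vle.analytics")

def log_activity(user_id, event, detail=None):
    """Record a fine-grained learning-activity event, unless switched off."""
    if not ANALYTICS_LOGGING_ENABLED:
        return  # no-op: one less load on the front-facing servers
    analytics_log.info("user=%s event=%s detail=%s", user_id, event, detail)

def log_error(message):
    """Errors and security events stay logged regardless of the flag."""
    operational_log.error(message)
```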

We’d also reduce the impact on our students. Most academics are only in the student data business to make things better for students – but there are other interests at play too. Students are quite reasonably concerned about how their data is being used at the best of times. There isn’t the time to do all the engagement around data privacy that good practice requires, and that you need in order to properly address understandable and quite reasonable concerns. We could just steamroller them in to it. This seems to be happening a lot, and there’s even been some commentary from UK ministers about the GDPR that might be useful political cover for it. Or we could just … not do that, and give them a break. Deal with their worries about data privacy by sharply reducing the amount of data we collect. I think, given all that this cohort is putting up with, and is going to have to put up with in the near future, they badly need any break we can give them.

What can’t wait?

There will be some exceptions. Obviously, where you have a cognitive tutor setup, it would be nonsense to turn off the logging – and, not coincidentally, that’s where we have the best evidence of direct student benefit.

More widely, I’d argue for saving the last login information for each student so their tutor can see who’s been able to access the system and who hasn’t. I can’t instantly think of good papers showing this, but my strong hunch from practical experience with predictive modelling is that a huge chunk of the benefit that can come from such systems is increasing awareness among tutors of which of their students hasn’t been able to study for a while. We can do that directly with a lot less impact on students and servers.
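As a sketch of how little that needs – this is my own toy illustration with made-up table and column names, not a recommendation for any particular platform – you only need one small upsert per login, rather than a row per click:

```python
# Toy sketch: store only a last-seen timestamp per student, so a tutor can
# see who hasn't accessed the system recently. All names are hypothetical.
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect("last_seen.db")
conn.execute("""CREATE TABLE IF NOT EXISTS last_seen (
                  student_id TEXT PRIMARY KEY,
                  last_login TEXT NOT NULL)""")

def record_login(student_id):
    """One upsert per login - no per-click event stream."""
    conn.execute("""INSERT INTO last_seen (student_id, last_login) VALUES (?, ?)
                    ON CONFLICT(student_id) DO UPDATE SET last_login = excluded.last_login""",
                 (student_id, datetime.utcnow().isoformat()))
    conn.commit()

def not_seen_for(days=14):
    """Students a tutor might want to check in with.

    Note: students who have never logged in at all won't appear here - they
    need checking against the enrolment list, which is arguably the group
    that matters most.
    """
    cutoff = (datetime.utcnow() - timedelta(days=days)).isoformat()
    return list(conn.execute(
        "SELECT student_id, last_login FROM last_seen WHERE last_login < ?", (cutoff,)))
```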

And obviously, some data has to be recorded to operate an online learning system at all.

More later

For the avoidance of doubt, I am not for one minute arguing that learning analytics should close down and give up. I do still believe that there is huge potential from using students’ data to improve their learning, and that there’s more to be gained in future than has been done so far. I am arguing that we should be humble about what we can offer and prioritise the benefit to students. That is, after all, the whole point of learning analytics.

Learning analytics researchers and practitioners have never been in more demand in their organisations. We understand the practicalities of online learning in ways our more traditional colleagues don’t. It’s not like we’d be short of stuff to do if we spent the next few months prioritising support for them and for students rather than our data-gathering projects.

We should do a lot more with student learning data … and we should do it later, when all this is over.

COVID-19 coronavirus and data

How does COVID-19 coronavirus illustrate timeless truths about data?

This is a version of two Twitter threads: one on how it illustrates timeless truths about data https://twitter.com/dougclow/status/1228248715728740352 and the other about authoritative information on the outbreak https://twitter.com/dougclow/status/1228246890287968256.

NB I do have some health and medical background but I am a data and learning professional, not a clinician, epidemiologist, or public health person.

[Image: “Canarian Raven” – “The virus is called COVID-19. No R. Call it CORVID one more time and I’ll peck your eyes out.”]

Data science, BI and in fact any statistics all start with simple counting. That first bit is surprisingly hard, and getting it right is often most of the work. The current COVID-19 coronavirus outbreak shows this up nicely.

You can make as sophisticated a model as you like for an outbreak, to estimate things like the basic reproduction ratio (R_0, how many new cases each case is expected to cause, on average, which tells you how far it’s spreading), or the case fatality rate (how many people who catch it die of it), or the likely extent of the outbreak (how many people might catch it), and all sorts of stuff about how fast this is all happening or likely to happen.
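For orientation only – these are the standard textbook definitions, nothing specific to this outbreak – the headline quantities look roughly like this, and note that simple counts appear in both the top and bottom of the second one:

```latex
% Standard epidemiological definitions, for orientation only
R_0 = \text{expected number of secondary cases generated by one case in a fully susceptible population}

\mathrm{CFR} \approx \frac{\text{deaths attributed to the disease}}{\text{confirmed cases}}
```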

But all that crucially depends on simple counting: how many have it at a given point, how many have died, etc. With the latest figures, we see how hard that is. The number of new cases in China jumped from 2,000 on Weds to 15,500 on Thurs – because the counting methodology was changed.

The numbers have changed retrospectively, too – I copied those numbers down yesterday, but today it looks like it was a 400 increase on Weds and a 15,100 increase on Thurs. This would be hard to get right even if there weren’t a massive health crisis there.

Counting infections is always tricky, but surely deaths are easier? Turns out there are surprisingly difficult edge cases at the edge of life, but those don’t come up often, and almost everyone agrees about most deaths.

But even then, you’re probably getting data from multiple sources and combining them and that can lead to problems. Like today’s news that 108 deaths have been removed from the figures because they were double-counted.

It’s easy as a data scientist to say we need to invest in better data, and sometimes that’s right. But getting good basic counting data is hard, and expensive, and cannot be the absolute priority. The data you’re dealing with will always be messy to some degree.

Speaking of degrees, this crops up in education and learning. ‘How many learners do we have right now?’ is the basic question that is the denominator for pretty much any learning or teaching metric you care about.

And that is surprisingly hard to answer sometimes. There are late registrations, retrospective registrations, de-registrations, retrospective de-registrations, provisional versions of all those, and that’s just dealing with individuals.

When you have organisations buying in learning, it gets even worse: how many are provisionally ordered, how many are finally ordered, how many are catered for, how many show up, and how many are invoiced for are all different, and not the same as how many learned anything.
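A toy example – the records below are entirely invented – shows how the same register gives different headcounts depending on which statuses you count and the date you ask on; and a retrospective de-registration quietly changes the answer you gave for last month, too.

```python
# Toy illustration of the denominator problem: 'how many learners do we have?'
# All records and statuses are invented.
from datetime import date

registrations = [
    {"id": "A", "status": "registered",   "effective": date(2020, 9, 1)},
    {"id": "B", "status": "provisional",  "effective": date(2020, 9, 20)},
    {"id": "C", "status": "registered",   "effective": date(2020, 10, 5)},   # late registration
    {"id": "D", "status": "deregistered", "effective": date(2020, 9, 25)},   # left after starting; retroactively changes September's count
]

def headcount(as_of, include_provisional=False):
    """Learners 'on the books' on a given date, under a chosen counting rule."""
    allowed = {"registered"} | ({"provisional"} if include_provisional else set())
    return sum(1 for r in registrations
               if r["status"] in allowed and r["effective"] <= as_of)

print(headcount(date(2020, 9, 30)))                             # 1
print(headcount(date(2020, 9, 30), include_provisional=True))   # 2
print(headcount(date(2020, 10, 31), include_provisional=True))  # 3
```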

Speaking of invoices, cash at least should be easy to count? It should be clear when a customer paid us, right? Oh, my sweet summer child. That sound you hear is the entire accounting profession sniggering.

Suffice to say that the same payment can legitimately have different dates for cashflow, annual accounting, VAT, other taxes, and who knows what other purposes. Organisations are incentivised to manipulate this data, and most organisations respond to incentives.
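By way of a wholly invented illustration, here’s one payment with several dates attached, each of which some report legitimately cares about; which one counts as ‘when we were paid’ depends on who’s asking:

```python
# One invented payment, several legitimate dates. Which is 'when we got paid'
# depends on whether you're doing cashflow, the annual accounts, or a VAT return.
payment = {
    "invoice_date":       "2020-01-15",  # when the invoice was issued
    "tax_point":          "2020-01-15",  # the date the VAT return cares about
    "cash_received_date": "2020-03-02",  # when the money actually arrived (cashflow)
    "accounting_period":  "2019-20",     # the year it lands in for the annual accounts
}
```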

Summarising a wide ramble: Even the simplest of data, like ‘How many people have COVID-19?’ can be surprisingly hard to get authoritatively. Getting better data is rarely a business imperative. Be cautious about interpreting your advanced statistical models.

Also, be kind to people who are working hard to do really difficult jobs in really difficult circumstances. And don’t make it harder by spreading misinformation.

Check with authoritative sources before passing on information. It does not help to spread stuff you think might be dodgy or far-fetched ‘just in case’. Most people who pass on misinformation don’t mean to cause problems. Check it’s right first. You can help protect your friends and colleagues from this hazard.

So, having said that, how do you know what’s right? Check with authoritative sources, like Wikipedia always tells you. Wikipedia has excellent info, which is being updated rapidly as the situation changes: https://en.wikipedia.org/wiki/2019%E2%80%9320_Wuhan_coronavirus_outbreak

In the UK the risk of infection is very low, as in most places outside Hubei.

In short: If you think you may have the virus, stay where you are and call 111.

Outside the UK, there are authoritative sources like the US CDC https://www.cdc.gov/coronavirus/index.html and the WHO https://www.who.int/emergencies/diseases/novel-coronavirus-2019

For hard research info, there’s stuff on the WHO site (currently under ‘technical information’ and ‘global research’) https://www.who.int/emergencies/diseases/novel-coronavirus-2019 and many publishers have made research freely available, with some offering free-access portals on the topic.

If you like statistics and numbers, here’s some good aggregation from Johns Hopkins University, with a few visualisations and a link to a well-maintained GitHub CSV. https://gisanddata.maps.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6  
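If you want to poke at that kind of aggregated data yourself, a minimal sketch along these lines will do – the filename here is a placeholder, and the column names are assumptions you’ll need to adjust to whatever file you actually download:

```python
# Minimal sketch for exploring an aggregated case-count CSV.
# 'cases.csv' is a placeholder; 'Country/Region' and the wide date-column
# layout are assumptions about the file, so adjust to what you actually have.
import pandas as pd

df = pd.read_csv("cases.csv")   # placeholder path
print(df.head())                # always eyeball the raw rows before modelling anything
latest = df.columns[-1]         # in a wide time-series layout, the last column is the newest date
print(df.groupby("Country/Region")[latest].sum().sort_values(ascending=False).head(10))
```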

This data is pretty good, but don’t treat it – or any data! – as representing the objective truth.


This work by Doug Clow is copyright but licensed under a Creative Commons BY Licence.
No further permission is needed to reuse or remix it (with attribution), but it’s nice to be notified if you do use it.

Project start checklist

What do you need to have thought through before you start a project?

In a large organisation, like the one I left last year, there’s often a lot of heavyweight project management overhead. The downside is filling out lots of paperwork, but the upside is that it does make sure that the project is in good shape at the start. If it’s good project management paperwork, of course.

Working in smaller contexts lets you be much more agile and swift. Now I’m an independent consultant, I can start a project the moment I think it’s a good idea.

But how do I know it’s a good idea, and how do I know it needs more work, or is better not done at all? Obviously, using my skill and judgement developed over many years of working on projects. And that tells me that if it’s more than a teeny tiny bit of work, it’s well worth spending some time up front, to systematically work through what needs to be in place at that stage.

Some people would be happy to simply do that in their head and be done. But I am a huge fan of checklists. This post isn’t about the wonders of checklists. Suffice to say, the more experience I get, the more I think they are wonderful. They help make sure that the obvious things actually do get done.

So I have found myself wanting a project start checklist: a list of things that need to be thought through before I start a new project. It’s here, at the bottom of this post.

The checklist is deliberately skewed towards freelancing and consultancy, rather than internal projects inside large organisations that already have more project management overhead than they perhaps need. So, for instance, it doesn’t have any of the “set up project board” and “identify project sponsor” things you’ll find on other project start checklists. If you need to do those things, you (should!) already have a formal project management process. This checklist is for when you don’t have that. It would also work for small-scale guerrilla projects inside a large organisation that don’t need official sign-off. It assumes the project is highly custom and not a standard one, which covers pretty much everything I do, but it is way more than you’d need if you are a freelancer doing well-established work – like, say, you’re a graphic designer simply doing another logo for a new client. (Although even then some of the elements will be useful.)

The way I use it is to go down the list, systematically, and make sure I have a good answer to each question. To work the checklist magic, it is important to do them in order and not to skip any of them. (Obviously, if you don’t think an item belongs on the checklist, ever, delete it and never worry about it again.) If I don’t have a good answer to a question, a good argument for why it doesn’t matter in this context will also do. If I can’t answer a question and don’t have a good excuse for skipping it, that’s a prompt to find out more before I start. Not knowing can be a red flag that this project is ill-conceived. Sometimes it’s the right decision to start a project before it’s all tied up and thought through. But better to do so having that potential risk in mind, and even better to mitigate that risk before it happens.

On to the list itself!

Takeoff

Is this project worth doing?

Ideally, it’s worth doing because I believe it will make the world a better place.

The underlying idea here is to avoid doing things that would be better not done. I like Peter Drucker’s distinction between “doing the right thing” and “doing things right”. Most of project planning and delivery is focused on “doing things right”. It is all too easy – and indeed necessary a lot of the time – to forget about the big picture, get your head down, and get it done and delivered. But the time before a project starts is a golden opportunity to pause and ask the big questions, like “What’s the point of this? Why bother?”.

Even if you think the project’s a terrible idea, it may be worth doing for other reasons. Not everyone can afford to be picky about what work they do, in which case, “it is worth doing because it will pay me money I need and I don’t have a better alternative” can absolutely make it a yes here. But if you’re doing that, it’s good to be aware that you’re in it for the cash (or the exposure, or the experience) not for the project itself. That way you’ll be in a better position to work to get more of what you want and minimise what you don’t.

Who actually wants this project to happen? Who directly benefits?

This is related to the first question, but different. I draw a distinction between someone who actually wants a project to happen because they themselves want the results, and someone who wants a project to happen so they can say that something like it has been done, for presentational or organisational reasons. So, for example, an organisation might want to be seen to be doing something about an issue, but they don’t actually care about it. They set up a little project, perhaps engaging an external consultant, so they can point to that as having done something, but they are not engaged in the issue. That doesn’t make it a no, but it is a different context to one where there are actual direct beneficiaries.

Fundamentally, it is often the case that people who are intent on doing something for someone else’s benefit have a different view of what would constitute a benefit than the person themselves does. I might sincerely want to do good for you, but you will usually have a better insight in to what is good for you than I do.

This isn’t to say that I’m against building new things that people don’t yet know they want: far from it. (Think of the apocryphal story of Henry Ford saying that if he’d asked people what they wanted, they’d have said better horses.) But you need to know when you’re doing that, and how you will know that you have given them something that they do want.

Who are the other stakeholders and what should happen to them?

To many British people of a certain age, banging on about stakeholders makes you think about Tony Blair and the 1990s, or about frustrating project management paperwork, or both. It can be a little overworked, but I do think it’s worth thinking through who else will care, or ought to care, about a project you’re involved in, and what you’re going to do to them. Or with them. With them, totally.

Where is the value generated? Where is the money coming from?

This is again related, but may give different answers. For working in the commercial sector, it’s vital to understand where the business makes its money and how the project will impact on that. For other public and third-sector work, it’s vital to understand how they are funded and for what purposes, and how the project will impact on that. Grant funders – whether that’s the European Union, a research council, a foundation or a charity – will have a set of conditions on their funding, but there will also be a more implicit set of ideas about how projects they fund ought to work.

Will I get paid, and when?

This is, of course, the freelancer’s main question.

The obvious thing is to make sure the mechanics are in place. Is there an agreement to pay me? Does there need to be a purchase order, and if so, has it been raised? Have we agreed the invoicing pattern, and do I know who is responsible for paying the invoices? I don’t need much paperwork: I log every potential client, and I log every project. But the client and/or funder may have paperwork or processes to follow before the project can start. I can start work on my own initiative, but that may raise the risk that the project never officially happens, and I don’t get paid.

Underlying the mechanics (or overlaying them?!), there’s the question of whether they will pay when the invoices come in. What’s their cashflow situation? What’s their payment track record? This can crop up at all levels. A small company may have cashflow problems and have to defer paying you. A large company may have an imperative to juice this quarter’s financials and defer paying you. A university may defer paying because it is extremely bureaucratic and it’s nobody’s job to make sure you are paid in a timely manner.

Why do they need me?

Usually, this is because they need to get something done but don’t know how to do it, and so they’re bringing me in because I do. That will usually mean part of my role will be explaining what I’m doing and why. Sometimes it might be that they do have the skills but they don’t have the capacity, in which case there’s less need for explanation.

There are also projects with an aspect of management consultancy to them. Anything involving organisational change falls in to this category, but so too does most work on training and development. Here, it’s very important to understand the political context within the organisation before starting.

Do we have a shared vision for the final outcome?

If they’re bringing me in because I have skills and knowledge they don’t, my experience will be very different to theirs, so it is almost certain that what I imagine will not be quite the same as what they imagine. We need to do the work to ensure we agree what we’ll have at the end.

I can be happy to go ahead without this, so long as the project plan has some way of bringing our visions together – although if that’s the case, I’ll usually prefer to have a break or review after the converging-vision phase.

Does the project plan make sense?

This is the bread-and-butter work of thinking through the project and planning it, or understanding the plan if someone else has produced it, and working through what my role will be. I need to work through what I’ll be doing, and how, and explore all aspects of the project iron triangle (quality/scope, time, resource). This also includes how it will dovetail with my other commitments.

This question is where most of the planning effort goes in, but it doesn’t need extensive reminders on a checklist.

What is out of scope?

Obviously, a complete list of things out of scope of any given project is going to be pretty large. However, I do like to explicitly write down the things that one might reasonably think were included, but are not. This can be really useful to clarify with the client or funder, particularly if I can get it in to the paperwork.

What will you end up doing anyway?

Sometimes I want to do something to a certain standard of work and the client doesn’t want me to spend all that time (and/or pay me for it), so they say don’t bother with that. In many cases that’s fine. It can be an important part of making sure we’re getting the best value out of the work. Not everything has to be done to world-class research standards, and outside academia, done quickly is usually more valuable than done perfectly.

However, there are some things I simply can’t shortcut. One example for me is preparing for a presentation, talk or speech. I will always put the work in to be prepared to my own standards, even if that means skipping things I really want to do or staying up absurdly late. And I have tried, and failed, to make something without checking what similar things other people have done already. I don’t need to do a full lit review before starting a project, but if I haven’t spent at least a few hours exploring what’s been done in the area recently … I know I will end up doing that anyway. And I am an incorrigible data nerd, so if I collect some data, whether quantitative or qualitative, I know I will spend a fair amount of time getting to know it, regardless of whether I’m being paid to.

It’s better if I know to expect this than have it bite me yet again.

What if things are harder than expected?

A bit of thought ahead of time can help a lot here, and again the iron triangle (quality/scope, time, resource) applies: What aspects of quality or scope could I cut? Where could I find extra time? How could I get extra resource? How would I communicate and renegotiate if I can’t address the issue myself?

 


What could go wrong?

I like to do a project pre-mortem. This post isn’t about the wonders of pre-mortems, but they are a very useful tool. The idea here is you imagine that the project has failed and you’re working out what went wrong. How did it happen? It’s a cognitive flip: instead of only thinking about how it will succeed, you assume that it has gone wrong, and come up with ideas for how that could have happened. This can be very useful for spotting things that you are half-deliberately hiding from yourself because you don’t want the project to fail.

I’ve done a bit of flying in light aircraft, and like many aviators, I read a lot of air accident investigation reports. Often, when you read these reports, you can see that bad judgement was present at the start. So a useful question to ask when preparing to fly is “How would this look in an accident report?”. That can keep you on the straight and narrow, and out of obvious, well-known mistakes.

So, in this context, how would you talk about this phase of the project if it later turned out to be a disaster? What were the red flags, the early warning signs, the classic blunders, the usual procedures avoided?

What would huge success look like?

This is a question I picked up from a cheesy talk some time ago. It’s not my usual style: I’m quite undramatic and practical. I think a lot of massive success is luck. But I do believe in making sure you’re keeping the door open to runaway success should it show up, and not closing it off as a possibility so it never does. This often leads to practical decisions like making things easily scalable, being open, and so on.

B.R.A.I.N.

What are the Benefits, the Risks, the Alternatives, your Intuition, and what would happen if you do Nothing?

This checklist has already covered the benefits, but the risks and the alternatives need to be explored, as does the do-nothing option. This should also cover the opportunity cost of taking this project on. If I didn’t do this, what would I be doing instead?

And I always need a reminder to check what my intuition says. What does your gut say? What does your heart say? I am very much a brain sort of person, but instincts arise for a reason, and it’s worth paying attention if my analytical brain is saying this is a great idea but my emotional brain is reacting like it’s a terrible one.

(BRAIN is an acronym/method I have shamelessly stolen from decision-making around childbirth. It’s fair to say I have more experience of bringing new projects in to the world than new people, but I have found this exercise to be a useful one when faced with any major decision.)

What about personal information?

This is the GDPR question. What personal information will be generated, used, and managed in the project? And what needs to be done about that? This can be a very big question, and can roam well beyond a quick checklist, but it needs addressing on pretty much any project.

Luckily for me, this is one of my interests, so it’s not too hard for me to do. If you don’t have that background, it’s worth getting some advice if you’re not sure.

What about intellectual property?

The main IP in my projects is copyright. Almost everything I produce in the project – writing, code, interfaces, graphics, diagrams – will have associated intellectual property rights. What is going to happen to them? The client is paying for me to produce them, but what scope will there be for me and them to use them later?

I am a big fan of free and open source software and of Creative Commons licensing. As an idealistic youth, my first enthusiasm for them was about the value of increasing access to things. But as my experience has grown, my main enthusiasm now is about the immense value of an open license in making sure that everyone involved will be able to build on their previous projects in the future.

A project brings people together. If the products of that project are available only under a closed license, it can be difficult, verging on impossible, to get the necessary paperwork together to prove that it is OK to build on those products to make something even better. But if it was licensed openly, there’s no such problem: the license says anyone can build on them – and anyone includes the original contributors!

However, I’m a pragmatist and I’m very much of the view that not all projects are suitable for release under an open license. If this project is one of those, how are we going to manage the IP generated?

As well as the stuff generated by the project, it’s worth thinking through what is happening with pre-existing intellectual property: the stuff that I am bringing in, and stuff that others are bringing in. Do we need an explicit agreement about that?

In my line of work it’s less common for patents, designs, and trade marks to be involved than copyright, but it’s worth thinking through whether anything in that area is going to come up, and dealing with it up front.

Again, this can be a complex area, but luckily for me it is one of my particular interests. This does vary considerably between jurisdictions – for instance, I know the US has work-for-hire laws that set a very different context.

What will I learn?

One of the things I enjoy most is learning about an entirely new-to-me area of human endeavour, so if there’s an opportunity to do that and get paid for it, I’m going to be very keen.

But even when it’s well within areas I’ve worked in before, there’s almost always the opportunity to learn something new, pick up a new tool, get better at a particular task, or something in that line.

My hope is that using this checklist will help increase the chance of learning positively from a project, and decrease the chance of it being the old joke of “another bloody learning opportunity”.


This work by Doug Clow is copyright but licensed under a Creative Commons BY Licence.
No further permission is needed to reuse or remix it (with attribution), but it’s nice to be notified if you do use it.

Consultancy

As I said I would, I have left the Open University, and I’m working as a self-employed consultant. I’ve had several interesting proposals and discussions, some of which turned out to be too interesting to turn down.

One of the really liberating things is the wide range of possibilities for interesting work that are coming up. I’ve always had a broad range of interests, and really enjoy finding out new things and understanding new organisations, processes, and systems. Which is very handy for consultancy work where that’s often the first step!


I’m also very much enjoying escaping large organisational bureaucracy, although of course for large clients I still have some of that. Best of all, for me, is the sense of proper responsibility for finances. I’ve plenty of experience in budgeting and monitoring spend on large projects, but it’s always been in a constrained framework and with other people’s money. Now, if I think an expense is justified, I can simply spend the money. If I need to travel (and the contract doesn’t have expenses separately, which is how I prefer it), I can just book the travel and accommodation immediately, instead of having to book via a travel agent I’m not allowed to talk to directly, who insists, via the intermediary, that the flight I want doesn’t exist, until I send a screenshot of Expedia back, and even then says it’ll be an extra £100 on top. If it’s feasible to travel by train rather than by plane to save carbon emissions, I can do that, even if the train journey works out more expensive. Even better, if I find a way to save money, it doesn’t become a potentially problematic underspend, it goes straight in my pocket (after the tax people have their cut).

I would like to get back to some regular blogging on here, but obviously paid work gets priority.

I do have a little limited capacity at the moment, with more in the new year, so if you’re interested in engaging me for a consultancy, you can read a bit more about what I can do on Hire Doug Clow, or simply get in touch.

New directions

I’m looking for a new career, building on my skills and experience in working with people, technology, data science, and artificial intelligence. At the moment, I’m wide open to ideas for what that might be, and in what domain. As most people who know me may be aware, I have very broad interests (everything is interesting!), which makes narrowing things down a challenge. But a good one!

This means that I’ll be leaving the Open University at the end of May, after 20 years. My plan is to leave academia too, after an even longer time.

[Image: Arthur’s Seat, Edinburgh, viewed from Pollock Halls. Photo (CC)-BY-SA David Monniaux]

Although it’s been nominally the same job, more or less, for two decades, it’s not felt like it. I’ve been lucky enough to do new and different things all the way through. I’ve worked on projects from tiny ones in the margins of my own time to huge multi-million pound international collaborations, and all sorts of things in between. I’ve worked in education, languages, health and social care, business, and all sorts of science. I’ve worked with so many wonderful people, including developers, statisticians, analysts, senior managers, technologists, project managers, event organisers, support staff, media specialists, editors, architects, building contractors, accountants, policymakers, librarians, and all sorts of researchers, scholars, teachers and learners. It’s been fantastic to learn from all of them.

I may write more later about the OU, but I will always have a special place in my heart for its mission, and for its amazing staff and students. I’m very sorry to leave all that – but it’s the right time for me to change direction. So when a voluntary severance scheme was announced, I leapt at the chance. Things have been quite rough for the OU over the last few years, but I do think things are starting to improve, and the future looks bright. I wish the Open University all the very best as I head off in a different direction.

Here’s to a new start!

CALRG40: Final session

Liveblog notes from the final session at CALRG’s 40th anniversary event, 19 October 2018.


Visions and Reflections

Tim O’Shea and Mike Sharples

After a contribution by videolink from Mike Sharples in Shanghai, Tim launched in to a series of entertaining anecdotes from the early days of CALRG and IET. Sadly I missed most of the start of this part.

The group has the MIT demo culture. Even if you ain’t got a theory, build something that works and see it. Build it and test it.

The group applied the course team model in to research. Many projects with 4, 5, 6 people in them. That was imported from the course team model, an incredibly healthy thing. That makes it such high quality.

40 years ago we were outlaws. A previous VC called me a flim-flam artist. And others tried to shut us down. Now there’s a message from the VC, and a PVC in the front row nodding. We were hiding in huts trying to make sure the university didn’t remember we existed and had lots of money.

The Open University is too small for its claims and potential. In John Daniel’s first book, there were 10 as of his writing, all of which are a bit bigger, up to the Chinese TV universities. It’s too wee. It needs to scale, to bootstrap.

I was invited to Georgia Tech to a symposium on affordable learning at scale. It has an online masters in CS, 8,000 enrolled students, approaching 10,000. Illinois online MBA with a few thousand, Michigan, Arizona State, a lot of activity. The ladder from MOOCs to micromasters to masters. At Edinburgh, many MOOCs, not at scale yet. Look at what Harvard are doing; Arizona State has set up a global freshman academy. Online first year taking people who wouldn’t get in to an American university. Large numbers. At this symposium, it has to be a four-figure number. They’re inventing the OU course team. Bright-faced people talking about bringing in a range of people, doing some QA, it made our courses better. It’s being reinvented to make online affordable learning. Target is generally a quarter of the regular fee. That wants looking at.

I really think you should look at computer games. They are at scale.

You should really look at teacher bots. Sian Bayne has done wonderful work on teacher bots, programs responsible for different parts of teaching. Edinburgh bots are frank and open and you can tell they’re not people. Nanotutors, Jill Watson. MSc in CS. Two of the tutors were not humans. They were programs. Georgia Tech, when there are FAQs or easy questions, the nanobot answers. If you’re a human, you pause to see if the nanobot will answer it for you. E.g. what is the pass grade, what is the best reading. Jill Watson was not only not identified. The students were told two were robots, but didn’t spot her. But one of the human tutors not only didn’t spot she was a robot but asked her out for a date. [argh!]

Learning analytics is very important. I take pride in having worked in the university that educated Thomas Bayes. LA would not work without Bayes. He was wanting to improve his performance at gambling, despite being a protestant minister. Once you have cohorts of thousands, you can apply Bayes’ rule in a way you can’t with 50.

If, ten years from now, the OU is 10x the size, then you will have been successful.

Richard Joiner – At the start, you said we were all rebels. Do you think the CAL group needs to be more rebellious?

Tim – It makes me nervous to get a message from the Vice Chancellor. The CAL group ought to be making the senior management uncomfortable. There should be an element of public bolshiness. A formative experience was being taken to meet the maths faculty at the OU. Like the gulag. A big man with a beard, said young man, what do you think your purpose is? I said, to improve the quality of maths learning. He raved that I was impudent, OU had the best set of maths teachers on the planet, there was no way that I could improve it. The existence of the group is pointing out it’s not perfect.

Mike – I want to ask Tim a question.

Tim – I recruited you twice, sunshine, so be careful!

Mike – I was at RMIT, where our former VC is now. Martin Bean was, like Tim, saying we need to scale up universities. If there are some innovative tech-aware VCs, why aren’t they getting together? Why not more collaboration and coordination around the world?

Tim – UUK, the British VCs, a friend hated being there with men in suits telling each other lies. VCs are paid to boast for the universities. It’s incredibly hard for a VC to collaborate with another, it really doesn’t go with the job description.

Eileen – Thanks Tim and Mike for their interesting and rebellious comments.

Keynote – Neil Mercer

Should digital technology ‘transform pedagogy’?

Neil is at Oracy Cambridge, Hughes Hall, University of Cambridge.

I’ve worked at the OU more than anywhere else. I didn’t mean to leave, someone made me an offer I couldn’t refuse. I’m glad to be back here. Hearing things that’ll help the university in to a better future. We all wish it well.

One strand of CAL research here is different to what’s been mainly talked about: the use of digital technology in classrooms. That’s what I’m going to talk about. You’ll get a different perspective.

I’m getting imposter syndrome, since I’m not a techie, I worked with people who were much more technologically adept than I am. Early work with Rupert Wegerif, Lyn Dawes, both had PhDs on computers in schools. I didn’t. In fact, when I completed my PhD at Manchester, there was one computer for the whole university and I had to book time on it in the middle of the night to do my statistics. I feel like the ordinary person on the team.

News story – ‘New technology to spearhead classroom revolution’, with the Education Secretary talking about the tech industry launching it. Responses from teachers: is it time to set up BECTA? A wee reminder that teachers in other jurisdictions have a manageable workload through fewer working hours spent in the classroom teaching. It felt like deja vu. When will they ever learn? Go back to the Secretary of State for Education in 2004: tech has already revolutionised learning. Earlier still, it will revolutionise schools and teaching. Cuban (1986): teachers have acquired a reputation as Luddites. This notion recurs; there’s wonderful technology offered to teachers who want to smash the looms. It’s a seriously misguided point of view.

What has research told us? Tech-led mode of introduction is very likely to create problems. Little research on how, if at all, they influence […] educational goals. (Electronic whiteboards example.)

We ought to be humble about what computers really can do, what they’re good at, and what they’re not good at. Excellent film he saw last night, mentioned Kurt Vonnegut: “a computer teaches a child what a computer can do. An educated human being teaches a child what a child can become”.

Schools and classrooms are different from reaching people through e.g. distance learning.

What should digital technology do for classroom education? Just give them a load of gear and it’ll revolutionise what they do. Should it? We’ve got excellent teachers, really good teachers, whose practice is not faultless, but is good. Should the tech help them do it more easily and effectively? Technology is a tool, we invent them to help us get jobs done more effectively.

Some software designers say, this software’s great, it’s got good activities in it. Software doesn’t have activities. What you’ve got in a computer is resources for designing an actual activity. You’ve got the basis, but the activity happens in the intersection of learner, learner, teacher and computer – not in the software.

Research in classroom-based education ought to address the affordances of any tech, teachers’ pedagogical practices, pupils’ learning requirements, what we know about effective classroom teaching. Often tech offered – especially hardware – that was second-hand, designed for others primarily. If teachers are good at teaching, tech should enable them to do it more efficiently, not change it. Should be focused on the learning requirements, not the content you can give them. We know a lot more about effective classroom teaching. Have just finished the largest ever classroom pedagogy study in Britain, looking at frequency of teacher activities and outcomes. We know a lot more about this, that should be the basis of designing electronic things for schools.

There are some generalities from those studies. One of them, the Education Endowment Foundation’s summary of useful things to know, one says – individualised learning [with technology] may not be as helpful as small group learning with technology or collaborative use of technology.

Here are some of my projects that have tried to exemplify this approach. Part of the CALRG ethos is researching with teachers, collaboratively. What do the best teachers do? How can the technology help them to do it? Bart mentioned this – we want to know what the best teachers do.

Showed a photo of an early project, in a school in Wolverton, old style large grey box computers.

Project with Sara Hennessy, Paul Warwick – Using interactive whiteboards (IWBs) to orchestrate classroom dialogue. Vygotskian approach to dialogue. We know that the ways teachers talk to students make a difference to learning outcomes, and how they talk back – it’s the whole dialogue that counts.

Example 1 – Using the IWB to link lessons. Give a sense of going somewhere, so they can perceive the journey. Teacher uses pictures of previous lesson as a resource. [Photo of teacher, showing classic current British classroom – whiteboard with projector on it, and walls absolutely covered with brightly coloured things – a very visually noisy environment.] We didn’t find negative features of this use.

Example 2 – Using block reveal – show a blank on the screen, e.g. three words blanked in a poem. Can encourage thinking, structure teaching, and maintain pace. Once the teacher’s designed it, it’s harder to change the structure of the lesson, content is less flexible, and it tends to be used to set up ‘closed questions’. Good teachers also ask open questions.

IWB enables teachers to provide engaging whole-class learning experience, integrate diverse resources, save and revisit resource to improve continuity and coherence – all more easily than without an IWB. [Interestingly, he says ‘IWB’ as the abbreviation – which is five syllables compared to six for saying ‘interactive whiteboard’.] But teaching is not ‘transformed’ by the IWB, it can dominate events or disrupt when it fails. The effective use of the IWB depends not only on tech skill, but also on teachers’ teaching skill.

Second project, with Ingvill Rasmussen and Paul Warwick: Digitalised Dialogues Across the Curriculum (DiDiAC). Based on the ‘Thinking Together’ approach. A browser-based microblogging platform, Talkwall. Small number of schools in Norway and the UK.

Asked teachers about group work, teachers asked students to consider purpose and value of talk for learning, class agreed ground rules for Exploratory Talk (this is known to be really good for getting them to talk productively), used Talkwall to record ideas and decisions. (Important to get them to agree what they’re saying and write it down, and not just talk.) Consensual social norms governed use, not just features of the tech.

Can share this between schools. The collective thoughts of the students become a common resource, the teacher can build on them, the students can refer to them. It’s helping teacher to improve the quality of groupwork.

Project not finished then. It does help. It’s not flash or innovative, but it helps a more group-focused collaborative learning environment. It improves the variety and quality of questioning and talk by the teacher and children.

General conclusions

Not controversial to you, but might be to a Secretary of State for Education.

The transformational, tech-led approach is misguided and dysfunctional. Digital tech should not be designed to replace teachers, but to assist them – good teachers doing their job well. Need to assess this using an educational, not technological, perspective. It’s a toolkit for pursuing educational goals, not determining them. It’s particularly good for supporting collaborative learning. We need to assess the educational strengths and weaknesses of any new tech – especially when it arrives as 2nd-hand kit primarily designed for other users. If developed from this perspective, it has so much to offer. It’s great. That’s what we should be aiming for.

Questions

Stylianos – The context of this research: compulsory education, where a characteristic is the development of a community. Students spend a lot of time with the teacher in the same environment, which facilitates these activities. Are there any lessons we can learn from your research for people who teach in HE, where the engagement is not at the same level? There’s a speed-dating type contact between tutors and students.

Neil – Not sure I can give a clever answer. I agree HE is a different context. In achieving the balance between authoritative presentations like this one, and true dialogue, it’s difficult. At Cambridge, we have supervisions – it’s not cheap but it does it.

Kim – Talked about using good teaching to design tech, but what about the converse, what teachers find difficult or can’t do?

Neil – They’re aware of the richness of multimedia resources. Most of them aren’t natural techies. They want things that’ll make that more possible. There are ways that can be better. Kids are missing out if they’re not making the most of those resources. Other things, like citizen science, they find it hard to coordinate children. The one Mike Sharples did with bird feeders, the technology enabled coordination of individual records across several schools, it was great with primary kids. They’d have found that impossible. It’s a good way to look at it.

Patrick – Back to the first question. I was looking at the list of general conclusions, it applies to what we’re trying to do in HE. I feel that a lot of the research we do, it’s a challenge we face when we work with schools, but it is transferable. Is there something we do in HE that has no lessons, that’s so different, or should we be looking at this across all of the contexts?

Neil – It should be research with potential to apply across contexts. But if you ignore the particularity of the contexts you might go wrong. The aim of a primary teacher isn’t to reach beyond the 26 students they have. That’s going to affect what works. The scaling Tim mentioned, I can see why that’s beneficial for this institution and the students, but that isn’t the issue here. You don’t want to scale up a primary classroom to 100 kids. You do want to scale up to help 100 teachers see how they can do better. It depends what your question is.

Patrick – The development goals do go across all education, not just HE. The pedagogies are around sharing experiences, broadening beyond the classroom, even though you start there. You’re underplaying your research.

Tim – To reinforce and comment on the second-hand kit: I had the experience of visiting a large computer corporation about educational strategy. They had a sequence of ‘this would work well for schools’ – things designed for a commercial context, with no pedagogical intent at all. The problem education has: it’s a small, underfunded market. If you compare education to the world finance industry, it’s very small. Cyclops was ahead of its time, the BBC Micro, Cicero – why are they no longer extant? They never reached the numbers the IBM PC or MS-DOS did. There isn’t an economic structure around to sustain them. If you buy a spreadsheet it’s going to be Excel, not one designed for learning. A problem the civil servants of education ministers have is not realising that education is small beer compared to defence or finance.

Neil – That’s plausible, except you see Pearson is phenomenally wealthy, and Cambridge University Press pays for half of what Cambridge does. There’s money somewhere in it. How come the books sell?

Tim – Books are easier, you just print ’em and flog ’em.

Andrew – Falling in to the trap of simplifying the problem. The issue with scaling is context. Learning occurs differently in different contexts. We can’t avoid the fact that context defines. Steve Draper’s “Niche-based success in CAL” – tech adopted when it solves a problem. Need to be more sophisticated. Scalability doesn’t happen because learning contexts are different – e.g. VLEs. I’ve been paying more attention to understanding the learning problem before we think about the solution. Learning is a complicated thing, we can’t just wade in with tech without disciplined analysis of the problem.

Neil – I agree entirely.

Wayne Holmes – I like your final conclusions, particularly number 2 (tech designed to assist teachers, not replace). Any examples where the attempt to introduce tech shows up problems?

Neil – Sara?

Sara – The whiteboards is one example. We weren’t unaware of the pedagogical issues. But when available worldwide, clear they’re not being used effectively, despite every single classroom having one. It takes a lot longer to address the problems [than to reveal them]

Bart – I’m not in the habit of quoting David Willetts, but he said one problem is most educationalists are amateurs, trying to play round the fiddle, using a very basic approach to the black box of learning. Do we need to fundamentally upskill the methodologies to show the complexities of learning?

Neil – No. I think it’s politicians who don’t understand learning. [laughter] We know a lot more about the process of teaching and learning.

Richard Joiner – Whiteboards, teachers not using them interactively, you need to change the pedagogy that teachers use?

Neil – What we do is find out what really good teachers do and distil it down to its essence. Some of the teachers said they did do things, others hadn’t thought about it. I’ve such respect for teachers, I don’t want to become a Gove knocking teachers. There are some who are not so good, need to be trained to do what we know is a good thing.

Sara – Introducing any tech, has to be preceded by support and professional development for the teachers. We’ve known this for decades. The whiteboard initiative was an example, the digital microscopes in secondary schools, they were just left in cupboards. We have lots of examples.


Poster competition winners

The poster competition winners were announced by Liz Fitzgerald, Wayne Holmes, and Doug Clow. I haven’t written them down here so as not to spoil the result.


This work by Doug Clow is copyright but licensed under a Creative Commons BY Licence.
No further permission is needed to reuse or remix it (with attribution), but it’s nice to be notified if you do use it.