Doug’s Data Doctrines for Project Leaders

These are some of the principles I’ve drawn from more than 20 years’ experience in what we now call data science: using data to understand and improve human systems. They’re more guidelines than rules, particularly the ones rating one thing above another.

I have two lists: this one for project leaders and managers, and another for project sponsors. This one is focused on issues around how you deliver a project and what sort of a project it is; the project sponsor one is more about how you frame and resource a project. Obviously, they overlap and have great synergy.

Always good to ensure your project delivery measures up. Image by Steve Buissinne from Pixabay

Doug’s Data Doctrines for Project Sponsors

These are some of the principles I’ve drawn from more than 20 years’ experience in what we now call data science: using data to understand and improve human systems. They’re more guidelines than rules, particularly the ones rating one thing above another.

I’ve split this into two lists. This one is aimed at project sponsors, though it should also be useful for anyone who has to interact with project sponsors. I have another for project leaders. The idea is that this one covers more of what you need to bear in mind when setting up and resourcing a project and connecting it to the rest of the organisation, and the other covers more of what you need to bear in mind when getting the project done.

How these points link together. Image by Gerd Altmann from Pixabay

Sweary parrots and social media

I can’t get this news story about sweary parrots out of my head. Lincolnshire Wildlife Park has had to separate five African Grey parrots after they started swearing too much, egging each other on to tell people to F off and then laughing.

I think these profanity-spouting birds illustrate a process that we see more widely, to a lesser degree in traditional media and to a greater degree in social media. (It’s also closely related to some recent steps forward in AI, but I’ll leave that mostly aside in this post.)

Many parrots are good mimics, and African Greys are particularly good at it. And they have a widely-known tendency to get potty-mouthed and say completely outrageous things.

How come? Why would a bird do that? They are highly intelligent, for birds, but it really isn’t that they want to insult people and say awful stuff because they’re deep down horrible. We say people are “parroting” something to mean they don’t understand it and are merely mimicking the noise, and we can be pretty sure that’s the case for our actual parrots here.

I think this sophisticated awful behaviour – or perhaps more precisely, this awful behaviour that appears sophisticated – is the result of the interaction between two components: a generative one, and a selective one. (If you know about AI you’ll see where my mind is going.) The generative component is the parrot’s ability to mimic what it hears. The selective component is the parrot’s ability to pick up on social cues and choose things to mimic that reliably get a reaction and attention from people.

Having a parrot pipe up unexpectedly and drop an F bomb will, indeed, reliably get a reaction and attention from people. So our smart-enough parrot tries mimicking things it hears, and when it gets a reaction, mimics that thing more often in future. And that often means persistent cussing.
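That generative-selective loop can be written down as a toy sketch. Everything here – the phrases, the reaction probabilities, the reinforcement rule – is invented for illustration, not a model of real parrot cognition:

```python
import random

random.seed(42)

# Phrases the parrot has heard (hypothetical), with how reliably
# each one gets a reaction out of a human audience.
reaction_chance = {"hello": 0.2, "pretty bird": 0.3, "[expletive]": 0.9}

# The parrot starts with no preference: equal weight on everything it can mimic.
weights = {phrase: 1.0 for phrase in reaction_chance}

def pick(weights):
    """Generative step: mimic a phrase, more often if it's been rewarded before."""
    phrases = list(weights)
    return random.choices(phrases, [weights[p] for p in phrases])[0]

for _ in range(1000):
    phrase = pick(weights)
    # Selective step: if the phrase gets a reaction, reinforce it.
    if random.random() < reaction_chance[phrase]:
        weights[phrase] += 1.0

most_reinforced = max(weights, key=weights.get)
print(most_reinforced)
```

The high-reaction phrase comes to dominate. Note what the loop doesn’t need: the parrot never has to understand a word, only to mimic and to notice reactions.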

Grey parrots

This type of feedback loop is immensely powerful. We can see it in lots of places. Toddlers do it. Teenagers do it. And adults do it in the media.

Do we see this in traditional media? Oh yes. We have as the generative process the efforts of a whole creative media industry, and the selective process is complex and multi-layered, but what gets a reaction and attention is a large part of it. Media organisations are desperately concerned with audience. Some of what draws an audience is good, but terrible things can also be a substantial draw.

‘News values’ – what the news prioritises and highlights – have been identified as potentially problematic since the 1950s. “If it bleeds, it leads” is a very old slogan. Reading the local weekly newspaper can be a bit depressing, until we remember that this is a curated list of the most shocking, lurid, and outrageous things that have happened with any tenuous connection to our area over the last seven whole days, which gives some reassuring perspective.

It’s not just news, of course. ‘Reality’ television is clearly well down this path of being rewarded for shocking behaviour.

What about social media?

Here the generative process is a much broader pool of humans composing (mostly) short bits of media, and the selective process is what generates ‘engagement’. Media platforms are optimised to bring more of what generates engagement to users, because that keeps them on the site longer and generates more possibilities for selling adverts. Users learn the sort of thing that tends to be rewarded by the platform, and off it goes in that powerful feedback spiral.

As before, sometimes good things generate engagement, but terrible things are often more reliable in prompting a reaction.

We see this in our everyday lives online. Occasionally you get lovely, uncontroversial things where somebody has done something exceptionally creditable or particularly cute. But more often you get conflict, hostility, and outrage. And some extremely dark stuff can be amplified by the process.

Mr Zuckerberg has sometimes defended Facebook as merely holding up a mirror to humanity. It’s not a mirror, it’s a feedback process that is vastly more powerful. A nuclear chain reaction is perhaps a better analogy. It is possible to harness that power for good, but it’s extremely challenging in engineering terms, and it’s much simpler just to use it to blow stuff up, particularly if it’s someone else’s job to clean up the radioactive mess left behind.

Let’s get back to our parrots. One occasionally-sweary bird isn’t so much of a big deal. The problem at the wildlife park started when they housed these particular five parrots together. One would swear and another would laugh, and with the mimic-reaction feedback, they were soon all swearing like troopers. The BBC report that the park’s chief executive, Steve Nichols, said, “if they teach the others bad language and I end up with 250 swearing birds, I don’t know what we’ll do”.

With social media, we have the human equivalent.

This isn’t a new phenomenon, and it’s not a new observation to highlight the issues with the attention economy. But next time I see something terrible getting a lot of attention on social media, I’m going to think of a bunch of sweary parrots.

All the best for an online autumn 2020

All the very best to all involved in UK universities as we hit last-minute decision time for the new term/semester. As of 1.18am today, Thursday 10 September 2020, we now have as much guidance as we’re going to get.

This is the blog post version of a long thread I posted on Twitter earlier today. In short, I advise going all-online now, but all the best anyway – you’re great.

Go all-online now,
but all the best anyway –
you’re great.

First, I want to apologise that some of my commentary has made it seem to some that I don’t appreciate or understand the scale of the work being done, and that I’m unsympathetic to staff working on plans that differ from what I’m arguing should happen. That’s not the case. I’ve also inadvertently contributed to a sense that those working on opening physical spaces don’t care about students. That’s simply not true. People who disagree with me on this are driven by a deep concern for students. And they’re working bloody hard on good stuff.

So although I may disagree on some tactics, I want to strongly support and applaud the enormous efforts going on across the sector. These decisions are not easy, and much hard work is being done by people who don’t get much say but do get much stick, and that’s unfair.

This latest guidance is obviously not the last oddly-timed intervention from the Government we’ll get, so all those systems for rapidly appraising and acting on new developments will need to be on hot standby for the foreseeable future. The Government guidance sets out ‘tiers of restriction’ in response to local outbreaks. Most universities already have a range of scenarios mapped out. One of today’s urgent jobs will probably be to map those onto these new official response ‘tiers of restriction’.

My view remains that it would be better to choose now to teach online, except for those things that can only be done in person – i.e. starting at Tier 2 or 3, not Tier 1 (default position).

Despite what some say, that is not an easy decision. It’s a difficult balance. My two main reasons for deciding now to teach online are to reduce the spread of the virus, and to reduce the workload (and cognitive load) of staff and students in keeping multiple scenarios live – which will make the online learning better.

But this is not an easy option. Not least because the Government explicitly expects most universities to open their physical campuses, even now. And students will, quite reasonably, be furious to be moving all that way and taking out accommodation contracts etc if it’s all online anyway at late notice.

For most courses in most subjects in most universities, teaching online now will not be as effective and engaging a learning experience as teaching in person. I’m ex-OU. I am utterly convinced you can do world-class university education online. But the Open University has had decades to prepare and has a completely different organisational setup and staff base. Staff across universities have worked enormously hard since the start of all this to move everything online, and the transformation in capability is truly astonishing. But realistically – with some very notable exceptions – most online provision will not be as good as in person. Yet! Although it’s important to note that Covid-secure measures mean in-person teaching is very constrained, and less good than it would’ve been in normal times.

You’re darned if you do,
but damned if you don’t.

So why on earth do I still say start with a more restrictive, online approach to university teaching? Two main reasons:

  1. take more responsibility to reduce the spread of a pandemic virus than the Government is requiring, and
  2. reduce the pressure on staff.

(1) Whether or not to take a stronger line on risk reduction than the Govt advises isn’t easy, and reasonable people can come to different views. I would give greater weight to advice from the Govt if it had been more effective in dealing with the pandemic to date. I’d also give it more weight if the advice on reopening buildings and campuses had more explicit acknowledgement of the risk to staff and the community. Universities are more than their students, and have responsibilities to their host communities.

There’s a maxim from public health on taking early, effective action: you’re darned if you do, but damned if you don’t. Effective action is unpopular and will inevitably look like too much, too soon: “there is no major disease!” If it works, it’ll appear unnecessary in hindsight. But failing to act to prevent wholly foreseeable disasters is not just unpopular, it’s massively condemned. Especially when there are avoidable deaths. SAGE says, in their report on managing transmission of the virus in universities, “Outbreaks in HE are very likely.” Physical opening is socially acceptable now, but may not be in retrospect.

(2) Going all/mostly-online now means staff can largely forget about the physical teaching scenarios and focus on the one or two that will have to be delivered. That saves effort and reduces uncertainty and worry. All of which would be hugely welcome. Running multiple possible scenarios comes at enormous cost. Everyone involved has to make multiple plans. There isn’t enough time to do everything properly, so that means the plans for each scenario are less good than if there were fewer to deal with – preferably just one.

There’s also the cost of switching from one scenario to another, in direct staff and student time in implementing the new arrangements, and also in cognitive load in rethinking how your routines work. Starting more restrictively means fewer scenario switches.

You absolutely can get better at teaching online. I would bet that online teaching will improve more rapidly if staff are focused on it than if they’re struggling along with massively complex hybrid arrangements they can barely cope with. Doing both simultaneously is really hard.

I want to salute and acknowledge the enormous, spectacular efforts being made across higher education.

But! I can see how others can come to a different view, and I may be wrong about how likely outbreaks are (I do hope so), and your university might well turn out to be one of the lucky ones despite – as Mr Gove infamously remarked – choosing to run things quite hot. And providing some in-person teaching will, at the moment, in many instances, result in a better experience for students. If you do have in-person time, do prioritise group-forming and community-building activities over curriculum delivery. You can build community online, but it’s much harder. You can teach more effectively online when there are good relationships between your learners already. Make good use of the limited time you may have. @ProfSallyBrown has some excellent ideas in this line in her post on Wonkhe.

Regardless of any difference of opinion about tactics, I want to salute and acknowledge the enormous, spectacular efforts being made across higher education. Academic staff, professional staff, and non-professional staff have all worked fantastically hard and worked wonders. Students and students’ unions have also done extraordinary things, and engaging with them is the best antidote to gloom about the current situation and the future. Even when they present with a diversity of strongly-held views. Perhaps particularly then!

Finally, keep an eye to the future: it’s hard to make headroom for long-range plans, but this won’t last forever. I’m sure that universities who make continuing good use of the expertise in online working developed in these hard times will thrive in the more distant future.

Good luck!

New service: Tutxoring!

I’m really excited to launch a new and unique service: tutxoring!

For most things you need to learn, there are many experts on the subject from whom you can learn, and a vast range of learning materials: courses, textbooks, videos, communities. For most topics, there’s an agreed curriculum, a set of things that most experts agree you need to know. But for the most difficult, the most challenging, and the most ground-breaking learning, there’s almost nothing to help you. That’s where tutxoring comes in.

The model comes from the later stages of PhD supervision, or some forms of an Oxbridge tutorial. While the supervisor or tutor usually starts with a better understanding than the student, good students will, by the end, be among the world experts in the specific area they’re working in, and the role of the supervisor or tutor becomes much more that of a guide than of a direct teacher.

This is just to whet your appetite: there are more details about what tutxoring is on my website, as well as an even longer discussion about tutxoring, how it works, and how I’m well placed to offer it.

Many thanks to everyone who provided early feedback on a sneak preview. You’ll see I’ve changed the name based on what you said. And a particular salute to the old guard who still have my blog in their RSS feed reader.

If you’re interested in tutxoring, want to find out more, or want to discuss how tutxoring can help you, do please get in touch.

And tell your friends!

More student data, but later

We need to seriously consider doing a lot less with student data right now. Stopping data logging will reduce the impact on our systems and, more importantly, on our students.

As a longstanding learning analytics researcher, I don’t say this lightly.

Computer Problems
“Waiting for Moodle to render this page is taking longer than it takes to get to the lecture theatre on the other side of campus.”

The Covid-19 coronavirus crisis is profoundly changing society, including universities. There’s been a mad dash to online teaching, and a mad dash to online assessment close behind. Those of us who’ve been enthusiasts for online learning for a long time know that this may be a huge success in some places, but that it isn’t going to go terribly well in many others. It’s easy for our eyes to light up at the thought of all that interesting data that all that online activity could generate.

But hold up. In a crisis, we need to prioritise what’s most important. Frankly, the benefit to students of most of our data gathering is not sufficient to justify it getting priority in a crisis. And our evidence of what benefit there is has improved since I wrote a rather despairing paper with Rebecca Ferguson about it, but not hugely. I do believe it’s worth pursuing. But it absolutely can wait, so it should wait.

A big turn-off

What if we just turn all that data logging off for the duration of the crisis?

We’d reduce the impact on our systems. Online learning systems are under massive strain as IT staff and suppliers struggle valiantly to deal with a completely unprecedented spike in demand. With a well-designed and well-tuned system, data logging needn’t be a huge drain on front-facing server resources. But when you’re rushing to scale up, you don’t have the time to tune it well and build a robust and separate data architecture. It will make the IT people’s lives much easier if we just drop those requirements for now. It would, at least, be one less thing for them to worry about. And it might well materially improve performance, particularly on hastily-deployed systems where there hasn’t been time to optimise them.

We’d also reduce the impact on our students. Most academics are only in the student data business to make things better for students – but there are other interests at play too. Students are quite reasonably concerned about how their data is being used at the best of times. There isn’t the time to do all the engagement around data privacy that good practice requires, and that you need to properly address understandable and quite reasonable concerns. We could just steamroller them into it. This seems to be happening a lot, and there’s even been some commentary from UK ministers about the GDPR that might be useful political cover for it. Or we could just … not do that, and give them a break. Deal with their worries about data privacy by sharply reducing the amount of data we collect. I think, given all that this cohort is putting up with, and is going to have to put up with in the near future, they badly need any break we can give them.

What can’t wait?

There will be some exceptions. Obviously, where you have a cognitive tutor setup, it would be nonsense to turn off the logging – and, not coincidentally, that’s where we have the best evidence of direct student benefit.

More widely, I’d argue for saving the last login information for each student so their tutor can see who’s been able to access the system and who hasn’t. I can’t instantly think of good papers showing this, but my strong hunch from practical experience with predictive modelling is that a huge chunk of the benefit that can come from such systems is increasing awareness among tutors of which of their students haven’t been able to study for a while. We can do that directly with a lot less impact on students and servers.

And obviously, some data has to be recorded to operate an online learning system at all.

More later

For the avoidance of doubt, I am not for one minute arguing that learning analytics should close down and give up. I do still believe that there is huge potential from using students’ data to improve their learning, and that there’s more to be gained in future than has been done so far. I am arguing that we should be humble about what we can offer and prioritise the benefit to students. That is, after all, the whole point of learning analytics.

Learning analytics researchers and practitioners have never been in more demand in their organisations. We understand the practicalities of online learning in ways our more traditional colleagues don’t. It’s not like we’d be short of stuff to do if we spent the next months prioritising support for them and for students over our data-gathering projects.

We should do a lot more with student learning data … and we should do it later, when all this is over.

COVID-19 coronavirus and data

How does COVID-19 coronavirus illustrate timeless truths about data?

This is a version of two Twitter threads: one on how it illustrates timeless truths about data, and the other on authoritative information about the outbreak.

NB I do have some health and medical background but I am a data and learning professional, not a clinician, epidemiologist, or public health person.

Canarian Raven
“The virus is called COVID-19. No R. Call it CORVID one more time and I’ll peck your eyes out.”

Data science, BI and in fact any statistics all start with simple counting. That first bit is surprisingly hard, and getting it right is often most of the work. The current COVID-19 coronavirus outbreak shows this up nicely.

You can make as sophisticated a model as you like for an outbreak, to estimate things like the basic reproduction number (R_0, how many new cases each case is expected to cause, on average, which tells you how far it’s spreading), or the case fatality rate (the proportion of people who catch it who die of it), or the likely extent of the outbreak (how many people might catch it), and all sorts of stuff about how fast this is all happening or likely to happen.

But all that crucially depends on simple counting: how many have it at a given point, how many have died, etc. With the latest figures, we see how hard that is. The number of new cases in China jumped from 2,000 on Weds to 15,500 on Thurs – because the counting methodology was changed.

The numbers have changed retrospectively, too – I copied those numbers down yesterday, but today it looks like it was an increase of 400 on Weds and 15,100 on Thurs. This would be hard to get right even if there wasn’t a massive health crisis there.
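A quick sketch of why the counting rule matters so much downstream. The numbers below are made up for illustration, not real outbreak figures: the same death count gives a noticeably different naive case fatality rate depending on which rule defines ‘a case’.

```python
# Illustrative numbers only -- not real figures from the outbreak.
deaths = 1_400
lab_confirmed_cases = 48_000
clinically_diagnosed_cases = 15_000  # added when the counting methodology changed

# Naive case fatality rate under the old counting rule...
cfr_old = deaths / lab_confirmed_cases

# ...and under the new one. Same outbreak, same deaths, different denominator.
cfr_new = deaths / (lab_confirmed_cases + clinically_diagnosed_cases)

print(f"{cfr_old:.1%} vs {cfr_new:.1%}")  # prints "2.9% vs 2.2%"
```

Nothing about the disease changed between those two lines; only the counting did. Any model built on top inherits that shift.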

Counting infections is always tricky, but surely deaths are easier? Turns out there are surprisingly difficult edge cases at the edge of life, but those don’t come up often, and almost everyone agrees about most deaths.

But even then, you’re probably getting data from multiple sources and combining them and that can lead to problems. Like today’s news that 108 deaths have been removed from the figures because they were double-counted.

It’s easy as a data scientist to say we need to invest in better data, and sometimes that’s right. But getting good basic counting data is hard, and expensive, and cannot be the absolute priority. The data you’re dealing with will always be messy to some degree.

Speaking of degrees, this crops up in education and learning. ‘How many learners do we have right now?’ is the basic question that is the denominator for pretty much any learning or teaching metric you care about.

And that is surprisingly hard to answer sometimes. There are late registrations, retrospective registrations, de-registrations, retrospective de-registrations, provisional versions of all those, and that’s just dealing with individuals.

When you have organisations buying in learning, it gets even worse: how many are provisionally ordered, how many are finally ordered, how many are catered for, how many show up, and how many are invoiced for are all different, and not the same as how many learned anything.
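To make that concrete, here’s a sketch with a hypothetical enrolment log – the names, dates, and event model are all invented – showing three defensible, different answers to ‘how many learners do we have?’:

```python
from datetime import date

# A hypothetical enrolment log: (student, event, effective date).
events = [
    ("ana",  "register",   date(2020, 9, 1)),
    ("ben",  "register",   date(2020, 9, 1)),
    ("cora", "register",   date(2020, 9, 20)),  # late registration
    ("ben",  "deregister", date(2020, 9, 15)),
    ("dev",  "register",   date(2020, 8, 25)),
    ("dev",  "deregister", date(2020, 10, 5)),  # later de-registration
]

def enrolled_on(events, when):
    """Students whose latest event on or before `when` is a registration."""
    latest = {}
    for student, event, effective in sorted(events, key=lambda e: e[2]):
        if effective <= when:
            latest[student] = event
    return {s for s, e in latest.items() if e == "register"}

# Three perfectly defensible answers to "how many learners do we have?"
ever_registered = {s for s, e, _ in events if e == "register"}
on_census_day = enrolled_on(events, date(2020, 9, 10))
right_now = enrolled_on(events, date(2020, 10, 10))

print(len(ever_registered), len(on_census_day), len(right_now))  # 4 3 2
```

Three different headcounts from one small log, and every one of them is ‘correct’ for some purpose. The denominator you choose quietly decides what every metric built on it means.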

Speaking of invoices, cash at least should be easy to count? It should be clear when a customer paid us, right? Oh, my sweet summer child. That sound you hear is the entire accounting profession sniggering.

Suffice to say that the same payment can legitimately have different dates for cashflow, annual accounting, VAT, other taxes, and who knows what other purposes. Organisations are incentivised to manipulate this data, and most organisations respond to incentives.
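As a small illustration of that – the payment, the dates, and the field names below are all hypothetical – the same payment can sit inside or outside a reporting period depending on which date you count by:

```python
from datetime import date

# One hypothetical payment, with the different dates various reports
# could legitimately attach to it.
payment = {
    "amount": 1200.00,
    "invoice_date": date(2020, 1, 15),   # when the work was billed
    "tax_point": date(2020, 1, 15),      # date used for the VAT return
    "accrual_date": date(2019, 12, 31),  # period the revenue belongs to
    "cash_date": date(2020, 2, 28),      # when the money actually arrived
}

def falls_in_january_2020(d):
    return d.year == 2020 and d.month == 1

# "Did we get paid in January?" depends entirely on which date you count by.
answers = {key: falls_in_january_2020(d)
           for key, d in payment.items() if key != "amount"}
print(answers)
```

One payment, four dates, and the answer to a simple yes/no question flips depending on which one your report uses. Multiply that across every transaction and you see why ‘how much money came in?’ is not one number.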

Summarising a wide ramble: Even the simplest of data, like ‘How many people have COVID-19?’ can be surprisingly hard to get authoritatively. Getting better data is rarely a business imperative. Be cautious about interpreting your advanced statistical models.

Also, be kind to people who are working hard to do really difficult jobs in really difficult circumstances. And don’t make it harder by spreading misinformation.

Check with authoritative sources before passing on information. It does not help to spread stuff you think might be dodgy or far-fetched ‘just in case’. Most people who pass on misinformation don’t mean to cause problems. Check it’s right first. You can help protect your friends and colleagues from this hazard.

So, having said that, how do you know what’s right? Check with authoritative sources, like Wikipedia always tells you. Wikipedia has excellent info, which is updated rapidly as the situation changes:

In the UK the risk of infection is very low, as in most places outside Hubei.

In short: If you think you may have the virus, stay where you are and call 111.

Outside the UK, there are authoritative sources like the US CDC and the WHO.

For hard research info, there’s stuff on the WHO site (currently under ‘technical information’ and ‘global research’) and many publishers have made research freely available, and some have free-access portals on the topic, e.g.

If you like statistics and numbers, here’s some good aggregation from Johns Hopkins University, with a few visualisations and a link to a well-maintained GitHub CSV.  

This data is pretty good, but don’t treat it – or any data! – as representing the objective truth.

This work by Doug Clow is copyright but licensed under a Creative Commons BY Licence.
No further permission is needed to reuse or remix it (with attribution), but it’s nice to be notified if you do use it.

Project start checklist

What do you need to have thought through before you start a project?

In a large organisation, like the one I left last year, there’s often a lot of heavyweight project management overhead. The downside is filling out lots of paperwork, but the upside is that it does make sure that the project is in good shape at the start. If it’s good project management paperwork, of course.

Working in smaller contexts lets you be much more agile and swift. Now I’m an independent consultant, I can start a project the moment I think it’s a good idea.

But how do I know it’s a good idea, and how do I know it needs more work, or is better not done at all? Obviously, using my skill and judgement developed over many years of working on projects. And that tells me that if it’s more than a teeny tiny bit of work, it’s well worth spending some time up front, to systematically work through what needs to be in place at that stage.

Some people would be happy to simply do that in their head and be done. But I am a huge fan of checklists. This post isn’t about the wonders of checklists. Suffice to say, the more experience I get, the more I think they are wonderful. They help make sure that the obvious things actually do get done.

So I have found myself wanting a project start checklist: a list of things that need to be thought through before I start a new project. It’s here, at the bottom of this post.

The checklist is deliberately skewed towards freelancing and consultancy, rather than internal projects inside large organisations that already have more project management overhead than they perhaps need. So, for instance, it doesn’t have any of the “set up project board” and “identify project sponsor” things you’ll find on other project start checklists. If you need to do those things, you (should!) already have a formal project management process. This checklist is for when you don’t have that. It would also work for small-scale guerrilla projects inside a large organisation that don’t need official sign-off. It assumes the project is highly custom and not a standard one, which covers pretty much everything I do, but it is way more than you’d need if you’re a freelancer doing well-established work – like, say, you’re a graphic designer and simply doing another logo for a new client. (Although even then some of the elements will be useful.)

The way I use it is to go down the list, systematically, and make sure I have a good answer to each question. To work the checklist magic, it is important to do them in order and not to skip any of them. (Obviously, if you don’t think an item belongs on the checklist, ever, delete it and never worry about it again.) If I don’t have a good answer to a question, a good argument for why it doesn’t matter in this context will also do. If I can’t answer a question and have no good excuse, that’s a prompt to find out more before I start. Not knowing can be a red flag that this project is ill-conceived. Sometimes it’s the right decision to start a project before it’s all tied up and thought through. But better to do so having that potential risk in mind, and even better to mitigate that risk before it happens.

On to the list itself!


Is this project worth doing?

Ideally, it’s worth doing because I believe it will make the world a better place.

The underlying idea here is to avoid doing things that would be better not done. I like Peter Drucker’s distinction between “doing the right thing” and “doing things right”. Most of project planning and delivery is focused on “doing things right”. It is all too easy – and indeed necessary a lot of the time – to forget about the big picture, get your head down, and get it done and delivered. But the time before a project starts is a golden opportunity to pause and ask the big questions, like “What’s the point of this? Why bother?”.

Even if you think the project’s a terrible idea, it may be worth doing for other reasons. Not everyone can afford to be picky about what work they do, in which case, “it is worth doing because it will pay me money I need and I don’t have a better alternative” can absolutely make it a yes here. But if you’re doing that, it’s good to be aware that you’re in it for the cash (or the exposure, or the experience) not for the project itself. That way you’ll be in a better position to work to get more of what you want and minimise what you don’t.

Who actually wants this project to happen? Who directly benefits?

This is related to the first question, but different. I draw a distinction between someone who actually wants a project to happen because they themselves want the results, and someone who wants a project to happen so they can say that something like it has been done, for presentational or organisational reasons. So, for example, an organisation might want to be seen to be doing something about an issue, but they don’t actually care about it. They set up a little project, perhaps engaging an external consultant, so they can point to that as having done something, but they are not engaged in the issue. That doesn’t make it a no, but it is a different context to one where there are actual direct beneficiaries.

Fundamentally, it is often the case that people who are intent on doing something for someone else’s benefit have a different view of what would constitute a benefit than the person themselves does. I might sincerely want to do good for you, but you will usually have better insight into what is good for you than I do.

This isn’t to say that I’m against building new things that people don’t yet know they want: far from it. (Think of the apocryphal story of Henry Ford saying that if he’d asked people what they wanted, they’d have said better horses.) But you need to know when you’re doing that, and how you will know that you have given them something that they do want.

Who are the other stakeholders and what should happen to them?

For many British people of a certain age, banging on about stakeholders brings to mind Tony Blair and the 1990s, or frustrating project management paperwork, or both. The term can be a little overworked, but I do think it’s worth thinking through who else will care, or ought to care, about a project you’re involved in, and what you’re going to do to them. Or with them. With them, totally.

Where is the value generated? Where is the money coming from?

This is again related, but may give different answers. When working in the commercial sector, it’s vital to understand where the business makes its money and how the project will impact on that. For public- and third-sector work, it’s vital to understand how the organisation is funded and for what purposes, and how the project will impact on that. Grant funders – whether that’s the European Union, a research council, a foundation or a charity – will have a set of conditions on their funding, but there will also be a more implicit set of ideas about how projects they fund ought to work.

Will I get paid, and when?

This is, of course, the freelancer’s main question.

The obvious thing is to make sure the mechanics are in place. Is there an agreement to pay me? Does there need to be a purchase order, and if so, has it been raised? Have we agreed the invoicing pattern, and do I know who is responsible for paying the invoices? I don’t need much paperwork: I log every potential client, and I log every project. But the client and/or funder may have paperwork or processes to follow before the project can start. I can start work on my own initiative, but that may raise the risk that the project never officially happens, and I don’t get paid.

Underlying the mechanics (or overlaying them?!), there’s the question of whether they will pay when the invoices come in. What’s their cashflow situation? What’s their payment track record? This can crop up at all levels. A small company may have cashflow problems and have to defer paying you. A large company may have an imperative to juice this quarter’s financials and defer paying you. A university may defer paying because it is extremely bureaucratic and it’s nobody’s job to make sure you are paid in a timely manner.

Why do they need me?

Usually, this is because they need to get something done but don’t know how to do it, and so they’re bringing me in because I do. That will usually mean part of my role will be explaining what I’m doing and why. Sometimes it might be that they do have the skills but they don’t have the capacity, in which case there’s less need for explanation.

There are also projects with an aspect of management consultancy to them. Anything involving organisational change falls into this category, but so too does most work on training and development. Here, it’s very important to understand the political context within the organisation before starting.

Do we have a shared vision for the final outcome?

If they’re bringing me in because I have skills and knowledge they don’t, my experience will be very different to theirs, so it is almost certain that what I imagine will not be quite the same as what they imagine. We need to do the work to ensure we agree what we’ll have at the end.

I can be happy to go without this, so long as the project plan has some way of bringing our visions together – although if that’s the case, I’ll usually prefer to have a break or review after the converging-vision phase.

Does the project plan make sense?

This is the bread-and-butter work of thinking through the project and planning it, or understanding the plan if someone else has produced it, and working out what my role will be. I need to work through what I’ll be doing, and how, exploring all aspects of the project iron triangle (quality/scope, time, resource). This also includes how it will dovetail with my other commitments.

This question is where most of the planning effort goes in, but it doesn’t need extensive reminders on a checklist.

What is out of scope?

Obviously, a complete list of things out of scope of any given project is going to be pretty large. However, I do like to explicitly write down the things that one might reasonably think were included, but are not. This can be really useful to clarify with the client or funder, particularly if I can get it in to the paperwork.

What will you end up doing anyway?

Sometimes I want to do something to a certain standard of work and the client doesn’t want me to spend all that time (and/or pay me for it), so they say don’t bother with that. In many cases that’s fine. It can be an important part of making sure we’re getting best value out of the work. Not everything has to be done to world-class research standards, and outside academia, done quickly is usually more valuable than done perfectly.

However, there are some things I simply can’t shortcut. One example for me is preparing for a presentation, talk or speech. I will always put the work in to be prepared to my own standards, even if that means skipping things I really want to do or staying up absurdly late. And I have tried but failed to make something without checking what similar things other people have done already. I don’t need to do a full lit review before starting a project, but if I haven’t spent at least a few hours exploring what’s been done in the area recently … I know I will end up doing that anyway. And I am an incorrigible data nerd, so if I collect some data, whether quantitative or qualitative, I know I will spend a fair amount of time getting to know it, regardless of whether I’m being paid to.

It’s better if I know to expect this than have it bite me yet again.

What if things are harder than expected?

A bit of thought ahead of time can help a lot here, and again the iron triangle (quality/scope, time, resource) applies: What aspects of quality or scope could I cut? Where could I find extra time? How could I get extra resource? How would I communicate and renegotiate if I can’t address the issue myself?

What could go wrong?

I like to do a project pre-mortem. This post isn’t about the wonders of pre-mortems, but they are a very useful tool. The idea is to imagine that the project has already failed and work out what went wrong. It’s a cognitive flip: instead of only thinking about how the project will succeed, you assume it has failed and come up with ways that could have happened. This can be very useful for spotting things you are half-deliberately hiding from yourself because you don’t want the project to fail.

I’ve done a bit of flying in light aircraft, and like many aviators, I read a lot of air accident investigation reports. Often, when you read these reports, you can see that the bad judgement was present right at the start. So a useful question to ask when preparing to fly is “How would this look in an accident report?”. That can keep you on the straight and narrow, and out of obvious, well-known mistakes.

So, in this context, how would you talk about this phase of the project if it later turned out to be a disaster? What were the red flags, the early warning signs, the classic blunders, the usual procedures avoided?

What would huge success look like?

This is a question I picked up from a cheesy talk some time ago. It’s not my usual style: I’m quite undramatic and practical. I think a lot of massive success is luck. But I do believe in keeping the door open to runaway success should it show up, rather than closing it off as a possibility so it never does. This often leads to practical decisions like making things easily scalable, being open, and so on.

What are the Benefits, the Risks, the Alternatives, your Intuition, and what would happen if you do Nothing?

This checklist has already covered the benefits, but the risks and the alternatives need to be explored, as does the do-nothing option. This should also cover the opportunity cost of taking this project on. If I didn’t do this, what would I be doing instead?

And I always need a reminder to check what my intuition says. What does your gut say? What does your heart say? I am very much a brain sort of person, but instincts arise for a reason, and it’s worth paying attention if my analytical brain is saying this is a great idea but my emotional brain is reacting like it’s a terrible one.

(BRAIN is an acronym/method I have shamelessly stolen from decision-making around childbirth. It’s fair to say I have more experience of bringing new projects into the world than new people, but I have found this exercise to be a useful one when faced with any major decision.)
Brain Waves

What about personal information?

This is the GDPR question. What personal information will be generated, used, and managed in the project? And what needs to be done about that? This can be a very big question, and can roam well beyond a quick checklist, but it needs addressing on pretty much any project.

Luckily for me, this is one of my interests, so it’s not too hard for me to do. If you don’t have that background, it’s worth getting some advice if you’re not sure.

What about intellectual property?

The main IP in my projects is copyright. Almost everything I produce in the project – writing, code, interfaces, graphics, diagrams – will have associated intellectual property rights. What is going to happen to them? The client is paying for me to produce them, but what scope will there be for me and them to use them later?

I am a big fan of free and open source software and of Creative Commons licensing. As an idealistic youth, my first enthusiasm for them was about the value of increasing access to things. But as my experience has grown, my main enthusiasm now is about the immense value of an open license in making sure that everyone involved will be able to build on their previous projects in the future.

A project brings people together. If the products of that project are available only under a closed license, it can be difficult, if not impossible, to get the necessary paperwork together to prove that it is OK to build on those products to make something even better. But if they were licensed openly, there’s no such problem: the license says anyone can build on them – and anyone includes the original contributors!

However, I’m a pragmatist, and I’m very much of the view that not all projects are suitable for release under an open license. If a project isn’t, how are we going to manage the IP it generates?

As well as the stuff generated by the project, it’s worth thinking through what is happening with pre-existing intellectual property: the stuff that I am bringing in, and stuff that others are bringing in. Do we need an explicit agreement about that?

In my line of work, patents, designs, and trade marks come up less often than copyright, but it’s worth thinking through whether anything in that area is likely to arise and dealing with it up front.

Again, this can be a complex area, but luckily for me it is one of my particular interests. This does vary considerably between jurisdictions – for instance, I know the US has work-for-hire laws that set a very different context.

What will I learn?

One of the things I enjoy most is learning about an entirely new-to-me area of human endeavour, so if there’s an opportunity to do that and get paid for it, I’m going to be very keen.

But even when it’s well within areas I’ve worked in before, there’s almost always the opportunity to learn something new, pick up a new tool, get better at a particular task, or something in that line.

My hope is that using this checklist will help increase the chance of learning positively from a project, and decrease the chance of it being the old joke of “another bloody learning opportunity”.

This work by Doug Clow is copyright but licensed under a Creative Commons BY Licence.
No further permission is needed to reuse or remix it (with attribution), but it’s nice to be notified if you do use it.


As I said I would, I have left the Open University, and I’m working as a self-employed consultant. I’ve had several interesting proposals and discussions, some of which turned out to be too interesting to turn down.

One of the really liberating things is the wide range of possibilities for interesting work that are coming up. I’ve always had a broad range of interests, and really enjoy finding out new things and understanding new organisations, processes, and systems. Which is very handy for consultancy work where that’s often the first step!

Mam Tor

I’m also very much enjoying escaping large organisational bureaucracy, although of course for large clients I still have some of that. Best of all, for me, is the sense of proper responsibility for finances. I’ve plenty of experience in budgeting and monitoring spend on large projects, but it’s always been in a constrained framework and with other people’s money. Now, if I think an expense is justified, I can simply spend the money. If I need to travel (and the contract doesn’t have expenses billed separately, which is how I prefer it), I can just book the travel and accommodation immediately, instead of having to book via a travel agent I’m not allowed to talk to directly, who insists, via the intermediary, that the flight I want doesn’t exist, until I send a screenshot of Expedia back, and even then says it’ll be an extra £100 on top. If it’s feasible to travel by train rather than by plane to save carbon emissions, I can do that, even if the train journey works out more expensive. Even better, if I find a way to save money, it doesn’t become a potentially problematic underspend; it goes straight in my pocket (after the tax people have their cut).

I would like to get back to some regular blogging on here, but obviously paid work gets priority.

I do have a little capacity at the moment, with more in the new year, so if you’re interested in engaging me for a consultancy, you can read a bit more about what I can do on Hire Doug Clow, or simply get in touch.