The Learning Analytics Cycle

What is learning analytics? Well, the LAK11 open course has generated a useful definition:

Learning analytics is the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs.

Reflecting on this and everything else from the pre-conference workshop at the Learning Analytics and Knowledge conference today (see my liveblog notes from the morning and the afternoon), I realised that my underlying model here is of a Learning Analytics Cycle.

It looks something like this:

[Figure: the Learning Analytics Cycle, running learners → data → metrics/analytics → interventions → back to learners.]

(If this was a proper academic article, I’d be linking all this explicitly to Schön and Kolb and all that at this point, and quite possibly Engeström and Wenger.)

The process starts with learners. (Always a good bet pedagogically.) These learners could be students enrolled at a university, or informal learners, or participants at a research conference. Data is generated by or about the learners. It could be demographic information, a clickstream, login times, the full text of their contributions to an online discussion, their assessment scores – everything. This data can be generated automatically, or with great and strenuous effort by a large multidisciplinary team, or anything in between. There’s also data about learners who are like the learners in question in some way or other. One of the drivers for the interest in learning analytics as a field is that we have far, far more of this information than we used to, for all sorts of reasons.

Next comes the bit in the process that perhaps most springs to mind when you think of doing ‘learning analytics’ – someone takes that data and generates some metrics or analytics from it, which give some insight into what’s going on. This could be a visualisation, a dashboard, a list of ‘at risk’ students, a comparison of some outcome measure with some benchmark, or whatever. Some of these are easily generated because they’re built into the tools at hand; others take huge amounts of time and effort to research, develop and validate.

Then those distillations of the data are used to drive some sort of intervention, which effects some change in the learners. This could be a tutor making personal contact with a student a model has identified as being at very high risk of dropping out. Or it could be a dashboard for learners showing how their activity compares to others’. Or it could be a Government looking at graduation rates across the university sector and deciding that sector-wide change is needed. And right back down at a simple level, a teacher reviewing the final grade scores for a course and considering what to do next time they teach it is an example of the cycle in action.
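By way of illustration, here’s a minimal sketch in Python of one pass round the cycle. Everything in it is made up for the purpose – the field names (last_login_days, forum_posts), the crude ‘at risk’ rule, and the intervention; a real metric would need the research and validation just described.

from dataclasses import dataclass

@dataclass
class Learner:
    name: str
    last_login_days: int  # data generated by or about the learner
    forum_posts: int

def at_risk(learner: Learner) -> bool:
    # Metric/analytic step: a deliberately crude, made-up disengagement rule.
    return learner.last_login_days > 14 and learner.forum_posts == 0

def intervene(learner: Learner) -> str:
    # Intervention step: here, flagging the learner for personal tutor contact.
    return f"Tutor to contact {learner.name} (inactive for {learner.last_login_days} days)"

learners = [
    Learner("A", last_login_days=2, forum_posts=5),
    Learner("B", last_login_days=21, forum_posts=0),
]

# Data -> metric -> intervention; the contact (one hopes) changes the
# learner's behaviour, which generates new data and closes the loop.
for learner in learners:
    if at_risk(learner):
        print(intervene(learner))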

Accountability comes in here too. There was much talk today of ‘analytics gone awry’: analytics used inappropriately, as an unhelpful stick to bash people with. This cycle would show that pathological process spinning off and failing to close the loop. A process that doesn’t result in an improvement for learners is not actually helping learning. (Though it may, of course, have some effect politically.)

I don’t think you have to have all the elements present to be able to claim to be doing learning analytics. The bit at the bottom of the cycle, where you create some representation of data about learners, should count on its own. But to make effective use of learning analytics – and unlock the huge potential value that’s there – you do need to close the loop in a cycle like this. There are lots of places where the cycle can break: the data may not accurately reflect the learners; the metrics or analytics developed may be flawed; the intervention may not happen at all, or may be inappropriate or ineffective and so not affect the learners.


This work by Doug Clow is copyright but licensed under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.
