LAK13: Wednesday afternoon (5)

More liveblogging from LAK13 conference – Wednesday afternoon, fourth session.

MOOCs


NB Reduced liveblogging in this session because I was presenting.

Deconstructing Disengagement: Analyzing Learner Subpopulations in Massive Open Online Courses

Rene Kizilcec, Chris Piech, Emily Schneider

Emily starts. Interdisciplinary team.

These are more the instructionist, individualised model of courses. Students enter at any point; 2–3h of video lectures, quizzes. Regular assessments with deadlines, autograded.

Big question: how do we understand whether MOOCs are successful? The challenge is that they are Open – open as in open access. Heterogeneous learner population: people come to the courses for a wide variety of reasons, and may be satisfied and leave after a short period.

Defining success for open-access learners. Assessment scores – but not everyone takes those. Completion rates – many people talk about these, but they're problematic: it's usually the number of people who pass the course, based on assessment scores, so those who chose not to take the assessments aren't included. Also, what's the denominator? Enrolments are very high – in our courses, 30–50% higher than the number who ever come in to the course at all. Initial enrolment is more an indicator of interest than commitment. And completion rate is a binary outcome – it misses nuance in terms of subpopulations of learners. Process measures – interactions with the content. There's much we can learn from how people interact with the content. Rich, but tricky.

Classification of subpopulations. Want to be universal, theory-driven, parsimonious, predictive, dynamic. Use as a lens for analysis – comparison across subpopulations, across courses.

They have a lot of data. Previous qualifications: 46k high school, 27k undergraduate, 21k graduate. Average age is in the thirties. Big skew towards high human development index countries – mainly the US.

Chris takes over.

Clustering as a way of making sense, using engagement as the measure. Ideally, students' engagement changes over time, so want to look at the pattern over time. Coarse labels – on track, behind, auditing, or out (not interacted at all).

Find four prototypical trajectories. First – on track throughout. Second – auditing. Third – disengage partway through. Fourth – just watch one or two videos.

Auditing – watch lectures, but attempt few assessments. Completing – complete most assignments. Disengaging – attempt assessments at the beginning, but then sparsely watch lectures or disappear. Sampling – briefly explore by watching a few videos.
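A rough sketch of how such trajectory clustering might work. This is a toy illustration under my own assumptions – the ordinal encoding of the coarse labels, the synthetic data, and the naive k-means initialisation are all invented for the example, not the authors' actual method:

```python
# Toy sketch (not the authors' algorithm): encode each learner's coarse
# per-period label as an ordinal value, then cluster the label trajectories
# with plain k-means, k=4.

# Assumed ordinal encoding of the coarse engagement labels.
LABELS = {"on track": 3, "behind": 2, "auditing": 1, "out": 0}

def kmeans(trajectories, k=4, iters=10):
    # Toy initialisation: seed centroids with the first k distinct trajectories.
    centroids = list(dict.fromkeys(trajectories))[:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for t in trajectories:
            # Assign each trajectory to its nearest centroid (squared Euclidean).
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(t, centroids[c])))
            clusters[nearest].append(t)
        # Recompute each centroid as the per-period mean of its members.
        centroids = [tuple(sum(col) / len(members) for col in zip(*members))
                     if members else centroids[i]
                     for i, members in enumerate(clusters)]
    return centroids, clusters

# Stylised 8-period trajectories for the four prototypical patterns.
data = ([(3,) * 8] * 50                     # completing: on track throughout
        + [(1,) * 8] * 50                   # auditing: watch, rarely assessed
        + [(3, 3, 2, 1, 0, 0, 0, 0)] * 50   # disengaging: taper off
        + [(1, 0, 0, 0, 0, 0, 0, 0)] * 50)  # sampling: a video or two, then gone

centroids, clusters = kmeans(data)
print(sorted(len(c) for c in clusters))  # → [50, 50, 50, 50]
```

With real data the initialisation and distance metric matter far more; the point is just that k-means over per-period label sequences can recover trajectory prototypes like these.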

Same patterns in all classes. Very robust, deep trend!

Cluster validation very strong. Also passes the common sense test.

Q The disengaging population is interesting. Any trends within that group – e.g. left because of an exam?

Yes, we do see that. The clustering algorithm can be specific to a class: most will have a mass exodus after a particular assignment, and that'd be the cluster centroid.

Q Do you predict for individuals which cluster they'd fall into?

We’re going to do that in future research.

Found four robust, prototypical patterns in their MOOC engagement data.

Rene takes over.

Digging deeper. Comparing clusters in different courses, using features not used in the clustering. Started with overall experience reported in the post-course survey.

Completing learners report a better experience, and so do Auditing learners. Disengaging learners are like completers at the start but then drop out, and report very significantly different experiences. This suggests we should identify the subpopulations early and customise course features for them. Could be real-time clustering analytics, or even self-report at the beginning – there could be different tracks learners can sign up for. Maybe auditing learners get fewer reminders about assessments; they're happy without the assessment.

Completing learners are most active in the forum (posts and comments) – in the grad-level course, way more activity from completing learners than any of the others. This is observational – we don't want to jump to causal conclusions naively. What is the causal relationship? It could run either way, or there could be a background variable. Reputation systems and social features – how might they change how the subpopulations behave?

There's a geographical link too – HDI level vs pattern of learning. Picked the top four participating countries.

Q: Is there a variable like language behind this – e.g. India, Russia? There could be other factors.

Country is one of the most powerful predictors of exam score. Language, but internet access too: in India, participants are mostly in big cities where access is good – not small rural places.

Gender – female completing learners are underrepresented in advanced courses, though the baseline average male:female ratio was above 7! (This was CS.) So – frame assessments to minimise stereotype threat.

Future directions

Experiments! This was observational, not experimental; it has generated hypotheses that could be tested in future. One is around collaboration and peer effects – how assigning group projects might work out across those trajectories. Also interface customisation and targeted interventions for different subpopulations.

Nuanced analytics: for auditing learners, MOOC as a resource vs MOOC as a class; for disengaging learners, early prediction for intervention.

Reasons to enrol, and trajectories. Completing learners say they want to enhance their resume.

Engagement trajectories for real-time analytics in MOOCs – dashboard visualisations.

Stanford Lytics Lab – led by Roy Pea – lytics.stanford.edu

Questions

Naomi Jefferey: Excited to use these beyond just a success measure. Any concerns – beyond testing for robustness – about applying this to courses that are not massive or open? OCs.

Do it carefully, but there's no reason it should only work for MOOCs. You need at least 100–200 students to be confident in the clusters, but the methodology should be applicable.

Sheila MacNeill: Do you think your four patterns would stand up for more cMOOC-style courses – e.g. ones with no instructor videos? I suspect they might. Any plans?

Emily: Great point. Would like to do that.

The Pairing of Lecture Recording Data with Assessment Scores: A Method of Discovering Pedagogical Impact

Negin Mirriahi, Shane Dawson

Shane presenting.

My talk’s not about MOOCs. Going to try to weave it in. This is really Negin’s work, she couldn’t make it.

What this means beyond predicting student performance.

Collaborative Lecture Annotation System – CLAS.

Video can be particularly helpful in medical education.

MOOCs and the Funnel of Participation

Doug Clow

Notes and slides (when uploaded) and references available here.

 

This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.

