Liveblog notes from the Sunday afternoon workshop at the start of LAK12.
Lori Lockyer and Shane Dawson: Where Learning Analytics Meets Learning Design
Starting with introductions – there are people here with all sorts of backgrounds.
Learning Design
Lori Lockyer has worked on learning design for a long time. Not just instructional design, but as a way of representing teaching and learning practice. Emerged after ‘learning objects’; building on ideas about reusability.
Cycles of development – they start with a flourish of technology, and that’s where learning analytics is at the moment. It has started with an interdisciplinary focus. Internationally – pedagogical models, pedagogical patterns. Big learning design project (2000) at Wollongong, built on Ron Oliver’s model (1999, Dist. Ed 20(2), 240-254), with tasks as an important component. Developed a visual framework: tasks or activities are the central focus, the main organiser down the middle – what the learner does; then the resources needed to support those tasks, and the supports that underpin them. Hard to distinguish tasks from supports at times. Worked towards making a repository and tools, and explored how people used them. Now want to focus on ‘the regular faculty member’ and what they do in terms of learning design.
How do teachers design? Iterative, move between macro and micro views frequently. Constraints and considerations are important, and locally variable. Inspiration often from others, regardless of discipline.
Interpreting designs
Then we explored some sample, decontextualised learning designs on paper.
Looked at what students are doing – producing things online, submitting assignments. This gives an illustration of what teachers need to do, but students might not get it from this view. Might be useful to give students more of a gestalt of the class – a visual can help. Challenging to show visualisations at the level of a week, a semester, a whole course. How the teacher does the activities isn’t shown, of course.
When you look at learning design, you’re looking at what the learner does and what the teacher does; the move to analytics picks this up.
Learning analytics overview
Definition of learning analytics – yet again. (I think we’re going to see that a lot at this conference.)
Course, unit, subject, module – map these at programme level, or at institutional level. George Siemens – learning analytics is the most important educational innovation in the last 100 years. Nobody is an expert in all of learning analytics: it covers so many disciplines. Ed theory, SNA, data mining, the semantic web, psychology and more – entrepreneurialism too.
Large data sets – make interpretations of the patterns we find. We are creatures of habit. Recommendations e.g. from Netflix, Amazon. Credit card security – people noticing that now. What do changes in habit indicate?
Core aspect: we have a wealth of data, from what they have for breakfast to when they go to bed. The access you have is very important.
Discussion – what data do you have access to? Social media? Survey, LMS?
One person (Chris Brooks?) investigated predictors of final grade – the mid-term was the most powerful; behaviour (through their online lecture system) was also a predictor. But self-reports – e.g. reasons/motivations for study, self-matching to a cut-down Bloom’s taxonomy – didn’t predict final grades at all. [Interesting! Would like to look harder at this in OU data.]
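A minimal sketch of the sort of check described here – not their analysis, just the idea – assuming a hypothetical table with mid-term scores, lecture-viewing counts, a self-report measure and final grades (all the file and column names are mine):

```python
# Hypothetical sketch of checking which measures relate to final grade.
# The file and column names are assumptions, not from the study described.
import pandas as pd

df = pd.read_csv("students.csv")

# Simple bivariate correlations with the final grade.
for col in ["midterm", "lecture_views", "motivation_self_report"]:
    r = df[col].corr(df["final_grade"])
    print(f"{col}: r = {r:.2f}")
```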
Diversity of data captured, and available for access. Someone even has information about lunch purchases from student cards used in the campus food outlets. There’s also information about the composition of the courses. RateMyProfessor too. Psych-based surveys – learning dispositions etc. Clickers in class.
Performance and behaviour
Relationship between grades and online activity – what data? Times and type of access?
At institutional level, it evens out, but at a class of 30, the variability is so high it’s harder to see.
At UBC, consistent relationship – frequency of logins, and discussion activity (make a post, or make a reply – simple ‘have done/not’) are the big predictors. John reckons practice quiz use is big too. You don’t need deeper analysis.
Someone else found re-reading a post is a predictor of a better mark. Hardly anyone’d thought to look for that; an issue for learning analytics (which is why we’re all here).
Getting hold of grades in order to research performance is often an issue.
Alert systems e.g. ‘tell me if they don’t log in for a couple of weeks’.
Time spent online isn’t important as a predictor, but frequency of login is. Interesting discussion about why that might be.
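As a rough sketch (Python/pandas, with a hypothetical access log – the file and column names are mine), the simple login-frequency indicator discussed here, plus the ‘haven’t logged in for a couple of weeks’ alert from the previous note, comes down to a few lines:

```python
import pandas as pd

# Hypothetical LMS access log: one row per login event, with columns
# student_id and timestamp (both names are assumptions).
log = pd.read_csv("access_log.csv", parse_dates=["timestamp"])

# Login frequency: number of distinct days each student logged in.
login_freq = (log.assign(day=log["timestamp"].dt.date)
                 .groupby("student_id")["day"].nunique())
print(login_freq.sort_values().head())

# The 'tell me if they don't log in for a couple of weeks' alert.
last_seen = log.groupby("student_id")["timestamp"].max()
cutoff = log["timestamp"].max() - pd.Timedelta(days=14)
print("No login for two weeks:", list(last_seen[last_seen < cutoff].index))
```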
Concerns about attrition
Where do they go, why do they drop out? Motivation; achievement orientations – Carol Dweck and Jan Tan – learning or performing.
Study – a course with two forums: one for learning and sharing, one for admin. Analysed learning analytics against learning dispositions scales (5 factors). Findings: high users of the admin forum were performance driven; high users of the learning forum were learning driven.
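A hedged sketch of that kind of cross-tabulation, assuming a hypothetical per-student table of forum post counts and disposition-factor scores (none of the names come from the study):

```python
import pandas as pd

# Hypothetical per-student table: post counts in each forum plus scores
# on two of the disposition factors (all column names are assumptions).
df = pd.read_csv("dispositions.csv")

# Median-split on use of each forum and compare mean disposition scores.
for forum in ["admin_posts", "learning_posts"]:
    high_use = df[forum] > df[forum].median()
    print(forum)
    print(df.groupby(high_use)[["learning_goal", "performance_goal"]].mean())
```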
Can monitor participation, reduce attrition.
Social network analysis
Node is a student, tie indicates an interaction. Using SNAPP tool. Very powerful visualisation tool. Makes outliers obvious – disconnected students – e.g. students who’ve made a post but nobody replied.
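This isn’t SNAPP itself, but the core idea is simple enough to sketch with networkx, assuming a hypothetical class list and a table of forum replies (file and column names are mine):

```python
import networkx as nx
import pandas as pd

# Hypothetical inputs (file and column names are assumptions):
# class_list.csv    - one row per enrolled student (student_id)
# forum_replies.csv - one row per reply: author, replied_to
students = pd.read_csv("class_list.csv")["student_id"]
replies = pd.read_csv("forum_replies.csv")

G = nx.Graph()
G.add_nodes_from(students)  # include students who never interacted
G.add_edges_from(zip(replies["author"], replies["replied_to"]))

# Disconnected students stand out immediately: no replies to or from anyone.
print("Isolated students:", list(nx.isolates(G)))
```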
The instructor wanted to build a student community, but in SNAPP it looked more like an instructor-centred one: the instructor was communicating so frequently that it discouraged students from doing so. The following year, with SNAPP available over the year as a reflective reminder, the instructor paused to allow student communication and could see the connections growing.
Don’t need really complex models – really simple indicators to show whether or not you’re on track.
Individual networks – one class very highly linked, with central nodes who are students. Explored ego-networks (who is in my immediate network): the low 10% had quite a sparse network – poor density, few nodes, mostly talking to other failing students; the top 10% had richer, denser networks – largely talking to students of the same ability. The students self-select into ability subgroups. The TAs spent 70% of their time in the high-performers’ network and 10% in the low-performers’ one – perhaps not what they intended. Why? It was about developing community: low performers asked very low-level questions, and the TAs wanted to let students respond to those.
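A sketch of the ego-network comparison, reusing the reply network G from the sketch above and a hypothetical grades file (both assumptions, not their data):

```python
import networkx as nx
import numpy as np
import pandas as pd

# Assumptions: G is the reply network from the earlier sketch, and
# grades.csv (student_id, final_mark) is a hypothetical grades file.
grades = pd.read_csv("grades.csv").set_index("student_id")["final_mark"]
low_cut, high_cut = np.percentile(grades, [10, 90])

def ego_stats(student):
    ego = nx.ego_graph(G, student)  # the student plus their immediate ties
    return ego.number_of_nodes(), nx.density(ego)

for label, keep in [("bottom 10%", grades <= low_cut),
                    ("top 10%", grades >= high_cut)]:
    group = [s for s in grades[keep].index if s in G]
    sizes, dens = zip(*(ego_stats(s) for s in group))
    print(f"{label}: mean ego size {np.mean(sizes):.1f}, "
          f"mean density {np.mean(dens):.2f}")
```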
Another discussion about time spent with high achievers – a faculty member said he wasn’t interested in the low achievers; he wanted to switch on the ones who really cared and turn them into grad students. The same pattern of time spent is also true in grade school. The high performers ask harder questions which need a higher-level response.
Another experience where low performers were engaged – the high performers lost motivation, and grades dropped. Need to satisfy their needs too.
A tool with indicators to get help – the ones who went to get help were the ones who didn’t really need it.
Can we join groups together – get the Vygotskian scaffolding process working by getting top performers to help lower ones?
Creative capacity
Ronald Burt (1992) – people bridging structural holes, who link two otherwise unlinked networks. They see information earlier – they can translate and bring that back. Central learning support units can do this – don’t solve problems, but translate, put people in touch with each other.
Looked at cognitive playfulness vs SNA; found people with high betweenness had high self-reports of creativity. If you need students who are creative problem-solvers – where they fit in network structures is illustrative.
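Again a sketch rather than their method: betweenness centrality over the same assumed discussion network picks out candidate brokers across structural holes:

```python
import networkx as nx

# Assumption: G is the discussion network built in the earlier sketch.
betweenness = nx.betweenness_centrality(G)

# High-betweenness students sit on many shortest paths between otherwise
# weakly connected parts of the network - candidate brokers across
# structural holes.
brokers = sorted(betweenness, key=betweenness.get, reverse=True)[:5]
for student in brokers:
    print(student, round(betweenness[student], 3))
```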
Reflecting on learning designs: the key actions (e.g. listening, writing on LMS, small group work) – look at those engagement types, and decide weighting. Small group, or their product? Interactions are a different set of analytics to outputs.
[In conversation over tea, I was very struck with a sense of the power of very simple metrics. The simplicity can be innate – e.g. login frequency. Or the result of data reduction hiding a lot of complexity – e.g. Signals. The report that re-reading was usefully predictive was interesting – lots of people are going to go away and look at this. This confirms the sense I have of learning analytics as a young discipline: we are still finding the low-hanging fruit.]
Bringing design and analytics together
An activity: learning designs in front of you. In pairs, take a learning design (your own, or one provided) and look at what data would help inform the design and its delivery: the interactions for learners and teachers, and what data and patterns you would expect. How would you know it was going well, how could you spot it going off course? And how could you intervene?
Pair discussion, then join up with another pair. A lot of interest in text mining. Very easy to focus on the clever, big-guns predictive modelling stuff.
Many long-standing issues about online vs face-to-face are coming up; also the question of what we actually want to do with the data, rather than doing it just because we can. Should we be more strategic about what we want to capture and intervene on? Practical, theory- or evidence-driven intervention?
Discussion forums – does the tutor get the information from the LMS? Can they deal with it all? And for peer assessment, a challenge. With a large class, it’s a lot of work to engage with the forums. Simple tools like SNAPP, OAnalyst (??), TagHelper, Cohere. Study on producing rockstars – gave peer assessment throughout. The feedback was the number of words compared to the mean of the group; very simple, but it made a big improvement.
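The word-count feedback is about as simple as analytics gets; a sketch, with a hypothetical posts table and my own feedback wording:

```python
import pandas as pd

# Hypothetical posts table with student_id and post_text columns; the
# feedback wording is mine, not the study's.
posts = pd.read_csv("posts.csv")
posts["words"] = posts["post_text"].str.split().str.len()

per_student = posts.groupby("student_id")["words"].mean()
group_mean = per_student.mean()

for student, words in per_student.items():
    print(f"{student}: your posts average {words:.0f} words; "
          f"the group average is {group_mean:.0f}.")
```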
When you have a lot of students, who they work with face to face is strongly influenced by where they physically live.
It’s more challenging to capture process aspects than outcomes: outcomes are more intrinsically available for analysis. Portfolios seem very rich, but very challenging to analyse.
Much learning analytics is around large student groups, first year, STEM. So for contrast, we explored the opposite – a small senior course in philosophy. Capture their way of reading. Invent a critical-thinking analysis engine – point it at genesis products and the final output – and see if students can summarise, argue and assess effectively. Interesting to look at this visually, video, multimodal. There is research developing software for this – ? John Naismith?, SFU. Indiana University startup – an annotating-textbooks tool. Digital handshake between the LMS and e.g. OER repositories, YouTube – could the LMS scope what’s happening outside?
Conclusions
Interesting challenge, lots of possibilities. The reality of our LMS: we have large surveys, more complex activities. The tools we use change all the time. Interesting to explore where we can go in the future. Thanks to all for participating, look forward to continuing the conversation over the conference.
–
This work by Doug Clow is copyright but licensed under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.