LAK13: Wednesday morning (3) – Sequence analytics

More liveblogging from LAK13 conference – Wednesday morning, third session.

Sequence analytics 

[Image: Watchman – pillar saints. Photo (cc) Eddy van 3000 on Flickr.]

Interpreting Data Mining Results with Linked Data for Learning Analytics: Motivation, Case Study and Directions.

Mathieu D’Aquin, Nicolas Jay

Mathieu presents.

He comes from KMi at the Open University, which is a great producer of attendees at this conference. But I know nothing of learning analytics – I work in the more technical area of linked data and the semantic web. So my views on LA may be a bit naive.

Super-naive view of LA: data, some kind of data processing, visualisation, insight – tada! We should do something and try to improve the learning experience. Being slightly less naive, the interesting bit is going from data to insight – I call that interpretation. We need some kind of process in the background so that, for the patterns coming out of the analytics, we get some explanation or justification. Interpretation requires background knowledge: you can't understand a pattern if you don't understand the domain and have some information to explain it. Part of it might be captured or encapsulated in the data, but most of the time it isn't – it sits in the heads of the people looking at the analytics. We like having people. Well, most of you do – I don't. (laughter) The challenge: the data cover a wide range of topics, domains and subjects, and you might not know in advance what background knowledge is needed for interpretation.

Our approach: integrate linked data sources at the time of interpretation. To bring background information in to the interpretation.

What’s Linked Data? See the Linked Data tutorial from yesterday.

The web is conceptually a network of documents with hyperlinks between them. Nothing prevents you from linking from your document to someone else's document. Linked Data does the same for data: you can declare, as data, the links between data items. It's a graph representation of those links, in a structured, processable form. As on the web, the links are not constrained by technology – I can connect to university information even if it's not on my systems. It's a very large quantity of data now – the diagram of it was impressive in 2011, but they've stopped working on the displays because there's now too much. All sorts of data – species and genome data, Wikipedia data, research publications, government data, public information from universities – e.g. courses, facilities, programs.

Open University – data.open.ac.uk – was the first but there are now many. All the courses are represented there, plus related OER, AV material from iTunesU and YouTube, and buildings information (many buildings!). Directly processable and queryable. Can query with SPARQL – can just ask, e.g., show me all courses available in Nigeria.
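
To make that concrete, here's a minimal sketch of what such a query can look like from code. The endpoint path and the ex: class/property URIs are placeholders of my own, not the actual data.open.ac.uk vocabulary:

```python
# Minimal sketch: querying a Linked Data SPARQL endpoint from Python.
# The endpoint path and the ex: URIs are illustrative placeholders,
# not the real data.open.ac.uk vocabulary.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://data.open.ac.uk/sparql")  # assumed endpoint path
sparql.setQuery("""
    PREFIX ex: <http://example.org/ontology/>
    SELECT ?course ?title WHERE {
        ?course a ex:Course ;
                ex:title ?title ;
                ex:availableIn ex:Nigeria .
    }
    LIMIT 20
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["course"]["value"], "-", row["title"]["value"])
```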

Very interesting resource. We’re interested in interpretation.

Here's a use case using these resources to interpret data mining results – sequence mining on student enrolment.

Data from OU Course Profile Facebook Application – shows who enrolled in what course at what time: studentID, course code, status (intend to study, studying, completed), date. So want to mine that for regular patterns.

Used the obvious approach. Represent each student as a sequence of courses – e.g. (DD100) -> (D203, S180) -> (S283). Have about 8,000 of those. Apply basic sequence mining to find frequent patterns in these sequences, i.e. courses often taken together in a certain order.
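
As a rough illustration of the idea (not the authors' code), here is a toy version that just counts how often one course is taken in a step before another; a real analysis would use a proper sequential pattern mining algorithm such as GSP or PrefixSpan, and the course codes here are simply the ones mentioned in the talk:

```python
# Toy sketch of frequent-pattern counting over enrolment sequences.
# Each student is an ordered list of steps; each step is a set of course codes.
from collections import Counter
from itertools import combinations

sequences = [
    [{"DD100"}, {"D203", "S180"}, {"S283"}],
    [{"DD100"}, {"DSE212"}],
    [{"DD100"}, {"DSE212"}, {"S283"}],
]

pair_counts = Counter()
for seq in sequences:
    pairs = set()
    for i, j in combinations(range(len(seq)), 2):  # i < j: an earlier step, then a later one
        for a in seq[i]:
            for b in seq[j]:
                pairs.add((a, b))
    pair_counts.update(pairs)                      # count each pattern at most once per student

min_support = 2                                    # the paper used >100 out of 8,806 sequences
frequent = {p: c for p, c in pair_counts.items() if c >= min_support}
print(frequent)   # e.g. {('DD100', 'DSE212'): 2, ('DD100', 'S283'): 2}
```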

From 8,806 sequences, found 126 different patterns occurring more than 100 times. So e.g. DD100 followed by DSE212 is very common. How can we know what that means? You need to know what the courses are – this interpretation relies on background knowledge.

We extract the sequences, then select relevant Linked Data to build a navigation structure over the patterns. We use the Linked Data URIs for the courses, which gives you a very large amount of data about each course. And from those data – which are URIs – you can move on to other data too. Look at all the courses we have, query the Linked Data to find the relationships between the objects – e.g. the subject, and relationships linked to that, such as broader subjects. Build a hierarchy of concepts representing groupings of these attributes using formal concept analysis.
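
Formal concept analysis builds a lattice of (set of objects, set of shared attributes) pairs. A minimal, naive sketch of the idea, with made-up subject attributes standing in for what would actually come from the Linked Data:

```python
# Naive formal concept analysis over a toy course/attribute context.
# The subject attributes are invented; in the paper they come from Linked Data.
from itertools import combinations

courses = {
    "DD100":  {"Social Sciences"},
    "DSE212": {"Social Sciences", "Psychology"},
    "S283":   {"Science"},
    "S180":   {"Science"},
}
all_attrs = set().union(*courses.values())

def extent(attrs):
    """Courses having all the given attributes."""
    return {c for c, a in courses.items() if attrs <= a}

def intent(objs):
    """Attributes shared by all the given courses."""
    sets = [courses[c] for c in objs]
    return set.intersection(*sets) if sets else set(all_attrs)

# Every formal concept is (extent(intent(A)), intent(A)) for some object set A,
# so closing every subset of objects enumerates them all (fine for toy data).
concepts = set()
for r in range(len(courses) + 1):
    for combo in combinations(courses, r):
        i = intent(set(combo))
        concepts.add((frozenset(extent(i)), frozenset(i)))

for e, i in sorted(concepts, key=lambda c: -len(c[0])):
    print(set(e) or "{}", "<->", set(i) or "{}")
```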

Provides an overview of the patterns. Can be straightforwardly applied to other source data. Can explore the sequences along many different dimensions: broad subject area, subjects of related course material, assessment method.

There are limitations – needs linked data sources.

And! It has to be a loop. [Yes – the Learning Analytics Cycle.]

Questions

Someone: Strikes me a next step would be to look at the sequences, and see which are most associated with success. Are you considering that?

That would be very interesting. We don't have data about that at the moment; if we did, it'd be an interesting dimension. Not looking at success globally, but the dropout rate in particular courses might be affected by previous experience – we could interpret sequences that lead to dead ends, or to rebranching.

Someone: Each sequence was an ordered triple?

They are steps; you only saw three, but that was a coincidence.

Erik Duval: Like the overall approach. You didn’t show any example where the LD added value – so long as it’s within the OU, don’t have to worry about the rest of the world. How does the world add value?

This is work we've already done. I showed this one, and in this part you're right – I only looked into the OU Linked Data. It looks trivial, but it's not; it's complex. It didn't seem to make much sense in this area. We could look at enrolment in courses based on where they're available, or at mappings between topics at the OU and somewhere else, using, say, the Library of Congress subject scheme. Technically it's completely trivial. I didn't see an example in this case of student enrolment where it would bring anything new. In other cases we're working on now – trajectories of patients in diagnosis and treatment – it's much more interesting, because you can go across different classifications and see things from other people's background knowledge. We do the jumping thing, but only internally. In reality it's already visible and possible; here it makes no difference.

Nanogenetic Learning Analytics: Illuminating Student Learning Pathways in an Online Fraction Game.

Taylor Martin, Ani Aghababyan, Jay Pfaffman, Stephanie Baker, Carmen Petrick Smith, Rachel Phillips, Phillip Janisiewicz

Taylor talking. Coming from the learning analytics side, so forgive the data mining naivety. By training, a psychologist.

Starting point – 3 out of 2 people have trouble with fractions. Big problem in the US. STEM illiteracy – “I hate maths”; it's not the same for reading. Many decades of research. Quantitative pre/post tests are good, but limited. Qualitative work is great on process, but hard to generalise from. Solution: we love big data.

Teaching fractions – math ed. Many theories, much debated: part/whole theory, or operator theory. Connected learning – how do we make better environments? Interest-driven, social, online, open and shared (like Scratch), connecting across home and school. Want to use the affordances of current technology. Change the view of STEM from 'we'll solve a problem' to something more about diversity and increasing opportunity.

Example from the Refraction game from U Washington. Based on a 'splitting' model of fractions: repeated halving, repeated thirds. A combination of spatial challenge as well as splitting/fractions. One example level – you have to make 1/6 and 1/9, after they've done many previous levels. This is hard. Kids are prone to go to one half, because that makes sense to them, and they use it first (but you can't make 1/9 from 1/2). And there are two ships.

How do we define what they know, and whether it's changing? Build a mathematical state of the game – abstracting away from the game itself, showing sequences of development and right/wrong.
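
As a purely illustrative sketch (my guess at the flavour of such a representation, not the team's actual encoding), you could abstract a board into the multiset of fraction values the player's splitters have produced, ignoring the spatial layout:

```python
# Illustrative sketch: abstracting a Refraction-style board into a
# "mathematical state" – the multiset of fraction values currently produced.
# This is a guess at the flavour of the representation, not the team's code.
from collections import Counter
from fractions import Fraction

def split(value, ways):
    """Split a laser of `value` into `ways` equal beams."""
    return [value / ways] * ways

beams = [Fraction(1)]                       # start with one whole laser
beams = split(beams.pop(), 3) + beams       # split into thirds: 1/3, 1/3, 1/3
beams = split(beams.pop(), 2) + beams       # split one third in half: 1/6, 1/6, 1/3, 1/3

state = Counter(beams)
print(state)                                # two 1/6 beams and two 1/3 beams
targets = {Fraction(1, 6), Fraction(1, 9)}
print(targets <= set(state))                # False: we have 1/6 but not 1/9 yet
```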

Iterated on visualisations of the levels – a challenge to do, especially showing repeats/changes. Look at where they spend their time. Used those states to do some association rule learning – like the supermarket stuff: {onions, potato chips} -> {burgers}. On the pre-level, when just messing about, the highest confidence for success came from anything using thirds at all. On the post-level it was similar; there are 1,000 rules associated with success, but making 1/9 at all is the important predictor. Conclusions: messing around with 1/3 is really productive – it's the central conceptual hurdle – even if not achieving an obvious goal. Messing with 1/2 was unproductive.
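
For readers who haven't met association rules: a rule's confidence is the fraction of cases containing the left-hand side that also contain the right-hand side. A toy sketch, with invented behaviour flags standing in for the real features:

```python
# Toy association-rule confidence, in the spirit of {onions, chips} -> {burgers}.
# The per-student flags are invented stand-ins for the features in the talk.
transactions = [
    {"used_thirds", "made_1_9", "success"},
    {"used_thirds", "success"},
    {"used_halves"},
    {"used_halves", "used_thirds", "made_1_9", "success"},
    {"used_halves"},
]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    return support(lhs | rhs) / support(lhs)

print(confidence({"used_thirds"}, {"success"}))   # 1.0 in this toy data
print(confidence({"used_halves"}, {"success"}))   # ~0.33
```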

Cluster analysis to explore the states: total number of states, average time on states, time to use 1/3. Hierarchical cluster analysis, four basic patterns. The first is totally haphazard but not really succeeding. The second was active, but succeeding more often and more quickly getting to the 1/3 space. The careful group just goes to the solution. Then a fourth that looks like the careful group … but fails – the minimal group.
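
A minimal sketch of this style of analysis (feature names and values invented; the real work used many more variables), using SciPy's hierarchical clustering:

```python
# Sketch: hierarchical (Ward) clustering of per-student feature vectors.
# Features and numbers are invented; the real analysis used many more variables.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

# rows = students; columns = [n_unique_states, mean_time_per_state, time_to_first_third]
X = np.array([
    [40.0,  3.0, 200.0],   # busy but unfocused
    [35.0,  2.5,  60.0],   # busy, quickly into the 1/3 space
    [ 8.0,  9.0,  30.0],   # careful, goes straight to the solution
    [ 7.0, 10.0, 500.0],   # minimal activity, never really reaches 1/3
])

Z = linkage(zscore(X, axis=0), method="ward")     # standardise, then Ward linkage
labels = fcluster(Z, t=4, criterion="maxclust")   # cut the dendrogram into 4 clusters
print(labels)
```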

External transfer tests pre/post. Haphazard and minimal were unproductive. Exploration and Careful were productive.

Fussing with the core concepts was productive, at a medium level. Careful strategies can be productive too. Want to move towards making the environment adaptive – what degree of fussing for whom, and at what time – towards process or sequence analysis.

Next steps – more on engagement; map to brain activity patterns with an NIRS machine (tolerant of movement). Developing teacher tools with visualisations for them. Teachers think the kids are just playing; they want to know what they've learned.

Questions

Someone: Going back to the Haphazard and Minimal groups – the correlations to the pre/post tests. How strong were they?

Minimal is worse than haphazard, but it doesn't reach significance. The effect sizes are really nice. Interesting – we did the same clustering of the data: their pre-level does predict their pre-test score, but doesn't predict their post-test score. That's also true on the post-level. Most people by then have changed to more productive strategies, so that's why there's no association (predictively).

Someone2: I like this too. Games are strategic, people choose moves for reasons. What do you think they were?

We're using cross-method validation – I want to talk about this. With data mining, you get the pattern, so it must be real. Really? We've turned these patterns into observables in the classroom, and have a team looking there to see if students are doing it, plus an ethnographer taking field notes. Is it real in the classroom, or a data mining artefact? We've refined our patterns. E.g. haphazard – we thought it was all haphazard, and didn't realise the difference from exploration (which is more (arbitrarily) goal-directed). Haphazard had more repeat moves, but exploration had very few. So we tweaked the algorithms to pick that distinction up. I'd love to talk about this with anyone.

Bjorn?: Comments on the training factor of having played around with the game – just becoming better because they're good at the game, but without having learned the knowledge?

We have a lot more of that to unpack. Working hard on game analytics: how to take what's happening in the game – is it knowledge, or just behaviour patterns? That's why we keep the pre/post tests there. They're learning fractions from this. Finding a control group is a different story. Played around with it; not satisfied – I can only say maybe the game's more fun so they're learning more. Could compare to doing nothing, but that's not very satisfying.

Erik: Can you use the LA to improve the design of the game? I was puzzled why you include the spatial difficulties if you're aiming to help them learn fractions. It makes it more complicated.

We worked very carefully with the dev team. In the first study, we noticed kids weren't struggling with the math but with the spatial part, so we tweaked that – the spatial part is what's fun. Kids like making the ship go wooo! Keeping them engaged is important. A study is just starting, trying to map the data – are the spatial challenges helping, or are these rocks just in the way? Not redesigning Fractions, more working on other games.

David Shaffer: I like the work you're doing. The categories – haphazard etc. – how are you characterising those? Hand-coded? How do you determine which category each student is in?

It’s hierarchical cluster analysis with a load of variables (e.g. number of unique board states, time on level, etc).

David: Does this generalise to a non-closed-form domain, without discrete states? In this case, those trees completely map the space. But what if every state wasn't as easily determined?

We're doing something similar in more open-ended environments, e.g. programming in Scratch. Grain size is important – here the grain is the game level, then the mathematical state. But for Scratch, we're starting with a whole project and variables at that level. Another one, virtual robotics, is a touch programming environment much simpler than Scratch – so we can have analysis at the level of program states. You just have to define that.

Alyssa: Very interesting. Novel – the situationally-dependent productivity of patterns. The idea that I can be an explorer, and if I'm in the 1/3 space that's productive, but if I'm in the 1/2 space it's not. Detecting, e.g., that I'm an explorer but on the wrong path, and giving feedback to change.

That's one of the steps to take to make the games adaptive – one of the first things to pull out. It'll be different for each game. How do we kick them over into more productive spaces? We're investigating in the classroom what kinds of feedback help those changes happen. We can, e.g., move a focus of attention to where attention would be productive. So not a heavy-handed 'go do this', but embedded in the game.

This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.

