LAK12: Tuesday morning – 4A Visual Analytics

Liveblog notes from Tuesday morning (1 May) at LAK12 –  Session 4A on Visual Analytics.

[Photo: Illuminate Yaletown – Lights, Magic, Action!]

Alfred Essa and Hanan Ayad
Student Success System: Risk Analytics and Data Visualization using Ensembles of Predictive Models

From Desire2Learn.

Talking about the design – the student success system, its function, design principles, problem set – then a demo of the visualisations, then the research problem set.

Heroes are Sir Francis Bacon and Aristotle (!).

Design principle 1 – from UX, less data : more insights. Underneath the hood, more data : less complexity.

Goal of application is to build on John Campbell’s Signals – predict/identify at-risk and exceptional students, trigger interventions.

Similar to patient/doctor workflow.

Design principle 2 – goal isn’t prediction, it’s optimisation. Analytics maturity – start with data about the present and past: reporting. Then at level 2, forecasting, predictive modeling. The holy grail is optimization – what we want the desired future to be, what should happen. Finding the optimal path from the current state to the desired future.

Limitations of ‘Signals’-type approaches: interpretability, portability. Student/course data is the input, goes through a model function, and leads to an output signal. In healthcare, if the system just said red/yellow/green, there’s not much to go on. But if the goal is to design an intervention, you need more. Not enough information for intervention. Not enough interactivity – can’t interrogate and make sense of the data. And not portable – the same model used for every course at every institution doesn’t account for variability of learning design, instructor, course size, discipline.

Quick demo of the Student Success System.

Shows students with name, picture and a red/yellow/green indication. Click through, get a profile chart – demographics. Shows a win/loss chart – shows which risk factors are contributing to the prediction, e.g. attendance, completion, participation. Also a risk quadrant – each point represents a student, on two axes – course success index and calculated grade to date. Top right quadrant is on track; bottom left at high risk. Can also see the running account of interactions the student has had (CRM – like OU VOICE) – shows a qualitative picture of what’s happening. Then can make a referral, make an intervention. Tool shows possibilities.
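(A minimal Python sketch of that risk-quadrant placement – my own illustration, not D2L’s code; the 0.5 cutoff and the label for the off-diagonal quadrants are assumptions.)

```python
# A sketch (mine, not D2L's) of placing students in the risk quadrant.
# The cutoff and the off-diagonal label are illustrative assumptions.

def risk_quadrant(success_index, grade_to_date, cutoff=0.5):
    """Place a student in one of four quadrants on the two axes."""
    if success_index >= cutoff and grade_to_date >= cutoff:
        return "on track"        # top right
    if success_index < cutoff and grade_to_date < cutoff:
        return "high risk"       # bottom left
    return "mixed signals"       # off-diagonal quadrants: worth a closer look

students = {"A": (0.8, 0.9), "B": (0.2, 0.3), "C": (0.7, 0.4)}
for name, (index, grade) in students.items():
    print(name, risk_quadrant(index, grade))
```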

Another case demo – a student taking four classes, predictions for each. For three it’s looking green, but for one it’s looking less good. Attendance fine, completion fine, participation OK, but social learning negative. Shows a sociogram – a network graph – showing her as an isolated learner.

Research design and modeling

The ensemble model underneath. Three aspects: first, what it means. Second, from statistics/machine learning perspective. Third, the application domain – how we apply it in education specifically for predicting student success.

Ensemble modeling means collective intelligence – optimising decision-making. Can’t rely on a single expert; need to bring together multiple experts. For example, a patient with a problem might want to seek a second opinion.

The Signals model – want an intelligent system, while preserving interpretability, interactivity, and accuracy of prediction.

Ensemble modeling has the goal of improving predictive accuracy – a consensus function. Rather than a single model or hypothesis that is trained, build multiple hypotheses, and through a process of consensus bring them together. There’s a mathematical basis and large-scale empirical benchmarks showing that this improves decision-making. It also opens up design choices: can look at different datasets and features, and the consensus functions that bring them together.

Using multiple semantic units – text mining, SNA, participation, completion, attendance. A whole series of domains and dimensions.
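(To make the ensemble/consensus idea concrete, here’s a minimal Python sketch – my own illustration, not the actual Student Success System: each domain contributes a prediction and a simple majority-vote consensus function combines them. Domains and labels are made up.)

```python
# My illustration of consensus over multiple hypotheses, one per domain/dataset.
# Not the actual Student Success System; domains and labels are assumptions.

from collections import Counter

def majority_vote(predictions):
    """A simple consensus function: the label most base models agree on."""
    return Counter(predictions).most_common(1)[0][0]

# Each hypothesis is trained on a different view of the data.
base_predictions = {
    "attendance":      "green",
    "completion":      "green",
    "participation":   "yellow",
    "social_learning": "red",
}

print(majority_vote(base_predictions.values()))  # consensus: 'green'
```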

Conclusion

Francis Bacon and Aristotle. Signals-type approach – the predictions are black boxes. Here trying to open them up, so we can probe them. Bacon as originator of the scientific method – want to peek inside the black box. Aristotle for the idea of judgement. Machine intelligence is not enough; have to couple it with qualitative data and human advisors.

Questions

Someone 1: Nice visualisations. We have problems with uncertainty – we tell faculty to use the LMS but they use something else, a new tool. How does that factor into these? How can we quantify or at least recognise this uncertainty? E.g. a small class where discussion isn’t important, but lots of red bars.

Hanan: The predictive models are organised into domains visible to the instructor – that opens the possibility of tuning the models. They have the ability to control the weights. If most discussions in the class are face to face, they can choose to turn that domain off completely so it doesn’t contribute to the prediction. The design framework is modular, so you can add and remove things.
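(Again a sketch of my own, not D2L’s implementation: one way instructor-controlled weights and a “turn this domain off” switch could combine per-domain risk scores. Domain names, scores and weights are made up.)

```python
# My sketch of instructor-tunable domain weights, not D2L's implementation.
# Scores, weights and domain names are made up for the example.

def combined_risk(domain_scores, weights):
    """Weighted average of per-domain risk scores, skipping disabled (zero-weight) domains."""
    active = {d: w for d, w in weights.items() if w > 0}
    total = sum(active.values())
    return sum(domain_scores[d] * w for d, w in active.items()) / total

scores  = {"discussions": 0.9, "attendance": 0.1, "completion": 0.2}
weights = {"discussions": 1.0, "attendance": 1.0, "completion": 1.0}

print(combined_risk(scores, weights))   # discussion inactivity pushes risk up
weights["discussions"] = 0.0            # face-to-face course: turn the domain off
print(combined_risk(scores, weights))   # risk drops once the domain is excluded
```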

Someone 2: To what extent are there plans to build analytics capabilities with tools that use the LTI protocol – bringing in outside tools, using that data infrastructure? E.g. Panopto, Piazza. Newer tools that interface with LMSes.

Alfred: Didn’t talk about the engineering. A client might say, we’ll build our own proprietary model, and we don’t even want D2L to have access. The models could be exposed and we’d pick that up as a web service. The engineering is intended to be flexible.

Jose Luis Santos, Katrien Verbert, Sten Govaerts and Erik Duval:
Goal-Oriented Visualizations of Activity Tool Tracking: A Case Study with Engineering Students

Jose talking, from KU Leuven in Belgium

Motivation for project was helping students be aware, self-reflect, and make sense. A lot of work done in this area, interested in how to engage the students in the process of reflection and modifying their behaviour. Related work in LAMS, Moodle, Dokeos, Khan Academy. Students should reflect during the activity.

The Student Activity Meter was a previous project. Also learning dashboards/learnscapes. Feedback was that more context was needed to understand what’s going on.

Wanted to use the concept of goals. Not so much learning goals, as the Quantified Self approach – oriented to the activity. Focus on the process rather than the final learning goal.

Used design-based research methodology. Get early feedback, requirements. Cycle of design/implement/deploy/observe & evaluate/ analyse/requirements.

Paper prototype, digital prototype, first release, second release. Focus on teachers and TAs initially. 1st with n=7, 2nd with n=5.

Feedback results – about the size of tables, redundant info (users expect two visualisations to mean two datasets), personalisation (different backgrounds, few students used to visualisation), context and motion.

Then tried a real course, on programming. Wanted to explore how students spend their time during the lab sessions. As engineering students, need to learn not just code, but also how to manage their time, and so on.

They tracked how individuals and groups spent time – tracking on the desktop (development, writing), applications, program code. Pseudonymous accounts were used – given to students, with accounts created in RescueTime for them. Also with Eclipse / rabbit-eclipse to track their development activity.

Visualisations show how you compare to other students, and are interactive. They show colour-coded goals – red not started, blue in progress, green complete. Also a motion chart, showing their activity vs the average of the group, and over time. A bar plot of your time vs the total time of the group. A whole series of charts, for time spent in applications, on various documents, and on websites.
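(A tiny Python sketch of that goal colour-coding as I understood it – not the actual widget code; measuring progress as time spent vs time planned is my assumption.)

```python
# My sketch of the goal colour-coding (red = not started, blue = in progress,
# green = complete); using time spent vs time planned as the progress measure
# is an assumption, not the tool's actual rule.

def goal_colour(time_spent, time_planned):
    if time_spent <= 0:
        return "red"     # not started
    if time_spent < time_planned:
        return "blue"    # in progress
    return "green"       # complete

print(goal_colour(0, 120), goal_colour(45, 120), goal_colour(120, 120))
```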

First evaluation, n=36. Did a demo; explored perceived usefulness and privacy concerns (tracking!). Questionnaire – found it good, helpful; OK with tracking – but not with being tracked at home.

Second evaluation, n=10. 4 lab sessions, 15 minutes. More questionnaires. SUS result 72 – usability quite good. Not totally convinced on usefulness. Could understand the visualisations, see patterns.

Two widgets that they found most useful: time spent on 6 main activities compared to peers, and time spent on general software development activities.

Final iteration (not in the paper) – with n=14. SUS questionnaire – 66. This was at the end of the course.

Although they consider it useful, understandable and usable, students did not use the tool frequently. Why? One student said they had other priorities. They don’t realise that reflection is useful feedback. We have to engage them, get them used to this. How can we engage them? Working on this.

Now doing other tool studies, simplifying the visualisations. Also want a small/simple widget – e.g. traffic light.

Questions

Someone 1: Can say more about privacy concerns?

J: In the first iteration, we had 36 students. We could not force them to use it – that is against the law. We installed the software on the lab computers. At least half of them wanted to work at home with their own laptops. So we only had half the students working on that. They told us they don’t feel comfortable that other students, or us, can look at what they are doing. For them, they are autonomous. The people who rejected this tracking were the ones who were not so active during the course. It’s difficult to manage.

Someone 1: On the lab computers, was there some sort of review process in your institution to be able to collect and analyse this?

J: Not part of normal operation, but no review process. We let them know they could stop. We were putting some pressure on them by saying it’s part of our research.

Someone 1: The student who was using Facebook, do you imagine interventions for this? If they were off task? Or helping them to become self-monitoring.

J: Not so interested in intervening. They can use Facebook; the problem is if half the class are doing that. But in principle they have this for themselves; there’s no dashboard for us. Trying to get away from the sense that we are observing them. Want to provide them with reflective tools.

Someone 2: One problem I’m struggling with: the goal is reflection, learning impact. Early iterations showed them data that’s not their own. Any way to close the gap between the fake data and the goal early on? To put them into a scenario?

J: The scenario with the fake data was pretty close. We were simulating the real scenario. The usefulness was quite good, but they had problems figuring it out. They can’t link it to their own activity.

Barbara Kump, Christin Seifert, Guenter Beham, Tobias Ley and Stefanie Lindstaedt:
Seeing What the System Thinks You Know – Visualizing Evidence in an Open Learner Model

Tobias is talking, and tells his children (watching!) they have to go to bed after his presentation.

Talking about APOSDLE – ended four years ago, but still working on it despite having no money. Previously at TU Graz, now Tallinn.

An adaptive system used in the workplace. It tracks what people are doing, makes inferences about their characteristics, and gives recommendations about what they should do next. This user modeling was often secretive – the user was unaware; tackling this by opening up the user model and visualising it to the user.

Three things: an open-ended learning environment in the workplace; a large and complex learning domain (not easily modeled); the result of an implicit diagnosis. MyExperience was created – non-invasive, reflects what they’re doing; reduces complexity to get a lot on the screen; intuitive and correctable.

Work done on non-invasive knowledge assessment (Brusilovsky 2004, Kay 2002, DeBra et al 2010). Open learner models (Bull & Kay 2007). Mainly in educational settings.

Assumptions in APOSDLE – also empirically found in the workplace: learning is self-directed, driven from workplace tasks; learning uses existing resources; social process involving role-switching; experts act differently to novices – not just more of the same. Want to look qualitatively, not just 1-10 score.

APOSDLE was the result of a four-year research project.

The user model is an overlay of the domain model. It’s a generic application, instantiated by a domain model. Four workplace domains were modelled – innovation management, regulations on chemicals and their safe use, simulation in electromagnetism (lightning strikes on aeroplanes), requirements engineering. Modeled tasks and topics in the domain; the basic model is that a task requires learning of certain topics.

Assessment is implicit – tracked knowledge-indicating events, which give an indication of the kind of knowledge someone has of a topic. Three levels: learner, worker, supporter. Events are tracked for each. The algorithm is simple – majority voting: if they do mostly learner events, assume they’re a learner. Neutral events are not classified.
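(A minimal Python sketch of that majority-voting diagnosis – my illustration, not APOSDLE’s code; the event-to-level mapping is made up for the example.)

```python
# My sketch of the majority-voting diagnosis; the event-to-level mapping is
# made up for illustration, not APOSDLE's actual configuration.

from collections import Counter

EVENT_LEVEL = {
    "viewed_learning_resource": "learner",
    "completed_task":           "worker",
    "edited_annotation":        "supporter",
    "logged_in":                None,        # neutral event: not classified
}

def diagnose_level(events):
    """Assign the level with the most knowledge-indicating events; ignore neutral ones."""
    votes = Counter(EVENT_LEVEL[e] for e in events if EVENT_LEVEL.get(e))
    return votes.most_common(1)[0][0] if votes else "unknown"

print(diagnose_level(["logged_in", "viewed_learning_resource",
                      "completed_task", "completed_task"]))   # -> 'worker'
```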

Two strands of research. User-driven development through several iterations. Shows three areas for each person – learner, worker, supporter – with topics from the domain ontology shown that fit in each, based on the knowledge-indicating events that were tracked for them. Intensity of colours shows frequency of interactions. Principles from info vis – people are quick to find info in tree views where you can drill down through the domain ontology, coordinated with a treemap view with all info on one screen for an overview. Ben Shneiderman’s principles – give an overview first, then a filter, with details on demand when you drill in.

Future work – analysing knowledge-indicating events from months of use at workplace and their prediction of self- and peer-assessment.

New project just accepted with the EU – Learning Layers: Scaling up Informal Learning in SME Networks. Construction sites, mobile phones; mobile techs, social semantic learning analytics, 20 partners, €13m.

Questions

Someone 1: The three levels – what’s the rationale? Literature? Specific domain? What is the reason behind the mapping between knowledge-indicating events and a specific level? Why is editing an annotation indicative of a supporter role, but creating one done by a learner?

Tobias: Where the three levels come from: we assume that experts do different things than novices, and that should be reflected in the system. There’s quite a lot of research showing this. The three particular ones came from research into knowledge workers. Could of course try to come up with more fine-grained analyses; for workplace practice, we needed metaphors that are very intuitive. Then, it was a pragmatic approach – it would be interesting to do machine learning for the classification; that’s future work. Wanted to see how far we could get with a simplistic approach.

Jon Dron: I like the solution to the problem of inscrutable user models. It makes it easier to game the system. Did you find examples of people deliberately modifying behaviour?

Tobias: No, it was only for themselves. You don’t have to game, you can just change a level if you think the system has misclassified you. The only effect this has is you get different recommendations from the system.

Derick Leony, Abelardo Pardo, Luis De La Fuente Valentín, David Sánchez De Castro and Carlos Delgado Kloos
GLASS: A first look through a Learning Analytics System

Derick presents, from University Carlos III de Madrid. A demo of GLASS and two visualisations.

The problem was around large data volumes of student events in the learning activity. Are at an exploratory stage to look at visualisations to uncover hidden information in that dataset. Faced many problems. Want to be able to reuse visualisations in different contexts.

Context is a programming course, collecting data from a series of tools – editor, IDE, compiler; LMS class forums (use depends on professor); browser. Students have their own characteristics – work in groups, have different majors, etc.

Demo starts.

The first visualisation is based on Google Analytics. The left side shows the track of events students generated during the whole term. Can see a clear weekly rhythm, and a peak of activity right before the face-to-face session where they were asked to (?present) information. They did it at the last moment. Also a big peak right before the final project or mid-term. The right side shows what they’re working on: mostly the browser, but then the programming tools. The spread changes over time – initially a lot of activity with the browser and less with the compiler; then by the end, when working on their final project, can see that they browse a lot less and work a lot on the project. Built automatic filters – e.g. by username, or the community they belong to. Can compare users easily. Can see e.g. which groups are using the forums (collaboration).

Second visualisation, designed to be very simple. Shows student photos, with red/amber/green indicators for events in each category (compiler, web, forum).
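(My guess at how a per-category red/amber/green indicator like this could be computed – made-up thresholds relative to the class average, not GLASS’s actual rule.)

```python
# My illustration of a per-category traffic-light indicator, comparing a
# student's event counts to the class average; thresholds are assumptions.

def indicator(count, class_average):
    if count >= class_average:
        return "green"
    if count >= 0.5 * class_average:
        return "amber"
    return "red"

student_events = {"compiler": 40, "web": 10, "forum": 2}
class_average  = {"compiler": 35, "web": 25, "forum": 5}

for category, count in student_events.items():
    print(category, indicator(count, class_average[category]))
```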

Third visualisation, not yet deployed. Aimed at the teacher. Gives overview of how the class is performing. Will include marks.

Questions

Someone 1: The graph shows students wait until the last minute before a face-to-face activity before working. Have you looked at what the correlation is between waiting until the last minute and performance? Could you represent that for the educator?

D: That was one of the approaches. We were looking for variables that correlate with the grades; we haven’t found one that correlates much. Now trying to check the insight; it might not correlate, but we want students to learn to have a constant work rate. Try to avoid that kind of behaviour.

Someone 2: Impressed with what you can pull out about the student activity. I’m sure there’s a correlation between doing the right thing at the right time and the learning. Any data about their actual conceptions and how those move, grow over time? Is there a way to extend this to get more of what their conceptions are?

D: That was the idea with GLASS. We are not collecting their conceptions now about their learning or the tool. We would have to have a dataset with that information first, then visualise that or correlate with performance in the class.

Someone 2: Is there any data besides grades that’s easy to get that would have something to do with their performance?

D: That’s what we’re trying to find out. We’re not sure.

Someone 3: Self-assessment on a specific concept – then you can answer this kind of question.

D: The issue with self-assessment is that it’s optional. The last visualisation is based on engagement and on self-assessment of the content. If they want to self-assess, we correlate more with that than with the performance at the end.


This work by Doug Clow is copyright but licensed under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.
