LAK13: Wednesday morning (2)

More liveblogging from LAK13 conference – Wednesday morning, second session.

Visualization to support awareness and reflection

Second session at LAK13. Ravi from Copenhagen is the chair.

Photo: Zumba crowd, (cc) Cimm/Simon Shoeters on Flickr. This is not a photo of the conference itself, but it is very much like this.

Addressing learner issues with StepUp!: an Evaluation

José Luis Santos, Katrien Verbert, Sten Govaerts, Erik Duval

José presenting.

It’s a tool, a learning dashboard. Funded by two European projects, including WeSPOT.

Learning dashboards are a subset of personal information dashboards, using many sensors. Example of RunKeeper – the same approach can be applied to learning; you can have a wristband. Easy ways to track and see what’s happening. There are many existing dashboards; we should pay attention to those. Examples from LAMs and others. The Govaerts, Verbert, Duval and Pardo paper at CHI 2012 is previous work.

Evaluated the tool in three courses. The first was a master’s thesis course, with 11 students working individually. The second: multimedia students designing mobile apps, n=20, in groups of 2-3. The third: multimedia, n=36, in groups of 6.

In all the courses, they follow an open learning approach with learning traces. There are face-to-face sessions with supervisors; students report progress in blogs and on Twitter, and are also asked to report time using a reporting tool (the toggl dashboard). The tool is StepUp!, open and available to all. Big table view – rows are students, columns are blog posts in various groups, time spent, Twitter, and sparklines for social media and time activity. You can click through a sparkline to see the full graph of activity. This gives you a big overview, and lets you compare your activity with colleagues’.
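As a rough sketch of the kind of row structure such a table view implies (my own reconstruction in TypeScript, not the actual StepUp! code – all names here are invented):

```typescript
// Hypothetical shape of one dashboard row: one student, with counts per
// activity source plus the per-day series the sparklines are drawn from.
interface StudentRow {
  student: string;
  blogPosts: number;        // posts across the course blog groups
  tweets: number;           // course-related tweets
  minutesReported: number;  // time logged via the time-tracking tool
  socialPerDay: number[];   // social media activity per day (sparkline)
  minutesPerDay: number[];  // reported minutes per day (sparkline)
}

// Order rows so a student can quickly compare their effort with colleagues'.
function byReportedTime(rows: StudentRow[]): StudentRow[] {
  return [...rows].sort((a, b) => b.minutesReported - a.minutesReported);
}
```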

Development methodology was design-based research – a cycle from requirements elicitation, design, deployment, observation, evaluation and analysis, then back to requirements. Analysis – data sources were Google Analytics, a survey, and tweets/blogs/time tracker.

There’s a model underlying it: four phases – awareness (data), self-reflection (questions), sensemaking (answers), and impact (behaviour change, new meaning) – which feeds back to awareness again. (The Verbert et al. paper in American Behavioral Scientist 2013 sets this out.) Behaviour change was small, not a large impact.

Evaluation: do the students perceive it, and did their sensemaking solve their learning issues? They set up three brainstorming sessions with the different groups, discussing what their learning issues were. The researchers processed the information, then fed it back to the groups. This generated a list of 34 issues, of three kinds. One kind they can’t solve at the micro level, e.g. ‘we have too many exams’ – but the course this came from had no exam. Some were out of scope of the dashboard project. The third kind were issues they could address with the dashboard – e.g. a group member who does not work, communication within the group, how I distribute my time.

Simple evaluation via Google Analytics – the dashboard was used. At the start, average visit duration was long, but it reduced over time. Perhaps that was once they’d found a way of using the tool.

Questionnaire feedback from students on the evaluation: mixed to negative. Much agreement with ‘I think that some students may over-report the time they work on this course to give the impression that they work more than they actually do’ – a concern about gaming the system. But little support for it helping them plan their time better, or realise when something goes wrong in the course. Positive for ‘I think that I am motivated for this course’, but not for ‘StepUp! increases my motivation for the course’. Some positive responses for promoting activity.

What they’ve learned: we need to solve the problems the users have. The tool works better for individual work than group work, and better when everyone works on the same topic than on different ones.

Next steps: tracking outcomes rather than effort, and applying badges – so students can see the objectives they can achieve.

Questions

Denise Whitelock: It looks interesting. You thought students were being disingenuous, reporting more time than they were spending. Often students don’t declare to each other how much time they’re working – the bragging rights are ‘I don’t do work because I’m so clever’. So maybe you have better data.

José: I agree.

Ravi: Design-based research is, in the learning sciences, a particular tradition – working, e.g., with school teachers and interventions. What were the criteria for selecting the courses? Were you also the instructor? Do you work with teachers outside the research group? Is it participatory action research, or true design-based research?

José: Just now, it’s basically courses the professor is teaching. All of them follow the same learning approach.

Live Interest Meter – Learning from Quantified Feedback in Mass Lectures

Verónica Rivera-Pelayo, Johannes Munk, Valentin Zacharias, Simone Braun

Verónica presents.

The use case and motivation – giving a good presentation is hard; you get better with practice and feedback, and most importantly by learning from that feedback. Applying learning analytics to workplace learning, capturing feedback to support reflective learning. Part of the Mirror project, on reflective learning from your own experiences at work. Inspired by Quantified Self approaches. Interested in mass lectures, virtual courses and conferences.

Application model – aggregated live feedback goes from the audience to the presenter, leading to adaptation/reaction (e.g. slow down, re-explain); the audience also have peer communication (e.g. the talk is too hard, I’m distracted).

They have an app – the LIM App (Live Interest Meter). Android app and browsers (JavaScript). It’s a single dimension – you move a slider up or down on a single value, e.g. general performance, speech speed, comprehension. This is aggregated from all students, with average and spread, showing how it changes over time. The teacher can add topics as an overlay to the graph to give context.
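A minimal sketch of what that aggregation presumably involves (TypeScript, invented names – not the actual LIM code): bucket each person’s slider readings by time, then plot the average and spread per bucket.

```typescript
// One reading: one audience member's slider value at a point in time,
// e.g. -1 (too slow) to +1 (too fast) on the speech-speed dimension.
interface Reading {
  userId: string;
  value: number;
  timestamp: number; // ms since epoch
}

interface BucketStats {
  bucketStart: number;
  average: number;
  min: number;
  max: number;
}

// Aggregate readings into fixed-width time buckets, keeping the average
// and the spread (min/max) that the presenter's graph shows over time.
function aggregate(readings: Reading[], bucketMs = 30_000): BucketStats[] {
  const buckets = new Map<number, number[]>();
  for (const r of readings) {
    const start = Math.floor(r.timestamp / bucketMs) * bucketMs;
    buckets.set(start, [...(buckets.get(start) ?? []), r.value]);
  }
  return [...buckets.entries()]
    .sort(([a], [b]) => a - b)
    .map(([bucketStart, values]) => ({
      bucketStart,
      average: values.reduce((sum, v) => sum + v, 0) / values.length,
      min: Math.min(...values),
      max: Math.max(...values),
    }));
}
```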

It also does quick polls and questions – with questions from the audience set up as a poll. Students click which are important, and can see the most important ones based on those votes.
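The question voting sounds like a straightforward count-and-rank; a hypothetical sketch in the same spirit (again, invented names):

```typescript
interface AudienceQuestion {
  id: string;
  text: string;
  votes: number; // how many students marked this question as important
}

// Surface the questions the audience has voted most important first.
function rankQuestions(questions: AudienceQuestion[]): AudienceQuestion[] {
  return [...questions].sort((a, b) => b.votes - a.votes);
}
```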

Experimental tests: the first in a project meeting, with 10 participants using LIM. It was discussion-driven, over several topics. The intention was to influence the speaker, and to compare one’s own interest to others’. The second test was a university lecture with 15 participants. Technical results were very satisfactory, but acceptance results were limited. The group already knew each other well.

Two conclusions: the presenter’s role has to be well defined.

Next, a larger user study to refine the use case and guide further development – which scenarios are most effective, and what users most want. 20 qualitative interviews: anonymity good, polls good, chat bad – distracting. Reflection needs time, preferred after the session. Online survey n=87 (120 surveyed), over a month; 55% audience, 45% presenters. The realistic way of responding was to react to feedback periodically, in content blocks, though some wanted to do it immediately. Preferred live/continuous data collection to periodic collection. What to evaluate? Interest, tempo, clarity, slides/documents. Finally – preferences for app features: poll questions top, reviewing polls after the session good, but chat very low. 78% found the idea positive, and 57% would like to test it.

Presenters prepared to learn retrospectively; motivate students to use it. Main concern is distraction.

Now moving towards a second prototype – mockups to adapt the results to real features. Plus usability improvements, and others.

Questions

Dan Suthers: More general questions, for both presenters and their colleagues. It occurs to me, looking at these visualisations, that they’re meaningful to the researchers, but they require a culture and skill in interpreting visualisations that we’re just assuming students have. Do we need to train faculty to use them? How do we know that this interface to the data is really understandable? Not just survey questions. How do we see the connection to these being actionable affordances – actually using them to change their practice? Do we have evidence of that, or do we need training?

Verónica: We found they don’t need much training. The graphic may have looked complicated – the evolution graph – and we saw that was too much information. You can’t make out the line with the averages, min/max and history. So the next prototype will just show what’s happening now: just average, minimum and maximum. This will simplify the visualisations, so students and lecturers need less training.

José: I agree we have to evaluate this; we have to do a lot of iterations. There was a nice longitudinal study that found the people who used dashboards more had less dropout from the course.

Jan Plass?: The visualisation of the results of data mining is a key issue. One reason the question came up is, e.g., the colour coding – from too slow to too fast, too complex to too easy. The choice of colours could be something to look at. Why green for too fast?

Verónica: We wanted to give more feedback. In the next prototype, the presenter can change to a different schema. The middle is the optimum, so it can use multiple colour schemes.

Someone: Colour blindness?

Verónica: I should test it. My colleague is colour blind and he didn’t say anything. (laughter)

Someone: A question about this feedback with video presentations. There’s an extra challenge with live presentations. Are you focusing on live presentations, or thinking of putting it in videos?

Verónica: The first approach is live lectures. But we see potential for video lectures, though we have not yet explored it. We have seen some web tools for videoconferences that have thumbs up/down. We may have to compare the differences. Not just giving the feedback live, but having it available afterwards.

Someone: The motivation for using this is to improve presentation skills. Have you thought about making it more useful for the students, by being able to give feedback to the presenter – maybe ‘I understand’ or ‘I don’t’? There’s a similar app called UnderstoodIt.com – a very simple app to give to students, with only two buttons: I understand, or I don’t. You continuously see a graph of these. The goal is for the teacher to instantly repeat a part – it’s linked to PowerPoint. But you could use this with more information.

Verónica: We’re using it as motivation for students – curiosity about where I am compared to other students. It’s fast for you, it’s OK for the others – what’s the matter there? In the next prototype, we’ll try to evaluate how useful it is. Trying to figure out the best approach.

This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.

