LASI13 Wednesday (3): Quantified self

Third liveblogging post from Wednesday 3 July 2013 at LASI13 (#lasi13), the Learning Analytics Summer Institute, at Stanford University, CA.

Quantified Self Technologies and Learning Analytics


Speakers: Erik Duval, Abelardo Pardo

Erik starts. Plan is a few minutes of introduction, then put you to work, work around the tables, 3 or 4 groups, with an assignment and quiz.

Erik says that Doug (i.e. me) is now the most interesting Doug alive, because Doug Engelbart died recently. We should get better at getting better.

Plan is to have an introduction, first assignment (what you might want to track), report back, second assignment (think about scenario that makes use of the data you want to track, using other group’s ideas), report back, group discussion and report back and look at future research.

Quantified self

Quantified self idea. What do you think about it? Fitbit, calorie trackers, health, sleep. Fitbit tracks every move/step you take, sends a note saying here's how many steps you took. Everyone in his team has one. It tells you how active (or inactive, if you're on a 12h flight) you are. Other similar things available e.g. Nike. Also information about the quality of your sleep. Tracking mood: things that run on your mobile device, where you regularly manually input how you feel, get visualisations, help you spot & understand patterns. RescueTime – track your computer use, browsing habits, see how much time you spend on social media sites. If you've never done this, just experiment a little bit, often an eye-opener. Indicate what you think is productive or not in your habits. So if you mark Facebook as not so productive, it'll tell you. Can also set goals – e.g. don't want to do more than 1.5h a day in email. In the beginning it was a big issue, then became a habit.

Q: Can it restrict the topic that you’re interested in?

Yes, but then it's a lot of tweaking. Can do: email one person, that's productive; email someone else, it isn't. Easier to go for big categories. Also most people start with big things they're worried about.
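
As an aside, here's a minimal sketch of the kind of bookkeeping a tool like RescueTime does, assuming you already have a log of time per category: tag each category as productive or not and check it against a daily goal. The categories, weights and the 1.5h email goal below are made up for illustration, not RescueTime's actual API.

```python
# Illustrative sketch only: summarise time per category against user-set
# productivity weights and goals. Category names and thresholds are invented.
from collections import defaultdict

PRODUCTIVITY = {"email": 0, "facebook.com": -2, "ide": 2, "docs": 1}  # user-set weights
GOALS = {"email": 1.5 * 60 * 60}  # e.g. no more than 1.5 hours of email a day, in seconds

def summarise(day_log):
    """day_log: list of (category, seconds) tuples logged over the day."""
    totals = defaultdict(float)
    for category, seconds in day_log:
        totals[category] += seconds
    for category, seconds in totals.items():
        weight = PRODUCTIVITY.get(category, 0)
        status = "over goal" if seconds > GOALS.get(category, float("inf")) else "ok"
        print(f"{category}: {seconds/3600:.1f}h (weight {weight:+d}, {status})")

summarise([("email", 2.1 * 3600), ("facebook.com", 0.7 * 3600), ("ide", 4.0 * 3600)])
```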

Q: What about tracking your environment – home energy consumption, taking constant photos, lifelogging?

All very much related. Lifelogging is possibly different – not necessarily quantified. If it's just pics of you in an album, it's more about memories.

Q: Energy use?

Can be. Typically quantified self relates it back to what you do that creates this huge consumption in the morning vs the evening.

Q: I work with brain imaging – portable EEG?

Absolutely. Lots of variations. We’re starting on devices that are professional ECG recorders. Another example – probably ways to measure attention, how excited you are, stressed.

Q: I have these devices and am wearing them!

Mood tracker – MoodPanda is one. Can get map of your mood, graphs, calendar. Typically manual input – you say I’m happy. Or choose a picture that represents how you feel, has metadata associated with it.

Q: Geospatial, location.

Of course, very quantified self-ish. Communities of people building tools, sharing experiences. Eating, drinking. Sexual activity. Another example – lift.do – express a goal, every day you go there and say yes I did it (or no I didn’t). Used it for email – every evening inbox is empty. 12,000 people had the same goal. Many of these applications, you use to solve a problem, and then you stop using them.

QuantifiedSelf.com is the central site, has a specific forum on QS for learning and cognition, some stuff going on there.

Q: Are you involved with it? Your research? The website?

It’s related to that. But I didn’t make the website. It’s just the web.

Abelardo

Premise for today: let's leave ethics aside for now. We talked about that on Monday. (laughter) Capitalise on these technologies and see what it would be interesting to apply from QS in the environment. Observe, e.g., only while they're working on the activities. Everything 24/7 is too much, but could go all the way with learning.

Imagine a magic switch. Tracking everything while you're learning, but when not, the switch goes off and nothing is logged. Did some experiments: provided students with a virtual machine – a computer to install on your computer. Another desktop appears, and we ask them to work on that one for the course activities. While you're there, you're being tracked. Tracked the browser – likely to check lots of things. If you want to check something without us knowing, you can go outside our desktop and use the other one.
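
Not the actual virtual machine instrumentation, but a minimal sketch of the 'magic switch' idea under that assumption: events are only recorded while a learning session is explicitly switched on, and everything outside it is simply dropped.

```python
# Minimal sketch (not the real VM setup): a logger gated by a learning-session flag.
import time

class SessionLogger:
    def __init__(self):
        self.tracking = False
        self.events = []

    def switch(self, on: bool):
        self.tracking = on

    def log(self, event_type, detail):
        # Anything that happens while the switch is off is not recorded at all.
        if self.tracking:
            self.events.append({"t": time.time(), "type": event_type, "detail": detail})

logger = SessionLogger()
logger.log("browser", "facebook.com")       # ignored: switch is off
logger.switch(True)                         # student enters the course desktop
logger.log("browser", "course-wiki/page3")  # recorded
logger.switch(False)                        # back to the personal desktop
```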

Why now? Example of medical school, using a simulator, can track a lot of what they’re doing. More and more tasks mediated by digital devices. Sensors are becoming a commodity – position sensors, galvanic skin response, airflow/breathing, blood pressure, O2 sat, etc. Intelligent textiles. Facial expression recognition – all the laptops here have a camera.

Think 10 years from now. What is the kind of data you'd like to collect? That's the first assignment. Second one is to identify the scenario to use it.

Assignment 1: What would you like to measure?

Plan to put something up here to stay up during LASI.

Q: Blood pressure … or anxiety. There’s an inferential step there. Which are you talking about?

Have to go into play mode. Short answer is the second. Stuff to help the teacher or the student with learning.

Idea – Dillenbourg: directional microphones on a table work out who is talking. Simple data, but feed that back during the conversation so the more you talk, the more your colour of lights light up. Bachour, Kaplan, Dillenbourg, IEEE Trans on Learning Technologies, 2010. Also Emotiv – alpha waves, attention. So push it further than GPS to know where you are.
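
A rough sketch of the feedback loop behind that table, assuming the directional microphones already give per-speaker speech durations (my gloss on the idea, not the published implementation): each person's lamp brightness is simply their share of the total talk so far.

```python
# Sketch: map cumulative per-speaker talk time to a relative lamp brightness.
def lamp_brightness(talk_seconds):
    """talk_seconds: dict of speaker -> cumulative seconds of speech."""
    total = sum(talk_seconds.values()) or 1.0
    return {speaker: secs / total for speaker, secs in talk_seconds.items()}

print(lamp_brightness({"Ana": 300, "Ben": 60, "Chris": 40}))
# {'Ana': 0.75, 'Ben': 0.15, 'Chris': 0.1} -> Ana's lights are much brighter
```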

What can we measure, and how is it relevant? What is relevant and how can we measure it? Fine to work both ways.

[Small group work]

  • Capture the a-ha moment. (brainwave/EEG)
  • Plasticity, ability to hold on to things, here’s how to move out of the box. Purposely trying to confuse yourself. Tracking what happens in creativity. A-ha moment is a good one. (Brainwave/EEG) Distraction for creativity.
  • Getting stuck. Discrete state vs continuum between stuck and aha moment. Moments of triumph/accomplishment – hitting a solution. (Dopamine rush?)
  • Teacher emotional state – frustration, fatigue, irritation.
  • [Design aspiration – actionable in the moment.]
  • Attention – who’s paying attention to whom. Could get with gaze, but tricky. Conversational pattern stuff.
  • Eye rolls.
  • Physical movement in the classroom, use of space.
  • Keep history, identify people’s pattern individually. A lot here about baseline, and noticing changes from that.
  • Could we sense people’s sense of interpersonal relationships, how they’re thinking about themselves vs others, management of self in social situations. Autistic kids have trouble with social regulation. This could help. Mirror neurons. ‘outcasting’
  • Images directly out of people’s heads (fMRI?)
  • Get people’s internal monologues and thoughts.
  • Meditation patterns.
  • Spot when ‘cute neurons’ are triggered – puppies, kittens. Beauty.
  • ‘temperature’ of a classroom
  • Quality of trust – body language. Individual differences. Transitively, rather than individual sense. Physiological state, amygdala.
  • Resilience.
  • Body language.
  • Direct measure of knowledge! (bit speculative)

[back to whole group]

Other group’s ideas – Erik suggests people add other group’s ideas when your list didn’t include it, in a different colour.

  • Linguistic analysis, disfluencies, use of the word ‘beer’
  • Knowledge about spread of viruses
  • Apparent off-task behaviour but actually productive
  • Number of different words used over a day
  • Ambient measures – colour, temperature, light, music
  • Affect relative to conversations.
  • A x B – Data is only useful with another data point – internet use crossed with time is more useful than either alone.
  • How much people mirror each other socially
  • Creativity – e.g. how unexpected your actions are
  • State of flow – serotonin, performance measures
  • Conversational ambiguity
  • Mood from typing – speed, accuracy
  • Posture
  • Social interactions
  • Interruptions
  • Self-generated or instructed actions (how do you measure that)
  • Sleepiness
  • Reflectiveness
  • How many languages you use in a day. Or accents, registers.
  • Cognitive load
  • Boredom, frustration, anxiety, focus, concentration
  • Nutrition profile – photos & RFID to monitor what they’re eating
  • Blood O2
  • Exercise & fitness
  • Bullying
  • Truancy
  • Metacognitive strategies
  • Achievement outcomes
  • Fidgeting – shoe tapping, etc
  • Learning preferences
  • Sense of self-efficacy, confidence, attribution theory
  • Value placed on the content, outside influences coming in as well as self
  • Empathy, interpersonal relationship skills

Assignment 2: Scenario

Two slides from Erik. When you track stuff, opportunities for innovation. Comfortable with tracking stuff because you can, then work out later how to make use of it – or not. Example from running (RunKeeper): tracks where you run, get data about how far, how fast you run. If I want to run in an hour, I can ask where I can go to run; lots of people have done that here, RunKeeper knows what I do, can pick things out. Often in a big city, it says go left here, second right, doesn't look good, but you end up in a park. Wasn't built for that purpose, but simple leverage of data creates value on top of the original. Further example – if you break your habit, when you stop using it, it sends you mail saying you're not using it any more, and that it's better to team up with people: we can find a friend of yours via Facebook with a similar pattern. Or someone in your neighbourhood. Or pair you with someone you don't know somewhere else and virtually connect about running. Very easy to transpose to learning. E.g. learning Italian: could put you in similar groups, do social control that you create for yourselves, encourage you to do 5 words a day. Very efficient and simple.
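
A hedged sketch of the 'pair me with someone with a similar pattern' idea transposed to learning: represent each learner as a weekly activity vector (say, vocabulary items practised per day) and match them to their nearest neighbour. The names, vectors and similarity choice are all illustrative assumptions.

```python
# Sketch: pair a learner with the peer whose weekly activity pattern is most similar.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def best_partner(me, others):
    """others: dict of name -> weekly activity vector."""
    return max(others, key=lambda name: cosine(others[name], me))

profiles = {"Marco": [5, 5, 0, 5, 5, 0, 0], "Lucia": [0, 0, 10, 0, 0, 10, 10]}
print(best_partner([5, 4, 0, 6, 5, 0, 0], profiles))  # -> 'Marco'
```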

Wearable computing gives opportunities to get data in, but also get data out – e.g. wristband tracking running/steps, vibrates a little when you hit multiples of 5000 miles. Or little display on a hacked wristband.

Think of a scenario where QS approach would help, added value more pronounced than what you have so far. Maybe you use non-trivial method to feed data back. Not yet another dashboard. Or at least, think a bit less trivial. Could be tabletop, device stuff. Explore data collaboratively. Example of QS data related to learning. Project it in public space, trigger conversations between students, or between students and teaching assistants. Another one – open badges, visualise them and how they relate – good evidence that more interesting stuff happens with 3 students behind the table than just 1, they start to explore difference between them which leads to interesting questions about what they did to earn a badge.

Task: Design a QS scenario. A very concrete example; specific people maybe.

Maybe diagram, draw something.

[Small group work]

Idea about diagnosing misconceptions (as in science, stats, etc) by simply sticking people’s heads in an fMRI. Like a very good ITS but with an fMRI instead of diagnostic tests. Could have a social aspect too.

[Group reporting]

First group:

New schooling system, adapts, optimises the schedule in the day. Making suggestions during school. Kid only got 4h sleep, no breakfast, walked 30 min to school, lunch at 12, 7pm study session. Suppose we have QS measures of their capacity; the system makes suggestions on what they should do. E.g. it knows he didn't have breakfast, so it might suggest having an energy bar. Might be feeling grumpy, so maybe in morning sessions don't suggest group work but self-study. After lunch, notices not much concentration, so the cafeteria gives options to help boost concentration levels, optimise nutrition.

Ethical challenges here, around child protection questions too. Very focused on food and physical activity. When to schedule group work, self study, PE sessions. Adaptive learning for self study based on QS. How many think nutrition is an important influencer? Quite a lot, but others don’t.
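
Reading the first group's scenario literally, it amounts to a handful of rules over the morning's QS readings that nudge the day's schedule. The thresholds and rules below are invented purely to make the idea concrete.

```python
# Illustrative rule-based sketch of the first group's scenario; thresholds are made up.
def suggestions(qs):
    tips = []
    if qs.get("sleep_hours", 8) < 6:
        tips.append("prefer self-study over group work this morning")
    if not qs.get("had_breakfast", True):
        tips.append("suggest an energy bar before the first session")
    if qs.get("concentration", 1.0) < 0.4:
        tips.append("cafeteria: offer options to boost concentration at lunch")
    return tips

print(suggestions({"sleep_hours": 4, "had_breakfast": False, "concentration": 0.3}))
```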

Second group:

People progress towards knowing something. Build on the conceptual change/misconceptions literature, esp. in science and mathematics, which documents the finite, qualitatively different ways people understand different phenomena – e.g. there are 3 ways people tend to think about electric current. So want to identify their current conception – it tracks you from the beginning in a QS way – elicit their view, then a personalised learning experience to confront them if it's not the consensus scientists' view, help them rebuild their theory. Been quite a bit of that happening over the years, but this would put them in touch with others with the same misconception, give resources to help move towards it. Impressed with nStudy, which sits beside the learner, enables them to draw concept maps, highlight what helps them.

The QS system is measuring to figure out and cluster students who have the same misconceptions? Yes. Could be that. Example – electric current. Scientists' view is it's conserved, but some think it's consumed; might put the latter together, or pair them up with someone who thinks it's conserved. Some of the SNA we've done on classroom networks shows that students who do most poorly cluster together and only talk to each other, while the more successful ones have many more connections, so perhaps clustering the people who don't understand together might not be so great. Some are very embarrassed about misconceptions too. Others say OK, my idea's wrong, how can I change? A QS system would know more about the learner – e.g. how do you think these lights here right now are working?
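
A small sketch of just the grouping step, assuming a diagnostic has already labelled each student with one of the documented conceptions of electric current (the labels are illustrative). Whether you then group within or across conceptions is the open design question raised above.

```python
# Sketch: bucket students by their diagnosed conception so they can be grouped or paired.
from collections import defaultdict

def group_by_conception(diagnoses):
    """diagnoses: dict of student -> conception label."""
    groups = defaultdict(list)
    for student, conception in diagnoses.items():
        groups[conception].append(student)
    return dict(groups)

print(group_by_conception({"A": "current is consumed", "B": "current is conserved",
                           "C": "current is consumed"}))
```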

Third group:

It’s an app, available soon. Scenario – you’re a calculus student faced with an integral that’s hard. People have QS profiles that include whether you know how to compute that integral, but also whether you’re willing to help. Also people who can give instruction in French, Italian. Can locate person in library who could help you. Book keeping too, people have an account – I have computed so many integrals, now it’s your turn. It could be anything, not just integrals. That book keeping, helping others, already happening virtually in Finland and the tax authority is trying to figure out how to tax it.

Using QS system here, student could also be directed to a Khan Academy video. Similar scheme working in communities like Stack Overflow, earn points for answering. Aardvark, bought by Google, same sort of thing. Public recognition, comfort levels differ.

Fourth group:

Context-aware mathematics tool. Instrument a phone to track what students say; it tracks maths terms, collects them, tries to repurpose them and display them to the student in a way that would help with metacognitive scaffolding, recognise their prior skills in mathematics. There was an upside-down balloon in this scenario, but nobody understood why it was there or what it meant.

Maths performance relates to context – e.g. can do maths fine for illegal drug transaction but can’t do it in class.

Final example

At a Madrid university. Students report with a simple app how easy or difficult they're finding what they're listening to at the moment. The teacher has augmented reality glasses, and gets little check marks or question marks above the students' heads. (Blimey.)

Group discussion

Assuming that you’re convinced that research in this area is relevant and useful – what topics need more research, and how.

It’d be interesting to see if there’s a chance to get it through an IRB (Ethics Committee). Depends how extreme the test is, but also what the value is. Erik’s joke is he works for a Catholic university, so he doesn’t ask permission, he asks for forgiveness. People’s experience of this varies. This community really needs to build up some best practice that people can take off the shelf. One person’s belief is you have to keep asking people. It’s complex. Delft have an approach on value-based design, make the principles come in at the start of the design. Gates Foundation have invested in some work, not so much a technical challenge who can do it, or even an economic challenge, it’s cultural, psychological and political factors. A straightforward study, following IRB requirements, using galvanic skin response measurements, info co-opted by people who weren’t a fan of Gates foundation to say they’d use the info to fire teachers, led people to go after someone else claiming mind control. You have to get the conversation right, better learning experiences, not disrupting privacy.

Need better infrastructure for software – almost insurmountable problems with getting even simple data. If the data belongs to the learner, it's very hard.

RescueTime, Lift – can we create apps like that that'll show persistence gains? Studying the effect, try to demonstrate it creates a lift. Goal like e.g. having to complete 4 courses. Is your study time efficient? Could give all students a subscription to RescueTime to get more productive. If it works. We did that in a class, but there was a danger – students became very focused on being efficient and it destroyed intrinsic motivation; it became 'I've done 4 hours on this' rather than it being a cool assignment. Many things easily become the opposite. Tricky, fine line. It's worth trying. But no magic solutions. It's a design problem.

How far are we from measuring how bored someone is, or how much attention they're paying? In a lab environment we're there. In a classroom, all sorts of economic factors. The technology is there now. Can get devices that do this. Can do it with a camera-based system, cheap and easy and already there. Discussion about how reliable they are. Like the discussion with sleep quality sensors – we're not there with consumer products. Important point – the label we attach is a construct. Can measure neural firing, skin conductance – and labelling that is interesting. Ryan Baker's stuff is interesting; 'off task' is no longer a good label. We're really far away from operationalising this across multiple people and contexts. John Behrens' work: what emotional attributes mean is slippery. Far away from understanding 'attention' or 'boredom' – not just sleep state or alpha waves. The meaning of the observation is yet to be determined. Fitbit tells you the number of steps you're taking; it's a pattern it detects – it counts differently on your ankle, wrist, etc., but still better than nothing. Knowing something about who's looking at you in class is better than nothing. A little delta is big. Not accurate now, but can build up evidence. We don't need to understand attention to be able to observe that something is going on that's not normal baseline behaviour, which might indicate a problem. Might not know what's causing it, but it's a cue to investigate. A poor indicator is better than none. GSR is wonderful, but what those observations mean is pretty slippery. (This was quite a debate.) 50 years ago we called it memory. Now we have cognitive load, short-term memory, etc. 10y from now we might have more nuanced constructs driven by these sorts of measures.
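
The 'deviation from baseline' point in that discussion can be made concrete with a very small sketch: we don't need to know what 'attention' is to flag that today's reading is unusually far from this learner's own history. A z-score against their baseline is one crude way to cue a human to investigate; the readings below are invented.

```python
# Sketch: flag a reading that deviates strongly from a learner's own baseline.
import statistics

def unusual(baseline, today, threshold=2.0):
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline) or 1.0
    return abs(today - mean) / sd > threshold

gsr_history = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8]   # a learner's typical readings
print(unusual(gsr_history, 7.5))  # True -> something is off; cause unknown, worth a look
```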

The technology is there, but what is the biggest obstacle to everybody adopting it? I'm working on brain imaging devices – how can I convince them? Get more involved in experimenting. One obvious thing is to have nice case studies, give examples: we used it here, have some arguments that it helps with this, this and this, build up from there. Have to prove value, beyond doubt. Example: social networks, being able to see pictures of my friends – it provides value for people, a very compelling case. This tech is getting there but not quite. Why aren't we doing it on ourselves? Some of us are. How many are tracking something? Pretty much everyone, at some level. How could we do it?

Wanted to ask people if we can have socio-metric badges, spot connections – willing for the next LAK conferences? General approval in the room. Pick the location for the conference relative to the legislation on this. (laughter)

Make it usable – don't focus on schools or larger institutions. Make a better Fitbit, get QS systems adopted by individuals. There has to be a reward for individuals.

(I was going to mention about the interesting tradition of self-experimentation, using the data you gather through the quantified approach. But I forgot.)

This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.
