LASI14 Monday (3): Public event

Liveblog notes from Monday evening at LASI2014 – the Public Event.

Harvard - Duxford

Garron welcomes everyone. Bringing the local community together with the international community here for LASI. Go to the person you most connected with and join in a conversation with them.

George Siemens: Introduction & Overview of Learning Analytics

Thanks for joining us. [His welcome slide says that tomorrow is Canada Day!]

Some of you will have heard this before. I’m going to get you to think with me: what are the changes that make analytics more relevant? How many of you remember this? [Shot of Yahoo! from about 1994/5. No! Was 1998.] 5 Resources on lectures. 2.4m websites in 1998. Then Google turned up. Why did we move from [Yahoo] to [Google]? Today, the total number of websites is just short of 1 billion. (992,014,374)

Through the lens of research, we’re trying to discover relations between things. When we make sense, it’s primarily through putting things in relation to other things. Science is a process of discovery. If you’re ill and I bleed you, do you get well? If I give you penicillin, do you get well? It’s a process of discovering things. “More is different” – Anderson 1972. 1550: Conrad Gesner – a confusing and harmful abundance of books. The unreasonable effectiveness of data – Peter Norvig etc. A transition in thinking.

Early on, we start with new methods – an index of indexes, the encyclopedia. Abstractions. Visualisations – an image that captures how people vote based on colours. We rely on these cognitive approaches to understand what’s happening, because we can’t grasp it with the unaided human mind.

More recently, patterning by software. Trails of ‘the many’. Networks.

The problem for which analytics is the solution is abundance.

Viz is a brokering entity between quantity and cognition. Analytics are proxies for human cognition. Until we can get in the human brain, massage it and watch what happens when someone learns at a neural level, we create methods to get a proxy insight. Grasp things that weren’t previously seen. An entire field of science through a viz.

Essentially, LA is a tool to think with when we can’t think with the context we’re functioning in. Most defined by abundance, and related complexity.

More broadly, at university. The school system was created around a model emphasising content. We do that with learners, K12 or university. The abundance of info means we must think in a less course-centric view and work in a more network-centric way. If you knew what I knew, you could create a knowledge map of who I am, and then compare my graph with the structure of a discipline to see how close I am to achieving mastery. Will happen more in future as entire fields disappear under our feet.

Thinking in complexity, abundance, extending limited range of human cognition with a range of tools and technologies.

[Wow, that was whistle-stop high paced stuff.]

Zach Pardos: Learning Analytics and Machine Learning

Zach’s from UC Berkeley. I’m going to take us down to a lower level, give examples of ML in education I’ve been involved with. Also the ethical frameworks. Sensitivities around surveillance, experimentation.

Three topics. Two examples of ML in education: 1, using student clickstream data to improve instruction; 2, studying student affect at scale. Lastly, ethical frameworks – the Asilomar Education Convention.

Using student clickstream data. Students are learning from answering problems on tutoring systems, receiving feedback and help. They’re interacting with each other. And they’re also learning from going to class. [Photo of sleeping students in theatre.] How can we recommend better pathways, and provide formative feedback to teachers? Many MOOC teachers are working in the dark. They don’t get to look into the students’ eyes.

Example from edX, Harvard/MIT. First course taught, Circuits and Electronics. Example of the data. A learning pathway, towards some learning objective. It can be conceptual, e.g. cultivate a love for math. We’re measuring: did you learn this thing? Example of a circuits problem. Sarah answers the first part incorrectly, then goes to a video, looks at it, comes back, answers the same question incorrectly again and the second part incorrectly. She goes to a book page, the next page, then comes back and answers the first part correctly, then the second part correctly. What resource was most effective? We have Sarah’s log here. If we have thousands of Sarahs, or tens of thousands, can we infer efficacy?

Probabilistic graphical models to analyse this: knowledge is latent, the observables are the right/wrong answers. Propose a model; then you have to have a way to evaluate it. Typically, the EDM approach is to do an N-fold cross-validation. Train the model on the training set, come up with hypotheses relating knowledge now, knowledge in future, and the probability of answering. Then use the trained parameters to iteratively predict the test-set responses. Go one at a time, updating at each step. Come up with a list of predictions and actual answers, and metrics to grade our models. If you ask what you should measure – low level, more affective, conceptual – these metrics are a kind of guide: whatever’s more predictive, maybe you should go with that.
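The model Zach describes – latent knowledge, observed right/wrong answers, iterative prediction – is the shape of Bayesian Knowledge Tracing, the standard EDM model in this line of work. A minimal sketch; the parameter names (p_init, p_learn, p_guess, p_slip) are the conventional BKT ones, not values from the talk:

```python
# Bayesian Knowledge Tracing sketch: knowledge is a latent binary state,
# observations are right/wrong answers. At each step we predict, observe,
# then update our belief that the student knows the skill.

def bkt_predict(responses, p_init=0.3, p_learn=0.1, p_guess=0.2, p_slip=0.1):
    """Iteratively predict each response (1 = correct, 0 = incorrect),
    updating the latent knowledge estimate after every observation."""
    p_know = p_init
    predictions = []
    for correct in responses:
        # Predicted probability of a correct answer given current belief.
        p_correct = p_know * (1 - p_slip) + (1 - p_know) * p_guess
        predictions.append(p_correct)
        # Bayesian update of the knowledge estimate given the observation.
        if correct:
            cond = p_know * (1 - p_slip) / p_correct
        else:
            cond = p_know * p_slip / (1 - p_correct)
        # Opportunity to learn between steps.
        p_know = cond + (1 - cond) * p_learn
    return predictions

# Sarah-style trace: wrong, wrong, then right, right after using resources.
preds = bkt_predict([0, 0, 1, 1])
```

Comparing `preds` against the actual answers across many students is exactly the cross-validated evaluation described above.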

Real idea is not to see when they mastered it, but what pattern of behaviour did they display that led to that. These are the questions we can ask of these data. Recommend resources to students, feedback to instructors about what’s working.

Second topic: studying student affect at scale. Are they engaged? Are they bored, confused? EDM has developed methodologies for measuring this in log data. Go into a classroom, do a field study using a tutoring system. Note down each student’s state. Associate it with log data features, do an ML mapping, apply that to the students you didn’t observe. Did that for a whole year using Assistments. Measured boredom, concentration, confusion, frustration, off-task, gaming. (Pardos, Baker et al JLAK 2014). Result observed – seen across many studies: more frustration correlates with better scores. Maybe it means you cared enough. Confusion is positive if it happens while you’re receiving help.
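The workflow here – human observers code a few students’ affective states, those labels are aligned with log features, a model is fit and then applied to unobserved students – can be sketched with a toy classifier. The features (hint rate, response time) and the nearest-centroid model are illustrative stand-ins, not the published detectors:

```python
# Toy affect-detection pipeline: fit per-label mean feature vectors from
# observed (coded) students, then label unobserved students by nearest centroid.

def fit_centroids(samples):
    """samples: list of (features, label). Returns per-label mean feature vectors."""
    sums, counts = {}, {}
    for feats, label in samples:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict(centroids, feats):
    """Label of the closest centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(centroids[lab], feats)))

# Field-observed students: (hint_rate, mean response time in s) -> coded state.
observed = [
    ([0.8, 40.0], "confused"),
    ([0.7, 35.0], "confused"),
    ([0.1, 5.0], "gaming"),
    ([0.2, 4.0], "gaming"),
]
centroids = fit_centroids(observed)

# Apply the trained detector to an unobserved student's log features.
label = predict(centroids, [0.75, 38.0])
```

The real detectors use richer features and models, but the apply-to-the-unobserved step is the part that makes the approach scale.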

Asilomar convention – respect for rights of learners. Can imagine injecting some randomisation – but that’s blurring line between students being experimental subjects and just doing their learning thing.

Phil Winne: Self-Regulated Learning with nStudy

Big data and learning analytics to leverage learning science when N=me. Going to let Powerpoint regulate me.

Thanks to funders. Claims. Experimental findings not much help for N=me. Will be cluster of learners like me. nStudy gets details. Have to make a transition from classic learning science in to a different model.

When I learn online, it’s not a well-designed experiment. Much more confused.

[this is rapid-fire multimodal presentation – lots of text coming up, with Phil talking over it].

What we find in classic experiments is not the same.

I’m not a statistical average. As a learner differs from average, predictive accuracy worsens.

nStudy. The idea: if things go viral, we can make progress. It’s a plugin for Firefox or Chrome. Logs data server side. Curriculum is anything you find in .html or PDF. Learners are not directed. Study space on the left hand side, resource on the right. Can annotate, quote, etc. Bookmarks. Chat. Quotes. There are suggested prompts – after CSCL – e.g. ‘What is this part about? Can you give a general summary?’.

Can explore, filter, and see when you last explored things. Note template.

The data gathered – traces. These are observations about what you did. A quote could mean: metacognitively monitoring it; or I plan to review. A #tag, again, monitoring, assembling into a category, planning to search by the tag.

Can look at study events. Event counts, patterns. Transition matrix. Graph of the transitions. Maybe show these are learning patterns. Analyses of content, the stuff LMSes describe. But inside it all too.
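The transition matrix Phil mentions is just a count, for each pair of study events, of how often one immediately follows the other in a learner’s trace. A minimal sketch; the event names are invented for illustration:

```python
# Build transition counts from an ordered trace of study events.
from collections import Counter

def transition_counts(events):
    """Map (from_event, to_event) -> count of adjacent occurrences."""
    return Counter(zip(events, events[1:]))

trace = ["read", "quote", "read", "quote", "note", "read"]
counts = transition_counts(trace)
# Here quoting most often follows reading - the kind of pattern
# that might be read as a self-regulated learning strategy.
```

Normalising each row of these counts gives the transition probabilities that the graph of transitions visualises.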

Ask questions – how are these events patterned? Does something get elaborated later?

nStudy lets us look at self-regulated learning, it’s a personal research program. It’s my research program when N=me.

Caroline Haythornthwaite: Crowds and Communities – Social Network Analysis

Network madness – a node, a relation, a network.

Most of you are familiar with social network. Actors – people, groups, organisations; tied by relations – strong or weakly. Together, they’re graphs, networks. Network questions – who learns from whom? What do they learn from each other, by which media? What benefit accrues to the network? Interested in the outcomes developed. What does the network hold that the individuals don’t? How do resources flow?

Strong and weak ties. Strong ties, old-fashioned friends: see often, communicate often, share secrets, use more media with them. A freely-given source of info. But they’re in the same social circles, so they have the same info – but they’ll share what they’ve got. How do you build strong ties through online media? How do you motivate sharing in crowds? Weaker ties are very important. Not in the same circle, so they have new information and resources. How do you bring them in to a learning environment? What’s the right balance between strong and weak so you have both innovation and commitment?

More than just pretty pictures. Who will know what at the same time as the others? Example of someone right in the middle who’s the only path between different ends of the network, and a group who are entirely unconnected to the rest. Also shows you the resilience – here, if you take that person, network falls apart.
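Caroline’s resilience point – one actor as the only path between parts of the network, whose removal makes it fall apart – is easy to check computationally. A sketch on a made-up five-node network, using a plain breadth-first search (in practice a library like NetworkX would do this):

```python
# Check whether a network stays connected when one actor is removed.

def connected(adj, removed=None):
    """Breadth-first check that all remaining nodes are mutually reachable."""
    nodes = [n for n in adj if n != removed]
    if not nodes:
        return True
    seen, frontier = {nodes[0]}, [nodes[0]]
    while frontier:
        node = frontier.pop()
        for nbr in adj[node]:
            if nbr != removed and nbr not in seen:
                seen.add(nbr)
                frontier.append(nbr)
    return len(seen) == len(nodes)

# "broker" is the only path between two otherwise separate groups.
network = {
    "a": ["b", "broker"], "b": ["a", "broker"],
    "broker": ["a", "b", "c", "d"],
    "c": ["d", "broker"], "d": ["c", "broker"],
}
connected(network)            # everyone reachable
connected(network, "broker")  # the network falls apart
```

Nodes like "broker" (cut vertices, in graph terms) are exactly the people the visualisation makes visible at a glance.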

Structure tells you tales. Denser networks are more class-wide conversation; thinner is smaller workgroups. Can discover groups – e.g. find a core team not listed anywhere.

Automated analysis, still concerned with who talks to whom, and so on. Look at ways to get more out of the networks that we have. More than just who answered to whom. Looking inside the message itself to see who else is part of the network. Name network. Combine text analysis with the node analysis.

On Twitter, look at who mentions and/or replies to whom. Example of a health care learning group – a single large component, with a periphery of observers. What were they trying to do? Look inside the network – who are the big players, what kind of people are they? Content providers, communications, advocacy – colour code and show the structures. Find out what people learn from each other. Tag the conversation to find out what they’re doing with each other. If it’s a learning tie, what does that mean? Look at what people learn from others – expect exchange of information about the topic, but mostly it was about how to teach in the classroom. At the upper level, PIs talked about facts. But methods people talked to each other about methods.

By their exchange of science teaching techniques, have high school and elementary school networks, but only one link between them. Maarten de Laat asked people what they’re talking about, takes it back to schools. Using networks to design social processes.

More about networks – look at media, changes over time, motivations to contribute to crowds, also explore them – Netlytics, SNAPP.

Community Discussion

Garron takes over, thanks everyone, encourages people to talk to each other in groups. Which they did.

This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.

