Liveblog notes from the morning of Monday 28 February, the first full day of the Learning Analytics and Knowledge ’11 (LAK11) conference in Banff, Canada.
(Previously: The Learning Analytics Cycle, liveblog notes from Pre-Conference Workshop morning and afternoon.)
George Siemens – Welcome
Thanks to TEKRI at Athabasca; Centre of Educational Innovation and Technology, U Queensland; EDUCAUSE. Thanks to four platinum sponsors – Kaplan Venture, Alberta Innovates, Desire2Learn, Bill and Melinda Gates Foundation. Thanks to steering committee. Thanks to program chairs and program committee. Gill who helps organise and support administratively. Chris too. And Blaze who’s working the video on ustream – will be recorded and made available later. Backchat on #LAK11.
Why learning analytics? We're in a knowledge economy, and the foundation is data. We need to better understand the data exhaust produced by students and teachers as they interact and work online. Various initiatives in Canada aim to make it a leader in the knowledge economy. There's a shift from physical- to knowledge-based economies, and we haven't had as much time to work out how to create value in the knowledge economy. That's where learning analytics becomes important.
President of Athabasca University
Thanks everyone for travelling to Banff. Thanks George for organising.
As president, has plenty of data, just not the right data. Being used to shape and inform decision-making. Incredible capacity to store and analyse data, but we have too much data that isn’t analysed. We want data to be useful, to become the driver of the new knowledge economy. TEKRI is a young institution, in its founding years. Has great expectations.
Have to think about how to effectively use all the material. Some of us are all online, some are still in print. You are present at the birthing of a major initiative, you get to set the parameters.
The ability to capture almost everything a student does, from the first decision they make, their interaction with learning material and faculty moment-by-moment, through exams, and on through their learning lives. A research ethicist from a traditional university asks: Do you have permission? Do they know they're being analysed? Do the faculty know? Is that going to threaten the hegemony of the traditional classroom?
I hope it does. You are at the cusp of fomenting a revolution in post-secondary learning. I hope that revolution takes place. Universities that used to measure their success by 100% averages, rather than having the most successful learning experience. That’s what you’re about. Help institutions become excellent in knowledge creation, knowledge absorption.
Athabasca has been a leader in distance and online learning. Institutional analysis people here look at statistics. Internally, we are still at a very young, nascent level. But through our learning here, we hope to apply new methodologies and keep our university in the forefront of this field.
Privilege to sit here and learn, and reshape the learning landscape, internationally.
At Athabasca, we've been asked to help establish a new open university in Nepal, using the latest in analytics and achieving excellent outcomes. The next few days will help them establish a new and revolutionary way for a developing country to become a leader in learning.
Tony Hirst – Keynote – Pragmatic analytics
Tony is the MacGyver of data, says George Siemens.
Exploring how to use data, within the institution and wider. First, a little context and history of his entry into the world of analytics.
Analytics has been used by business for years, mainly in the marketing sense. The arrival of applications and data in quantity has made data increasingly attractive to business – and to popular business books.
Slide from dunnhumby, leaked onto the web, showing how they analyse Tesco (UK supermarket) shoppers' activity. It generates insights about what customers want. Segmentation is something you hear a lot about – but something we don't necessarily do when we consider our students. The data leads to insight about your customers; in marketing use, you then create effective marketing to essentially make people buy more of your stuff. That route is still valid for us: the 'marketing' might be about understanding students' learning needs, and the change in behaviour we're after is more effective learning.
JISC has started a series of workshops on Business Intelligence in the UK, which shows significant interest in the use of data to make institutions more effective.
Advert from OU jobs website, for a Corporate Data and MI Analyst, in the marketing department. Includes ‘development and delivery of robust models, tools, skills and resources to enable segmentation’.
There's much marketing data; Experian in the UK has fine-grained information about people. It's the same all over the world, available to companies who want to pay for it. If you think Google or Amazon know a lot about you, it's nothing compared to what the marketing companies know – and that's been the case for several years.
Use of the data is something of a journey – advertising, prospectus, register, retention, graduation, fundraising. We're missing a trick there. After graduation we should engage with them as lifelong learners – we know a lot about them, and it should go on for 20, 30 years. But the systems put post-graduation effort into fundraising.
Within the marketing area, we could make use of course catalogue information to monitor how helpful that information is. Google is the main driver of traffic (though it may be Facebook now). People come into a course page, tied to the qualification description, course search, and related courses. The qualification description often uses language that a graduate of it would understand … but it might not be what people are searching for. This is also key for discovering OER, if you are publishing them.
When playing with data, I like to look for structure. The OU website has hundreds of courses; it's hard to grasp the whole of its offering. Recently the OU started publishing Linked Data, with lists of courses taken with other courses. He created a map – using open source Gephi – of these connections, identified clumps or clusters, and coloured them accordingly. This is 'visual analytics' – you can see the patterns; you wouldn't get that from the individual course pages.
Graph-based representations simply have nodes (the dots) and edges (the lines between them). All you need is a list of edges – two columns of data, showing which node is connected to another. Tools are getting easier to use, make use of simple text representations.
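Tony used Gephi, which reads exactly this kind of two-column edge list; as a rough illustration of how little a graph needs, here's a minimal sketch in Python using networkx, with entirely hypothetical course codes.

```python
import networkx as nx

# Two-column edge list: each row says "this course is taken with that one".
# Course codes are hypothetical, for illustration only.
edges = [
    ("M150", "M255"),
    ("M150", "T175"),
    ("M255", "M256"),
    ("A101", "A102"),
]

G = nx.Graph()
G.add_edges_from(edges)

# Crudest possible "clumps": connected components. Gephi's community
# detection (modularity classes) goes further, colouring denser clusters.
for clump in nx.connected_components(G):
    print(sorted(clump))
```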
Moving along the marketing timeline, to registration/retention/graduation. Academic analytics here – predicting the likelihood of progression, building models. This might not be about improving individual students' learning – that would be student-focused learning analytics.
Various sorts of reports and techniques for making use of the data: descriptive reports, prescriptive models, predictive voodoo – black box recommenders where you don’t know what’s going on. Prescriptive models have an understanding in them.
Descriptive reports – an example from Dave Pattern (@daveyp) at Huddersfield: he looked at books borrowed together, added borrowing suggestions, and the graph shows book borrowing stats went up. He also looked at the correlation between use of the library and final degree result. This can be used to control your own behaviour. Think of a feedback system – the engineering model of a negative-feedback, closed-loop control system. You have some reference measure you want the system to attain, and a controller. The system has an output you can measure via some sensor and compare with the reference measure; the difference generates an error value that feeds into the controller, which affects the system. Here, we're measuring the output, not the change: if you make changes, they need to have some impact on the measurable output.
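A toy numeric sketch of that closed-loop idea (all numbers hypothetical, not from the talk): a reference value, a sensor reading, an error signal, and a proportional controller nudging the system towards the reference.

```python
# Toy closed-loop control sketch (hypothetical numbers): reference value,
# measured output, error signal, and a proportional controller.
reference = 10.0      # e.g. a target level of weekly study activity
state = 2.0           # current measured output of the system
gain = 0.5            # controller gain

for step in range(8):
    measured = state              # sensor: measure the system's output
    error = reference - measured  # compare with the reference measure
    state += gain * error         # controller acts on the system
    print(f"step {step}: output={state:.2f}, error={error:.2f}")
# The output converges on the reference: negative feedback in action.
```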
Predictive models: from Google Analytics, from his online course. The course ran over 10 weeks; he grouped the pages for each week as a block he could measure in Google Analytics. The pattern of engagement changes over time, showing the broad progress of the cohort, with a big jump when there was an assessment in week 5. If it was running again, he'd get data from Google Analytics, look at unique visitors on each week's pages, and see what percentage have visited each course page, to get a better feel for the extent to which people have engaged with each page. He could experiment with the timing of the assessment, and get a feel for whether weeks are overloaded or underloaded.

That data is all time series data, and in time series data you can find trends. Google Trends shows volumes of search terms over time. Time series data has various properties – it can show trends over time (increasing/decreasing), seasonality/periodicity, and noise. You can detrend the data – e.g. a linear detrend. An up-and-down trend can make it difficult to spot periodicities; taking the first difference generates something quite flat. Then run autocorrelation (how similar a series is to itself when shifted in time), which pulls out peaks showing the periodicity. Other things you can do with time series data: Fourier analysis – any complex wave can be decomposed into the sum of simple sine waves. You can see periodic behaviour in book borrowing – changes might indicate things going wrong, or well.
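A rough sketch of those time series steps in Python/numpy, on synthetic data (not Tony's): linear detrend, first difference, autocorrelation to pull out the period, and a Fourier view.

```python
import numpy as np

# Synthetic weekly activity series: upward trend + period-5 cycle + noise.
rng = np.random.default_rng(0)
weeks = np.arange(40)
series = 0.5 * weeks + 5 * np.sin(2 * np.pi * weeks / 5) + rng.normal(0, 1, 40)

# Linear detrend: fit a straight line and subtract it.
coeffs = np.polyfit(weeks, series, 1)
detrended = series - np.polyval(coeffs, weeks)

# First difference also flattens a linear trend.
diffed = np.diff(series)

# Autocorrelation: how similar the series is to itself when shifted in time.
x = detrended - detrended.mean()
acf = np.correlate(x, x, mode="full")[len(x) - 1:]
acf /= acf[0]
print(np.argsort(acf[1:])[::-1][:3] + 1)  # strongest lags; ~5 tops the list

# Fourier view: the dominant frequency component shows the same period.
spectrum = np.abs(np.fft.rfft(x))
```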
Voodoo magic – recommenders. Often black boxes with little understanding of the models inside. Presentations coming on that. Many of them are learning systems, can improve over time through usage. Goes back to closed-loop control. Machine learning techniques help.
A few more books – O'Reilly's Programming Collective Intelligence (recommendation engines, clustering tools); Visualizing Data; Data Analysis with Open Source Tools – pragmatic detail on how to start doing stuff with data.
Google Analytics is one tool. Quick poll – how many people use this? (Many.) The default reports can be misleading. At the OU, there are VLE/LMS stats, sitewide tracking, library analytics, course analytics. They're not reconciled; there's nobody whose job it is to join these data sources together. Look at how effective these things are as websites – are people visiting all the pages, how long do they stay, do they click on the links? That's feedback to the instructor, but also basic website improvement.
Measures from Google Analytics – one view is length of visit – but the bin sizes are different! 0-10s, 11-30s, 31-60s, 61-180s, and so on. Depth of visit cuts off the top bin at 20+ pages. Beware of headline figures!
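A quick hypothetical illustration of why those unequal bins mislead: normalise counts by bin width and compare densities rather than raw counts.

```python
# Google Analytics "length of visit" bins have unequal widths; comparing
# raw counts across them is misleading. Counts below are made up.
bins = [(0, 10), (11, 30), (31, 60), (61, 180), (181, 600)]
counts = [120, 90, 80, 150, 160]  # hypothetical visits per bin

for (lo, hi), n in zip(bins, counts):
    width = hi - lo + 1
    print(f"{lo}-{hi}s: count={n}, per-second density={n / width:.2f}")
# The widest bins can have the biggest counts but the lowest density.
```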
Often get reporting of means. Useful statistical measure for some things – if normally distributed. Less so if have a long-tailed distribution. Be suspicious of normal statistics! Always be suspicious of means. Anscombe’s quartet illustrates this: four sets with the same statistical properties, but distributions very, very different. Also Simpson’s Paradox.
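Anscombe's quartet is easy to verify directly; here's a minimal check of two of the four sets (the data is the standard published quartet – near-identical summary statistics, wildly different shapes when plotted).

```python
import numpy as np

# Anscombe's quartet, sets I and IV.
x1 = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], float)
y1 = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])
x4 = np.array([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8], float)
y4 = np.array([6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89])

for x, y in [(x1, y1), (x4, y4)]:
    print(x.mean(), round(y.mean(), 2), round(np.corrcoef(x, y)[0, 1], 3))
# Both print ~9.0, ~7.5, ~0.816 -- yet scatterplots tell two different stories.
```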
Segmentation is critical!
There are lots of public signals – lots of data that individuals, students, are throwing off in public. Course choices students have made using the Course Profiles app in Facebook – students volunteer information about courses they're taking and planning to take. You can produce visualisations to feed back to instructors about which courses their students took before, and go on to. Also FriendFeed data showing students commenting in each other's spaces, plotting levels of engagement. You can see which students are talking to which in public spaces, including e.g. Twitter. With a graph of commenters in course forums, you can make teaching interventions around that.
Discovered networks in public spaces. The use of hashtags on Twitter and Delicious enables you to identify communities. Just knowing the tag, can discover the community. See which tags are used with which others. Can collect over time and see how it drifts.
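A minimal sketch of that tag co-occurrence idea, with made-up bookmark data: count which tags appear together across posts.

```python
from collections import Counter
from itertools import combinations

# Hypothetical bookmarks: each post carries a set of tags. Counting tag
# co-occurrence shows which tags are used with which others.
posts = [
    {"lak11", "analytics", "education"},
    {"lak11", "analytics", "visualisation"},
    {"lak11", "education"},
]

pairs = Counter()
for tags in posts:
    for a, b in combinations(sorted(tags), 2):
        pairs[(a, b)] += 1

print(pairs.most_common(3))  # collected over time, you can watch this drift
```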
Engagement around a link on Twitter. There's a service that lets you see the people who've tweeted a link. He looked at one link – 'To lie or not to lie' – which was retweeted a lot. Tony grabbed the list of all followers of the people who'd tweeted it, picked out those who'd have seen it 3 or more times, then segmented it. The original link was tweeted by the OU; then Nigel Warburton, philosophy lecturer (@philosophybites), spun it out to his community – and a different community again was reached by Tony tweeting it.
For open online courses, can get good view of the audience and its structure.
When you do look at this data, can make you uncomfortable. Need to be mindful of the ethical dimension when we’re collecting data.
Michael Atkisson & David Wiley – Interpretive practice
From BYU and Allen Communication.
Will be available on slideshare.net/opencontent
David starts.
Interpretation and science – a confusion of science with positivism; they want to talk about that relationship. There's a feeling among social scientists of physics envy – real scientists use numbers, are objective, replicable. But learning analytics is about educational measurement. What a person knows is not directly observable – you can't crack open George's head and read it off. So we create behavioural proxies for it. Online learning is even worse, because the classroom proxies aren't even there to observe directly, so we build a second level of abstractions. And all the observable behaviours online are in a restricted vocabulary of mouseclicks and keypresses. In an online setting we're always at least two layers removed from the phenomenon of interest.
Westerman's argument, in a paper from 2006 – taking his arguments and applying them directly to learning analytics.
One argument about operational definitions. Exploring happiness, irritability, whether someone knows something. Have to say this is what I mean by happy, knows calculus, and so on – they’re not just a lens, they’re like a dive mask; can’t see any other way after you’ve defined it.
BYU named most popular university by US News. Raises question – what do we mean by popular? This was around yield – percentage of students admitted who choose to attend it. Have to be careful of the headlines. Who picked that definition? First you have to define the constructs, before you can measure them.
Time on task is an example educational researchers use a lot – there are supposedly strong relationships between it and outcomes. In a face-to-face context, you can tell if e.g. someone is gazing intently at a book and guess that she's reading it. That's much richer than statistics of time on a particular HTML page – you can't tell if they're engaged. There are some technical things that help. We share a commonsense idea about time on task, but right-thinking people could come up with different operational definitions of it. Two people could get different answers from the same data.
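To make that concrete, here's a hypothetical sketch of two defensible operational definitions of 'time on task' applied to the same page-view log, giving very different answers (timestamps and the 10-minute idle threshold are invented for illustration).

```python
# Two reasonable operational definitions of "time on task" from the same
# page-view log (hypothetical timestamps in minutes).
views = [0, 2, 3, 40, 41]  # times at which a student loaded a course page

def total_span(ts):
    """Definition A: time from first to last page view."""
    return ts[-1] - ts[0]

def active_time(ts, gap=10):
    """Definition B: sum of gaps between views, ignoring idle gaps > 10 min."""
    return sum(b - a for a, b in zip(ts, ts[1:]) if b - a <= gap)

print(total_span(views))   # 41
print(active_time(views))  # 4 -- same data, very different "time on task"
```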
Danger of 'letting the data tail wag the theory dog' – Vic Bunderson. Need to understand at the start, and at the end. Can we really call it success, if we make predictions but don't understand how or why it works? Particularly inasmuch as we suggest interventions in people's lives.
Now Mike.
If not positivism, what are we looking at? Positivism doesn't examine meaning, or the judgements researchers make. So an alternate philosophy is hermeneutics: research is about creating meaning through interpretation. It's a tradition from looking at Biblical texts, but now found in many fields. Valuable because it couches observations in terms of meaningful social practice, rather than abstractions or reductions.
One way to make sense is through metaphor. The information processing model, a keystone for cognitive science. The metaphor becomes more real than the observational data. Model is great, as long as mind is doing things that computers can do. But when looking at creativity, play, exploration, suddenly metaphor is lacking. Previously, the steam engine was a popular metaphor for the mind.
Could remove meaning entirely – reductionism – ‘laws’. Is the mind just the sum of its chemicals and synapses firing? We say there is something beyond the frequency data.
Hermeneutics sees meaning in concreteness; is a property of the real world. Important because social practice takes place in the world. Behaviour is nested within social practice. If behaviour is concrete, and within social practice, how do we observe and interpret behaviour in virtual environments?
You get data from your virtual environments; can it be organised by the social practices that are taking place? They propose clustering behaviour data in terms of social practice – what people are doing in the world is what gives it meaning. Examples: readings, etc. How does this happen? Structural equation modeling, multilevel data structures, continuous, longitudinal measures – ground the data in the real world. Traditionally, you have latent factors to identify from the observations made, and see how they map to the models – traditionally at a single level of data. Instead, use nested models: with longitudinal data, you have multiple observations per person. Or nest observations by task – ground the data within what's happening in the real world, and find the meaning.
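This isn't the speakers' actual modelling code, but as a hedged sketch of the nested/multilevel idea: a random-intercept model over longitudinal observations nested within students, using statsmodels (the data frame and column names are made up).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical longitudinal data: repeated observations nested within students.
df = pd.DataFrame({
    "student": ["a"] * 4 + ["b"] * 4 + ["c"] * 4,
    "week":    [1, 2, 3, 4] * 3,
    "score":   [55, 60, 64, 70, 40, 44, 50, 52, 70, 72, 75, 80],
})

# Random-intercept model: each student gets their own baseline, while the
# fixed effect of `week` is estimated across the nested observations.
model = smf.mixedlm("score ~ week", df, groups=df["student"])
result = model.fit()
print(result.summary())
```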
Maybe there are better ways, hope you can think of them. But this is one way we’re proposing – nest it in behaviours in the real world.
What happens if people follow these recommendations? Sometimes asks, what would happen if all that could happen was what’s in your company’s training manuals? Need to be good stewards, and not regress to Behaviourism 2.0.
Questions
Someone: Not a philosopher, but the early parts about positivism were interesting. Why are we forming these models? Is she reading, or looking at a book, or does it just appear that her eyes are pointing at the book? How do you contextualise this in terms of bottom-up data mining?
Mike: A fundamental question for us to deal with. A philosophical difference. Have to have a theory first, something you want to find out, before you just look at data. If probability is your only guiding factor, from bottom-up, there’s a chance of making things correlate or regress that have no meaning in the real world. Could come to false conclusions. Not that there isn’t value in data mining – an important part. But has to be guided by philosophy of what’s happening in the world. Fundamental assumptions, need to be questioned and debated, not just looking at data haphazardly.
David: This is letting the data tail wag the theory dog. Going on a fishing expedition, temptation. Without any theory driving interpretation. Simpson’s paradox – can find things that aren’t really there. Is value in exploratory work, but to let that drive your theory creation is dangerous.
Stephen Fancsali – Variable construction
Philosophy at CMU, and Apollo Group.
Has positivist background, but have a lot of common ground. Going to talk about variable construction.
Two key objectives, ideas. First: we often confuse correlation and causation – we don't want to do that; there's a framework to represent this rigorously. Secondly, some preliminary results on an ever-present problem in online education, and how it can be framed in that framework.
Often interested in student behaviour in online courseware. Often retention, business end. But usually want to get at learning outcomes, predictively and causally. Frequently, we only have access to complex raw data. E.g. online forum messaging data – just get user, timestamp, forum, content, substantive flag.
Important conceptual distinction between predictors and causes. Predictors of learning outcomes may be useful for diagnostic purposes, but they need not be causally related. Age-old problem!
Parallel with a disease – look at disease, its causes, and symptoms, and other causes of those symptoms. Predictors are the symptoms, causes, and other causes of the symptoms – all useful in diagnosis. But if want to treat someone, want to focus on the causes.
If we want to spot high- or low-performing students, we need diagnostics. If we’re interested in enhancing student performance, we need to know the causes of student learning outcomes. Want causal knowledge before doing interventions.
Formalism – causal graphs. A hypothetical example around final exam score. In a simplistic model, it has two causes – the student's studying, and their ability. Studying might be caused by obligations – family, employment – and by their motivation, which also causes class attendance. With this model, class attendance (logins to class) predicts final exam score very well. But if you want to make an intervention to improve the final exam, you need to know the direct causes of the final exam score, and work on those with interventions.
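A quick simulation of that hypothetical causal graph (my own sketch, not from the talk) shows the distinction: attendance predicts the exam score because motivation is a common cause, but intervening on attendance changes nothing.

```python
import numpy as np

# Simulate the hypothetical model: motivation causes both studying and
# attendance; studying (plus ability) causes the exam score.
rng = np.random.default_rng(1)
n = 10_000
motivation = rng.normal(size=n)
ability = rng.normal(size=n)
studying = motivation + rng.normal(size=n)
attendance = motivation + rng.normal(size=n)
exam = studying + ability + rng.normal(size=n)

print(np.corrcoef(attendance, exam)[0, 1])  # clearly positive: good predictor

# Intervention: setting attendance by fiat breaks its link to motivation.
forced_attendance = rng.normal(size=n)  # attendance no longer reflects motivation
print(np.corrcoef(forced_attendance, exam)[0, 1])  # ~0: no causal effect
```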
Causal graphs have decades of work behind them, on graphical models as a representation, and also on reliable discovery of causal structure from observational and experimental data. Previous work focuses on data presented at an appropriate unit of analysis. But e.g. forum log data isn't in that form – it's not clear how to proceed; you need to transform that data to make it useful in this framework.
Two ways to proceed: ad hoc from intuitions, expert opinion, theory (in conference paper); OR can devise a data-driven search for variable constructions (very recent work).
The data to look at here is from an online graduate-level economics course – the focus is on messaging, and an aspect of 'resource' access (an online chapter). They explore 'causal predictability' from ad hoc variables vs search-based variables. The focus is specifically on messaging data – it's an impoverished dataset, but if it's successful, that shows the potential.
The ad hoc variables are message count, count of private messages from the instructor, and chapter view count. The learning outcome measures are final exam score and course grade. (Demographics excluded, because they're interested in actionable interventions.)
Search process for variables: start with data assembled per student – many options, large here; background knowledge and theory enter in at this point. Then an iterative process:
1. Apply different operations to variables – sum, max, variance, log, discretize, interactions between variables – giving you more constructed variables, several hundred of them.
2. Prune variables: maximise the number of distinct sources; drop e.g. highly correlated variables, picking the one that's more predictive. Gives smaller subsets of constructed variables.
3. Build causal graphs that could explain the data, using a method allowing unobserved common causes.
4. Do causal predictive modeling – determine average causal predictability.
5. Assess – which sets do well? How well? Prune again, or stop if they're doing well enough.
Example: from student messages, get word count per message. Could find minimum message length. Take the log of that. Discretize – if above the median or not. Then from there, consider interactions with others. Number of variables explodes, but you can prune it down.
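A hedged pandas sketch of that example pipeline (the message log and column names are invented): minimum message length per student, then log, then discretise at the median.

```python
import numpy as np
import pandas as pd

# Hypothetical message log: per-message word counts, following the example
# pipeline: per-student minimum length -> log -> discretise at the median.
msgs = pd.DataFrame({
    "student": ["a", "a", "b", "b", "c"],
    "words":   [12, 45, 3, 8, 150],
})

per_student = msgs.groupby("student")["words"].min()   # min message length
logged = np.log(per_student)                           # log transform
above_median = (logged > logged.median()).astype(int)  # discretise
print(above_median)
# Interactions with other constructed variables would multiply these out;
# the pruning steps then cut the exploded set back down.
```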
Preliminary results – for the ad hoc variables, final exam points and public group message count have no causal predictive information – possibly a confounding common cause. The variable construction search found an R^2 of 0.31 for course grade, but 0.07 for final exam score; the ad hoc variables had no predictability at all.
They find unambiguous, true causal information about the course grade (as expected). There was no clear expectation about the final exam score, but they still get some causal information there.
This is extensible to other types of rich log-style data – not just in education but beyond.
The tools he uses for causal discovery algorithms are available.
Questions
Someone: Didn’t use demographics?
It's in the paper in the proceedings. They treat demographics as stuff you can't intervene on – it's exogenous. You can't manipulate someone's age.
David: It's clear from the data that faculty are assigning participation – it's part of the course grade. But the graphs suggest participation has no causal link to learning as measured by the final exam. An ethical observation – shouldn't you go back to them and show it's not measuring learning?
Only if we knew the final exam was a perfect measure. The course isn't standardised. Can't throw the final exam away.
David: I’m saying final exam is a good measure, but if participation isn’t driving the final exam, why grade participation?
He did have impoverished data – nothing about how group projects went. Doesn't want to make too big an inference. But it is a fair criticism.
David: How do you make this actionable?
Good question.
THESE NEXT LIVEBLOG NOTES SUPPLIED BY GUESTBLOGGER MARTIN WELLER – thanks Martin!
Ari Bader-Natal & Thomas Lotze – Evolving a platform
Ari Bader-Natal (with Thomas Lotze) from Grockit
Doug Clow – iSpot analysed
[Slides will be on my Slideshare account later.]
–
This work by Doug Clow is copyright but licensed under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.