Second liveblogging post from Tuesday at LASI13 #lasi13, the Learning Analytics Summer Institute, at Stanford University, CA
Plenary: Learning Analytics in Industry: Needs and Opportunities
Chair: George Siemens
Speakers: Maria Anderson (Instructure), Alfred Essa (Desire2Learn), Wayne C Grant (Intel), John Behrens (Pearson)
George introduces the panel. These events aren’t cost-free. Need to find people who share values – sponsors. LASI sponsors include the Gates Foundation, Desire2Learn, Pearson, Canvas/Instructure, Western Governors U, PSLC DataShop, the MacArthur Foundation, and Intel Education.
Without effective analytics, we’re making decisions about education in the dark. As we advocate for reform, it’s important to do it through the lens of data, using evidence to make decisions. These organisations have contributed to help make this event happen.
Panel focuses on interests and needs – what would you want from a research lab, and more broadly, what are your expectations? We’ve spent time on these relationships – researcher to researcher, students, foundations, corporations, agencies.
John Behrens – Pearson – What we are looking for
I only have fourteen slides.
Four major influences on data analysis – formal theories of stats, developments in computers and display devices, larger bodies of data, emphasis on quantification. From Tukey & Wilk 1966! John Tukey’s work had some great insights – e.g. working with the first satellite data – and I’m only now appreciating that.
Individuals to hire – they’re nice, smart, inquisitive and boundary-crossing (not people who have just learned a bunch of stuff, but people who really are doctors of philosophy), with business/pragmatic sense – practical, understanding the issues of being in an organisation rather than promoting self, playing well with others. Know there’s a reality: a perfect product in 3 years means the competition will have taken the market.
Learning analytics – learning and education. Political goals, changes. Want to balance research and development. It applies in the K12 space as well as universities – some issues unique, some common. Pearson has a part doing schools, a part universities, a part MOOCs, a part LMSes. Everyone needs to think statistically and computationally, and have some sense for psychology and the learning sciences.
Fundamental flow of how LA happens – LA happens in the symbols, but it’s about the world. We have to understand the world and track the analysis back there. You can have the greatest analysis and interpretation, but if you can’t communicate it, it doesn’t matter.
O’Reilly PDF Analyzing the Analyzers – Harris, Murphy and Vaisman 2013. Different kinds of analysts and skills.
Al Essa – Desire2Learn
Want to provide an industry perspective – the dark side of LA! Three comments, two claims, one challenge.
Comment 1. Shirley Alexander’s suggestion – what are the big problems we’re trying to solve. Supplement that – we want to solve big problems, but see if we can do it in a simple way, the most efficient way.
Comment 2. From a framework perspective, what we (industry) are after is how we take research and move it to product development. Four steps – research; validated research (reproduced, so we have confidence); research indicating a line of enquiry -> prototypes; products.
Claim 1. LA is really not empirically rigorous enough right now. What do we need to get to that point? The missing step in the four phases: there’s lots of activity in research results, experiments, publications, but I don’t see which results are promising, or whether they’ve been reproduced experimentally. Coming back to Shirley Alexander’s report: some initial experiments done, with potentially great payoffs in terms of learning gains. But we don’t have repeated experiments to validate them. Would like to see more of that.
Claim 2. Emphasis on big data. I love math, big data, Hadoop clusters. But let’s not overestimate big data, or underestimate small data. There’s a lot that can be done with small data, using existing research. Let’s not preclude that. In the interests of economy and efficiency we should look for opportunities with small data.
Two examples of astounding results. Eric Mazur – the essence of the flipped classroom before the term was in vogue. In physics, the trad lecture mode is completely ineffective; you get tremendous learning gains using interactive classrooms. A corollary: one of his results is closing the gender gap. At Harvard, and other universities, women who come in to study intro physics are, in their preparation, 10 percentage points behind men. Starting off behind. As they go through the course experience in the trad classroom, lecture mode, the gender gap remains. Based on Eric’s work, the claim is that interactive teaching can close the gender gap. If it’s true, this is huge: at least in physics, we can close the gender gap. An experiment done, but so far as I know not repeated. Potentially generalisable. Hypothesis – in STEM disciplines, can we close the gender gap? Another potential generalisation is to other types of gap – low-income students, minority students. Lots of one-off experiments in LA/EDM, but if this claim is true, can we reproduce those results? In the physical sciences, if someone says light bends and shows experiments – the validating experiments are the important ones.
Another example: Carl Wieman in physics (UBC, previously Boulder, Colorado). Two groups – one traditional lecture, one interactive. Students in the treatment group scored twice as high as those in the trad lecture mode. That’s astounding – 2.5 standard deviations. Who’s reproducing that? What are the boundary conditions?
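(For concreteness: ‘2.5 standard deviations’ is an effect size – Cohen’s d, the difference in group means divided by the pooled standard deviation. A minimal sketch of the arithmetic, with made-up scores rather than the actual study data:)

```python
# Minimal sketch: computing an effect size (Cohen's d) for a two-group
# comparison. The scores below are purely hypothetical, not the study's data.
import statistics

lecture = [25, 35, 41, 48, 57]      # hypothetical scores, traditional section
interactive = [56, 64, 71, 77, 86]  # hypothetical scores, interactive section

def cohens_d(a, b):
    """Difference in means divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(b) - statistics.mean(a)) / pooled_var ** 0.5

print(f"d = {cohens_d(lecture, interactive):.1f}")  # d = 2.5 for these numbers
```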
Two big claims out there. As a research community, we’ve not looked at those results and tried to reproduce them. Challenge – vendors want to produce research-based tools, but lots of research is one-off; we don’t see the reproducibility. Reproducibility would enhance tool development.
George: D2L have backed us since the first LAK conference at Banff in February 2011.
Wayne Grant – Intel – Learning Experiences Group
Some background on Intel Education. When we go out, education isn’t the first thing that comes to people’s minds – microprocessors, lots of heat. Missed the mobility move; we’re working on that. Intel has been committed to education since its inception, investing billions. We approach it from a slightly different point of view – primarily design.
My group is responsible for getting close to end users – teachers, students. We design whole solutions, the devices themselves from the processors up, to deliver interactive experience and provide multiple means of interaction. Big on perceptual computing now. Also engagement. Three design-for-learning principles.
Focused on four pillars: products/hardware – reference platforms/designs (e.g. for K12, the beautiful things you’re using will break); software – enter learning analytics; locally relevant content; implementation support.
The conversation is changing worldwide. Moved off ‘will computers make a difference in test scores’ to ‘I need to develop a knowledge economy, computers are going to be there, what am I going to do with them’. 1:1 is real – what does the teacher do to support interactions with that device density? Looking to personalise, looking for adaptive systems, to simply manage the learning context.
Leads to notion of the Gartner Hype Cycle. Where is learning analytics?
Audience: still on the upslope of expectations. MOOCs are ahead of us and moving fast.
Gartner puts learning analytics at the peak (2012) – analytics in the service of adaptive learning. To what extent are you moderating this perspective? This will influence the penetration of your research into products. As part of my travel, I go to education conferences. There is nobody not talking about learning analytics and having it in their products – publisher, LMS vendor. As a commercial vendor, you must make that claim. How do we reconcile the distance between reality and hype?
George: Used Instructure for LA course.
Maria Anderson – Instructure
Great to hear other perspectives. Canvas/Instructure is a much smaller company; we don’t have the same needs, so I’m taking a different perspective. I handle research, and get requests from people asking for access to the data. I hate the question that just says ‘can I have access to data’ – I can’t answer it, and it’s not useful to me. I want to know very specifically what you’re looking for. E.g. ‘I need time on task data’ – time in the system? time on that class? There are lots of places we might track time, and I don’t even know if I can fill that request. So make sure that when students come with requests for data, they’re specific enough that I know what they’re trying to get at, and what the purpose is. Also requests that ask for ‘all the data’: I’m not allowed to give Institution A the data from Institution B. So keep the scope manageable – typically the institution you’re at.
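(To illustrate how ambiguous ‘time on task’ is, here’s a minimal sketch of one way it might be derived from raw event logs. The file name, column names and the 30-minute inactivity timeout are all hypothetical choices – pick a different timeout or event granularity and the same request yields a different number:)

```python
# Minimal sketch: deriving 'time on task' from raw event logs.
# File name, column names and the 30-minute timeout are hypothetical.
import pandas as pd

events = pd.read_csv("events.csv", parse_dates=["timestamp"])
# expected columns: student_id, course_id, timestamp

events = events.sort_values(["student_id", "course_id", "timestamp"])
gaps = events.groupby(["student_id", "course_id"])["timestamp"].diff()

# Count a gap toward 'time on task' only if it is shorter than the
# inactivity timeout; longer gaps (and each group's first event) count 0.
TIMEOUT = pd.Timedelta(minutes=30)
events["active"] = gaps.where(gaps < TIMEOUT, pd.Timedelta(0))

time_on_task = events.groupby(["student_id", "course_id"])["active"].sum()
print(time_on_task.head())
```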
Another thing ignored in the community – we’re producing great analytics, products, systems, but we haven’t changed the habits of instructors and institutions. I was a teacher for 10 years; it was a black box of data all that time. Gut instinct, maybe assessment once a semester. Now the data is much richer, but the habits aren’t in place to do anything with it. Vendors create analytics platforms that allow instructors to get insight, but they’re like graveyards – there’s no teacher habit of looking at them. So we need insight into where that data should be surfaced so it gets used, and training for what to look for and what to do with it. It’s a two-fold responsibility – on us, but on the institution too. Maybe have a data diving day. Driving a market for more learning analytics. We run the numbers on what’s getting used; it’s not being used as heavily as the hype says.
On a personal level: those doing research, please continue to put out preprints. It’s ironic that in education I was trained to look at scholarly research, but in industry it’s almost impossible to get hold of it. I think every research article should have a couple of paragraphs saying ‘if you believe this, here’s what it means practically’. We have product engineers who read about what they’re developing, but don’t understand half of it. Of course, that might put academics out of a job.
What kinds of skills should graduates have? Understanding engineers and how to work with them – that’d be a useful skill. Know a programming language, databases, the language of software development: Agile, ticketing systems, how software gets built. You can learn that on the job, but it’d make the transition easier. A ‘Software Companies 101’ would help graduates land on their feet.
George: IEDMS has its stuff open online. The Journal of Learning Analytics will be open access. The Handbook of LA will also be an open document. We’re trying to do that to make research accessible.
Questions
Stephanie: Al, I liked your comment about big data and small data. An example where big data helps small: Eric Mazur’s work, and the stuff that backs the importance of active learning – flipping the classroom may not be the solution you’ve painted it as. Big data at U Mich, intro-level gateway courses, identifying a grade penalty for taking these classes – regress the grade in this class against GPA, and you find a pattern where most students get a grade lower than their GPA. The grade penalty is bigger for women than men. Big data – consistent across a large number of courses. What’s going on? The fix may not be flipping the classroom: they all give standardised multiple-choice exams. If it’s a class where you get a grade lower than you’d have expected, especially if you’re a woman, you’ll find a class where you can do better.
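(A minimal sketch of what a grade-penalty analysis of this kind might look like, assuming a table of per-student course grades on a 4.0 scale, overall GPAs and gender – file and column names all hypothetical:)

```python
# Minimal sketch of a grade-penalty analysis of the kind described above.
# File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("gateway_course_grades.csv")
# expected columns: course, student_id, grade, gpa, gender

# Grade penalty = grade in this course minus the student's overall GPA;
# a negative mean suggests the course grades below students' expectations.
df["penalty"] = df["grade"] - df["gpa"]

# Mean penalty per course, split by gender, to check whether the penalty
# is bigger for women and consistent across many courses.
summary = (df.groupby(["course", "gender"])["penalty"]
             .mean()
             .unstack("gender"))
print(summary)
```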
Al: The point wasn’t ‘let’s ignore big data’. What’s powerful and exciting is running experiments with big data, complementing it with small data. One of the small-data experiments Mazur did was on the efficacy of demos in class. How do you interpret that? Researchers can work on that – what other data do we need to reproduce those results?
Sidney, U Notre Dame: Question for Wayne. Where would you put ITS (intelligent tutoring systems) on the hype cycle?
Wayne: Good question! Shows next slide (with lots on it). But ITS isn’t there. Gartner produces this publication. AI only now starting to recover from a trough.
Joseph, Stanford: Interested in knowledge brokering, linking research and industry. Are there specific, actionable ventures that’d promote this process? For example, a competition format – companies offer competitions with a prize at the end, and entrants make their solutions available. Or a consulting/collaboration model: take on senior faculty as consultants, or take on graduate students etc. on a temporary basis as consultants. Are there problems with those, or examples of things we could do? Like having a broader blurb about papers. Training for industry – e.g. training in data insight, Hadoop and so on, then placing people in jobs. Industry sponsors students, gets first choice to interview.
John: I lead a research centre, and have collaborations with university departments – some more psychometric, some general content. A broad portfolio of relationships: internships, profs, grad students. Some collaborative, where people say we love working with you, and share data, problems and resource to solve basic research issues. On a Kaggle-type situation – it’s complex right now for Pearson. I don’t own the data, the product people own the data – or do they own the data? Who knows? Also interesting about data ownership: Pearson has a number of business models, helping organisations in different ways. We make assessments for states, e.g. Texas. We don’t own that data, Texas owns that data – and legally it can’t leave Texas. We’re just helping them collect it. Other situations are more LMS-ish, more tutoring-ish. Not all data has the same socio-political provenance.
Wayne: Engaging with Intel, there are several levels of infatuation: internships, from undergrads to postdocs; contractor consultancies; up to Intel Science and Technology Centres, at the university level, to support a discipline – which could be learning analytics. A broad way to engage that keeps you pure as an academic.
Ken, CMU: Mixed feelings about the hype cycle – maybe it’s a rational phenomenon. Need to get enough people involved so that when it drops off, it’s still there. Physics education research was a new player once. The newest newcomers are computer science. I say great, come on board – but encourage them to be aware there are huge volumes of studies. One comment – the Dept of Ed’s What Works Clearinghouse: the Practice Guides are quite good, e.g. on organising instruction. You can get access to lots of studies replicating results.
Wayne: It’s fine to say to an academic that there’s a huge body of studies. Corporations won’t do that (read all of it). A synthesis by a panel of experts, making a recommendation – how do we make those more visible to the corporate side?
Ken: I agree, that’s why I brought it up.
Al: It’s not that we don’t have results that are reproducible; it’s that people aren’t following up to validate claims.
Ken: We don’t have enough of that.
Al: It’s Shirley’s point, Stephen’s point: what are the big problems we’re trying to solve, and which ones have the most promise in terms of significant learning outcomes, concretely? In the pipeline from research to product innovation, the friction I see is that there’s all that research – you could spend a lifetime combing through it. That’s a challenge for us as an industry. It’d help if SoLAR and the research community said: here are promising research results; this one is highly promising, let’s try to close whatever open questions there are – how reproducible is it, is it generalisable, under what circumstances? Focus on some specifics. That’d help industry – e.g. knowing that stuff is out there.
Wayne: This is where the general world sees LA. If we were to poll the community in this room, where would we put it?
Ken: I’d put it at the peak too.
[disagreement]
Ken: Not just LA, but the broader space of learning science, cog psych. LA brings something unique and powerful, but if it ignores that broader work, there’s a bigger dip. If it takes it into account, it’s better placed to flatten the dip out.
Nicole, Utah State: Reactions and questions for Maria. I was in industry, now a researcher. On asking for data in general, or all of the data – some of us have been burned. It can be frustrating for us too: we don’t know what the underlying model is, what’s available, or the non-obvious proxies. The importance of learning across boundaries. We try to post our papers publicly, but we don’t get credit for writing extra stuff about the papers. Are you sharing stuff back too? To improve impact on the student and the teacher – analytics, data, dashboards – a lot of that relies on the company doing those dashboards getting a better understanding of why people aren’t using them. Is Instructure supporting research on adoption and diffusion, on how to present data better? If support comes from there, we lose a bit because it’s not controlled, but we gain in contextualisation.
Maria: On getting data back out – we collect a lot more data than we surface. Instructure has always had a policy of collecting data, then back-engineering features, so you can look at analytics from old features. We want to make sure you can do that, and to surface data up to the users.
Nicole: Is this presented in summary, overview?
Maria: We can surface it quicker if it’s not in a visual format – if it’s visual, it requires a lot of UI testing and takes longer to develop. That’s what we want, but if you want the data faster, it’s in CSV files. We have the same goal – users want to intuitively understand it. The grade book is the ideal place to see the analytics, is my intuition, but we should test that. We’re doing more of that. I think that’s where you guys can help, with feedback.
Judy: If we’re in a hype cycle, what do vendors mean by ‘learning analytics’, or understand by it?
Wayne: That is the problem. It means different things to different providers. It’s become something you must have, therefore it’s a tagline on your product brochure. In some cases it’s how long you read a particular passage; in others, test questions relative to the whole class. It’s all over the place, all under one moniker. I find that scary. It puts us all in a position of recognising that companies, unlike universities, have large marketing budgets – very good at taking a message and placing it in numbers we can’t compete with. Given that, it creates the trough of disillusionment, to the extent we can’t modulate it. Back to the example of ITS and AI – they virtually disappeared.
Al: We’re trying to solve a problem around student success. The business model of HE has changed overnight, from volume-based to value-based. Funding in education – principally US/North America, but it’s similar elsewhere – has been based around headcount. That’s changed; many states have flipped the model. Tennessee – it’s value-based: show me your students are progressing. Progression, value – are they learning and progressing? The tools we’re trying to bring to market need to address that problem. For the research community: vendors are going to make claims. Some say we lie. But say OK – when you say your tools lead to success, can you demonstrate that using data? If it’s research-based, all the stronger.
–
This work by Doug Clow is copyright but licensed under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.