Steven Warburton: Welcome and Introduction
Head of Department for Technology Enhanced Learning, University of Surrey
Analytics appears three times in the Gartner hype cycle curve. At the peak, content analytics, very closely related. Big data at the top too. Prescriptive analytics – what is the best course of action. Beyond descriptive, then predictive, to prescriptive. (ouch) Help people make the best decisions.
Predictive analytics story: from Target – How Target figured out a teen girl was pregnant before her father did. Guessed whether pregnant from purchasing patterns, sent targeted flyers to daughter, father angry.
OU Innovating Pedagogy report – learning analytics is there. NMC Horizon Report – learning analytics again, on the 2-3 year horizon.
What are the benefits? For learner, tutor, institution.
Erik Duval's concern – programming out the human. We should be in control of those algorithms, thinking about human values.
Michael Moore: Big Data in Education: Theory and Practice
MSCIS, Senior Advisory Consultant – Analytics, Desire2Learn
Based in Florida.
ILP – analytics capability and maturity model. First stage – recognising the data itself, having access to it, and reporting. Without the data, can’t do anything with learning analytics. Second stage, look at the data, forecasting what you expect to happen. Third stage, optimisation, strategic. Fourth stage – what do you want to happen? – prescriptive, predictive, advanced adaptive and advanced predictive analytics. That’s the biggest application.
Analytics portfolio for D2L – a whole range of things: Student Success, Degree Compass, Analytics Essentials, Insights reporting tools.
Video! We want graduates, success. Far too often they get frustrated and withdraw. North American statistics about non-progression and completion (not good!). Students come from rich, diverse backgrounds – first-gen learners and non-traditional-age students. Degree Compass – helps make good choices about courses; star system. Mobile app. Frees up academic adviser time to spend on one-on-one sessions. Within the learning environment, many tools. First, class progress view – a dashboard. Second, student view, which gives them information about how they are performing individually. Goal is that they're not one in a large class but have a personalised experience. Tool coming out next year – adaptive learning tool. Makes adaptive recommendations and adapts pathways based on the student's skillset. Dynamically adds in content to strengthen them where they're weak. Mobile. Insights – product – student success platform. Scatter plot of student performance compared to others. Visualisations of grades, based on categories in the grade book. Also a sociogram of how students are interacting using the discussion tools – so you can ID where they're not participating or interacting. Aim to improve student experience, personalise. Based on Insights, lots of reports. Learning environment reports – login activity, tool usage, outcome achievement. For individuals, aggregated at class, department, institution. If you wait to the end of semester, it's too late.
Degree Compass. Optimised course selection tool. Goal to optimise course selection for students – choose courses where they’re likely to be successful. Based on global centrality, major centrality, grade prediction. Gives 5* rating to student. Interfaces with student information system, single-click interaction.
In Tennessee, early research with 2 universities and 2 colleges predicts grades to within 0.6 of a letter grade on average. Typical probability of an A or B is 62%; for Degree Compass-recommended courses, it's 85%. Straight-line relationship between credit-hours earned and classes taken based on DC. (Looks like a spurious correlation to me! More classes will obviously correlate with more credit-hours.) Reduced the performance gap between Pell grant students (disadvantaged) and non-Pell.
Draws info from student information system, degree system. Interfaces with various degree audit systems, portals, and with Banner student information system. Students can self-enrol.
Two predictive modelling engines and a predictive algorithm: a grade prediction engine/model, and a centrality prediction engine/model.
Today, it answers which course you should choose. But they want to expand it to program or even career recommendations.
With Student Success System: “These two together are a powerful analog for the onboarding” [sic – classic line!] They provide a great lifecycle experience.
Adaptive Learning tool Knowillage LeaP will be incorporated into D2L soon. Adaptive learning engine based on the student's experience. "What if textbooks could learn … from you?". Content can be continually changing. Great learning analytics about those learning experiences.
Example reports. Statistical analysis – of exams. Outcome achievement. Progress on competency achievement – outcome alignment, mapping. Academic risk report – student engagement report. Learning environment login activity, by role, by individual, etc.
Idea: get to know your big data.
Eleni, Birkbeck: Are the grade probabilities shown to students?
That’s configurable. Typically not. If you choose to turn it on you can.
Eleni: From what I hear, student success is mainly based on grades; the probability is how successful they can be based on that. But HE was supposed to create some surprises, unknown areas. If we say, I'm good at maths, I'll follow maths courses – what about other areas?
Predictive recommendations area. Focuses on courses that are required. General education courses would fall into it. It's just a recommendation, not a mandate. Still freedom of choice. Students can choose. Can do their own elective courses – we make recommendations, but they have freedom to choose. In the Student Success Module, grades are one component, but there are five domains we evaluate. We look at content access, course, grades, discussion involvement, preparedness for college (from SIS).
Yishay Mor: Devil’s advocate. Heard that one of the research findings that analytics help the students who are in the greatest need for help, with lowest success. Is there a risk that we’re entrenching segregation – if you come from a minority, you’re less likely to succeed in e.g. law or medicine, and the system will funnel you to less prestigious degrees.
Mike: I don’t think so. Based on individual aptitude. Could be based on college entrance exam scores. Or their academic experience within the university. In law, if they have success in those courses, that’d be represented in other recommendations. Based on individual skillset, rather than profiling or based on their background. The research is trying to indicate that helping students from more disadvantaged backgrounds – e.g. first generation learners, there might be a gap in that information on e.g. picking brains, getting recommendations and guidance.
Adam Cooper: Analytics: As if learning mattered
Three sections: my perspective, threats/risks, and suggestions for how to avoid them.
Working as service to JISC. Range of techs and standards. LA within educational institutions, rather than research.
Defining analytics – actionable insights through problem definition and the application of statistical models and analysis against existing and/or simulated future data. Not about knowing for the sake of knowing. Problem definition is crucially important part of the process. Embedding domain knowledge, world view in to the problem.
Learning vs Business Venn – insights to support educational aims and objectives vs an operations one. But big overlap – retention, student satisfaction, employability, achievement, applications – clear rationale for investment.
Structure to Manage Complexity – recursive nature of what people see and understand. Senior Mgt Team on the outside, then subject/discipline member, then course team, class teacher, down to individual student/learner.
Intersectional doppelganger – threat from both sides.
Quote from Dai Griffiths – the 'accommodation' – people who can't deal with the microstates. We've developed ways, systems, to allow managers to manage the situation without having to go into the depths. LA can shift this balance of power. The ability to look through into what can happen could destabilise this truce.
First threat: Interventionism. We’re in difficult conditions. Danger of using data that looks informative on the face of it, but nuance is required. Temptation for decisions to be made on interventionism.
Second threat: Business Intelligence tradition. In industry, and vendors sell this idea – big system, management reports that are quite flimsy on statistical level. They may not get us to insights. Problem of ‘build a big data warehouse, produce management reports, key performance indicators’ – have a role, but need to be cautious. Danger of following this tradition thinking learning analytics are the same as business analytics.
“We want to optimise student success”? – phrase is hiding a lot of important questions. Is it a well-specified statement? Optimisation is about tradeoffs – improve it against constraints. What is it that we are trading off against?
“Maximise probability of success” – not the way I see education. That means we should worry most about people who are likely to just-fail, and ignore the ones who will succeed or fail regardless of what we do.
What does success look like? Really? Says who? Not self-evident; contested.
Does everyone understand? Ofsted definitions – attainment, progress, achievement, enrichment. Distinct there – but does everyone share that understanding?
Technological solutionism – Evgeny Morozov. Defining any problem as a neatly-organised one with computable solutions. His point, it presumes rather than investigates the problems that it is trying to solve.
Example – 3M and Six Sigma – a total quality kind of thing. Applied to R&D, it was a disaster. The factors that make Six Sigma effective in one context make it very ineffective in another.
“Big data – it may be big but it’s not clever.” Bigness by itself doesn’t supply meaning. Big in technical terms means things a normal database can’t deal with, e.g. petabyte scale. In retail, you certainly have big data. I question whether we have big data in that sense in education.
“Big data is like teenage sex: everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it.”
Threat Rating: U_BI × (I + D)^(U_TS) – Uncritical BI tradition, Interventionist tendencies, big Data, Uncritical [missed this]
Why am I worried?
Data is useless without the skills to analyse it. Educause report – shortage is a problem. Can make progress by e.g. investing in specialists, e.g. OU investing in statisticians and data wranglers. Operating a shared service, HESA, UCAS, JISC all making progress in this area.
The managers are teachers, lecturers. Soft capability – not the ability to do the analysis, but an appreciation of the limits of the technologies. Expertise to comprehend, critique and converse (rather than originate).
Visualisations are the opium of the people. 3D distorts – 3D charts are trying to trick you. Tufte and sparklines. Minimalist sparklines, but with start, end, low, high values. Excel 2010, Google charts – no scale, useless.
Significance and sample size. Correlation is not causation.
What does this give us?
Collaborative, participative process, where all the factors, understandings, values, can become part of a wider process, understanding what the context is. Action research, if you want to. Collectively understand it.
Build soft capability. Participatory design processes. Develop and prototype organically. Reflect on effect in practice. Design and develop with the educator in the system. Optimise the environment, not only the outcome. Be realistic about scale/quality of data and its meaning.
May get pushback from students about use of data they have created.
Mental image: participatory design leading the tamed wolf of learning analytics.
slides at is.gd/AaiLM
Pamela?: Early slide, second point was asking the right question. What was the first one?
Adam: No idea. Ah – was actionable insights. Things a rational person would take action on.
Steven Warburton: Personal learning analytics – I was intrigued.
Adam: Left hanging on purpose. Imagine, a lot of capture is in the browser, could store things in the local database before ending up in the central one. More control. Incorporate activity outside the learning environment. Critical, think about action, what actions would the learner want to take. That will lead us to different path.
Pamela Wik-Grimm: Using Targeted, Data-driven, Student Surveys to Improve Online Course Design
Manager of Learner Outcomes, eCornell
Slides here: https://lms.ecornell.com/courses/1222869
If you get nothing else today, you’d have gotten more than your money’s worth. Absolutely the right questions. Where we’re looking is at the prescriptive piece, developing courses, based on the data that we collect. Want to pick up on – actionable insights, and defining the questions.
eCornell, we inhabit an interesting space. Corporate entity, wholly owned by Cornell University. Mandate – not credit courses, but executive education courses. US and worldwide. Ivy League education in to a changing executive market, we are very revenue driven. Very agile. Education and corporate training market. Want to stay best of breed.
Classic course evaluation – is for looking at the next cohort, not this one. Tend to be generic surveys. Results iffy. I can predict my evaluations from my gradebook. Research on situational bias, gender of instructor, time of year will make difference to course evaluation. About 30% of students are motivated to feedback – the ones with a beef, or who are superhappy.
Defining the problem – we’re not there yet. Still looking at this, creatively.
Example – course data from multiple courses. Example of data – engagement patterns. There are 50 assets; look at how many times they're clicked on. Mean around 45/46, a vast number well above that, a great number well below it. Three outliers are the course wrap-up page, stay connected, and my next steps. Why? Why do students return to some assets over and over again? If there's a predictable pattern, how does that compare to other courses?
There are two quizzes, items 4 and 6; 6 took more hits. Students can take quizzes until they get 100% – they have to. So ask students why, what was different. The other four items are about submitting the project. Can see a lot of hits around the first project. See students going through the course, but some going back. See a lot of random data in the middle. There are problems in progression, in the middle, but also at the start.
We have 25-30,000 students, so can find statistical norms, find deviations, and ask what’s going on there.
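The norm-and-deviation idea can be sketched with a robust outlier test. The asset names, click counts, and the choice of the median-absolute-deviation (MAD) rule below are all my illustration, not eCornell's actual method:

```python
from statistics import median

def flag_outliers(clicks, threshold=3.5):
    """Flag assets whose click counts sit far from the course norm,
    using the robust median-absolute-deviation (MAD) rule."""
    values = list(clicks.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)
    return {asset: count for asset, count in clicks.items()
            if mad and 0.6745 * abs(count - med) / mad > threshold}

# Invented click counts: most assets cluster near the mean, three do not.
clicks = {"reading-1": 45, "quiz-4": 46, "quiz-6": 44, "reading-2": 47,
          "wrap-up": 120, "stay-connected": 110, "my-next-steps": 115}
```

Here `flag_outliers(clicks)` picks out the three end-of-course pages; the same norms could be computed across many courses to compare engagement patterns.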
Targeted course surveys. We’re just starting. New product recently. Make it a value-add for the students. Respond to student issues. When you survey, you get bias, because of who takes it and their motivations. The timing – getting away from waiting until the end, instead asking you to tell us what just happened, what’s working or not for you. Questions we’re asking, want to be responsive to the data patterns. Next section, we’ll ask around those points. You can see the data, but the data is not clever. Much better to ask the students. Perceived value to students is high – if they perceive there will be a response. Improving quality of data.
Sample question – not what do you think of this course. But ‘how confident are you that you could now write a comprehensive business plan’ – high/low, not 1-5 because they get that wrong. ‘If you rated yourself less than 4, what would help?’ – 5 or 6 possibilities. Gives us relevant data. If see this isn’t being completed, might think it’s too hard, but if survey and students say it’s because taxes are due and we’re all too busy, that’d suggest different action.
At the end of every module, have a check in like this. Did you get it? And if not, what do you need? Did you complete the exercises? And if not, why?
At a student level. Survey individual students. Looking at engagement patterns. In first quadrant, can see students going back over the content. So ask, what’s going on. Completion patterns – some just go to the graded components. And that was the student who did best in the class! Course designed to be sequential, don’t need to go back. But if see a student who is all over the place, would want to ask what’s going on with the course design. This course leads to certification, and each piece you have to do builds on previous ones. 37 students in this cohort, only 1/3 completed it in the way it was planned – so back to instructional designer. Pageview data. Using Canvas as the LMS/VLE. Use the API to pull the pageview data out.
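A minimal sketch of pulling that data: Canvas exposes page views through its public REST API (`GET /api/v1/users/:user_id/page_views`), paginated via the `Link` response header. The host and token below are placeholders, and this is my illustration rather than eCornell's production code:

```python
import json
import urllib.request

API = "https://example.instructure.com/api/v1"   # placeholder Canvas host
TOKEN = "YOUR_API_TOKEN"                         # placeholder access token

def next_link(link_header):
    """Return the rel="next" URL from a Canvas Link header, or None."""
    for part in (link_header or "").split(","):
        url, _, rel = part.partition(";")
        if 'rel="next"' in rel:
            return url.strip().strip("<>")
    return None

def page_views(user_id):
    """Yield every page-view record for one student, following pagination."""
    url = f"{API}/users/{user_id}/page_views?per_page=100"
    while url:
        req = urllib.request.Request(
            url, headers={"Authorization": f"Bearer {TOKEN}"})
        with urllib.request.urlopen(req) as resp:
            yield from json.load(resp)
            url = next_link(resp.headers.get("Link"))
```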
Targeted surveys. Specific students! E.g. “Will you be able to complete this assignment by tomorrow noon?”, “If not, what is the primary issue you are having?” Tool to send to students who e.g. haven’t submitted yet, scored less than some point. Combo of looking at data, but not making assumptions – asking students to help us understand it.
Targeted, relevant surveys that provide real student benefit during the course session will both help students succeed and provide more accurate, detailed data for course designers.
Test that, as we gather thousands upon thousands of students' data, to generate norms.
Steven: What are you doing with visualisations? Working with designers?
Pamela: We’re in the process of working that out. I head up our product development dept. Work out what’s helpful for instructional designers to see. We’re in the problem discernment phase.
Bart Rientes: Moving from Learning Analytics to Social (Emotional) Learning Analytics
Senior Lecturer, Department of Higher Education, University of Surrey (soon to be OU)
Was on holiday in Indonesia. Photo. People running in different directions. Running – some running away, some looking what’s going on. School in the mountains, teacher was following how students were progressing over time. They had a tracking chart. We’re doing LA anyway as a good teacher. Is there anything new?
Research on mathematics education. ALEKS – adaptive learning environment. That’s where LA will go to. System so smart, can predict which exercise is just on the reach that students can get. Individually tailored. As a teacher, get an overview of how students are doing.
Future of learning analytics:
Social learning analytics. Most of the learning is invisible. Learning is a social phenomenon; they bring a backpack of their own experiences with them. Integration of LA with educational psychology. Visualisation of the invisible.
1. Social learning analytics
Social LA focuses on how learners build knowledge together in their cultural and social settings.
Most of the learning takes place outside the formal context. If most is outside the classroom, what can we measure? Centrality graph. Social network analysis – shows interaction, from whom you get useful information. Some students are very central, some are further away. High-performance students, structural equation modelling – previous performance is a predictor of performance. But social networks too – the social network is the most important predictor – with whom they are networking. The informal network can't be measured from the VLE. Students create a Facebook page … and I'm not on that page.
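One hedged sketch of what such a centrality measure looks like: in-degree centrality over a nomination network ("from whom do you get useful information?"). The students and ties below are invented for illustration:

```python
from collections import Counter

# Invented answers to "from whom do you get useful information?"
nominations = {
    "ana":  ["ben", "caro"],
    "ben":  ["caro"],
    "caro": ["ben"],
    "dee":  ["caro", "ben"],
    "eli":  [],               # no ties inside the formal network
}

# In-degree centrality: how often each student is named as a source,
# normalised by the number of possible nominators.
in_degree = Counter(src for named in nominations.values() for src in named)
n = len(nominations) - 1
centrality = {s: in_degree.get(s, 0) / n for s in nominations}
```

Central students (here "ben" and "caro") score high; an isolate like "eli" scores zero in the formal network, even though, as Bart notes, they may have rich links outside it.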
Can we get more intelligent elements in to this.
2. Integration of LA with Ed Psy
Erik Duval again – lack of clarity about what exactly should be measured. Let's get our hands dirty. We know a lot about learning, but most LA focuses on what's in the database. Important opportunity to include what we know.
Example – dynamic interaction of synchronous and asynchronous learning. Sync video conference, then async forums. Four periods of that, then a final exam. Classic educational data mining. Looked at how motivation affected and influenced how people behaved. Would expect intrinsically highly-motivated students to do well. Take a hardcore data-mining approach: if you post a lot in the first week, you do so in the second week, etc. If you add motivation (intrinsic, extrinsic, amotivation) – intrinsically motivated students post a lot. By only focusing on the data (usage, contributions etc), you miss the motivation, which is the underlying cause.
3. Visualising the invisible
Fish cooked in Indonesia. 20-50% of learning occurs through cross-boundary knowledge spillovers across teams; 30-90% of learning occurs outside the formal learning setting. But measuring informal learning is notoriously difficult.
Social networking approach. Enrolling yourself in a MOOC etc, you do so as an individual. Are put together with many others. Teacher will group you. Many groups learn within the groups; but people have relations outside the groups. So you get knowledge spillovers between groups. Model it, and you can see some groups that mostly link between themselves, others that reach across groups. These professionals, when they go back to their own work, they – we hope – incorporate what they learned within the department. Present back to peers, reinforce learning. Hope for transfer back to the home environment. Measured over one year. One person in the course looked a bit lonely, a bit detached, but looked at later, has five or six learning links outside the programme. With two programmes, 100 participants, a very dense network of links. Explosion of formal and informal learning processes. The challenge is not the low-hanging fruit – how many clicks – that's easy. The difficult challenge is to discover those backpacks, the informal relations that influence how people learn.
Focus on social learning analytics, integration with educational theory, try to visualise.
Pamela: What data can you collect around social learning?
Bart: It’s harder to get. Students themselves know really well who they talk to. Use that in data analysis. Bit of extra work to collect those informal networks. But once collected, have a very rich dataset. Questionnaires, or user behaviour to detect it – e.g. from Facebook.
Michael: For research purposes, how do you obtain that data from outside the formal context?
Bart: As well as the class list, just ask in open form – with whom are you discussing this? Qualitative interviews with them, why are some using the external network, some not. These professional development programmes are outside the normal school. But the school relations are vital.
Michael: So most via polling, forms?
Someone: The tyranny of participation. Social networks so much to do with personality. What are implications of this? Many arrived by hunch – better connected students do better. What is the actionable part of this?
Bart: Not that people who are more central, or more popular learn better. Can also be on the outside of the network, but so long as you have good links to the key brokers in the network, that’s still successful. In medical programme, they’re not aware that informal network is so important. So build LA to show them where they are, and who are really useful people to go to – next step. My wife is extremely introverted, social networking stuff is really upsetting. Difficult balance.
Someone: Added motivation as a variable. Found it along the way? Someone not motivated at the beginning, but became so during the learning journey?
Bart: In some settings, did a pre/post comparison for motivation, most of the time – we used Academic Motivation Scale – is pretty stable. Tried to play around with the environment to make extrinsic more intrinsic – it’s difficult to do, hard to shift.
Steven: Show social network graph to the learners, what benefit? If show informal connections, they didn’t realise they were so important?
Bart: Some just showed black nodes, asked them where they thought they were in the network – predictions were pretty good. Next step to internalise LA with those informal social network measurements.
Someone: I think, when we look at social networks, lot of work via the group dynamics. Not just personality, but group. Can see inside social graphs?
Bart: Yes. Lots of research on group dynamics, intergroup dynamics, how those things factor in. Can use social network graphs.
George Mitchell: The Learning Ecosystem – A Content Agnostic Adaptive Learning and Analytics System
Chief Operations Officer, CCKF Ltd, Dublin
Start conversation by understanding who the audience is. Interesting taxi driver conversation this morning. Asked, “what do you do?”. You typically pitch to your audience. Unexpected level of grasp of the detail from cabbie – so changed the pitch.
Little company. Until 6 months ago, had no customers. Seven years old. R&D, approaches to understanding learning. Not an adaptive learning or analytics company, but an education company – teachers in K-12 and HE. 15y in the education field, lecturing compsci/AI. Had to move from fundamental researcher to an interest in pedagogical techniques, to deliver effective teaching and learning.
Goals – provide a personalised learning experience, appropriate material at an appropriate time. Simulate or emulate a good teacher, remain subject and content independent. Previously worked on e.g. predicting growth in cancer cells. Trying to forecast, predict, understand what the next piece of learning should be.
Other companies, it’s often driven by the content, STEM subjects. Humanities, or CPD, optimising the learning, it’s different areas. Adjust based on instantaneous feedback – not the final assessment.
How to break down the system – target knowledge, but a separate intelligent engine: profiling, determining knowledge, ability metrics, learning paths. From an AI perspective, how do you prime the pump? If the system is blank, it's tricky. Pre-tests are pretty dumb. How do you devise them? Select the content you want to assess on, pick out an area of knowledge, then a neighbourhood search. Filter backwards to understand the prerequisites. Once that's done, can predict ahead, based on their foundations – where should they go, and what sort of context. Based on learning styles. [Uh oh.]
Work with people with large blocks of homogeneous, hierarchical learning content aimed at the general capabilities of a student cohort group. We turn that into a directed graph. Map the knowledge space, logical connections between elements, prerequisite and other relationships. Not a single curriculum map, but an interconnected set of them.
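A minimal sketch of such a prerequisite graph, using Python's standard-library `graphlib`; the items and edges are hypothetical, not CCKF's actual curriculum map:

```python
from graphlib import TopologicalSorter

# Hypothetical prerequisite map: each item maps to the items it depends on.
prereqs = {
    "fractions-division":       {"fractions-multiplication"},
    "fractions-multiplication": {"fractions-addition"},
    "fractions-addition":       {"whole-number-arithmetic"},
    "whole-number-arithmetic":  set(),
}

# Any valid teaching order must respect every prerequisite edge.
order = list(TopologicalSorter(prereqs).static_order())
```

Here `static_order()` puts `whole-number-arithmetic` first and `fractions-division` last; a real system would carry weighted links and multiple interconnected maps rather than one chain.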
Break down the curriculum graph in to fundamental items, competency based model, break into pedagogical sections. Can track specific competencies at a very granular level – individual questions. Competency-based learning very popular. We had this already done.
First talk is to the VC. The second is to faculty members, who fear they'll lose academic rigour. Fully engaging people in the online delivery. Real-time evidence for course evolution.
Pre-requisite graph, linkages between concept items; weighted links. Look at evidence built over time, adding to connection strengths. Can evolve the curriculum rapidly. The linkages you thought you were teaching aren’t really there; you’re teaching your idea, hoping students will pick it up, questioned on it, represent it in a format you understand.
Example. 4-5y ago in North America, grade 6 class, 11-12 yo. Fractions – addition, multiplication. Students had fundamental difficulty in understanding division. Looked at learning logs, data, discovered students logging in late in the evening, studying in their own time, moving backwards, on foundational learning items, building on that, then moving forward. Understand where students are coming from as much as where they’re going to. Focus on adaptiveness.
Instead of standard A/B grade, use probability distribution for each node. Likelihood function – how much true belief we can base on each item within the connected network. Allows us to generate likelihood function for success on next nodes. Hard to convince people this evidence is better!
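The per-node belief could be sketched with a standard Beta-Bernoulli update, where the posterior sharpens as answer evidence accumulates. This is a textbook Bayesian illustration, not necessarily CCKF's actual model:

```python
def update_belief(alpha, beta, correct):
    """Update a Beta(alpha, beta) mastery belief with one observed answer."""
    return (alpha + 1, beta) if correct else (alpha, beta + 1)

def mastery(alpha, beta):
    """Posterior mean: the current estimate that the competency is mastered."""
    return alpha / (alpha + beta)

# Start from a flat Beta(1, 1) prior and feed in a run of answers; the
# distribution sharpens (and shifts right) as correct evidence accumulates.
a, b = 1, 1
for correct in [True, True, False, True, True]:
    a, b = update_belief(a, b, correct)
```

The resulting node-level distributions could then feed likelihood estimates for success on the next nodes in the graph.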
Change to how you perform assessment. What is the type of analytics we should be providing? BI background, understand myriad of things we can do, but people expect it to fall in to narrow columns that’ll grow year on year. Need to change how we think, and how we assess.
Look at a learning path – directed graph of a particular subject area. Different students proceed in different manners. Want to understand that fundamental knowledge. ID topic student wants to learn, they start there, move forward. But if want to understand them, assess first, understand, ID only the gap in their current knowledge and teach only that. If they give evidence on the gap that shows weakness in prior knowledge, that’ll be fed backwards, will be presented to subsequent students as a required item. Profiling student, and content. What if you have different learning styles? Multiple content layers. Ask them if they’re teaching on Bayesian systems, bring out notes on them, we ingest that content, layer it across the curriculum, so that intro to attribute of belief, give 4 or 5 ways of teaching that, each a different type of profiling one – e.g. visual, kinaesthetic approach. Profiling student, and content. Instructional designers. Evidence for instructional designers about effectiveness of their approach. Can give direct evidence of amount of time spent, and knowledge demonstrated. Trying to respect what the student already knows.
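The "identify only the gap" step can be sketched as a backward walk over the prerequisite graph, collecting unmastered items in teaching order. The traversal and item names are my illustration of the idea:

```python
def knowledge_gap(prereqs, target, mastered):
    """Walk backwards from the target item and return every prerequisite
    the student has not yet demonstrated, in teaching order."""
    gap, seen = [], set()

    def visit(item):
        if item in seen:
            return
        seen.add(item)
        for dep in prereqs.get(item, ()):
            visit(dep)
        if item not in mastered:
            gap.append(item)

    visit(target)
    return gap

# Hypothetical three-item chain; the student has only shown addition.
prereqs = {
    "fractions-division": ["fractions-multiplication"],
    "fractions-multiplication": ["fractions-addition"],
    "fractions-addition": [],
}
```

For a student who has mastered only `fractions-addition`, the gap for `fractions-division` is the two remaining items, in the order they should be taught.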
Case study. Content agnostic. Rolled out in a first pilot. Now university-wide in several US universities. Many subject areas – English lit, history, accounting, comp sci, biology etc. Student numbers went from 600 to 55,000+. Take course material and ingest it directly into the system, automatically. Daphne Koller question – data analytics, adaptive – slow to do, can only do it for a small number of courses. Can't do it as a full roll-out. How can you transform a new institution, or an existing institution? In all tech companies, the ability to take legacy content and import it into an adaptive learning system, or modify and change that content to give real value to the end user, will be very difficult to do. If tacking on adaptive learning, it's hard to understand where that data fits, how granular it is.
For-profit institution. Has students that don’t come from traditional background. Late learners, with employment alongside online learning. Non-campus. Predominantly have high level of attrition in 1st year, say 25%. Very difficult for any for-profit company.
Student experience. Student view – what am I going to know, how am I doing overall on my milestones, where do I start. It's a very visual system. Shows next steps. Nodes, with concepts, showing which are the required ones; work through each learning node. Page on e.g. psychology. Differing pieces of information – like a textbook, but green/blue/red boxes, pulling in prerequisite material. Also gives a probability distribution function, showing evidence going in against what they're doing – as it gets more peaked and shifts to the right, things are going well. Even 11-year-olds can understand it.
Faculty experience – shows individual students, knowledge state, covered, time spent. Can look at questions, can template them so system can generate variables, can present it to a learner e.g. 200 times each one different. Helps with cheating in class – can’t copy answers, but student-student/teacher interaction. Key factors for faculty dashboard. Real-time faculty analysis. Statistics, knowledge trends. Errors in questions.
Interesting journey we’re going on.
Pamela: Competency-based learning. Badging?
George: Not doing purely competency-based learning; the system lends itself to that. Badging is the obvious evolution. The simple answer is yes. The longer answer: how do you want to brand yourself? Not sure where competency-based learning is going. Paper coming in the summer from Western Governors. CBL is just online learning with a different tag.
Steven: Profiling. Are they something that’s portable? Can you take it somewhere else?
George: Good question. The fundamental data is transportable, but the predictive analytics aren’t.
Bart: Like what ALEKS are doing. Excellent when you have a strong axiomatic structure to a subject, but for e.g. ethics or literature it’s quite difficult, because there are no yes/no percentages. How do you do that?
George: We have an NLP system that does a filtering pass. True assessment – NLP assessment of essay questions – is many years away. Use a submission approach: work goes to the faculty member, and they provide the grades. The prerequisite hierarchy provides linkages. In English, a definitive linkage between, say, areas of C20th American prose is difficult to attribute. So we don’t have a single curriculum graph. There could be an evolution of a curriculum graph for a curriculum, but you have to let the individual teacher morph it.
Someone: Very interesting point about generation of unique questions. In adaptive learning design, the teacher takes the model somewhere else. How does this work?
George: You have to go back and look at which questions you can create as variable-type ones. Mathematical ones in the sciences lend themselves to this – give a range of data values to compute across. For STEM, we have a complexity engine: to what complexity are you expecting the generated questions to be? Modify the variables and you can get complex numbers as the answers, which might be outside the scope of the learning. So you need to be able to refine the spread of possible generated questions. In our question engine, you can lock down the types of questions that will be generated, and limit their mathematical scope. In English, a natural language field, work with the faculty member to re-pose the question – generate a stream of text from a block. Not completely human-emulating questions, but selecting from a block of approved question types that you have.
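The mechanism George describes – draw variable values, but constrain the generated instance so the answer stays in scope – can be sketched very simply. All names and ranges here are hypothetical, not D2L's engine:

```python
# Illustrative variable-template question generation with a scope constraint:
# draw parameter values, reject instances whose answer falls outside the
# intended range (the "lock down the scope" step).
import random

def generate_question(seed=None, max_answer=100, attempts=1000):
    """Generate one instance of 'a * x + b = ?' with an in-scope answer."""
    rng = random.Random(seed)
    for _ in range(attempts):
        a = rng.randint(2, 12)
        x = rng.randint(1, 9)
        b = rng.randint(0, 20)
        answer = a * x + b
        if answer <= max_answer:  # reject out-of-scope instances
            return {"text": f"Compute {a} * {x} + {b}", "answer": answer}
    raise RuntimeError("no in-scope instance found")

q = generate_question(seed=42)
```

Seeding makes each of the "200 different" instances reproducible per student, which also helps with the anti-cheating property: neighbours see different numbers but the same skill.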
Annika Wolff & Zdenek Zdrahal: Improving retention: predicting at-risk students by analysing clicking behaviour in a virtual learning environment
Knowledge Media Institute, Open University
Work done at the OU: predictive modelling to find students who struggle. Commonly, struggling students don’t ask for help; timely help can make the difference between success and failure. With limited resources, it’s hard to focus/direct resources so they have the most impact.
In a regular learning institution, it’s hard to tell if students are struggling. At the OU, all the contact is mediated through the VLE – an opportunity to look at students through their data, see what it tells us about their behaviour. This develops into a predictive model.
Three data sources: demographic (Age, gender, previous study); Assessment (ongoing assessments – OU courses typically have seven tutor-marked assessments – TMAs – and a final exam); VLE (learning content, forums, quizzes – every activity logged). Hard to understand VLE data – hard to know what it means when they click.
Typical VLE clicks – average clicking in each week. Tutor activity is pretty consistent, but student activity declines – could be getting turned off, could be dropping out, or could be getting better at learning/more strategic. Can see peaks, which reflect the week in which the assessment is due. Think of a student activity period as the time between one assessment and another. But no activity in week one doesn’t mean an inactive student – they could be biding their time.
Can look at levels of activity between TMAs. No VLE activity doesn’t predict failure, and high clicking doesn’t predict success – not much correlation. What is important is a reduced level of activity compared to your previous one: if you were a high clicker, but have now reduced, that’s a problem. Whether they’ve changed their behaviour is the thing.
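The signal here is relative, not absolute: a student is compared to their own previous inter-TMA period. A minimal sketch of that flag (the 50% drop threshold is illustrative, not from the paper):

```python
# Flag students whose VLE activity between assessments has dropped sharply
# relative to their OWN previous level, rather than using absolute counts.
def at_risk_by_activity_drop(clicks_by_period, drop_ratio=0.5):
    """clicks_by_period: list of click counts per inter-TMA period.
    Returns one flag per transition between consecutive periods."""
    flags = []
    for prev, curr in zip(clicks_by_period, clicks_by_period[1:]):
        flags.append(prev > 0 and curr < prev * drop_ratio)
    return flags

# A high clicker who collapses gets flagged; a consistently low clicker
# does not, matching "no VLE activity doesn't predict failure".
print(at_risk_by_activity_drop([120, 30, 25]))  # first transition flagged
print(at_risk_by_activity_drop([5, 4, 6]))      # never flagged
```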
Problem specification: given demographic data, assessments (TMAs) as they become available, VLE activities between them, and the conditions students must satisfy to pass -> goal is to ID students at risk of failing the module as early as possible, so that OU intervention is meaningful. Can not-submit a TMA and still pass – can substitute the mark. Other courses have different rules – so the model needs to know this.
Say we’re at assessment3: we have history – demographics, TMA scores, VLE activity. Build model to estimate the future, and hopefully influence it.
Prediction at first assignment – TMA1 as focus, because it’s a good predictor of success or failure, but there’s still time to intervene.
Build the classifier. Take a historical dataset of whether students passed or failed. The classifier tries to draw a line: if on one side, pass; if on the other, fail. Then give it a new set of data, and hope that by plotting it in the same model, we can predict whether they’ll pass or fail. We can assess our model on historical data that we pretend we didn’t have – find out whether the prediction was right. Gives a measure of accuracy. Early work: a decision tree model, not using any demographic data (because we didn’t have it), only assessment and VLE activity. Has nodes with properties of students – e.g. assessment 1 score tends to appear at the top of the tree if it’s the most important one.
Preliminary findings. Didn’t preprocess data. Got pretty good accuracy: F-measure 0.6, 0.9, 0.85 (1.0 is best) on predicting from VLE/TMA data whether a student will pass/fail the next one. Third assessment period: using only assessment data not very good; VLE a bit better; best when TMA and VLE data combined – got more and more accurate.
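The F-measure quoted is the harmonic mean of precision and recall on the at-risk class. A quick sketch of the computation, on invented pass/fail labels (not the OU's data):

```python
# F-measure from predicted vs actual labels, treating "fail" as the
# positive class (the one we want to catch).
def f_measure(actual, predicted, positive="fail"):
    tp = sum(1 for a, p in zip(actual, predicted) if a == p == positive)
    fp = sum(1 for a, p in zip(actual, predicted) if a != positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted) if a == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

actual = ["fail", "fail", "pass", "pass", "fail"]
predicted = ["fail", "pass", "pass", "fail", "fail"]
score = f_measure(actual, predicted)
```

Using F-measure rather than raw accuracy matters here because pass/fail classes are imbalanced: a model that predicts "pass" for everyone can score high accuracy while catching no at-risk students.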
Push the boundary back – Naive Bayes network. Accurate prediction of first assessment score from sex, education, number of VLE clicks, new/continuing. The first TMA is a good predictor of final outcome – students who failed the first TMA had a 90% probability of failure overall. Those scoring 40–59% had 0.75 probability of success, and so on.
What’s the role of demographic data in prediction? It’s crucial. First case: fit a certain demographic profile of gender and educational background, and the probability of failure is 18.5%. But if you break the VLE data down into categories, it improves accuracy: 64% probability of failure if no clicks, down to 6.3% if high-clicks. Suggests you can overcome your background if you engage with the VLE. Did this with many demographic profiles. If they don’t engage, it increases the risk they’ll fail.
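The breakdown described – failure probability within one demographic profile, split by VLE click band – amounts to a conditional frequency table. A sketch on invented records (the band boundaries and data are illustrative, not the OU's figures):

```python
# Failure rate per click band: records are (clicks, failed) pairs for
# students who already share one demographic profile.
from collections import defaultdict

def failure_rate_by_click_band(records, bands=(0, 1, 50)):
    """bands: ascending lower bounds; each student falls in the highest
    band whose lower bound their click count reaches."""
    def band(clicks):
        label = bands[0]
        for lo in bands:
            if clicks >= lo:
                label = lo
        return label
    totals = defaultdict(lambda: [0, 0])  # band -> [failures, count]
    for clicks, failed in records:
        b = band(clicks)
        totals[b][0] += int(failed)
        totals[b][1] += 1
    return {b: f / n for b, (f, n) in sorted(totals.items())}

records = [(0, True), (0, True), (0, False), (10, True), (10, False),
           (10, False), (200, False), (200, False), (200, False), (200, True)]
rates = failure_rate_by_click_band(records)
```

The pattern the speakers report – failure rate falling monotonically from the no-clicks band to the high-clicks band – is exactly what this table would show.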
Can we predict TMA1 from VLE activities 1 week before the TMA1 deadline? How about 2, 3 … weeks? How far back can we go and still get a good prediction? Ongoing work at the moment; will feed those into real modules at the start of next year.
Our view is to feed this back to tutors and module teams – gives them more info about where to intervene. Not looking at feeding it to students, but to tutors. Looking at a dashboard that provides info on each student: not just that they’re at risk, but why. E.g. has not engaged with VLE, at least one TMA below 40, has not submitted TMA5, etc. Two versions of the dashboard.
The VLE data is very important. Prediction is easy at the end of the course – especially from the assessment data. But it’s too late then. Want to do very early prediction, provide it at a time when it can have an impact.
Adam: Would it be profitable to change the course pattern – a small 7-week microcourse ending with TMA1, then a decision point?
Annika: It’s possible. We trust module teams to structure courses to be good for learners. Don’t confuse prediction with causality – forcing people into the VLE isn’t necessarily going to cause them to pass more. Could recommend structure to the module team.
Someone: Can you describe how you measure a click?
Annika: We’re not doing any closer analysis. We have done that – looking at different clicking behaviour, trying to separate out forum discussion, or quizzes, or updating a profile – but it didn’t have a huge effect. A lot of work to refine it into the individual activities. At the moment, it’s just whether they’ve done anything, e.g. put in a phone number, or chatted on a forum about something unrelated. Despite the lack of fine-grained data, it’s not necessary for building a model that’s usefully predictive. Which we thought was quite interesting.
Pamela: Your work is based on the idea that intervention will change the outcome. Low clicks, low engagement, high probability of failure, feedback to tutors. Have you tracked whether those interventions changed those outcomes?
Annika: Haven’t implemented it as a live system. Smaller-scale studies have shown that focusing – just offering a cohort of students more support – improves their outcomes. Others have shown that too. If we go live, we want to track the effect. But it’s difficult; could have different profiles in different years.
Someone: Consider giving data to students?
Annika: It’s not within our research department. We’re not looking at how this is being used, because there are OU people who are expert in that, and the tutors themselves who want to provide this information, or see it. We’re just building the model. Other people are discussing whether to show it to the student. You have to be careful what you tell them, but it’s interesting if done right.
Someone: When you do implement it, is there a concern that feedback will affect your data and so affect its relevance?
Annika: If you’re applying a model built on previous historic data to a new presentation, in a way it doesn’t matter; it matters when you assess how effective your model was – you have to include that factor. If you then develop a model on that presentation, the data needs to record that interventions were made.
Someone: Don’t know how varied they are, but have you used that across all your programmes, or are there multiple models?
Annika: The OU has a large number of courses. We have selected a subset of modules – typical ones, in the direction they are tending to go. Within our data, we tried to build a model on one presentation and apply it to another module; it does reduce accuracy, but not as much as you might think. For the best accuracy you do need to understand individual modules. Surprisingly good results applying a model to modules it wasn’t built on.
Steven: Can we use your model?
Annika: Yes, in a sense anyone can. We’ve talked about how we process the data. Can use a Bayesian network, or a decision tree – we’re just defining the problem and pointing out the interesting thing. Demographics is tricky; every institution has its own context.
Sara Hershkovitz & Ernest Lyubchik: Data driven blended learning: Going from a heterogeneous classroom to a targeted focus group
CET & Selflab.com
Selflab & CET (Israel) conducted an adaptive learning experiment to validate Selflab’s adaptive technology. Analytics, the new instruction methods that became possible – and how teachers reacted to this information.
Selflab is a company, an adaptive enabler – the business is enabling adaptive tech in publishers’ platforms, plug and play. There’s a publisher, with content and users, and Selflab integrates with the platform from behind, on the server side, content agnostic – enables the platform to serve different content to different users, without modifications to the current designs.
Approached CET – Center for Educational Technology, a leading educational NGO in Israel. Promotes achievement and academic excellence and creates equal opps for all children. Developing content, new directions, professional development.
Pierce & Stacey (2010), Int J Comp Mathematical Learning.
More than half of Israeli K12 schools. Use data to change how teachers teach students, and how the system teaches. Conceptual difference about mistakes and analysis – if a student makes a multiplication mistake in an algebra question, they need help with multiplication, not algebra. The Selflab system automatically understands this.
Present a question, and there are different ways that they could get it wrong.
Experiment – six classes of 4th graders. Fractions without prior instruction, receiving computerised instruction and exercises. Not yet doing the adaptive work, but gathering data on pathways. Look at what teachers can do with the data about which skills the students have, and which are lacking. Different reports given: offline reports, simply sent to the teacher, showing each student ranked according to performance in different skills; also online live reports, where teachers can select groups.
New instructional methods – students receive instruction from the computer, with the teacher working directly with those who require assistance, in targeted sessions. Blended environment. Understand the teacher and how they change.
Student feedback – surprised at detailed level of knowledge – “this is right where I am now, how did you know I had a problem with that?”. Teachers enjoyed the organisation, feedback on their own work. Can see which explanations work.
Big impact for low achievers and unmotivated students. Can’t drop out of elementary education, but they had, to all intents and purposes. Reacted well to the teachers’ ability to give feedback/support, and to the system itself. Improved performance and achieved similar grades to the rest.
Results: correct answers: control group 75%, with std 11%, vs experimental group 86% with std 6%. (! – data obviously skewed/ceiling effect.) Personalised students made 14% mistakes vs 25% with frontal instruction. Statistically significant “T-Test, ALPHA=0.01” (?).
Future plans to look at adaptivity, understanding ability rather than score. What is the difference between success rate and ability?
Some questions are hard to guess but easy to solve if you know the material. (Less so if you don’t read Hebrew :-).) Good at distinguishing ability. Other questions are easy to guess, but difficult to solve. Questions with the same success rate may have vastly different difficulty! This is about understanding content, what really is the ability of the student, not just how they scored on the test.
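The distinction drawn here – same success rate, very different ability signal – maps naturally onto item response theory. A generic three-parameter logistic (3PL) sketch, not Selflab's actual model: the guessing floor `c` captures "easy to guess", difficulty `b` and discrimination `a` capture "hard to solve".

```python
# Generic 3PL item response function: probability a student of ability
# theta answers correctly, with discrimination a, difficulty b, and
# guessing floor c. Parameter values below are invented for illustration.
import math

def p_correct(theta, a=1.0, b=0.0, c=0.0):
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def hard_to_guess(theta):
    # "Hard to guess, easy if you know it": low floor, moderate difficulty.
    return p_correct(theta, a=2.5, b=0.0, c=0.05)

def easy_to_guess(theta):
    # "Easy to guess, hard to solve": high floor, high difficulty.
    return p_correct(theta, a=2.5, b=2.0, c=0.5)

# For an average student (theta = 0) the two items have similar success
# rates, yet for a strong student (theta = 2) they diverge sharply –
# so the first item carries far more information about ability.
```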
Educate the student according to his way (Book of Proverbs). Lead the man on the way he walks (Talmud).
[liveblogging sketchy in this one, and stopped early to get ready to present]
Doug Clow: The Funnel of Participation: moving beyond dropout in MOOCs, informal learning and universities
Institute for Educational Technology, Open University
[no liveblog – my presentation!]
[me presenting a similar paper at LAK13]
I asked if there was anyone in the room who didn’t know what a MOOC was – there wasn’t. At least, not that would admit it, or even shift uneasily in their seat in a way I could detect.
I went off on an unscripted rant about how we as a learning analytics community should get information about the evidence for our interventions, and how we should do RCTs. Raised in the questions too.
Uri Baran & Richard Maccabee: Predictive learning analytics based on augmented Moodle VLE data
Predictive Analytics Solutions Architect, IBM UK, and Director of ICT, University of London Computer Centre
[sketchy liveblogging as I come back from presenting]
Richard. They provide a Moodle service to 150 institutions, mostly FE, ~35 HE. 2.8 million students enrolled – but, wet-finger estimate, about half are no longer active. Interesting to explore what we do with analytics. Partner with IBM, within the community. Predictive analytics solution architect – not many of those. Let loose on an anonymised set of data. After a few weeks’ work, this is the first opportunity for me to see it.
In the developing world, massive undersupply of education. In the US, dropout. An area where technology can help, through analytics and technology-enabled personalised learning.
Come at it from three areas: personalised education; learning content management and delivery; intelligent, interactive learning content.
Moodle data. Used for accessing resources, tests and assignment submission, chat and discussion forums, surveys, workshop submissions and assessments. Three metrics: composite year grade, week 4 engagement quotient, week 7 engagement delta (direction of travel). Need a point where the model is accurate enough, but not too late to do something about it.
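The exact definitions of the engagement metrics weren't given, so the following is one plausible reading, sketched from Moodle-style event logs: the quotient as the student's event count relative to the cohort median up to a given week, and the delta as the quotient's direction of travel between week 4 and week 7. All names and data are assumptions.

```python
# Hypothetical engagement metrics over (student_id, week) event pairs.
from statistics import median

def engagement_quotient(events, student_id, up_to_week):
    """Student's event count over the cohort median, up to a given week."""
    counts = {}
    for sid, week in events:
        if week <= up_to_week:
            counts[sid] = counts.get(sid, 0) + 1
    cohort_median = median(counts.values()) if counts else 0
    return counts.get(student_id, 0) / cohort_median if cohort_median else 0.0

def engagement_delta(events, student_id, week_a=4, week_b=7):
    """Direction of travel: quotient at week_b minus quotient at week_a."""
    return (engagement_quotient(events, student_id, week_b)
            - engagement_quotient(events, student_id, week_a))

events = [("s1", 1), ("s1", 2), ("s1", 6), ("s2", 1), ("s2", 5), ("s2", 6),
          ("s3", 2), ("s3", 3), ("s3", 4), ("s3", 5), ("s3", 7)]
```

Whatever the real definitions, the timing trade-off stated above is the key design constraint: week 4 is early enough to act on, week 7 adds the trend.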
Artificial data here, for presentation.
Also other data sources, including: everything. Halls of residence, how far from home, how many visits they make – everything, to the degree we can access it. We want to find out what’s important, not make assumptions about it.
Part 1: data mining, develop a high-risk-of-dropout identification model. Apply a decision management process, combining this with business rules to allocate optimal interventions per high-risk student. (Financial!) Deploy results to CSV, or to BI to deliver to a range of users. Active report. Use BI to administer the interventions programme.
Live demo! (But I can’t read the text, alas.)
Data mining process. Take icons from a palette, connect them to others, data flows through it and performs tasks. Simple tool, no programming skills required. It’s a stream.
Data source – student data – names, email addresses, gender, retention risk (whether stay/leave, to learn from), demographic indicators, educational history, level, English, then composite year grade, the week 4 engagement, week 7 delta, age delta from cohort.
Clean the data, partition it – development set, test set. Produce a model. A yellow nugget represents it – a range of info about how it views the data. Shows which are the significant predictors – weighted average of assignments and quiz score, then wk7/wk4 delta, then resource coverage – but this is just made-up data. Get a set of rules that describe a decision tree: takes you down a set of decisions based on the weighted average, down to subgroups, until you can’t divide any more, and gives the decision.
Can visualise the decision tree itself.
BI tool. Gives a probability out of the model; set rules. Example – if prob(fail) > 0.7, select them. Apply business rules. Have a set of intervention options – personal tutor support, change course, alter timetable recommendation – triggered by e.g. a low measure of course attendance, or a certain level of weighted average of assignments. Can apply more than one. How do you decide which one you should apply?
If you go through the process and find 75% need personal tutors, you can change/manage that and get out what you want. Then the prioritisation side, which prioritises the interventions on the basis of financial principles – these need not be the ones you choose to apply. Equation: (prob of responding * revenue if they do) – costs = likely profit. Chooses one or more of the most effective from a cost perspective, and gives you the set of high-risk students and their associated recommendations – what you do then is a different phase.
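The prioritisation equation as stated is a simple expected-value calculation per intervention. A worked sketch with invented figures (the probabilities, revenue, and costs are illustrative only, not IBM's or ULCC's numbers):

```python
# Expected profit = (probability of responding * retained revenue) - cost.
def expected_profit(p_respond, revenue_if_retained, cost):
    return p_respond * revenue_if_retained - cost

# Hypothetical intervention options from the talk, with invented figures.
interventions = {
    "personal tutor":  expected_profit(0.30, 9000, 1200),
    "change course":   expected_profit(0.15, 9000, 300),
    "alter timetable": expected_profit(0.05, 9000, 50),
}
best = max(interventions, key=interventions.get)
```

Note the tension the speakers acknowledge: the financially optimal intervention need not be the pedagogically right one, which is why the output is a recommendation, not a decision.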
Can batch score, or do it real-time.
Tutor-facing interface. Select grade vs subject. So e.g. most people with Fs are likely to drop out; can see how dropout is associated with the data. Can look at how the dropout students are doing.
Can’t dictate what interventions to use to institutions. Way to work out what the right intervention is is to sit down with the student. Whether that’s realistic or not is a different question.
Steven: How long before this is used in the service you offer with ULCC, to be available to people?
Richard: We’re bringing into ULCC an Enterprise Service Bus, which can pull in data from other institutional services. This is early days; at the moment we’re looking only at Moodle, but hope to correlate with other live data. It depends! We will talk to institutions, potentially offering it. If it really does work, it could be used by others, not just our Moodle customers.
Steven Warburton: Concluding thoughts
Not going to try to summarise the whole area of learning analytics. But will pick out five points – there are many others.
First – the range of talks was impressive. The different types of perspectives – learning analytics, academic analytics, big data. Still an emerging field; people take different viewpoints.
Second – Adam raised a lot of the issues. One is responsibility: we are showing data to people – who takes responsibility? If we show it to the learner, do they have to take responsibility? The tutor? How should institutions take responsibility? And if we are the analyst, how do we present it, and what responsibility do we take?
Third point, ethics, about the way you show data, but also about ownership of the data, around personal data. Personal learning analytics mentioned by Adam.
Fourth, really big: the nature of learning itself. Analytics works well in some areas, but learning is very complex. You have to define the end point. Understanding what learning is is difficult – what if it’s invisible? How much do we need to look at to triangulate? Have to have good models.
Didn’t even look at where assessment fits – many analytics use assessment to demonstrate validity. Didn’t look at that angle.
Finally, algorithms and humans. Intervention – what do we mean by that? What might be a good intervention? Which is the best one? Are we in an endless cycle? Ernest gave us a really interesting model – why not use blended analytics, in the classroom? As we’re teaching, we’re able to act – that’s the best actionable exposition: the teacher just being able to pull out the focus group. Relevant to teaching practice. If we do want insight into interventions – we have been studying them for some time, it’s not a new area. Meera – John Hattie’s meta-analyses of many educational interventions and studies looked at what does have an impact, and effect sizes. Suggest we look at the literature to get insights from those.
This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.