LAK14 Thu pm (8): Institutional perspectives, learning design

Liveblog from the second full day of LAK14 – Thursday afternoon sessions.


Session 7B: Institutional Perspectives

Techniques for Data-Driven Curriculum Analysis. Gonzalo Mendez, Xavier Ochoa, Katherine Chiluiza (Full Paper, Best paper award candidate).

Xavier talking.

Siemens & Long (2011) Educause classification of learning and academic analytics. Focus at the Departmental level, aimed at learners and faculty.

Government says have to redesign curriculum. Questions – which are the hardest courses? How are they related? Is workload adequate? What makes students fail? Courses to eliminate. How can learning analytics help here? And help curriculum redesigners.

Wanted to design techniques using available data – no time to collect data, do surveys, etc. Grades are always available, so focus on those. Want metrics that are go/stay: not using them as decision-makers, but as discussion starters. Also easy to apply and understand. Want to ‘eat our own dog food’ and apply them to the grades in our own CS program.

Finding which courses are difficult. Could be good students who do well, or an easy course. Two estimation metrics, alpha and beta: difficulty and stringency of grading. Three scenarios: (1) course grade > GPA – an easy course for that student; (2) course grade = GPA; (3) course grade < GPA – the course was hard for them. Aggregate over students to get a distribution, and take the average deviation of the course grade from the students’ GPAs. This should factor out the ability of the students.

Real examples, but the distributions are not normal! So they don’t tell the full story. The most difficult course, algorithms analysis: most of the distribution is on the bad side. The easy course is more on the left side. So a new metric: the skewness of the distribution.
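[A minimal sketch of these two metrics – mean deviation from GPA, and the skewness of that deviation – assuming a pandas DataFrame of grade records with hypothetical column names:]

```python
import pandas as pd
from scipy.stats import skew

# Hypothetical grade records: one row per (student, course).
grades = pd.DataFrame({
    "student": ["s1", "s1", "s2", "s2", "s3", "s3"],
    "course":  ["Physics", "HCI", "Physics", "HCI", "Physics", "HCI"],
    "grade":   [5.0, 8.0, 6.5, 9.0, 4.5, 7.5],
})

# Per-student GPA over all courses taken, then the deviation of each grade
# from that student's GPA. Averaging the deviation per course factors out
# overall student ability: a positive mean means students do worse here than usual.
grades["gpa"] = grades.groupby("student")["grade"].transform("mean")
grades["deviation"] = grades["gpa"] - grades["grade"]

per_course = grades.groupby("course")["deviation"]
print(per_course.mean().sort_values(ascending=False))  # deviation-based difficulty
print(per_course.apply(lambda d: skew(d)))              # skewness of the deviations
```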

Also compared to asking the students and teachers (perceived) vs the estimated ones. Some on both lists, but not all. Perception is not the same as estimation. Why? They think the courses are difficult, but data say no. Example, physics doesn’t appear in perception, but is very hard according to the data.

Dependence estimation – if they do well on this course, how likely are they to do well on another one. Map of the CS curriculum: which courses are prerequisites for which, and which belong to which. Simple Pearson correlation – but there are a lot of them! Many correlations are very low, like 0.3. But ‘Computing and Society’, which isn’t linked to much, correlates with the HCI course and with Discrete Mathematics (around 0.6) – surprising.
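[A sketch of the pairwise correlation step, with a made-up grade table pivoted into a students × courses matrix:]

```python
import pandas as pd

# Hypothetical long-format grade records pivoted to one row per student,
# one column per course (NaN where a course wasn't taken).
grades = pd.DataFrame({
    "student": ["s1", "s1", "s2", "s2", "s3", "s3", "s4"],
    "course":  ["Discrete Maths", "HCI", "Discrete Maths", "HCI", "Discrete Maths", "HCI", "HCI"],
    "grade":   [7.0, 8.0, 5.0, 6.0, 9.0, 9.5, 6.5],
})
matrix = grades.pivot(index="student", columns="course", values="grade")

# Pairwise Pearson correlation between courses, computed over students who
# took both; high values suggest grades in one course track the other.
print(matrix.corr(method="pearson"))
```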

Maybe we should rethink prerequisites? [I’m not sure that follows from things being uncorrelated.] Why doesn’t ‘Programming Fundamentals’ correlate with other programming courses?

So ran Exploratory Factor Analysis of the different courses. There is a lot of difference in the professional part. Five factors – one is basic training, one is advanced CS stuff, one on programming, one around client interaction, and one they couldn’t make sense of – electrical networks and differential equations.
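[A sketch of the factor-analysis step using scikit-learn; the five-factor choice is from the talk, the grade matrix here is random placeholder data:]

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Placeholder grade matrix: rows = students, columns = courses (random data;
# the real analysis used the program's own grade records).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))

# Five latent factors, as in the talk; the loadings show which courses load on
# which factor (e.g. a 'programming' group, a 'basic training' group).
fa = FactorAnalysis(n_components=5, rotation="varimax", random_state=0)
fa.fit(X)
loadings = fa.components_.T   # shape: (n_courses, n_factors)
print(np.round(loadings, 2))
```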

The grouping is off (from their expectations) – Programming Fundamentals isn’t in the programming factor. (Makes sense since it doesn’t correlate.)

Wanted to look at dropout and enrolment paths. Expect they all start happy, and they drop out over time. Sequential Pattern Discovery using SPADE. 60% of students that drop out fail Physics. Hard course, and it’s a main problem. Only Programming Fundamentals is in the top list for dropouts among the CS courses. So we’re losing them from the basic courses. So – start with CS topics?
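[The SPADE mining itself needs a sequence-mining library, but the headline figure can be sketched as simple counting over per-student histories – data layout hypothetical:]

```python
import pandas as pd

# Hypothetical per-student course histories with pass/fail and a dropout flag.
history = pd.DataFrame({
    "student": ["s1", "s1", "s2", "s2", "s3"],
    "course":  ["Physics", "Programming Fundamentals", "Physics", "HCI", "Physics"],
    "passed":  [False, True, True, True, False],
    "dropout": [True, True, False, False, True],
})

dropouts = set(history.loc[history["dropout"], "student"])
physics_failers = set(history.loc[(history["course"] == "Physics") & ~history["passed"], "student"])
share = len(dropouts & physics_failers) / len(dropouts)
print(f"{share:.0%} of dropouts failed Physics at some point")
```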

Finally, the load/performance graph. What they think they can manage vs what they actually do. Present a graph of the courses they have to take – it’s quite a big complex graph (7 x 10 matrix, not very sparse). Simple viz – a density plot of the difficulty attempted vs the difficulty passed. Most of the density is on the straight line. (Pass all the courses you take.) But there’s a big blob below – most are taking 4 or 5 courses and failing one of them. Maybe the suggested load is unrealistic. Can we present the curriculum graph in a better way? Or recommend the right amount of load for each student?
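[A sketch of that kind of density plot with matplotlib, using placeholder counts of courses attempted vs passed:]

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder: courses attempted vs courses passed, per student per term.
rng = np.random.default_rng(1)
attempted = rng.integers(3, 7, size=500)                      # 3-6 courses taken
passed = np.clip(attempted - rng.binomial(2, 0.3, size=500), 0, None)

# 2D histogram as a simple density plot: mass on the diagonal means students
# pass everything they take; mass below it means over-enrolment and failures.
plt.hist2d(attempted, passed, bins=(4, 7), cmap="Blues")
plt.plot([0, 6], [0, 6], color="grey", linewidth=1)
plt.xlabel("Courses attempted")
plt.ylabel("Courses passed")
plt.colorbar(label="Students")
plt.show()
```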

Very simple techniques and methods. Used only grade data – so can apply in your university. Discussion starter questions – and now have lots more questions, but they are more actionable and specific.

Redesigning courses based on new questions in April. More ambitious goal – to create techniques that can be transferred to practitioners.

Questions

Q: As a faculty member, the difficulty of a course is the difficulty of the material and the grading scale. Could be easy material but you’re a tough grader. Can you confront that problem? Normalised distributions over time? Teasing out difficult material vs tough grading?

We have courses taught by different professors over time. In the process of redoing it by course/professor. Bit politically … yeah. (laughter)

Q: Interesting to apply this in Law schools where normal distribution is imposed on the class. [Only in the US, I think?]

Q2: Because GPA is an average, how do you take account of the standard deviation, which must get smaller over time?

If you move these metrics to students, yes, in the first year you have variation in the GPA. But this is for the whole program – it’s the GPA of people who’ve left the university. If you want to do it with 1st-year students, yes, you have problems.

Grace: We’re finding GPA is not a good measure, esp in 1st year because there’s a higher dropout rate. The 0s pull down the average. Remove withdrawers?

We use the information from students who take the course. We don’t use GPA for dropout analysis.

Q3: Faculty have engaged, maybe adopted analytics techniques. Do you get traction, or is it a rabbit hole where you do more complex action?

Faculty have been very open. When presented as, just look at this data, without conclusions. When you draw conclusions, then you have friction, if you say ‘this is a hard course’. If you say, ‘this metric says this’, no value judgement, it’s Ok. These are not (good enough for this). You need human insight to get some judgement.

Stephanie: It is very difficult to get faculty to realise there are problems, and how they may fit together and cascade through. Appreciate this work, showing you what the results show. Responsive to data about how things are going.

Q4: Talked about understanding course load, what should be recommended course load. A lot of what you might want to include might have to do with responsibilities outside class – a PT job, that stuff. That data’s probably not available. What of that other info would be useful, and how might you collect it?

These metrics are being done in a system where professor asks them about working outside, regular activities, so we can take that in to account. We know there is a problem of load. The cause, we have to explore that.

Hendrik: Also busy with playing with study load. Rule of thumb from experts at the moment. How do you measure the study load so far? Is it just the professor saying? Studies with more accurate or transparent measure?

Easy. If 4h, have 4 credits, 5h, 5 credits. Aware of studies of workload from (someone) at U Ghent, ask students to log their workload. Maybe they have a paper about that.

The Impact of Learning Analytics on the Dutch Education System. Hendrik Drachsler, Slavi Stoyanov, Marcus Specht (Short Paper).

Hendrik presenting, from OUNL. Totally different study – normally I do recommender systems. We took the opportunity of a local LASI in Amsterdam, only Dutch people, from companies, HEIs, K12. Some analytics at that event to get a measure or idea of how the LA concept is appreciated by that community.

LASI-Amsterdam in 2013. SURF – umbrella organisation for Dutch HEIs. Have an LA SIG. Also Kennisnet, for K12 in Netherlands.

Group Concept Mapping. Identifying the common understanding of a group of experts – brainstorming, sorting, rating. Post-its: write them up, cluster on the wall, then voting. It’s the same approach, but computer supported. Then apply robust analyses (MDS, HCA), then present as conceptual maps, which are rated. Many visualisations possible.

Did the brainstorm at LASI-Amsterdam. Then sorted online afterwards. Then rated on two criteria – how important is it, and is it feasible?

For one participant, you take their sorting arrangement and generate a square binary similarity matrix. Do that for all of them and get a total square similarity matrix – then multidimensional scaling. Input the square matrix, get an N-dimensional mapping – often a 2-dimensional map. The LAK data challenge paper did this for LAK papers.

Hierarchical cluster analysis, neighbourhood of statements and how similar. Use this as a hub for a semantic analysis. Need to decide what hierarchical approach makes best sense – have to explore the data.
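[A minimal sketch of the pipeline described – co-sorting similarity, then MDS and hierarchical clustering – on a tiny made-up sorting:]

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

# Toy sorting data: 3 participants each sort 6 statements into piles,
# encoded as a pile label per statement.
sorts = np.array([
    [0, 0, 1, 1, 2, 2],
    [0, 0, 0, 1, 2, 2],
    [0, 1, 1, 1, 2, 2],
])

# Total similarity matrix: how many participants co-sorted each pair of statements.
n = sorts.shape[1]
similarity = sum((s[:, None] == s[None, :]).astype(float) for s in sorts)

# Multidimensional scaling on the dissimilarities -> 2D point map.
dissimilarity = sorts.shape[0] - similarity
points = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissimilarity)

# Hierarchical cluster analysis on the same dissimilarities, cut into 3 clusters.
condensed = dissimilarity[np.triu_indices(n, k=1)]
clusters = fcluster(linkage(condensed, method="average"), t=3, criterion="maxclust")
print(points)
print(clusters)
```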

‘One specific change that LA will trigger in Dutch education is …’ – the prompt for brainstorming. Then sort the responses, put them together. Each does that themselves. Then rate from 1-5 for each response, how important, how feasible.

Two hypotheses – H1: the most important will be less feasible to implement (seen this in previous work). H2: significant difference between novices and experts in importance and feasibility (seen this too).

Descriptive stats – N=32 people entered statements (60 people total). Sorting phase, 63 started, 38 finished. This takes a lot of time – 108 statements, takes 2h. For GCM, that’s Ok. Importance and feasibility ratings similar.

Also asked participants to self-rate expertise. Most – >50% – said they were novice. 44% advanced, 2% expert. Most were established professionals in their field (>10y), but not in LA. Most teaching-skewed, some managers. Nice spread of organisational context.

Point map – a nice scattergram of points. Cluster maps suggested, and decided on 7 clusters. System suggests topics, but manual oversight too. Had several – student empowerment, personalisation (close to each other). Risks way away from everything else, but close to management & economics. Teacher empowerment closer to feedback & performance and research & learning design. [Nice]

Then add the cluster rating map – each block is higher depending on how important it was seen to be. Teacher empowerment is top. Feasibility very similar, actually. Risks not a big issue.

H1, pattern match importance vs feasibility. Surprisingly, people think teacher empowerment is important, and feasible. Personalisation is important, but less feasible. Research & LD in between; student empowerment important, but less feasible. Management/economics and risks right at the bottom.

H2, novices vs experts giving different ratings. There is not! Really very close. Took novices in one group, and intermediate and expert in the other; they basically agree about most things. They speak the same language. Same for feasibility – although experts do tend to rate things as less feasible than novices. (Looks probably significant to me, if they had large enough N.) Consensus that teacher empowerment is important.
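[A sketch of that kind of novice-vs-expert comparison with a simple Welch’s t-test on placeholder 1–5 ratings:]

```python
import numpy as np
from scipy.stats import ttest_ind

# Placeholder importance ratings on the 1-5 scale for the two self-rated groups.
rng = np.random.default_rng(0)
novice_ratings = rng.integers(1, 6, size=30)
expert_ratings = rng.integers(1, 6, size=30)

# Welch's t-test: is there a significant difference between the groups' means?
t, p = ttest_ind(novice_ratings, expert_ratings, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```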

Need to reject H1 & H2. Dutch community (at LASI) highly agrees on topics that are important to influence the educational system with LA.

Final message: the Netherlands is ready to roll out LA.

But want to explore data, e.g. H3 sig diff in sectors (HE/K12, business/educ).

Partners to run a GCM on LA in their countries – contact us!

LACE project is running a study on quality indicators for learning analytics. One further step, we will build an evidence hub, bring studies in, get an overview that provide insights in to how LA affects certain topics.  bit.ly/LAK14.

Questions

Grace: Halfway with this in Australia?

Did not manage this with a LASI. You have to select the people to contribute; you can have as many as you want brainstorm. Sorting and rating takes 2h, so you need experts who are committed to it. Then run the whole thing online. Used it at Amsterdam to push it there and get a fast result.

An Exercise in Institutional Reflection: The Learning Analytics Readiness Instrument (LARI). Kimberly E. Arnold, Steven Lonn, Matthew D. Pistilli (Short Paper)

Kim talking. Working on a similar concept. New instrument: LARI. Looking for feedback, collaborators.

Facing a challenge. Lots of practitioners want to get in to it. Researchers go by choice, practitioners are mandated. Without much direction or understanding. 2.5-3y ago realised this was an issue. Important consideration is to know if you’re ready or not as an institution.

Readiness definition: willing, prepared, immediacy. In LARI, LA requires time, resources, money. Intensive.

Use the same ‘Ready Set Go’ picture as Hendrik on the slide!

Successful implementation won’t happen accidentally. Logistics can be daunting. Institutional reflection is critical. So how can we facilitate reflection? Needs to be comprehensive. Sometimes from IT depts, with educ researchers, but perhaps not all depts. Needs to be cross-disciplinary, diverse experience and skills. So more realistic understanding of resources.

How do we get to assessing readiness? Especially in a large institution, but even in a small one there’s a lot to include. A national view – the Horizon Report. And the Innovating Pedagogy report from the Open University. (yay!)

Educause ‘analytics maturity index’. Great tool, focuses on analytics at large, aimed at individuals. But wanted something down from the landscape, and up from the individual. An institutional profile that’s a prescriptive diagnostic; situated in the literature, and formative, based on parsimony/practicality/proactivity.

Matt takes over.

The instrument, developed items based on the literature (readiness broadly, and analytics – meta pieces). ECAR Maturity Index. Practitioners’ experience too. Also some original factors – ability, data, culture, process, governance/infrastructure.

Convenience sample, N=33 over 9 institutions (8 US, 1 Canada). Focused on R1s, where have a foothold in learning analytics. Survey distributed.

Started with 139 items. Exploratory Factor Analysis eliminated 42 survey items. Then a second factor analysis eliminated another 7. A third made no noticeable change, so they left it.

The factors changed – went to new ones. Ability, data, governance/infrastructure stayed. Culture and process came together as a single one. And another one was ‘overall readiness perceptions’. Surprised by that.

Final 90 survey items; Exploratory Factor Analysis, Cronbach alpha 0.9464, explained 55.7% of variation.
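[A sketch of the Cronbach’s alpha calculation on an item-response matrix; the data here is a random placeholder, the paper’s 0.9464 comes from their real survey:]

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Placeholder Likert responses: 33 respondents x 90 items (random data,
# so alpha will be low here).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(33, 90)).astype(float)
print(round(cronbach_alpha(responses), 4))
```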

Steve takes over. Institutional differences – plotting each factor for institutions with at least 2 respondents. There were sig diff for ability, and for overall readiness factors. Very mixed picture, but that may not be a bad thing. Helping to reflect on data, making it actionable for student success. Institutions can wildly differ, yet get to similar places in terms of effective changes.

Limitations – sample of convenience, only one non-US institution. May not be applicable elsewhere. Policy implication in international settings – looking for international partnerships.

What’s the future? Iterating the instrument. Done the factor analysis, it’s leaner, though still dense. Maybe split up by different people filling it in. Want to create tailored feedback and automate. Low/medium/high readiness, feedback about that. Prompts for exploration – how do you stay high, etc.  Want more activities, and international partners.

Want beta partners. Lead individual at American institution. 10-15 individuals in various roles representing the diversity of job roles associated with analytics. Honest & constructive feedback. Participate in follow-up study on use and usefulness of report. lari.pilot@gmail.com for contact.

Questions

Q: Applaud the instrument. In the factor analysis, you have <50? (yes) Caution you: 50 is considered poor for factor analysis. Want 250, 300. Solutions converge as you add more cases. It needs a much larger sample to validate the instrument. You do have a high Cronbach alpha, but that’s because you have so many items. Really try to increase the sample, make it more parsimonious, fewer than 10 questions per factor. N<50 is insufficient.

Matt: Yes. This was a starting place, even with small N is enough to talk about. We are at a beta realm. Looking for more. Comments well taken, will heed them.

John Whitmer: A dependent variable – indexed to the accomplishments of institution, or …

Steve: We’ve heard LA is great, but 2d in to the conference, it’s complex, lots of work to implement it at scale. It’s those 5 factors.

Matt: The measure of dependent variable is in those 5 factors. It’s a reflective tool. The constructs around implementation are broad enough, but institution applies them in their context, rather than an arbitrary context.

JW: Think this is great. I would love to have this indexed to institutions that are accomplishing a lot. An important weighting of significance.

Steve: Working towards that. Institutions like you … So get more, other colleges have found … Put examples forward for folks. Contextualise it for institutions.

Bodong: Who should fill this survey?

Kim: We don’t know yet. We say, should have at least 10 people, in 10 different areas take it. The instrument isn’t robust enough for us to say what we need. We have some guidelines on people we should approach, e.g. data governance, IT, education dept. We’re at the beginning of this, building scaffolds.

Competency Map: Visualizing Student Learning to Promote Student Success. Jeff Grann, Deborah Bushway (Short Paper)

Jeff speaking. Capella University. Deb had conflicting meeting, Chief Learning Officer, executive sponsor of the work.

Context: Capella is fully online, based in Minneapolis. For-profit, 35k students at a distance. One of the first online institutions to be regionally accredited. Recognised by the Dept of Ed for direct assessment, a way students can proceed by demonstrating learning rather than meeting seat-time requirements – move as fast as they can learn. So built around competency-based education.

Students – 75% female. 40y average age. Primarily graduate students, 25% undergrads. Mostly PT, gaining something in their career. Relevancy is a prime topic.

Career advancement – two populations. Employers, interested in filling positions with skills, talents, making sure they’ve had experience and want to see success. Universities, structured around academic programs, containing lots of courses, with activities on which students are graded. But how do these two relate to each other?

At Capella, middle layer – look at outcomes people’ll need in jobs in the future. Define the competencies you’d need, and criteria to measure those, based on faculty judgements (can they be reliable?). Then define that, that’s the hard work, straightforward to connect to the program offerings. So get assurance that we’re aligned to industry.

Competency-based education drawing on the National Postsecondary Education Cooperative (2001). Four-layer model, with fixed traits at the bottom, demonstrations at the top. Assessment works in the top part of the pyramid. How do we measure those competencies?

Have a fully-embedded assessment model. All of these are aligned – as metadata – in course authoring environment. To make a course you have to do all of that mapping. Not had a big obvious effect on students.

Another is the Scoring Guide Tool, to gather data. This is the workhorse, used to evaluate students’ work in courses. Criteria are defined against competencies – nonperformance, basic, proficient, distinguished. Also feedback/comments. It auto-calculates a suggested grade – the instructor can change it. Most find that very helpful. Can weight criteria too.
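[A sketch of how an auto-suggested grade from weighted criterion judgements might work – the levels, weights and cut-offs here are illustrative, not Capella’s actual rules:]

```python
# Illustrative mapping from criterion judgements to a suggested grade.
LEVELS = {"nonperformance": 0, "basic": 1, "proficient": 2, "distinguished": 3}

def suggest_grade(judgements: dict[str, str], weights: dict[str, float]) -> str:
    # Weighted average of the criterion levels, then illustrative grade cut-offs;
    # the instructor can always override the suggestion.
    score = sum(LEVELS[level] * weights[criterion]
                for criterion, level in judgements.items()) / sum(weights.values())
    if score >= 2.5:
        return "A"
    if score >= 1.5:
        return "B"
    if score >= 0.75:
        return "C"
    return "F"

print(suggest_grade(
    {"argument": "distinguished", "evidence": "proficient", "writing": "basic"},
    {"argument": 2.0, "evidence": 1.0, "writing": 1.0},
))
```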

That tool generates lots of data for learners, university. Many reports internally and to accreditors. Have a gap – haven’t shown the data to learners in aggregate way. Combine with assessment data. Over 1m judgements made, not a lot of reporting.

Committee-based idea: lots of data displayed, with drill-downs and indicators that are not clear. Design-time focus on the learner display. Not the same display for all; customise. A conceptual map for learners. Web design. [Slides too small to read for my weak eyes.] Took it out to learners, focus groups. Many confused, misconceptions. So more simplification, a display for just one course. A bar to indicate two things for each competency – status for how they’re scoring, and how many criteria they’ve been marked on, to see tracking over time. Hard to do conceptually, and to bring the data together.

So planned a pilot in Q1 2013. Several courses. Used data to produce an email, descriptively told them the same info, automatically sent when tutor finished the grading. Students very invested in demonstrating each competency to finish the course – can finish early. Flexible schedule. 100% of the emails were opened – that’s an amazing open rate.

A final round of design, launched for graduate business programs. So it shows course-level progress, and how many they’ve been marked on – how many assignments out of how many. Then circles, coloured to indicate status, with the amount of colouring showing how much they’ve been marked on so far. Then also access to previous courses, a print button. And a tutorial, FAQs, and – a link to the instructor. The circles pop up with details of the judgements on each assignment, and which ones are coming up next.

Took to prospective learners – and they liked it on first impression. Typically, learner’s first reaction to competency-based education isn’t like this. Visual brings this home for students.

Usage stats – it’s voluntary. 12k learners accessed (out of ?35k); peaks at end and beginning of term. Saturdays are not good days.

Unprompted qualitative comments – pretty positive and what they’d like. The main negative feedback is that they can’t go back in time to see old maps (a data availability issue).

Not much effect on ability. Those who used it had slightly more distinguished, less non-performance. Good news is it wasn’t really harmful to students. (laughter).

Re-registration rates for summer courses – those who used it were more likely to enrol the following term. Multiple regression analysis – put competency map usage into their predictive model – significant effect even after adjusting for a powerful covariate.
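[A sketch of the shape of such a model – a logistic regression of re-enrolment on map usage plus a covariate – with placeholder data and hypothetical column names:]

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder learner-level data: competency-map visits, a strong prior-performance
# covariate, and whether the learner re-enrolled the following term.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "map_visits": rng.poisson(3, size=500),
    "prior_gpa":  rng.normal(3.0, 0.4, size=500),
})
df["reenrolled"] = (rng.random(500) < 0.4 + 0.06 * df["map_visits"]).astype(int)

# Logistic regression: does map usage still predict re-enrolment after
# adjusting for the covariate?
model = smf.logit("reenrolled ~ map_visits + prior_gpa", data=df).fit()
print(model.summary())
```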

Get a good high-level vision, but be flexible about implementing it.

Questions

Q: Comment. Worked as web designer. Great case – simple as it looks, but can tell it’s highly refined. First good cut product, they didn’t realise it, then refined based on comments. Blows my mind. It’s good stuff. Anyone, take a look at those slides. That dashboard is brilliant.

Thank you.

Q2: From ?Ashford University, same profile. Echo previous congratulations. Sometimes you present LA to students, there’s concern about student trust of the data, and intrusion on privacy. Do you only summarise at personal level, or show cohort data?

In our first draft we expected curiosity about comparing to the class – normative comparison was expected to be the main use case. But students only cared about meeting faculty expectations. Great news: easier to build, and easier ethically for small courses. Some have two learners in them. The instructor has access to every learner’s map for their courses; academic advisers have access to their advisees’.

Session 8B: Learning Analytics and Learning Design

Educational Data Sciences – Framing Emergent Practices for Analytics of Learning, Organizations, and Systems. Philip Piety, Daniel Hickey, MJ Bishop (Full Paper)

Daniel presenting. Thanks to MacArthur, Google, Indiana U for funding.

Really in the middle of a big paradigm shift. Notion of four or five distinct communities dealing with educational data. Value if they interact more. Some common features, and offer a framework for Educational Data Sciences.

What is a sociotechnical paradigm shift – more like Tapscott than Kuhn. Context, internal, emergent. First, digital tools create vast quantities of data. Second, a qualitative shift from institution to individual, e.g. badges. Third, expansion of academic knowledge – Common Core, there are unmeasured standards; they’re not skills, they’re dispositions. Fourth, disruption in traditional evidentiary practice. Who controls data, who gets access. Happening across four communities.

Educational data. Finance data 1980s-2000. Manufacturing similarly; retail 90s to 2000s; healthcare from the 90s. Education late to the game. Lots of pressure to make stuff happen really fast, coming from the technical sector, Gates. Educational data is fundamentally different, needs to be handled differently.

[We are special snowflakes! No, really.]

Book – Phil Piety Assessing the Educational Data Movement. [Note to self: get hold of this.] Design science and learning science.

Landscape on two dimensions – level (age, basically) vs scale – individuals to systems.

Academic/institutional analytics – post secondary/organisations. Institutional analysis, early warning systems. Stodgy offices in every university cranking out data. Office of Institutional Research. Open Academic Analytics Initiative. Hathi Trust. Lots to learn from.

Systemic/instructional improvement – No Child Left Behind. Data-driven decision making. Should be called test-driven decision making. High stakes tests. K12 organisations/systems. A lot of people, every K12 institution has a data person. Some synergy there. Practice of data use is ahead of research. Only a handful of people in the room here.

Learning analytics/ed data mining. They are about as similar as one society – like CSCL and the Journal of the Learning Sciences. HE/continuing, individual/cohort level. He wonders how distinct these communities are.

Learner analytics/personalisation, all by the individual. Driven by Gates, Dept of Educ, Khan Academy.

Lots of work in lots of places. Early warning system, brings together several people.

There are common features across these four communities. Rapid change. Boundary issues. Disruption in evidentiary practices. Visualisation, interpretation, culture – dashboards, APIs. Ethics, privacy, governance

Four factors that make all educational data unique: human/social creation; measurement imprecision (reliability issues); comparability challenges (validity = wicked problems); fragmentation, systems can’t talk to each other. Storm likely this Fall when every State tries to use short-term gains on high-stakes tests to assess teachers – will see a recapitulation of what happened in the 90s, with teachers moving rapidly in rank. The existence of Sharepoint proves that systems don’t talk to each other.

Common principles to unite. Nod to Roy Pea. It’s interdisciplinary, draws on six areas, but all of those draw from computer science, which shapes all four of these.

“Computer science is the root of all [pause] Evil!” – audience jumping in.

Learning occurs over different timescales, helps resolve appropriate purposes.

Digital fluidity – same artefacts serve different purposes. Historically, institutions said need some data, use it for one purpose. But people are using them, the data, and figuring something else. Marist University using Salesforce.com, cellphone records (!!), we know how much time students are spending in the Library – if >5h we know they’ll do better.

Any artifacts are adopted and adapted. Take NCLB. We know now it originated in the Clinton White House, a Democrat initiative. Bush saw a huge opportunity to break confidence in public schools by setting impossibly high standards, and it worked.

The data is a flashlight, shine light. Really, it’s a lens, and an imperfect one. Helps to get the right lens.

Summer of Learning, from badges work. Data from Chicago, huge insights to be had. Scaling up to 10 cities. Let’s think more broadly.

Questions

How many think it’d be nice if these met every two years?

(Almost nobody.)

Alyssa: I don’t know that I agree with the way you’ve categorised them. Compared to ICLS and CSCL. There’s a difference in what people are topically interested in. Differences we see in EDM and LA are in the approaches, ways they’re looking at problems. Comparable to educational technology, and learning sciences. Interested in similar things but different tools. Overlap, but conversations are not the same. I’d be hesitant to go to a 2y model, and lots of changes, so that’s too long.

I did blow through that part.

Art: One downside in trying to combine and grow bigger is conferences get so big you can’t find people you want to interact with. Once upon a time, a sociologist studied how big a community can be before you have a society. Estimate was 200 active participants before it works. (stops working?). ITS alternates with AIED. But each year there’s an event. Always ask, how are they different, have to scratch your head to find a reason.

The argument is more about finding ways to find commonalities, synergies.

Stephanie: The LA world, other people are here who don’t come from CS – unless claims physics, biology, etc. Lots of really interesting people coming out of the disciplines, especially ones who’ve dealt with data for a long time.

If I’ve made people mad, Phil wrote the first draft of the paper.

Phil Winne: Data often unreliable, in a big way. But all data is like that.

Does that truly characterise it all? In finance, there’s not much slop there. Educational data, yeah, it’s about the messiest data there is. No, that’s not true.

Phil: If all educ data has reliability issues, implication of working with EDM, or LA?

Should be careful. Nobody is saying let’s stop analysing that data. The train isn’t going to stop. We need to be careful, thoughtful.

Q: Data are not to blame, it’s what we expect from it. We imbue too much meaning in to the data. Asked for data, but they don’t want data, they want an answer to the question. Big steps between them.

Q2: Data so dirty, teachers are said to be teaching stuff they’re not, students in classes they’re not. Right now it’s so poorly collected.

Hendrik: From data competition. It’s not just panel discussion that shows the difference. EDM focused on model that predicts how successful they’ll be. Variety of topics, the topics increased a lot, divergence in the domains. LAK is more a hub to reach to educational people, EDM is more tech, data-mining driven. They can both exist.

I feel safe in arguing EDM and LA have more in common than there is with the other communities.

H: Yes, I agree.

Designing Pedagogical Interventions to Support Student Use of Learning Analytics. Alyssa Wise (Full Paper)

Same issue: how do we make information actionable, working with students.

From Simon Fraser University.

What do you picture when I say ‘learning analytics’? Some images – data; some about scripts and calculation; some visualisations. I think about the teacher, late at night (photo of a bearded guy with beanie on his head). What is going to affect things next day. And students, figuring out what it means, how are they doing, what changes do they need to make.

If LA to truly make a difference, have to design for impact on larger activity patterns of teachers and students. Links to Nancy’s talk on disruption, and what we’re disrupting. We could disrupt it for the positive.

If we don’t, many technologies never made a big impact. Pile of junk in the corner. [Nice, clean graphical slides – whole-screen photo with no overlay.]

Focus on day-to-day use of learning analytics. Because – best thing is to put data in the hands of students (shoutout to my talk yesterday). They are the most enabled to make immediate, local changes. Chance to activate metacognition. Empowerment not enslavement. Brings up democratisation of access to data. Cost of doing interpretation, student with data is only chance of one-to-one analysis.

Challenges and opportunities. Need to know the purpose, the learning design, to understand it. But students don’t necessarily understand that – and that’s critical for interpretation. But sharing why we’re doing what we’re doing would be good. More likely to engage in the ways we’re hoping. Not just why engage, but what does good engagement look like. Also transparency, rigidity of interpretation. Danger of optimising to what we measure.

Moving from learning analytics, to learning analytics interventions. Not how to calculate and produce the visualisations, but how we support people using them. We won’t understand if what we’re producing is good until people use it.

Many locally-contextualised questions: When to consult; Why; What do they mean, and what should they do; How does it all fit together with everything else.

The goal is an initial model for designing pedagogical interventions to support student use of LA. Two foundations, 3 processes, 4 principles.

Focus on Learning analytics, and learning activities – as an integrated unit. Not just reflecting afterwards. Integration. Use of analytics should be part of the design. Provides local context for sense-making.

2 conceptual questions – what metrics, and what do productive/unproductive patterns look like. Better to look ahead with this, rather than afterwards look at what data you have.

Practical questions – link learning goals, actions, analytics and make that clear.

Grounding – making this happen in a classroom. Example from e-Listening Research Project. Tracking data in online forum. What they do when they post, and how they attend to peers’ posts. Take that data and ground it in what’s important about discussions.

Set out clear guidelines and discussion about the purpose of online discussion. And about the instructor’s expectations for a productive process of engaging online. And then how the analytics provide indicators of those processes. So: exposed to the ideas of others; attend to a range of others’ ideas; percent of posts read. Takes junk scan data out (drops data where they just open a post and move on).
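[A sketch of the ‘percent of posts read’ indicator with a simple scan filter – field names and the dwell threshold are hypothetical:]

```python
import pandas as pd

MIN_DWELL_SECONDS = 10   # hypothetical cut-off for dropping open-and-move-on scans

# Click-level log: one row per post opening, with how long the post stayed open.
views = pd.DataFrame({
    "student": ["s1", "s1", "s1", "s2"],
    "post_id": [101, 102, 103, 101],
    "dwell_s": [45, 3, 120, 60],
})
total_posts = 10   # posts available in the forum

# Count only openings above the dwell threshold as genuine reads.
reads = views[views["dwell_s"] >= MIN_DWELL_SECONDS]
pct_read = reads.groupby("student")["post_id"].nunique() / total_posts * 100
print(pct_read)
```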

Guidelines for students. One is broad listening, show them analytics. Forum interface is interesting. Also % posts read; better to view a portion in depth than everything superficially.

Agency. The creepy test. Are we being honest? Would I feel creepy if someone did that to me? Working with them, not done to them. To avoid that, work with student agency. Establish personal goals, maybe microgoals or not adopt big ones. Have some authority in interpretation, provide human context, and decide actions.

Goal setting/reflection is part of analytics process too. Want individual goal-setting, no one way of success. Guidelines are nuanced, starting point for thinking what to do. Goal setting as explicit part of the activity. Online reflection journal in the LMS.

Then reflection – data informed. Have reference points to think what the data means. Personal goal, class one. Structured in. If the analytics are always there, consult too much, or ignore. So timeline is important. Weekly here. Assess goals themselves too.

Two other points – dialogue, and reference frame. The reference frame is what does it mean wrt theory, peers (easy and available, but not always the best), and vs individual goals. Dialogue – big problem for scalability, but important. Space for negotiation around interpretation. Takes us time to figure it out.

[battery ran out on laptop, lost slides]

Dialogue powerful, the interpretation isn’t unproblematic. Can have powerful dialogue. One teacher to many students isn’t scalable, but can do student:student – and that helps the agency too. How do we bring it in as part of the learning process.

[battery backup, back on very quickly]

Can get dialogic comments that illuminate the data – e.g. had to renew a visa so was out of town, hence low metrics.

Embed it as part of learning practices for students. Think about it now, not making good tools and applying them later. It’ll help us build better tools.

Questions

Simon: This should become a reference point for work going forward. Clearly priority on real time feedback to the students, to monitor themselves. Ethical issues, students feeling exposed by the visualisation? Or not providing ones you feel run the risk of that?

That’s implementation-specific. With the combination of the agency and all that, students didn’t have issues. But it wasn’t all the time, they got it on a weekly basis. First day of a month-long forum isn’t indicative. Work on the time window gives different results. Worried about real time, so had it go, here, after a time period. Didn’t have issues of exposure but that may be elsewhere.

Phil Winne: Bring analytics in to student practice should be at the centre, helps them improve. Think about what data can do, people have biased memories, they remember recent, not middle, etc. Psychology of memory, might shape types of data. When they have traces of things they’ve done, activities. (I mangled this, it was clearer when he said it!)

In Phil’s project, they review data, and set goals. Also project at Michigan, students look at activities and assess how they are likely to do, take action. Think before, and after, and become active processors.

Q: More about the visualisations and differences in interpretations. You gave an example, need for dialogue. What lines of thinking have come up?

You can have lots of different ones. The picture here is the interface. But we ask students to think about this in two perspectives. Available all the time. Think how are they doing. Their posts are lighter colour, can see if they are in one place or all over. Also take responsibility for the discussion as a whole, not just teacher.

Q: Different comparisons, peer comparisons used because easiest. Speaking to what students are seeing, what metrics.

The students who weren’t doing as well, some already know, but all felt badly and didn’t know how to change it. Understand how they’re not doing as well.

Q: Comparing to their own performance.

Yes. Everyone can’t be above average.

Daniel: Continue debate with Phil. Alyssa is a graduate of program I’m the chair of. I’m in to communally-regulated learning. I want them to think, talk about the visualisations. How much discourse about the learning is taking place?

One of the principles is dialogue. They’re both important. Don’t want talk without thinking. Need to think about individual, group, and the link between them. Helps see data in different ways. We’re asking students to have more and more, have dialogue, and dialogue about the dialogue. But it’s not either/or.

A Cognitive Processing Framework for Learning Analytics. Andrew Gibson, Kirsty Kitto, Jill Willis (Short Paper)

Kirsty talking. Emphasises the nature of the co-authorship: Andrew is a PhD student in comp sci, my background is physics, Jill is in the Faculty of Ed. Trying to go to the educational practitioners and find out what they want to know.

New to LA. A lot of dilemmas – aim to understand learners, but focuses on students poised to fail. Metrics to judge learning, but are they good proxies for learning? Organisational analytics easy (!) to implement, but not aimed at individual learner. Learner focussed ‘the holy grail’ – but hard/disruptive to implement.

For me, looking at LA as quality information that’s going to improve learning outcomes. Lots of educational theories about that. How can we improve our understanding of learning? Enjoying these presentations.

Trying to start from a dumb but established theory – Bloom’s taxonomy – and use that to inform data capture. Cognitive OPeration framework for Analytics (COPA). At many levels – outcomes, assessment items, learner completion of assessment tasks. Can we do it in a unified way?

Bloom’s Revised Taxonomy – set of verbs – Create, Evaluate, Analyse, Apply, Understand, Remember. And subcategory verbs. Lots of verbs!

New API, xAPI, has Subject Verb Object statements – so could have a direct map to Bloom with a suitable ontology.
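[A sketch of such a mapping – tallying xAPI-style statement verbs against Bloom’s revised categories; the verb table is a small illustrative subset, not a full ontology:]

```python
from collections import Counter

# Illustrative subset of a verb -> Bloom's revised taxonomy mapping; a real
# implementation would use a proper ontology of xAPI verbs.
BLOOM = {
    "recalled": "Remember", "defined": "Remember",
    "summarised": "Understand", "explained": "Understand",
    "applied": "Apply", "implemented": "Apply",
    "compared": "Analyse", "organised": "Analyse",
    "critiqued": "Evaluate", "judged": "Evaluate",
    "designed": "Create", "produced": "Create",
}

# xAPI-style (actor, verb, object) statements.
statements = [
    ("learner1", "explained", "photosynthesis"),
    ("learner1", "critiqued", "a peer's essay"),
    ("learner2", "designed", "an experiment"),
]

profile = Counter(BLOOM.get(verb, "Unmapped") for _, verb, _ in statements)
print(profile)
```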

Looking at one course, new. Australian sector changing. Australian Qualifications Framework (AQF) – all new courses have to be compliant. Set of learner competencies, guaranteed at each level of learning. Each unit has Course Learning Outcomes (CLOs), aggregated CLOs meet specific course outcomes at that level. So massive curriculum redesign effort to comply with this by 2015. Opportunity here – if get data capture within it, can go a long way.

This is a unit, compulsory science unit on science in context. Holistic introduction, where science is in social context. But for some reason that equates with learning skills for most academics – about writing, giving talks. So tensions. Had a lot of arguments about teaching skills or ideas.

The CLOs, huge documentation about where it fits in to the AQF structure. Short bits of text – which include verbs. So counted the verbs – e.g. how many times ‘understanding’ occurs. Map to Bloom. The CLOs are meant to map to the assessment tasks too. So can do the same thing there too. This unit has 3 major assessment tasks, aimed at different CLOs.

Once you do that, we find there’s a disconnect between the course and the assessment – the assessment has 20% for Creating, but the course only 4%. Evaluating is 33% for the course, but 11% for the assessment. Not even consistent in the documentation! A problem, at least at the QA level. We need to get more coherent, so that’s useful.
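[A sketch of the verb-count comparison – applying an illustrative Bloom verb map over placeholder CLO and assessment text:]

```python
import re
from collections import Counter

# Illustrative verb -> Bloom category map (a small subset).
BLOOM = {
    "describe": "Understand", "explain": "Understand",
    "apply": "Apply", "analyse": "Analyse",
    "evaluate": "Evaluate", "critique": "Evaluate",
    "create": "Create", "design": "Create",
}

def bloom_profile(text: str) -> dict:
    """Proportion of mapped verbs in each Bloom category."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(BLOOM[w] for w in words if w in BLOOM)
    total = sum(counts.values()) or 1
    return {cat: round(counts.get(cat, 0) / total, 2) for cat in sorted(set(BLOOM.values()))}

clo_text = "Students will explain scientific claims, evaluate evidence and design investigations."
assessment_text = "Describe the study, analyse the results and evaluate the conclusions."

print("CLOs:      ", bloom_profile(clo_text))
print("Assessment:", bloom_profile(assessment_text))
```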

Nice thing here. This was small scale, not automated. Bloom is well respected educationally. Data easy to understand and communicate. It’s the kind of analytics that educational professionals (are likely to) want. Can map consistently across domains – CLOs, assessment, individual learners, both formatively and summatively. Metacognitive aid.

Future plans – our documentation is coming out of our eyeballs. When we mark, have descriptors of what they’ve done, mapping that to the learner is possible. Could do it automatically for all units. Other cognitive models? This is just tagging, but could do e.g. distributional semantics to extract theme verbs rather than straight tagging. Really want to give the data back to the learner.

Questions

Q: Had conversations at NC State who wish for this, using Bloom. Lots of educators do want to use it, for their objectives.

Really surprised people hadn’t done it. Thought wow, this is really dumb.

Q: Straightforward, but actionable.

Hendrik: European qualification framework, have that already 5 years. Mandatory all courses should apply it. Also PhD thesis applying LA to map back to the EQF things, created a dashboard to show teachers. Recommend it. Did not apply Bloom.

Sounds useful.

Simon: Obvious next step, interesting to know what the course designer makes of it.

I can tell you, I’m the course designer for this unit, which is why I had access. It meant, last year, we were teaching a ‘workshop on writing’ – students hated that. So I could redesign it, very much in development. Now have more correlation. We only just started writing the unit this week (?). I know what I was thinking, did I put that in to documentation. I can do that! Haven’t done it yet.

Simon: A next step would be to try it with other people. Worry with any approach, it’s not a proxy for what it looks like. Take it to an educator who says, I’m just using different language. Simple is good. Seems promising approach.

The AQF has simple documentation. They are using those verbs, because they have to in the way the reporting is set up. At the assessment mapping, that’s more plausible. Would like to look at more distributional semantics approach there.

This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.
