More liveblogging from LAK13 conference – Thursday afternoon.
Issues, Challenges, and Lessons Learned When Scaling up a Learning Analytics Intervention
Steven Lonn, Stephen Aguilar, Stephanie Teasley
Idea is to scale it up – ‘on the road to scale’.
Working with STEM Academy, helping ‘at risk’ engineering students.
Originally, data came from Sakai, hand-cranked using Excel. Massaged it, made it into manageable reports for advisers to see how students were doing on each given assignment. Took assignment data from the gradebook to get a sense of how students were doing. Fine for the first year, but labour intensive. Want to automate.
Student Explorer. Not giving answers to advisers, but data to explore with students and have a conversation with them.
Hit a crossing point. As an academic unit, they had to work with ITS, who have a business leaning and couldn't always give them their whole attention. They wanted to hitchhike on an existing BI platform – BusinessObjects, with a new database, Extract, Transform and Load (ETL) process and presentation layer – but that had its own challenges.
BusinessObjects is used widely. They found some usability gaps between the previous Excel version and this one, even though it looks very similar. It had some new functionality initially: previously, advisers could only see the current view, but now they could see weekly snapshots, so they could compare current status with history. One problem: in Excel, clicking on the current view replaced the window, and then you clicked back; but the BO system kept opening a new tab in the browser – quickly problematic for advisers. Another: a paged view for about 500 students, where it was hard to see what page you're on – only one little indicator that said "1 / 1+".

Also calculation gaps. Lots of nuance and edge cases with grades. Example of a large chemistry class that's shared, so the LMS has four sections of 100 students each. In Excel, they massaged this by hand with a long Excel formula. In the automated process they couldn't do that – so a blank counts as null, not zero. There's functionality in the gradebook where you can have an assignment graded that doesn't count towards the final grade – so you might put in attendance, which is helpful for advisers to know.

Also access gaps: to get into the system you need special authentication – including two-factor authentication, which many advisers didn't have. It was also delivered in an IFRAME, but that didn't work, so they got 'Invalid session' errors periodically.

Performance gaps: they want to work at scale. Polled the data, polled everything – which crashed the LMS for all 12,000 people. Need to load test.

Automation gaps: some data is still manually massaged; they have to send a manual CSV file with cohort, students and advisers.
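The null-versus-zero distinction matters a lot when computing a running grade average. A minimal sketch of the problem (this is a hypothetical illustration, not the Student Explorer code – function and field names are invented):

```python
# Hypothetical illustration of the null-vs-zero calculation gap:
# a blank gradebook cell may mean "not yet submitted" (skip it)
# or "missed, scored zero" (count it) -- the two readings give
# very different running averages for the same student.

def average_skip_blanks(scores):
    """Treat blank (None) entries as null: ignore them entirely."""
    graded = [s for s in scores if s is not None]
    return sum(graded) / len(graded) if graded else None

def average_blanks_as_zero(scores):
    """Treat blank (None) entries as a score of zero."""
    return sum(s if s is not None else 0 for s in scores) / len(scores)

gradebook = [90, 80, None, 70]  # one assignment left blank

print(average_skip_blanks(gradebook))     # 80.0 -- student looks fine
print(average_blanks_as_zero(gradebook))  # 60.0 -- student looks at risk
```

An automated ETL pipeline has to pick one rule per assignment type, which is exactly the judgement the hand-built Excel formula used to encode.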
Now taking an alternate route. BO was helpful, but not all the way there. Partnered with another campus organisation, using a similar UI but with some tweaks. Will leverage an existing system used by 75% of advisers. IT have infrastructure that works for that; will load the same database and ETL layer into another tool.
Hope lessons from our road will help your road!
Stephanie: Details of the dashboard were presented at LAK last year, so you can see them in last year's paper.
Chris: Important to talk about taking academic work and rolling it out campus-wide. The academic labs are producing and the institution is consuming. One challenge we've had is that our IT organisation expects research-lab stuff to be a bit crappy – doesn't work, not load balanced – but they hire full-time people to work with vendors whose products are similarly problematic. How did they view what you did?
Stephanie: We had the opposite. The Provost is very interested in analytics, and told the IT organisation to do an analytics project – the IT people came to them to find something.
Chair: Have you disseminated this? You have a STEM programme – what was the feedback from the instructors?
Stephanie: The academic advisers have this data, not the instructors. It’s not a very intelligent system. We trust the advisers to have the intelligence to match to resources on campus, things we can connect you to. Do want to scale up with a dashboard for instructors. But some say they don’t need it. We have to change some of that thinking.
An evaluation of policy frameworks for addressing ethical considerations in learning analytics
Paul Prinsloo, Sharon Slade
HE institutions have collected student data for a long time. Have policies in place. But they’re not keeping pace with learning analytics. So looking at how they should develop.
UNISA very large (?300k students), South Africa. OU in UK, 250k students.
Who benefits from LA, and under what conditions? Need to involve all stakeholders. Students should be active agents in the data process.
Issues around consent, de-identification and opting out. With massive sets of data, people think it poses no risk. By registering, students provide implicit permission. But when it triggers customisation, often based on unseen criteria, they have a right to know how it's being used. There are few reasons not to tell students: there's a growing need for a system of informed consent. Students can opt out; make the implications of that choice clear to them. Consent needs to be refreshed regularly. Need trust and data protection. Also a responsibility for students to provide correct and current data.
Vulnerability and harm. To students and other stakeholders. Implicit and explicit discrimination – support based on ?random personal characteristics. Validity and impact of labels – being treated as different from others. Potential for stereotyping; student risk profiles. Identity is transient. It should be transparent and regularly reviewed. Need to let students re-label themselves. Ensure analyses are robust and run on suitable datasets. Transparency reduces the potential for allegations of vulnerability and harm. But there could be a risk of students abusing the system. What about students providing incorrect information – perhaps to get more support? Many policies say registration can be terminated if data is wrong.
Collection, analyses, access, storage. Issues around data from outside the LMS, particularly external sources, e.g. social network services. Need explicit consent, and extra responsibility around that data. Need to take care not to amplify error in the aggregation process. Avoid bias and stereotyping; acknowledge the incomplete and dynamic nature of identity. Risk of perpetuating bias and discrimination. Students have a right to be assured their data is protected against wrongful access. Students should have an overview of the stakeholders granted access to their datasets. Ensure effective governance: the most important step is to create a comprehensive data governance structure. Need for resource allocation. Understand the drivers for success – e.g. an overriding policy of targeted recruitment and retention – and decide prioritisation; make it transparent.
Data policies – reviewed all policies at the OU and UNISA. 26 policies! Including research, data protection, records management and some bizarre ones. Who benefits? No mention at UNISA; the focus is on protection (largely of UNISA). At the OU, shared responsibility and a Charter with nice-sounding aspirations. The Alumni office has responsibility to update alumni and get donations – details held indefinitely.
Consent and de-identification. UNISA – they may monitor and surveil – but that's about employees, not students. Students as consumers. They don't have to be told; they may only opt out as research subjects. If it's research, anonymity and an opt-out possibility are mandatory. OU data protection disclosure = explicit consent; data may be used to provide specific support.
Vulnerability and harm. UNISA – no mention of tailoring support. The disciplinary code about materially false information concerns the university, not students; it doesn't define misconduct around false personal information. The OU highlights empowering students at key points, using data to personalise. Students are responsible for ensuring their data is correct. Doesn't discuss detail, or who makes decisions about the 'best interests of the student'.
Collection, analyses, access and storage. Much on general data processing, but at UNISA it's only implied for student data. The OU has a clear data protection policy. Covers data transfer on a need-to-know basis (but not provision), protected from unauthorised access. Students are made aware they can get access to their own data. Neither mentions the methodologies used to manipulate data, nor how students could learn about them.
Conclusion – policy frameworks focus largely on research and academic data. They have guidelines on data, with links to legislation. Both policies have the institution as the main role player. The approach to LA is not very student-centric: students appear primarily as objects of analysis rather than as partners. Students' only obligation is to keep their data correct and current. Relatively simple models for focusing student support. There's a current OU high-level report on learning analytics, but it focuses on things other than student participation.
Q: In education, I agree with a lot of this; I like the analysis. HEIs often deal with training – we need professionals to serve our community: doctors, engineers, lawyers. We have 50 seats in our medical college and 900 who want to come in. Is it appropriate to allow them to opt out of technologies that can help them learn or reduce workload? We need to scale.
Really interesting – hadn't considered that issue. We take students without educational qualifications (necessarily). We should be more transparent about what we're doing with student information. People should have the right to find out more about what's going on.
Aggregating Social and Usage Datasets for Learning Analytics: Data-oriented Challenges
Katja Niemann, Giannis Stoitsis, Georgis Chinis, Nikos Manouselis, Martin Wolpers
Katja presenting. Talking about the data we collect about learners, rather than the learners themselves.
We collect a lot of usage data and social data, normally stored in different formats, which makes it hard to map them together to create larger databases. The Open Discovery Space portal is for teachers to connect, bringing together other portals. We don't just have learning object metadata, but also portal data on social and usage activity. Many formats – the CAM format, NSDL formats, the Organic.Edunet format.
The first study investigated 3 datasets, trying to map them to a single dataset. First was the MACE portal – learning objects in architecture, stored in CAM; data like clicks, tagging and searching. Another dataset had learning objects from schools. Organic.Edunet, from organic agriculture, is in its own custom format.
Much detail on specific data mapping, and how different formats were transformed in to others. Decisions to be made, not simple mechanical mapping.
Created two merged datasets – A with user information, and B with all events from all datasets, but no user information.
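The mapping step could look roughly like this sketch – the field names are invented for illustration and do not reflect the real CAM, NSDL or Organic.Edunet schemas:

```python
# Hypothetical sketch of mapping heterogeneous usage-event records
# onto one common schema, then merging into dataset A (events with
# user information) and dataset B (all events, user info stripped).
# All field names here are invented for illustration.

def from_cam_like(record):
    """Map a CAM-style event record to the common schema."""
    return {
        "user": record.get("userId"),      # may be absent in some sources
        "object_id": record["resource"],
        "action": record["eventType"],     # e.g. 'click', 'tag', 'search'
        "timestamp": record["date"],
    }

def from_custom_portal(record):
    """Map a custom portal format; same data, differently named fields."""
    return {
        "user": record.get("member"),
        "object_id": record["item"],
        "action": record["verb"],
        "timestamp": record["when"],
    }

def merge(events):
    """Split mapped events into dataset A (user known) and B (anonymous)."""
    a = [e for e in events if e["user"] is not None]
    b = [{k: v for k, v in e.items() if k != "user"} for e in events]
    return a, b

events = [
    from_cam_like({"userId": "u1", "resource": "obj9",
                   "eventType": "tag", "date": "2013-04-11"}),
    from_custom_portal({"item": "obj9", "verb": "view",
                        "when": "2013-04-11"}),  # no user field at all
]
dataset_a, dataset_b = merge(events)
```

Each per-format mapper encodes exactly the kind of non-mechanical decisions the talk mentioned: which source field means "user", what counts as an "action", and what to do when a field is simply absent.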
There’s no one-size-fits-all solution. You have to investigate the domain. We created another new format, designed to contain the other data formats so they can all be mapped in – it’s less work. In future we want to go deeper and do some integration. If two objects are the same, they have different IDs on different portals; mapping and merging is only really helpful if it can handle this. Want to experiment with the datasets.
Chair: You have different datasets. How often is the harvesting process done, where does data reside?
Once a day, but incremental, only harvest new information. Have one central repository.
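A minimal sketch of what an incremental harvest like that might do – tracking a "last harvested" watermark so only newer records are fetched (hypothetical, not the actual harvester):

```python
# Hypothetical sketch of a once-a-day incremental harvest: only
# records updated since the last harvest are appended to the one
# central repository, and the watermark is advanced.

def incremental_harvest(source_records, repository, last_harvest):
    """Append records newer than last_harvest; return the new watermark."""
    new = [r for r in source_records if r["updated"] > last_harvest]
    repository.extend(new)
    return max((r["updated"] for r in new), default=last_harvest)

repo = []
source = [{"id": 1, "updated": 1}, {"id": 2, "updated": 5}]

mark = incremental_harvest(source, repo, last_harvest=0)   # harvests both
mark = incremental_harvest(source, repo, last_harvest=mark)  # nothing new
```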
Chair: How can this be incorporated for e.g. educators, to find relevant information, or for creators to market their resources?
We have a lot of very interesting LA tools, but you can only use them with data in one specific format. The challenge is to create one format so you can use all the tools on all the datasets. This is also an opportunity for educators.
This work by Doug Clow is copyright but licensed under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.