Liveblog notes from the Sixth International Conference on Learning Analytics and Knowledge (LAK16), 25-29 April 2016, University of Edinburgh. Monday is the first day of pre-conference workshops, and I’m in Cross-LAK: Workshop on learning analytics across physical and digital spaces.
The workshop is in St Leonard’s Hall, University of Edinburgh, which is an amazing building: a Scots baronial former school, St. Trinnean’s School for Girls, the inspiration for the fictional St Trinian’s – and this workshop is in a room named after it.
Roberto Martinez-Maldonado, Davinia Hernandez-Leo, Abelardo Pardo, Dan Suthers, Kirsty Kitto, Sven Charleer, Naif Aljohani and Hiroaki Ogata
Roberto welcomes everyone. Quote: “Online learning doesn’t happen online.” – Peter Goodyear.
Four themes. Three aims: define the research gap, guidelines on principles for R&D, dissemination of R&D across spaces.
30-second introductions, with pre-prepared single PowerPoint slides, on auto-advance. Kept on our toes by pseudo-random order …
Panel question: Why is it relevant for the LAK community to explore students’ activity in blended learning scenarios where they can interact in diverse digital and physical spaces?
Panelists: Cynthia D’Angelo, Davinia Hernandez-Leo, Abelardo Pardo and Naif Aljohani.
Research focused on classrooms. Online games. Games alone need curriculum support too. So interested in combining teachers’ understanding of the classroom, administrative things; they are best placed to help interpret digital outcomes. They have planned the curriculum. How to make decisions about what to do next? Teachers are best placed to do that.
Thinking about using analytics to help them make better decisions. Next Generation Science Standards in the US, tech-based assessments to gauge progress towards them. Delivered online, want to deliver tools to help teachers make decisions based on that data. Another driver – main reason – NSF project looking at collaboration among students in classrooms. Groups of 3 working on shared iPads. Lots of work around CSCL, want to take advantage of that data. Get more data by getting kids to type to each other – but the problem is that’s not a natural way for students to collaborate. They want to talk to each other. So partnering with SRI speech researchers, to capture the speech data. Not looking at what they’re saying, more how much talking, how much talking over each other, pitch, tone. Half-way through project. Goal at end is a prototype device that’d collect their speech, log data, as they work through the activity. Indicators to give teachers during the class. 30 students, 10 groups of 3, hard to know what’s happening everywhere in those spaces, so aim to give them better information about that.
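The non-content indicators described here (how much each student talks, how much talking over each other) can be computed from diarised speaker turns without any speech recognition. A minimal sketch in Python, assuming turns arrive as (speaker, start, end) tuples in seconds; the function and feature names are my own, not the project’s:

```python
from itertools import combinations

def speech_features(turns):
    """Per-speaker talk time and total cross-speaker overlap (seconds)
    from diarised turns given as (speaker, start, end) tuples."""
    talk_time = {}
    for speaker, start, end in turns:
        talk_time[speaker] = talk_time.get(speaker, 0.0) + (end - start)
    # Overlap: time where turns of *different* speakers intersect
    # ("talking over each other").
    overlap = 0.0
    for (sp1, s1, e1), (sp2, s2, e2) in combinations(turns, 2):
        if sp1 != sp2:
            overlap += max(0.0, min(e1, e2) - max(s1, s2))
    return {"talk_time": talk_time, "overlap": overlap}

# One group of three: A talks 0-5s, B talks over A from 4s, C barely speaks.
feats = speech_features([("A", 0, 5), ("B", 4, 8), ("C", 10, 12)])
# feats["talk_time"] -> {"A": 5.0, "B": 4.0, "C": 2.0}; feats["overlap"] -> 1.0
```

Pitch and tone would need signal processing, but these timing features alone already distinguish a silent group from an animated one.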
Universitat Pompeu Fabra, Barcelona
A hard question, with many possible answers. Wordle-type analysis of the titles of the papers. Why cross-analysis techniques? These reflect a less partial view of reality.
Support / understanding interaction. For evaluation, monitoring, assessment, predicting performance. Example working on the physical space, teacher monitoring what students are doing in the city. Awareness, reflection. Self-regulation, orchestration, regulation. Recommendations. Seamless transitions / mutual influence between spaces. And re-design.
Many challenges – the four from the workshop. Plus right questions for learning analytics. Needs from diverse stakeholders. Tension between generic and ad-hoc solutions.
University of Sydney
Credit to others – including his kids, who help him by asking why, why, which forces him to think more about the question. Many of you get data already. I’m asking you to focus on the final objective – improving the student experience. Why do we have to work with data, expand across spaces? To improve the learning experience for the students.
Learning occurs in multiple locations, so need to look in multiple places. Sometimes on a bus, or a field trip, or on the way in to campus. We need to widen the scenario. It may happen when I’m preparing material. Keep the focus on trying to improve. Another trick, use a concrete scenario.
Example at Sydney – beach surveying course. I have no idea about beaches; it made me think. Master’s course, as part of assessment, do technical measurements on a beach. It pushes me out of my comfort zone, thinking about the data sources. We send them out, we don’t know what they do. They come back and do a poster. Could give a lot of details about what an ideal student would do. If you had a lot of data about their activities, you could give them feedback as if you were there. So e.g. weather, location, field notes, pictures, FitBit? Possible for just a field trip. How does this translate to your institution? Maybe you don’t have such nice beaches as we do in Sydney.
Try to answer the question, given I have these data, how do I make it better for the students?
King Abdulaziz University, Jeddah
Credit to Erik Duval for inspiration about student-centred analytics, sad to hear he has died.
MOOCs, data from Blackboard. Moved to academic analytics, learning analytics. In my PhD, looked at students with mobiles: how do we make sense of that data to have a more comprehensive view?
Q (?Sven): The beach scenario, we did a similar activity with WeSPOT. Students could use the data too, to give argumentation. Teacher can ask why did you get that result, student can show how they came to it. If I’m a student, I can look at previous traces, and their results – see a nice result, and find out how someone got there. Useful for students too.
Abelardo: I’m a bit vague, abstract. Suppose beach trip is not masters students, but on a MOOC. So not a prospect of scaling 1:1 – that’s very good, tutor feedback in detail. If we generalise that scenario, let’s lift the restriction on the numbers, look at 30,000, 50,000. Here at Edinburgh a MOOC on physical exercise. Could have a similar conversation about data, try to scale it to a MOOC of 50k – can detect patterns, combining multiple sources. Less a one-on-one conversation, but like that. This 1:1 interaction, all academics say that’s the ultimate goal. We shouldn’t forget that. Powerful view to say, can’t talk to all 1:1, but what alternatives do I have?
Q (Jeff Grann): The challenges to the Cross-LAK vision. Many have to collaborate in new ways, to integrate multiple sources. How do you foster those collaborations? What challenges have you seen?
Cynthia: Work with speech researchers was originally a struggle, different ideas about a good research question. Have to find right partners. Bigger issue about setting aside enough time in the beginning to talk through the important issues so you’re not talking past each other. Challenge of multi-modal data, synchronising, talking to teachers – there’s just tables everywhere. It’s been a real struggle getting equipment, cables, make enough time to set up. Speech data, harder with headsets, have to register with the system with baseline data. The more complicated the data, it’s multiplying the effort with each data source. Three students, stereo mic, log data, ten groups. It’s not insignificant, those challenges.
Q: Thought of looking into other disciplines? I’m working with neuroscientists. Gathering huge amounts of data. Also facing the problem of diversity of different types of data, struggling with the same types of data. Working on metadata descriptions, standards. Big challenge, could learn from that.
Cynthia: Also have “oh, we have loads of speech data, wanna analyse it?” but without knowing more about it, it’s hard to analyse. It’s not that more data is better; better data is better.
Q: Balancing generic approaches against ad-hoc. Think about institutional governance, sustain at a broader level, connect parts of institution. Not just within a given study or pilot, but how the institution works broadly. What do you do at your institution?
Davinia: Can really depend on the tool. Right now, the sophisticated tools are mostly ad-hoc. Real scenarios for learning are very complex.
Abelardo: At Sydney, you have to have this face-to-face interaction at different units. So ICT support. We have a BI unit, in charge of providing data warehousing – not as powerful as we might like. Typically also interaction with innovation unit, connects with your position. Have to articulate those units of collaboration. Challenge to convey a unifying vision. BI will know about reports, nice PDFs for Vice Chancellors. Need to modulate it, looking for more comprehensive data collection – need to negotiate. The innovation unit has to serve as a bridge. ICT units come from point of view, we have sensors, can get data – you have to articulate why you want it, and that they’re not driving, it’s a team effort. Need data governance. From above, vision of the institution. A data custodian, someone who says we have rich dataset, make sure everyone access in the right way with right permissions for right purpose. Some centralisation, ethics. Insert intermediate layer between the field and what the institution is providing.
Naif: Same problem in my university. Using data, student privacy, big brother concerns. Multidisciplinary. Have Blackboard data. ICT have other data. Use of devices. We gather a team, bring a committee representing each Deanship, talk to them, get the data from different parties. All say: student privacy, have to ask opt-in/out. At Southampton, UK, tried to do an experiment; were told to come back next year and offer a research opt-out. A challenge. Prototypes, fake data with dashboards and visualisations, show them what they would get if you do share your data. Then it’s the data itself, from a technical perspective. Many techniques to convert the data from semi- and un-structured to structured, using semantic web techniques and others. Building the team: there are challenges.
Q: Making solutions not general statements. Thinking on Abelardo’s scenario, students wearing special devices, taking data from the beach. But that would be very ad hoc. I was thinking on connecting data in general to the learning outcomes in context. Connection between learning style and learning analytics.
Abelardo: What really creates a powerful learning design is thinking about your objectives. That helps you select data sources. Systematically in the learning analytics space, connection to the design processes. Something that will get us closer to generic solutions. Also, right to mention, your own design, you are hierarchical and follow the objectives, but link to the data. You see the connections for that scenario. From top-down is the most useful.
Cynthia: Other work I do, with Carnegie Foundation, foundational math for college students. Curriculum all online, although students in classroom. Data from online learning system, aim to help teachers. Data mining, which variables are more predictive of good outcomes. Designed prototype teacher dashboard – four key variables, they select which they want to be part of their overall metric, sliders to change thresholds: which students are fine, which flag for troubles. Say not assigning for homework, could turn off that variable. So can weight it differently. Take what we’d seen from the bigger dataset, and they can tweak it to their specialised situation. Trying out next year.
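The dashboard described above is essentially a teacher-weighted score over a few predictive variables, with sliders for weights and a threshold for flagging. A hypothetical sketch; the variable names, weights, and threshold below are illustrative, not taken from the Carnegie project:

```python
def flag_students(students, weights, threshold):
    """Flag students whose weighted combination of indicator values
    falls below a teacher-set threshold. Setting a weight to 0
    'turns off' that variable (e.g. homework not assigned)."""
    total = sum(weights.values()) or 1.0
    flagged = []
    for name, indicators in students.items():
        score = sum(weights[v] * indicators[v] for v in weights) / total
        if score < threshold:
            flagged.append(name)
    return flagged

students = {
    "ana": {"logins": 0.9, "homework": 0.2, "quiz": 0.8},
    "ben": {"logins": 0.3, "homework": 0.1, "quiz": 0.2},
}
weights = {"logins": 1.0, "homework": 0.0, "quiz": 2.0}  # homework turned off
flag_students(students, weights, threshold=0.5)  # -> ["ben"]
```

The point of the sliders is exactly this re-weighting: the model learned from the large dataset supplies defaults, and the teacher tunes them to their own class.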
Naif: Course-based learning analytics. Data from Blackboard, some teachers don’t use forum discussion, so need to visualise that data in their dashboard. Avoid problem of information overload. We designed a system, teacher clicks indicators, and those communicate to the student, in this course I’m interested in the forum, time spent online. Then provide stats, visualisation, written feedback. Good that teachers use this tool to help in their courses, not one size fits all. Customised system for each teacher.
Q: Glad Fitbit mentioned, that’s our approach at OUNL in these experiments with 10 participants. Came to witness how challenging it is to find the set of features to look at, find common language to combine data. Want to ask, do you see a difference about collecting data, exploring correlations – e.g. number of steps and performance – that are descriptive of what happened in the past, and the predictive ones for the future. Bringing back example of coast surveys – predict how many people are going to come to the beach given the weather condition, waves etc. Do you think it’s more valuable to have predictions about the future, or information about the past?
Davinia: I think predictive is very complicated, so many factors.
Abelardo: Two parts. First, the Fitbit. I placed myself in shoes of the academic – my learning outcomes drive me to think the Fitbit might be suitable. I might have a different outcome or scenario where Fitbit doesn’t make sense. Have to come to terms with this. I want to know how many steps on the beach, not the weekend. Second, predictive. Some more useful than others. Work on blended learning behaviour, these are models, Ok results. But some other studies, physical exercise, see correlations not that strong. Probably need to go back to the drawing board. May be wrong question of the wrong data source.
Q: Two questions, one more specific. Cynthia’s project on collaborative learning, speech recognition. How much of the data you collect are you able to separate into chit-chat versus fruitful collaborative speech? More data is not better data. Second question, Abelardo’s example, a great example. A lot to collect data about students’ physical engagement. That doesn’t mean cognitive engagement when the student is on the beach. How much of the data you collect can you relate to cognitive engagement? I believe there’s great value in physical engagement; it’s a prerequisite for cognitive engagement.
Cynthia: Good question, figuring out off-task. To bound the problem, designed short task activities, about half an hour. Designed so the students have to talk to each other to solve the problem, coordinate their answer. Also an issue, many students off-task, or not collaborating, or working silently, or chatting about the weekend. Make sure the right dataset to train on. ML to see whether collaborating or not – what if we get no bad collaboration? Turns out that wasn’t a problem. [laughter] Get a range of data, get all the different things so the system can differentiate talking off-task, and talking and collaborating. Working on it, but seems it can distinguish. Have a scale, high-level are Ok but intermediate hard to distinguish. Not doing speech recognition. That would be harder. If we had that, it would be easier to do collaborating/not, but concern using this in school, privacy issue in collecting what they’re saying. So trying just the speech action data – who’s talking, who’s talking over whom, etc. If those end up not being enough we’ll add in speech recognition as a last-ditch effort. But looks like we won’t need that.
Q: Doing this at Sydney with interactive tabletops. Need a person or speech recogniser to [classify it]. But can correlate it with e.g. physical actions, then you can construct the idea. Don’t have to tell everything from one channel. Better than having just one observer. Build the puzzle.
Cynthia: If they’re talking and doing the thing, they’re working, if they’re talking and [not using the device] they’re usually off task.
Abelardo: We have to pay attention to the gap between the engagement the data relays to us and what’s really happening. Have to be aware of that dichotomy. You can have no data at all, just the report of a fantastic beach trip. Guess they did a perfect job. Other extreme, very comprehensive dataset, make the assessment automatically. Researching in between, which is very powerful. Use it as a conversation starter: I know you did this. There is an interesting and fertile space between completely automatic decision-making and lack of data. With very comprehensive data, you can have a really good expert conversation; the data will help.
Q: Learning doesn’t happen only during university or schools, we talk about lifelong learning. What about ownership of the data? Important that learners own their data?
Davinia: We all agree, the owner should be the learner. We should approach LA taking into account the ethical perspective; the owner is the learner, at all levels, not only lifelong learning.
David Spikol: Lots of things happen in off-task activities. Thinking of technologist with philosophy of tech lock-down. Sometimes we focus activities on the dataset, not the learning activities. Lots of the activities we design don’t translate to real classrooms. Particularly creative learning, the off-task might be the solution that separates a genius from a poor performer. Our systems have to grapple with that. How do we design serendipity back? We’re driven by performance metrics, pass rates. That’s fine, maybe the 3-5% we keep it’s really important. But Davinia’s concept, how do we design better experiences? Question of how we design them to reflect what really happens with learning, it is a creative process, we can’t reduce it entirely to a dataset. A challenge. We all see value in it, but how do we keep serendipity in? I like data! And make experiments to make situation where learners forced to do a certain thing you hope can be used. But maybe start with keeping learning as dynamic and wonderful as it is.
Cynthia: Not that off-task is bad. Certain amount of off-task is good for collaboration, cohesion, morale. Issue for teacher if spend 5 minutes off task rather than 30 seconds. Context, teacher knowing this group, it’s fine off task for a bit because that’s normal. But when they’ve completely stopped working, that’s a problem.
David: A good teacher knows that.
Cynthia: Yes. Provide info to teacher, they make a decision based on what they already know.
Like lightning presentations – 2 minutes for each paper! Full list and full text on the Cross-LAK microsite.
- Visual Learning Pulse – Fitbit, RescueTime, xAPI, Activity Rating Tool – rating productivity. Machine learning to find where flow is maximised; feedback via traffic lights. First experiment N=8, next N=10. Challenging to use them all. Need a common representation in a Learning Record Store. Non-sparse data like heart rate, and sparse/randomly-spaced data. Still no meaningful results.
- Comprehensive Learner Record – Jeff Grann. Lack of this is an issue, so proposal for competency data model shared across stakeholders.
- How can technology integrate monitoring into the orchestration of across-spaces learning situations? Previous work: GLUEPS-AR, helping teachers deploy designs. GLUE!-CAS, an architecture. GLIMPSE: info structured by learning designs.
- Orchestrating C21st Learning Ecosystems. NTNU Trondheim. Visualise video lecture attention. What analytics should we collect to understand and reconstruct student experiences. Design-based research.
- Profiling high-achieving students for e-book-based learning analytics. How to analyse the logs?
- Learning activity features of high performance students. Kyushu University. Determine which students have good results. SVM. Turned out, 15 classes, presence in class was most crucial.
- Automatic generation of personalised review materials based on across-learning-system analysis. Combining LMS, ebook logs, lecture materials. 41pp of data to 10pp.
- Flipped-classroom example. Learning design approach. Davinia and Abelardo.
- Towards a distributed framework to analyze multimodal data. RPi, cameras, mics, sense everything in a classroom or experimental setting. Publish/subscribe model.
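Several of these lightning papers hinge on the same plumbing problem: getting heterogeneous streams (dense sensor data like heart rate, sparse events like activity ratings) into one Learning Record Store. A minimal sketch of normalising both into xAPI-style statements; the verb and extension URIs below are illustrative placeholders, not taken from any of the papers:

```python
import json
from datetime import datetime, timezone

def to_statement(actor, verb_id, object_id, value, ts):
    """Wrap a single observation as a minimal xAPI-style statement
    (actor, verb, object, result, timestamp)."""
    return {
        "actor": {"mbox": f"mailto:{actor}"},
        "verb": {"id": verb_id},
        "object": {"id": object_id},
        "result": {"extensions": {"http://example.org/ext/value": value}},
        "timestamp": ts.isoformat(),
    }

now = datetime(2016, 4, 25, 9, 0, tzinfo=timezone.utc)
# Dense stream: one statement per heart-rate sample.
hr = to_statement("student@example.org",
                  "http://example.org/verbs/measured",
                  "http://example.org/sensors/heart-rate", 72, now)
# Sparse stream: an occasional self-rated productivity score.
rating = to_statement("student@example.org",
                      "http://example.org/verbs/rated",
                      "http://example.org/tools/activity-rating", 4, now)
print(json.dumps(hr))
```

Once both streams share this shape, the LRS can be queried uniformly by actor and time window, which is what cross-source analysis needs.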
Hands-on Activity 1
Small groups. Develop a use case definition. Discuss scenarios and challenges: What is going to be feasible in five years? Want exemplars describing the scenario and the challenges.
Many different contexts. We all capture things online quite easily, but it’s harder to capture things outside the classroom. Lots of discussion about ethics, student perceptions, staff perceptions. Contrast with data disclosed to Facebook. Tension perhaps around grades, judging. Issue with parents of students, turning them into learners. One example: can’t use ‘chance of succeeding’ for individuals … hard to explain.
What’s our use case?
The technical barriers are overcome, we can collect the data across physical and virtual spaces, aggregate per cohort, or ideally per student. Good scenario is design/redesign of courses, that’s easier and more tractable. So for example, if you can show people do better who attend, or do things in physical environments, or not, maybe that can help decision-making. Teacher can see what’s working and what’s useful for learning, and what’s not.
Challenges: Learners may not see the benefit directly to them, but the next cohort. If it’s quite intrusive, e.g. installing a browser plugin, that’s another level of activity. Will need to understand why, what their benefit is.
Data literacy issue.
Want to make more sense than one person interacting with a forum.
Some places are starting to get hold of physical location data, using attendance check-in, students generally doing it. Who’s where, when, with whom. Wondering what we can make of that data. Are they meeting and maybe learning in other contexts? We don’t know what they’re doing.
Far future – capture speech! In Library spaces, we can monitor noise levels – just get a noise alert to respond as appropriate rather than walking around monitoring it. Also capturing assessment types – e.g. poster printing task, can predict load on the poster printing services, so can ramp up staff ahead of time. Not necessarily improving learning, but improving the student experience.
A lot of collecting data for teachers, administrators, watching all this data. Some more interested in dashboards for students. More immediate impact – we hope! Maybe in a student counselling context – but they may not trust it (it may not be trustworthy), also issue with parents complaining about reducing them to a number.
Use case: Blended learning. Have solved the data availability problem. Make the data available to the student. (Hard to come up with this – lots of things that might be useful, but we don’t know whether they are?) They’re carrying something that captures tracking data. And some meaningful metadata about what they should be doing. Digitally, we’re almost there. Link to some physical data.
Challenges: Technical ones to make it happen. Usability, for the user. Privacy and ethics, and perceptions. Not knowing whether it will improve.
Improvements: We know check-in data capture is feasible; can improve with some information about what they’re doing. Maybe not typing it in, how to analyse multiple ways; perhaps pre-defined lists. Perhaps prompting from a system knowing what tasks they’re likely to be doing (from what courses they’re on, time point, assessments done and coming up), also working from who they’re working with. Point them towards helpful resources – e.g. if you’re using a bunsen burner, the resources about using that safely are to hand. Helps with buy-in. ‘Nudging’ – maybe you’re together with your friends in the coffee shop, system says perhaps you should talk about that collaborative essay you should be doing?
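The prompting idea above is essentially a lookup from known context (course, location, who’s present) to a suggested resource or nudge. A toy sketch, with entirely made-up course and resource names:

```python
def suggest(context, rules):
    """Return resources whose rule matches everything it specifies
    about the current context (course, location, companions...)."""
    return [resource for condition, resource in rules
            if all(context.get(k) == v for k, v in condition.items())]

rules = [
    ({"course": "chem101", "location": "lab"},
     "Bunsen burner safety guide"),
    ({"location": "coffee shop", "with_peers": True},
     "Nudge: discuss the collaborative essay"),
]
suggest({"course": "chem101", "location": "lab"}, rules)
# -> ["Bunsen burner safety guide"]
```

A real system would infer the context from check-ins, timetables, and upcoming assessments rather than receive it directly, but the matching logic is the same.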
Drone hovering over you, like a teacher circulating in a classroom! Slightly nightmare scenario. Surveillance-enforced curfew…
Specialist space, e.g. lab, if you use it when supervised or with specialist support, vs if you’re in there by yourself.
Spider-Man principle: with great data and analytical power comes great responsibility. (Fun follow-up discussion: what is the learning analytics equivalent of Uncle Ben dying? Quite scary.)
Pitch to your institution. A set of guidelines, that’d help you deploy data across contexts, to help learning.
Be concrete. Ok if they are very specific.
Time commitment is an issue, cost to put this in place.
Guideline 1: Strategy & top-level buy in. You should have strategic buy-in across the institution, and be aware of the political context. Senior-level support, in terms of finance, resource, policy and direction of the organisation need to be in place and aligned. [Otherwise it’s limited to individual classroom-level innovation.]
It can be hard to find things at this level.
Guideline 2: Purpose and ethics. Improving the learning experience for learners should be the overall goal, and you should think clearly through the purpose of your activity. This links to ethical issues as well as direction and strategy. Helps give evidence to learners of the benefit, and the need, for capturing the data. You need to have an answer to the ‘so what?’ question about the data you’re gathering.
What’s the equivalent value proposition to the Facebook one? Better grades is a little bit problematic. Focusing on grades and exams is not helpful. Previously, learning and grades were separate. What if you linked them? Gaming the system, acting more mechanically.
Guideline 3: Gaming & incentivisation. You should bear in mind the risks of metrics and targets becoming inappropriate focuses for activity. There can be risks of gaming: mechanically achieving targets, doing the minimum necessary to pass. You need to understand the entire context. Could be about focusing on the individual – can compare to the rest of the class, or to your close peers, or to people who are similar. Building in a Vygotskian ZPD-type idea: show how you could improve from where you are, just a little bit ahead.
This has been a longstanding problem with grades, especially where there are compulsory courses you have to pass but the mark doesn’t count towards your final grades.
Data-driven culture, reporting culture?
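The ZPD-type comparison in Guideline 3 can be sketched as a nearest-neighbour lookup that keeps only peers a little ahead of the student, rather than the top of the class. Illustrative only; the distance measure and the “little bit ahead” band are my own choices:

```python
def zpd_peers(student, cohort, band=(0.05, 0.15)):
    """Find peers whose profile is similar (small feature distance)
    and whose score sits just ahead of the student's: the
    'proximal' band rather than the best performers."""
    lo, hi = band
    def dist(a, b):
        return sum((a[k] - b[k]) ** 2 for k in a if k != "score") ** 0.5
    ahead = [p for p in cohort
             if lo <= p["score"] - student["score"] <= hi]
    return sorted(ahead, key=lambda p: dist(student, p))

me = {"score": 0.60, "effort": 0.5}
cohort = [{"score": 0.95, "effort": 0.9},   # too far ahead: excluded
          {"score": 0.70, "effort": 0.55},  # just ahead, similar profile
          {"score": 0.55, "effort": 0.5}]   # behind: excluded
zpd_peers(me, cohort)
```

The design choice is the band: comparing to the whole class invites the gaming and demotivation risks above, while comparing to slightly-ahead similar peers makes the gap feel reachable.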
Guideline 4: Data and tools integration. You should work towards integrating different sources of data and the tools. Lots of parts of an institution may have different types of data, and use different tools. You need something to convince them there’ll be an impact for the students, not just research, to achieve integration. One way can be to appoint a data custodian, with responsibility to reach and enforce agreement. (Links back to senior buy-in.) You also need to build up technical expertise in this area within the institution. And these people cost a lot and are in much demand.
Guideline 5: Student involvement. You should engage the student body, as a community, in developing learning analytics. That helps convince them, bring them on board. Student unions, student associations, engage them; they can engage the rest of the student body. Student ambassadors. Alumni bodies too. Also helps improve your tools and activity by taking their perspective seriously. Design-based research, feedback in to the system, iterative design, user-centred design, rapid prototyping with audience engagement. Could track student feedback while these are being deployed over years, use that to improve what you’re building, also show how previous cohorts have found it of benefit.
Guideline 6: Training/development. You should plan in work to upskill people (all stakeholders – students, teachers, administrators, senior management), improve data literacy, share expertise & understanding. Because that’ll be diverse. This is a short guideline but the work is large and never-ending.
Guideline 7: Evaluation. You should evaluate, test, get evidence of the benefits – or not – of what you are doing. Example of rolling out lecture capture: 25% of the project is evaluation, fascinating feedback – students adore it, staff can’t stand it (broadly!). Feeds into iterative development, improving what you’re doing. If the benefits don’t outweigh the costs, don’t do it. Check the costs. Have your own metrics, benchmarks, to monitor progress of your projects. There are existing evaluation guidelines, but they’re not necessarily standard; how you apply them as you develop is a tangible way to evaluate your tools.
Meta note: These guidelines are (we hope!) useful at an institutional, strategic level. But they are less use for individual teachers who are trying to innovate in their classroom.
(We had a joke here about wanting guidelines on how to make good guidelines. A quick search for “guideline guidelines” showed that evidence-based medicine already has well-developed guidelines about guidelines. Someone suggested a quick search-and-replace could turn those into learning analytics guideline guidelines. So I had a go, and it worked surprisingly well: learning analytics guideline guidelines.)
Feedback from each table.
[At this point, my laptop crashed; I lost only a sentence or two of typing, but lost a lot of time while it rebooted, so missed most of this feedback. Luckily most of it was captured electronically so should eventually be available via the workshop.]
Q: We discussed case study, rest of activities. What do we mean by the word ‘space’? [In the title of the workshop?]
Manolis: It’s a good time to do this. At the end of the workshop… [laughter]
Abelardo: We have no idea. [laughter] The process we went through was a struggle with the definition. We don’t want to say physical space, because that rules some spaces out. It’s a deliberately vague and inclusive definition. You’re online, virtual communities, and all the spaces in between. In our institution, emphasis on designing spaces outside lecture theatres and labs. In virtual space, it’s something in between, not lecture, not LMS, but somewhere else, and we want to examine that too. Spaces are emerging. The idea here would be to be as inclusive as possible. The short answer is the broadest definition you can think of.
Roberto: There’s a prequel about learning across spaces, special issue.
Davinia: About 10 years ago, a workshop at EC-TEL. The topic was learning across spaces, understanding space in a very broad sense: any tool, learning environment, or physical space where learning can happen. There were challenges in making links between spaces to provide a seamless learning experience. And how to support the teacher. LA keywords were not explicit then. Did a special issue in a journal too. We found it was an expression people found comfortable, and there were different initiatives.
Q: SRI last report uses this. Why from ‘across spaces’ to ‘across physical and digital spaces’. Could say something about hybrid spaces.
Q: Or heterogeneous spaces, integrating data.
Davinia: Yes, or blended learning. All these in the workshop.
Q: Multimodal learning is the way to situate it. LMS, mobile phone as well, another space, another learning system.
Abelardo: The number of spaces is emerging so fast it doesn’t make sense to separate them.
Q: It’s important to have some categorisation of the situation of interaction. I don’t know what name. But in distance learning, it matters. You have different profiles for studying in groups, or by themselves at home. The LMS will be exactly the same. You’ll receive the same data from very different learning situations, but you don’t know, because students don’t declare it. We need some descriptors. The subsets are very different.
Q: Request for next year. If someone’s studying cultural differences in learning patterns. That bothers me in LA. The success of the generalisation. I’ve seen, Italy is not Italy but a lot of Italies. If you have lots of people from, say, Naples, you have to analyse that data in a very different way. [laughter] I live with this problem, it’s a real problem. I imagine, in open universities, students from different places, how do you differentiate in cultural criteria.
Abelardo: In my institutions, many students from overseas. Campaign on cultural diversity, sensitivity towards it, push that context as part of the courses. It’s at least acknowledged. LA has not paid much attention to that diversity.
Roberto: LA is quite holistic compared to other disciplines that focus more on algorithms. At least LA takes into account context, maybe not specific about culture, but it’s opening up that vision. There are a few papers considering that holistic view. It would be good to consider for next year.
Abelardo: Another paper in the conference, almost tackling this. Check in the proceedings. MOOC run in two different languages, exactly the same MOOC. This is clearly a multicultural study, translated to another language, doubt many bilingual people take both. They did a bit of analysis about different indicators of the networks that emerge, and there were differences. Fresco Jocimovic? MOOCs, you have variety embedded on them. You’re still likely to have a community that’d be very good.
Q: There’s a tradeoff between gathering background info via extensive surveys, and seamless data collection – an IoT approach, like Fitbit, collecting without asking. I tend towards the latter approach.
Q: Privacy preferences should depend on the tool, I should be able to set them and give them to any tool I encounter. That could be something similar, IoT technology would consume personal contextualised details, and adapt to that user base.
Doug: Jenna Mittelmeier – OU PhD student – is working on the role of culture in group work, worth checking out. e.g. http://oro.open.ac.uk/43371/
Q: In corporate context, they monitor you. But if you pull out from Facebook, your posting pattern, it could be interesting. Creepy for sure. Learning style, being self-aware, that’s quite a difficult task. Most learners are not aware of it. It’d be interesting to see how far you can push it. How do we get to the ZPD for each student, we nudge them, push them to a more competent peer? How do we find them?
Mary: Interesting question, but necessitates that contextual piece so they’re pulling that, not having it pushed on them.
Q: Next workshop for LAK is the creepiest.
Abelardo: Some of the ideas we discussed – each one of the guidelines, the scenarios – would make interesting scenarios to publish, maybe in the conference, if we unpack those ideas and say: this is a scenario in which I addressed learning across spaces. Reaction of students, etc. These are my ideas. If we go deeper, create experiences, pilots, that would connect with this community nicely. I encourage you to take these ideas and see if over the next year they crystallise in a way you can bring back to the community.
Roberto: Can do a similar workshop for next year, maybe different organisers. Or meanwhile we can do something else. We have the emails, we can create a group or a wiki. I can offer to curate the different guidelines, list of who’s in each table. Put them together as a part of record of the workshop. Keep the conversation going in some forum. We’re open to some ideas.
Q: If there were guidelines published from a respected bunch of researchers, co-publishing with organisations, to encourage ed tech companies to support those, that would be very helpful. I don’t know who those people would be, or who the focus would be, but that would have more of an impact.
Abelardo: That would be a good outcome, if we categorise these guidelines. We could disseminate them in a way that shows the community has discussed these issues. I feel there’s a disconnect between the vendors and what I hear in these workshops. I wrote about the lack of communication between what they sell us and what we’re buying. It resonated with many people. This is where we could get vendors meeting our guidelines more closely. Great idea to disseminate that.
Q: There’s a movement in K12: school districts in America declared that within 1.5 years they would have an interoperable ed tech ecosystem, so a student could transfer from e.g. Florida to Georgia. All these systems signed up – a challenge for these companies to support that. A level of partnership.
Doug: There is movement in this way – interoperable ed tech for LA at schools level – in the Netherlands, coordinated by Kennisnet.
Q: There are also some technical dimensions of LA that could be explored. There is a gap in the integration of data. Sometimes the pattern of the worst students is identical to the pattern of interaction of the better ones. Interacting a lot, engagement measured by interaction: clicks on the web, duration. We have seen that sometimes the people who are not understanding anything click more than the others. There are some areas that have to be cleared up. We need development on those cases. Patterns that are not explained. At many gatherings about LA, this problem is always there. Analytics and videoconferences – the people who interacted most did better, but the others were just distracted.
Doug: We’ve seen something similar in our online interactions. Absolute level of interaction not predictive – some good students very active, but some bad students struggling and engaging very heavily; also some weak students very low activity, but also some very good students engage very little online, because they don’t see a need. But a drop in activity level was a more useful predictor of problems.
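The change-based indicator described here – a student’s drop relative to their own baseline, rather than absolute activity level – could be sketched roughly like this. This is a toy illustration only, not the OU’s actual model; the threshold, function name and data are all invented:

```python
# Toy sketch of a "drop in activity" indicator: flag a week when a
# student's activity falls sharply relative to their OWN trailing
# average, rather than comparing absolute levels across students.
# (Illustrative only -- threshold and data are invented.)

def activity_drop_flags(weekly_counts, drop_ratio=0.5, min_baseline=3):
    """Return indices of weeks where activity fell below
    drop_ratio * the student's trailing mean activity."""
    flags = []
    for week in range(1, len(weekly_counts)):
        baseline = sum(weekly_counts[:week]) / week  # trailing mean
        if baseline >= min_baseline and weekly_counts[week] < drop_ratio * baseline:
            flags.append(week)
    return flags

# A consistently quiet student raises no flag; an active student who
# suddenly goes quiet does -- matching the observation that change is
# more predictive than absolute level.
print(activity_drop_flags([1, 1, 2, 1, 1]))     # -> []
print(activity_drop_flags([10, 12, 11, 9, 2]))  # -> [4]
```

The `min_baseline` guard keeps habitually low-activity students (including the strong ones who “don’t see a need” to engage online) from being flagged spuriously.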
Abelardo: The more data sources you triangulate, the better information you can get.
Q: The amount of data we have about the devices students use. We know they’re on campus, we can identify their device, what wireless card; we know from lab stats who’s logging in to what platforms where. We can even identify lag time. We’re trying to learn how to take that information and, in a tidy way, correlate it back to demographic data on the students. Data literacy among the students … one barrier is affordability. Computer labs on campus – do we still need them? Those questions could impinge on us. How can we use data we collect for general purposes to look at inequities?
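The “correlate it back” step is essentially a join of log events against student records, then a breakdown by a demographic attribute. A hypothetical sketch – all field names and values here are invented for illustration:

```python
# Hypothetical sketch: joining campus lab-login events to a student
# demographic table, so usage can be broken down by group -- the kind
# of equity question raised above (e.g. do commuting students rely on
# campus labs more?). All identifiers and values are made up.
from collections import Counter

students = {
    "s1": {"programme": "CS",  "commuter": True},
    "s2": {"programme": "CS",  "commuter": False},
    "s3": {"programme": "Law", "commuter": True},
}

lab_logins = [  # (student_id, machine)
    ("s1", "lab-a-01"), ("s1", "lab-a-02"),
    ("s3", "lab-b-07"), ("s1", "lab-a-01"),
    ("s2", "lab-a-03"),
]

# Count lab use per demographic attribute (here, commuter status).
by_commuter = Counter(
    students[sid]["commuter"] for sid, _ in lab_logins if sid in students
)
print(by_commuter)  # -> Counter({True: 4, False: 1})
```

In practice this would be a database join against the student record system, with the usual privacy safeguards the earlier discussion raised.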
Abelardo: You can connect those very quickly to financial decisions, e.g. do I refurbish this lab, put in 50 PCs, or put in power points. That gets institutional attention.
Q: Context is important. You can make a bad decision based on just data. If you change this space, it influences other spaces. It’s a tricky thing. We’ve gone through major renovations, decisions made on how people use the space. Open space, you see people less, because people don’t like them, so students can’t find professors. All of this is about the context of what it means for learning. Making a one-dimensional decision.
Abelardo: Silicon Valley, open plan, now revisiting it. Who can do that so fast?
Q: LA runs that risk, especially analysing one-dimensional data.
Q: A special issue or something?
Roberto: We talked about it.
Abelardo: But not to the point to having a concrete initiative. Perhaps for the next one.
Roberto: If anyone here has a contact, or prepare it for next year. It’s not that far.
Q: That’d be good, easy to circulate.
Abelardo: We’ll create a Google Group, unless you opt out by telling Roberto. We’ll inform everyone that you dropped out. [laughter]
Roberto: Advertising fully-paid PhD position at UTS on LA across digital and physical spaces. utscic.edu.au
This work by Doug Clow is copyright but licensed under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.