LASI14 Monday (1): Introduction & overview

This is the first of several liveblogs from the Learning Analytics Summer Institute 2014, #LASI2014 #lasi14, held at the Graduate School of Education, Harvard University, Cambridge, MA, 30 June – 1 July 2014.

The liveblogging will be gathered together here.

Dragan Gasevic, Co-Chair, welcomes everyone on behalf of SoLAR and IEDMS. Reminds people about the distributed event.

Main goals are to build the field, see where it's headed, and identify the major challenges and objectives. But also aware that the field is very immature. It's early days. We need many people to get basic training. So the programme is in two major chunks. One is the morning plenary sessions, either keynotes – three excellent keynote speakers coming up – or panels, which will present existing results and also challenge us to engage in further discussions. Afternoon sessions on Monday and Tuesday are primarily organised around workshops – hands-on sessions in small groups. It's possible we can't accommodate your first choices for the workshops; they're only 20-30 people each, and we have about 135 attendees. So have a second choice ready. Also check out the LASI website for the workshop descriptions. Some have prerequisites – not knowledge, but technology to install prior to attending.

It's a distributed event. As well as Harvard, we have 6-7 parallel events, in Africa, Europe, North America and Asia. So when you ask questions, please wait a bit and use the mic so we can stream the questions. The second and third days of the event will have the chairs and organisers of LASI Locals reporting back about their activities. We hope this event will play the role of a broker, representing all these nodes.

Tag your blog with LASI2014 and register your feed with the LASI aggregator.

Also, if you’re presenting, please upload your slides and post the link in the Google Doc. bit.ly/LASI2014slides

All sessions are recorded, and plenary sessions are live streamed.

One change from last year: more focus on doctoral students.

Thanks to sponsors – Canvas, Desire2Learn, Intel Education, Western Governors University, McGraw Hill Education. Thanks to the organisers and helpers.

Charles updates with some practical arrangements about coffee and bathrooms and so on.

Dragan welcomes George and Ryan.

Harvard University


Presidents’ introduction: Overview of the Space of Learning Analytics and Educational Data Mining

Ryan Baker (Columbia University and IEDMS) and George Siemens (University of Texas Arlington and SoLAR)

George thanks everyone who’s joining us. A sign of the impact and range of learning analytics. Agenda here – Intro, SoLAR, IEDMS, What do EDM/LA researchers do, State of the field.

Why LASI? It’s our second-most important annual event after the conference. With a conference, connect to peers and learn about research. LASI is a developmental group. Bringing people from a range of sectors, advancing our methods. Also an opportunity to build skills and knowledge of doctoral students and faculty – who may not have called it learning analytics before.

There's a mess of LASI Locals running around the world, thanks to Simon Buckingham Shum for organising. Impact in excess of 1,000 participants globally. Thanks again to Charles and Garron for work behind the scenes, Dragan and Mykola for the program, Grace for admin/organisation.

It was going to be in Madrid, but the host group was disbanded in March. Charles and Garron volunteered to bring it here.

Thanks to the sponsors too. Terrific support. Next year, the intention is to hold it at MIT around the same time of year.

What is SoLAR?

George talks.

Scientific organisation, promoting research. Open and transparent, results can be evaluated and duplicated by others.

What do we do? Generate acronyms. LAK, JLA, LASI, OLA, LAMP.

What is IEDMS?

Ryan talks.

Support mining big data from educational contexts. Slightly different focus. Very similar goals. Sponsor EDM conference, JEDM, EDM-ANNOUNCE, EDM-DISCUSS. We’re kind of the older sister. [But no less attractive, chips in George.] Supporting role in LASI, OLA, LAMP. Also public policy and advocacy stuff.

What do EDM/LA researchers do?

Article: Baker & Yacef 2009, with updates in Baker & Siemens in press. Prediction (+latent knowledge inference), structure discovery (+domain structure discovery), relationship mining, distillation of data for human judgement, and discovery with models.

State of the field

Back to George.

For LAK12, Ryan and I did a paper. The EDM conference started three years before the learning analytics conference. Looked at a model of collaboration. The emphasis was on doing a better job, intended collaboration. Broadly stated: similar mindset, approach, goals. SoLAR has benefitted from tight affiliation with EDM; hope to carry that forward.

What is happening in LA? – Dawson et al 2014 (LAK14). Structure analysis. More recently, less opinion but more validation research. Education doing more evaluation research. Seeing some maturing overall of methods, growing sophistication in the ways we come to understand the learning process. Methods all over the board; quantitative research growing, not surprising. We prided ourselves, when we had our first conference, on addressing a fracture in the discourse around education. On one hand, learning sciences/education, but without the tech skills to make sense of large datasets. On the flip side, computer science and more technical communities who did have that, but less understanding of the learning process. Explicit statement: the aim is to bring these communities together in a dialogue. So at any point in time, some of you should be confused.

Trends: maturing of the quality of research. The fifth conference is coming up. As academics become more connected, research methods are more finely tuned. More effectively getting at the interventions required. Also – the focus for LAK15 – making an impact: taking insight and cashing it out in terms of practice. Also growing prominence of computer science representation. Need to continue to advocate for the friction between the analysis methods and social understanding.

In EDM – over to Ryan.

In part the differences are organic. Exciting to bring some EDM folks in, concerned about technical rigour, and analytics folks keen on making a difference. Also like bringing together learning scientists and computer scientists – I use both. Recently used data mining to see why I wasn’t sleeping so well, then did an RCT to test … and I got a null effect. [laughter]

What's happening in EDM? A lot of classification and regression. In the last few years, increased emphasis on latent knowledge estimation and knowledge structure discovery. New empirical methods here. Reduced emphasis on relationship mining. It used to be: large data set, find association rules – but more focused now.

Also seeing a broadening range of constructs studied – metacognition, affect, engagement, motivation, long-term participation. Excited to see greater focus on models that predict across many years. Also work that is more broadly generalisable. That came from learning analytics. Also increased discovery-with-models analyses.

EDM long had a focus on basic research and automated intervention. Increasingly we're learning from LAK and getting into reporting, showing it to people. That's useful. Increased participation from industry. EDM conference, 4 days in London. If you're not busy this Thursday …

Going forward

Back to George.

Have seen several learning analytics professorships. First chair in learning analytics at a European university. Masters programs too. The field is growing. A great opportunity to become involved, collaborate.

What do we need to be doing better?

Four key elements – from early LAK talk:

  • develop new tools, techniques, people: current tools need geeks to use them; we're waiting for them to become more accessible
  • data – openness, ethics (Facebook example – randomly influencing what people see in their feed), and scope (we're collecting fragments of the process – Dan Suthers: a lot of our data is analytically cloaked, can't pull together LMS activity vs social media vs ePortfolio, etc; wearables). LMS data is not authentic; social media is more learning in the wild.
  • target of analytics activity – what is it that we’re trying to do? Trying to do a better job with personalised, adaptive learning.
  • connections to related fields and practitioners – smart move early to reach out to EDM, fruitful partnership; OLA, LAMP.

Challenges we face. Public pushback – e.g. reducing the learning process to the metric, just measuring the easy stuff. Early on, a valid complaint: we just use what we have. InBloom was going to allow open data access, now pulled into vendors. Issues around that. Social processes, where there are power relationships – have to tread carefully.

Several of us are doing a MOOC on the edX platform on Data, Analytics and Learning. Giving an overview.  LAK15 at Marist, CFP coming out soon. EDM2015 is in Madrid.

Questions

Phil Winne: Could you sing, George?

G: I’ll be alright.

Timothy Harfield: Both organisations have a strong emphasis on openness. Why is it that, e.g. the LAK conference proceedings are not open access?

G: First, Ryan's a nicer guy than I am. We made a decision early on: for many organisations, if you're funded, it has to be within a certain range of journals. In the European context, if it wasn't affiliated with ACM or a similar organisation, they couldn't get funding. Hard – JLA is open access. EDM, credit to them. We were criticised for this at a panel at LAK12. It's a fair criticism.

R: It's an ongoing fight even in EDM to stay open. In many countries, an open access independent conference isn't taken as seriously. I'm a radical; the future is open access. We have to pay costs for it. We have more citations, but more trouble getting people to come. The odds of us getting ISI listed are low. You have to be one of the big publishers. Stories from other journals don't make us think we have much of a chance. So there's a cost.

Brian: You mentioned friction between CS and learning – could you talk more about that? Where's the rub?

G: It's different in different areas. I did a talk at Teachers College, looked at what an average LAK paper looks like compared to an EDM one. Datasets: in EDM, enormous datasets – the Pittsburgh DataShop hugely influential. In LAK, N more like 200. Methods: what is the social network underpinning the formation of knowledge; extracting presence through forum interactions – small. But then edX gives you a big dataset, an entirely different scope. Harder time with those small-N approaches. Tensions around what is knowledge and what is knowable through algorithmic approaches. Educationalists might look at power, fairness, equity, ethnographic lenses. In EDM, you may not find those approaches; less common. Main tension – in SoLAR, we don't want it to go away. More digital trails left – if you have three dogs, you give them names; if you have 10,000 head of cattle, you have to give them numbers. Will have to adopt methods from the CS folk. There are attributes of the learning experience that can't be algorithmically understood, though.

Piotr: Open access issue. It's about credibility. Fact of the matter – provocative statement – the peer review process doesn't work. Models that give us more rigour, give us more dialogue, may give us more credibility. Interested in that?

G: We suffer under lack-of-openness guilt. Looking at different models. Terry Anderson did an analysis of the impact of IRRODL: open access, off the charts in terms of citations and impact compared to more prestigious closed journals. Want to move towards openness. My goal over the next few years is proceedings free and openly accessible. But some might not get funded if it's not ACM. Addressing multiple stakeholders.

R: I’m more of a conservative on peer review. It’s not perfect. But it’s nice to have some sort of quality procedure to say the items here have a certain quality. Other models could do that, but when you’re a new community, there are some ways of guaranteeing quality, otherwise not credible.

Piotr: Peer review in MOOCs, peer grading, and so on. Finding there are models that generate higher validity and reliability. Assume they will translate into scientific peer review. Higher level of rigour.

R: Of course, actual rigour and perceived rigour. Not closed to different models. Just uncertain.

G: A big bit relates to perception. Online learning has similarity to classroom learning, but is still perceived as 'less than'. MOOCs have changed the narrative a little bit. But the perception hasn't changed.

(missed name): Building connections with the community of startups. I'm working in Amsterdam, London, Silicon Valley. So much activity at all levels of education. They're not always well grounded in science. Real opportunity to actively reach out to startups, make sure they do the right things. Learn from it, get data from it.

R: Certainly open to more ways to support the startup community. I'm involved in the NY startup community; lots of ed tech. Taylor Martin is involved. My lab has a MOOTextbook. The trick is it's incredibly fragmented; even NY and Silicon Valley don't talk to each other. Any suggestions for how better to reach out, I'm all ears.

G: Emphasise that. For LAK, LASI, want to include those communities. Sponsors are supporting this conversation. There are opportunities, had startups at LAK14, followup work too. It’s happening, but not explicitly happening the same way. We are first and foremost a research community. If that can help startups, great.

Martyn Cooper: Ethics. We recently had to prepare a paper on LA ethics for our university's ethics committee. We wrote a lot, but the main conclusion was there isn't a lot there. What are the basics of what informed consent means for students whose data is analysed? Any comments or experiences from people addressing that?

R: First, there's a faculty member who'll be teaching a course on LA and Ethics; eagerly awaiting it. Informed consent is the standard for many. Not sure it's always needed. The Common Rule (?) says it's not necessary for retrospective anonymised analysis.

MC: Practice. If you’re feeding back to them.

G: Adam Cooper's work on analytics and ethics. A BJET paper from me and Abelardo Pardo. It's a conversation. Ethics is a pants drop (?). It kills conversation. The legal system doesn't give us the right guidance. We are doing more than the research tells us. A lot of it is: who is the person overseeing it? Different for teaching vs research. Open question. Don't have clear consensus. Trying to use existing systems, but don't have the legal architecture. Huge opportunity around learning data, but need to do it in a way that assists in learning development, without violating basic rights of participants.

Phil Winne: Expectation of privacy is another concept. In public spaces is one thing. Does it apply e.g. in a MOOC? Nobody knows yet.

Doctoral Students' Lightning Talks

There’s 30 in 30 minutes! A challenge for liveblogging.

Ani Aghababyan: Utah State. Graduating this Summer. Met my future employer at LASI. Started research studies and collaborations. Study concepts Ryan talked about – affect, motivation, engagement. Using EDM, LA methods.

Liaqat Ali: From Simon Fraser U. Interest in learning motivation. Enquiry focus on analysing log data to understand learners' behaviours. Investigating data from two questionnaires. I'd like to meet participants with interests in evaluating tools and questionnaires.

Sven Charleer: HCI group with Erik Duval at KU Leuven. Learning dashboards, visualising learning traces. Impact on students' and teachers' reflection processes. Hope to meet people, get feedback, collaborate; we have a bunch of data we want to visualise. We have a workshop this afternoon on visual LA.

Regina Collins: From NJ Inst of Tech, in Info Systems Dept, HCI, TEL. New to LA, research looking at complex learning environment. Pedagogical activity. Students looking at internet resources, what they choose to cite. Graduating next year.

Carrie Demmans Epp: HCI at U Toronto. Use of adaptive mobile learning language app. Impact on learning processes, affect, vocabulary.

Nia Dowell: U Memphis Institute for Intelligent Systems, with Art Graesser. Language and discourse, using computational linguistics methods and psych theory to explore. Workshop this afternoon.

Jon Ferguson: From U Rochester in NY. Teaching and curriculum. Grounded in self-determination theory. Emerging technologies, smartphones, experience sampling, educational and psychological research.

Dawn Gilmore: Swinburne U of Tech in Melbourne. Mixed methods, Goffman, comparing a text-based subject to a numbers-based one. Communities of practice. Back stages and front stages. How and where students seek and share content and admin related to their study. Back stage data, longitudinal questionnaires, interviews.

Stian Haklev: U Toronto. Collaborative and inquiry learning in large UG lectures. Design research, some ML and auto feedback. Also MOOC project with Coursera and edX. Open science is really key.

Wenliang He: California, Irvine. Engineering grad school, dropped out for education. Dreaming about teaching online courses at scale, using students' info to do amazing things. All those things are happening. Two components: one, active learning; the other, finding a viable project in knowledge networks, a map to facilitate customised learning.

Caitlin Holman: U Michigan School of Information. Gameful learning – it's really complicated for teachers. Built an LMS to support that.

Vladimir Ivancevic: Serbia. CS, TA. Data mining and analysis in education. Predicting exam turnout, grading programming tests automatically.

Srecko Joksimovic: From Simon Fraser U. Predicting learning outcomes from data collected by LMSes. Community of inquiry, CSCL, interaction types theory. ML, data mining. MOOC research.

Natalie Jorion: Learning Sciences, psychometrics, U Illinois at Chicago. Assessments of conceptual understanding. Step forward, but very limited.

Tara Kaczorowski: U Buffalo, NY. Special education, tech to enhance learning for students with learning disabilities. For core math instruction with mobiles. Supplemental support outside regular instruction. Have created iBooks so students can practise more accurately.

Vladimer Kobayashi: U Amsterdam. Edworks project. Education and the labour market. Previously ML, now LA. Want to use labour market info to build applications in LA. A learning dashboard that supports goal-setting – e.g. you want to become a financial analyst; what is the best path?

Vitomir Kovanovic: Simon Fraser U. LA in social learning environments. Community of inquiry. Trying to understand and build LA related to different components of that model. Automated content analysis. Text classification system. Social presence. Connecting SNA and social presence. Third component, teaching presence. Applying it to MOOCs.

Charles Lang: SFU (again) but starting PhD at U Toronto soon. Video annotation systems, RT feedback. Distributed systems, big data.

Armanda Lewis: [not here?]

Daniyal Liaqat: [not here?]

Zacharoula Papamitsiou: Information Systems, U of Macedonia, Thessaloniki. Determination of predictive factors. Temporal factors. Computer-based assessment. Proposals for H2020.

Drew Paulin: From U British Columbia. Discussion-based learning, developing tools. Social media in teaching and learning. Evidence and indicators of learning in social media. Workshop this afternoon on SNA and learning.

Mirko Raca: EPFL in Switzerland. Computer vision, turning emotion into meaningful measurement feedback to the teacher. How students are reacting. Students' perception of your teaching. Formulating predictive models.

Scott Trimble: From U Texas at Austin. Motivation and research. Supportive practices, self-determination theory, students' affective states related to learning performance. Tech to track students' affect and engagement moment to moment.

Jorge Ubaldo Colin Perscina: From Mexico City, at Columbia University, Teachers College. Urban planning. Interaction between education and geography. How students decide where to go to school within a city – high schools in Mexico City. How can you predict who goes where? GIS.

Jelte ?de Jong: Not a PhD student. Founder of literacyanalytics.com, based in Amsterdam. Building an alternative to test-based classification to get people into labelling for dyslexia etc. 1 in 4 schools in NL uses tablets. Inform teachers on specific disabilities. Classification of errors in spelling, viz of learning patterns, spelling progress.

Marcel Worsley: [?not here]

Zhenhua Xu: Distribution and impact of academic emotions in discourse learning environment. Previously research in children’s motivation. Gamification. New to field.

Cheng Ye: Data mining and machine learning. Building a cluster, a predictive model for skill and persistence in MOOCs, Coursera. How can we know what students need?

Min Yuan: [?not here]

Elle Yuan Wang: Columbia University, Teachers College; Ryan Baker is my advisor. Past: motivational research studying how motivational factors influence MOOC completion. If you're too fascinated by MOOCs as a platform, you're less likely to finish. Working on MOOC datasets. Surveys, forums, predictive models to foresee learning gains. In future, career advancement prediction.

Dragan: One quick question. "How do we envision self-policing about privacy, ethics. Code of conduct?" The theme is potentially very interesting. Turn that question to the audience: self-policing learning analytics and student privacy?

Josh Baron: Some of us in the OSS community, taking lessons from the InBloom situation, are talking about a Bill of Rights for end users about use of their data. People could sign up to this. A way of self-policing ourselves, getting in front of the issue. There will be a mistake, or someone will do something they shouldn't have. That's one idea.

Stian: As a community, work on educating ethics review committees. They don't have the technical understanding necessary to evaluate our processes. They become very strict about things that aren't important, and other things they should be concerned about they skip through. Keep ourselves honest thinking about anonymised data – we all know the reality of deanonymisation.

Phil Winne: Important other side – educating students and people we’re collecting from. What it means to have data anonymised, and what the positive effects are. If we can show we’re making a positive difference, that’d entice people.

Zach Pardos: Showing results; a Bill of Rights. Towards that, a convening earlier this month in California – a smattering of people, IRB directors, others – agreed on a two-page document modelled after the Belmont Report. Values. The Asilomar Convention for Learning Research in Higher Education – asilomar-highered.info. There needs to be external input.

[coffee break]

Matthews Hall, Harvard Yard

George: Great to see the doctoral students. Introduces Pierre. PhD in comp sci, then CSCL, CSCW. Now MOOCs, Learning at Scale. Last year, we had a MOOC conference, I did a keynote, he told me everything I said was wrong. Enormous impact of Pierre on the field.

Keynote: What does eye tracking tell us about MOOCs?

Pierre Dillenbourg (EPFL)

Orchestration Graphs: Design for Analytics

Slides available here [PDF]

Originally an elementary school teacher, 30y ago in Brussels. I’m in a lab where we do a lot of work on gadgets – CHILI. Eye tracking, MOOCs, gadgets. Unify it with graph theory.

Idea, seven years ago, collaborative learning: what if we connect two eye-trackers? They look at the same display. Can we predict the quality of collaboration from them looking at the same thing at the same time? The answer is yes. If, say, they're looking at an Excel spreadsheet but at different cells, they will misunderstand each other. So looking at the same place at the same time predicts collaborative working. Graph of gaze ratio at the same object on the screen – say a banana. 700ms before you say it, you look at it. The listener looks at it 1.2s later. So about 2s from I look at it, say it, you look at it.

Pair programming experiment, Scala code. Manual analysis of the quality of collaboration. Video example of low gaze recurrence – they are looking at completely different places in the code while one person is talking. The co-occurrence matrix has a diagonal where they look at the same thing. Good pair, high gaze recurrence – they are close together, following each other. No difference between the medium and good pairs, but the bad pair has much lower gaze co-occurrence.
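Roughly, the idea in code – a minimal sketch, assuming both gaze streams have already been mapped to shared areas of interest (AOIs) at a common sampling rate. The function names, AOI coding and the 2s window are my illustrative assumptions, not the CHILI lab's actual pipeline:

```python
import numpy as np

def cross_recurrence(gaze_a, gaze_b):
    """Binary matrix: cell (i, j) is 1 when A's gaze target at time i
    matches B's target at time j. The diagonal is 'same thing, same time'."""
    a, b = np.asarray(gaze_a), np.asarray(gaze_b)
    return (a[:, None] == b[None, :]).astype(int)

def recurrence_ratio(gaze_a, gaze_b, max_lag=25):
    """Proportion of matches within +/- max_lag samples of the diagonal
    (e.g. 25 samples ~ 2s at 12.5 Hz, cf. the ~2s speaker-listener lag)."""
    r = cross_recurrence(gaze_a, gaze_b)
    n = r.shape[0]
    hits, cells = 0, 0
    for i in range(n):
        for j in range(max(0, i - max_lag), min(n, i + max_lag + 1)):
            hits += r[i, j]
            cells += 1
    return hits / cells

# Toy usage: AOIs coded 0-4; B follows A with a fixed lag.
rng = np.random.default_rng(0)
a = rng.integers(0, 5, size=200)
print(recurrence_ratio(a, np.roll(a, 15)))
```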

In MOOCs: 600k registrations in 1y. Bullshit, these numbers. But it's not nothing. Great opportunity for us to do analytics. Example – semi-transparent hand – the teacher is speaking, writing maths, the hand points where they're indicating in the video. Used eye tracking to see: following the teacher's references. One student, where the teacher is using a pen: the learner is following where the pen is pointing. Heat map for 40 students. They're all over the place, but there is a hot spot near to where they are. When the teacher is pointing, some learners converge on the gesture of the teacher. Compare 'with-me-ness'. Score in the post-test vs two variables. Do they look directly at the finger – not connected with the score. All eyes follow any moving object. Evolution. But how long they spend looking at it – yes, that's correlated (0.35).
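A rough sketch of what a with-me-ness measure could look like – the fraction of gaze samples landing near the teacher's pointing location, then correlated with post-test scores. The radius, names and toy data are assumptions, not the study's actual definitions:

```python
import numpy as np

def with_me_ness(gaze_xy, pointer_xy, radius=80):
    """Share of samples where the learner's gaze falls within `radius`
    pixels of the teacher's pointing location in the video frame."""
    d = np.linalg.norm(np.asarray(gaze_xy) - np.asarray(pointer_xy), axis=1)
    return float(np.mean(d < radius))

# Toy usage: 40 learners' with-me-ness vs post-test scores.
rng = np.random.default_rng(1)
scores = rng.normal(60, 10, size=40)
wmn = np.clip(0.3 + 0.004 * scores + rng.normal(0, 0.05, size=40), 0, 1)
print(np.corrcoef(wmn, scores)[0, 1])  # Pearson r between the two measures
```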

With-me-ness in another video, doing a concept map together. Measured gaze recurrence. Two different concepts – but there is a correlation! Explanation – hypothesis – an Art Brut picture. Two ways to look: one, that's art, do you like it or not; the second, look at the brain of the guy who did it. Looking at, vs looking through. Some of the learners are at a computer, see pixels; others see someone behind it, they are engaged with a human being.

Orchestration graphs

I use 'orchestration graphs' to improve my h-factor. This is not MOOCs; vocational education. Dual system, logistics assistants in a warehouse. They are slaves; they move boxes as fast as possible. What did we do at school in the logistics class? How can you teach it to them? Augmented reality system – build plastic shelves, camera above, a mirror projects it onto the table. Simpliquity. TinkerLamp. Overlays feedback on top of the model. Gives feedback about your layouts. Run a simulation. Gives real-time feedback. Stuff about loading trucks correctly. Stock management. Very cool, used in 10 schools. The teachers like it, the kids are engaged. We did it in an experiment: they learn. We do it for real: they don't. It's too engaging; they play, move the shelves again and again. No moment where they reflect, think how to improve it. How can we help the teacher to manage it? We give cool tech to the kids in the class – it's an enemy for the teacher, much cooler than them. How do we help the teacher? We give them orchestration cards. One is 'run the simulation'. You can't run the simulation; you have to call the teacher and ask to run it, and they ask – do you think it will run faster? Think about it, call me when you know. When happy, they can run it. Also a 'pause class' card that stops everyone.

This is not learning theory; it's how to practically manage the class. Time management, safety, physicality. Previously called implementation details, but I think they should be part of the design.

Example, doing it with a sheet of paper. Don't ask the kids to log in – this time is useless. Orchestration is the idea of not how people learn, but how a teacher can manage a lesson.

Graph theory. An integrated learning scenario can be modelled as a weighted directed geometric graph. I don't belong to any educational church. Some activities are individual, some are group, some are whole class.

ArgueGraph, from the ?1980s. Individual activities. Plot answers on a graph, reflecting their opinions. When students see their name, they get involved. Creates participation. Social constructivism: people learn because they disagree, and have to change viewpoint to agree. If you increase disagreement, you increase learning. Debriefing tool. Sequence of activity, how to maximise conflict – no, optimise conflict. Try to engineer the pedagogical scenario by playing with individual/team so the conflict will rise and you as teacher can use it.
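A minimal sketch of that pairing idea – greedily pairing students whose questionnaire answers are most different, to optimise conflict. Purely illustrative, not the actual ArgueGraph algorithm:

```python
import numpy as np

def pair_for_conflict(opinions):
    """Greedily pair the most-distant remaining students.
    `opinions` is an (n_students, n_questions) array of answers."""
    opinions = np.asarray(opinions, dtype=float)
    remaining = list(range(len(opinions)))
    pairs = []
    while len(remaining) >= 2:
        # Find the most-distant pair among remaining students.
        best, best_d = None, -1.0
        for idx, i in enumerate(remaining):
            for j in remaining[idx + 1:]:
                d = np.linalg.norm(opinions[i] - opinions[j])
                if d > best_d:
                    best, best_d = (i, j), d
        pairs.append(best)
        remaining = [k for k in remaining if k not in best]
    return pairs

answers = np.array([[1, 5], [5, 1], [3, 3], [2, 4]])  # toy opinion scales
print(pair_for_conflict(answers))
```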

More complex script, Jigsaw. Four different roles, split, recombine.

Three more layers above Individual, Group, Class – Periphery, Community, World.

The edges are the relationships between two learning activities. He has a range of classifications – his own typology. We have a lot of semantics, a rich library of relationships between activities.

Advertisement – picture of the Alps from his office. At Lausanne, we have great skiing in the morning, sailing in the afternoon, and great wine in the evening. I have an open postdoc position. [laughter]

Transitions between activities often need to compute something, organise, collect answers, give feedback. So the graph is a workflow. These things work because there's a workflow. Works with paper too, but does not scale so well. Fine up to 20 people. For MOOCs, would this work for 20,000 people? Have to make the workflow operators explicit.

His view of MOOCs – scale vs interaction richness. Scaling up tends to reduce richness. Hope behind this, maybe we can take very rich activities, formalise them with a graph with operators, then we can scale them. It’s my fantasy.

A graph is a set of vertices and edges. Activities are connected by an edge that can be more or less heavy. If it's important, it should be a heavy edge. If they're randomly put together, it should be light. How can you do that?

Learner state over time, mapped. Four states – lost, active, fine, dropped. Track the transitions. Model the transitions with matrices. What is a state? We have a lot of things to describe the richness of cognitive states – e.g. impasse, when you're stuck with what you know – it's a good state, you're ready to learn. Need a richer, pedagogically meaningful set of states.

So if we have the transition matrix, we can measure the strength of the edges. A measure of entropy – low if everyone goes to 'fine'. The weight of an edge is 1 minus the entropy of the edge. But that's not good enough – it doesn't discriminate between everyone staying the same and everyone going to 'fine'. So define utopy: utopy is positive if people tend to improve their state. Use this to weight the entropy.
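One plausible reading of this in code – edge weight as 1 minus the (normalised) entropy of the transition rows, plus a simple utopy measure that is positive when transitions move toward better states. The state ordering and normalisation are my assumptions, not his exact formulas:

```python
import numpy as np

STATES = ["dropped", "lost", "active", "fine"]  # assumed worst -> best order

def mean_row_entropy(T):
    """Mean normalised entropy of the rows of transition matrix T."""
    T = np.asarray(T, dtype=float)
    T = T / T.sum(axis=1, keepdims=True)
    P = np.where(T > 0, T, 1.0)           # log(1) = 0, so zero cells drop out
    h = -(T * np.log2(P)).sum(axis=1)
    return float(h.mean() / np.log2(T.shape[1]))

def utopy(T):
    """Positive if learners tend to move to better (higher-indexed) states."""
    T = np.asarray(T, dtype=float)
    T = T / T.sum(axis=1, keepdims=True)
    n = T.shape[0]
    sign = np.sign(np.arange(n)[None, :] - np.arange(n)[:, None])  # sign(j - i)
    return float((T * sign).sum() / n)

# Everyone converges on "fine": low entropy, positive utopy -> a strong edge.
T = [[0, 0, 0.2, 0.8],
     [0, 0, 0.1, 0.9],
     [0, 0, 0.1, 0.9],
     [0, 0, 0.0, 1.0]]
print("weight:", 1 - mean_row_entropy(T), "utopy:", utopy(T))
```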

It’s useful to predict if everyone will fail – except to publish in LAK.

Elasticity – strengths decrease over time, important for orchestration.

We have the history of the states, and the behaviour observed. The diagnosis process – also as entropy. From one state, different levels of behaviour. Entropy here too.

Markov representation. Then introduce a third dimension. Compute the state of a learner at a moment from: their state in the previous activity; their behaviour; the state of previous students. Three axes: social, diagnosis, time. It's a 'cube'. A 3D model, can have 3D reasoning. It's a theory; I have not done it.

Want to integrate analytics into the design. Application to the classroom.

Face to face situation. Physics 101 at EPFL. Top 2% of students, yet enough will fail in the 1st year. They sit and do exercises on paper, raise a hand for help from a TA.

I like problems. Ineffective orchestration. Invented a thing called a lantern. Put it on their desks – the colour changes to show which exercise you're on. The number of LEDs grows over time. When you need help, push on it and it starts blinking – slowly, then faster and faster. This is not smart technology. Waiting time – down from 62% to 6%. Big effect.

Can collect analytics – a simple viz for the teacher. That was before, when I was young and making gadgets. Now more serious. Not any more with gadgets, but with Mirko Raca, putting in cameras that don't do eye tracking, but measure the direction of the learner's face, and how they move. They follow the teacher. Give the teacher eye-tracking devices to see where the teacher is influencing the attention of the participants by 'systematically scanning the enemy in front of me'.

Between Piaget and Markov – what matters in schools is love, life, social relationships. Modelling the hidden graph does not question the importance of social relations in education. We can model the formal aspects. If you take a lesson – a MOOC, a lesson on squares for kids – it's the same activity. It's like chess: a very formal state graph you can get to. Formalisation doesn't mean that social relationships are not important, but we should formalise a bit more what can be formalised.

Questions

Chris Brooks: Authentic learning environment, student population of MOOCs. Very diverse. How to do interesting things like eye tracking, but going to people e.g. sitting on the bus, phone in their pocket, and still involve them in analysis.

You're right. 2/3 of people on MOOCs have masters or bachelors degrees. Real concern. How can we reach logistics workers – this lamp costs 5000 bucks. Have done an online version: buy the plastic shelves in a box for $20, use a webcam, look at the result on your screen. You might lose 5% of the cognitive benefits of tangible augmentation, but save 95% of the cost. In a similar one, compared augmented display with physical; you don't lose too much. Can extend MOOCs to hand workers, not only mathematics. Second point, about eye tracking: it will be there soon. Already eye tracking for your laptop. A question of accuracy – low accuracy at the moment. Design the MOOC with that in mind so you can interpret. Soon you will have an eye tracker just like a touchpad, part of the interface. There's one already for the Samsung tablet. If we want to scale: have big analytics with 20,000 students, small with 10 doing eye tracking, and combine the two. Sooner or later we will be able to do large-scale eye-tracking. Advertising will be the first market for that.

Boreham?, U Memphis: Data collection and classroom point of view. Doing similar to analyse discourse. We’re using new version of MS Kinect, can identify points of view for multiple persons. Are you using similar technology? Or have your own ideas?

Talk to Mirko. I don’t know how many you can capture with a Kinect?

B: About 20.

We use cameras instead. It's gaze direction, body gestures, how they propagate. Sometimes a wave of distraction goes through. The point is, in lectures, the professor starts the lecture with the class with him. Loses a row after 5 min; then at the end, only has the first row with him. He's doing nothing, losing the class. A good teacher has the same thing – except he says, by the way, this is on the exam, or makes a joke. Good vs bad teacher is not about losing attention; it's about noticing it and orchestrating effectively to get it back.

Someone from Rice U: How do you see these, what is the experience of K12 teachers? Are they receptive to use these to improve teaching practices? Dreaming of a window like that – maybe we can talk.

Otherwise I'll send you a picture. I never present this graph to an actual teacher. How do we do this? We failed. For the last 30y, learning tech fails. In experiments, we waive the constraints. Say, can we take 2h rather than 30min? Close the curtains, etc. We run the expt, ANOVA, pre/post test, publish the paper, then we leave. Then we say they don't use it because of resistance. It's not true. Teachers can book a flight, etc. But we design tech that makes their life more difficult. Tabletops – the classroom is dark. Bring your kids to a dark room? It's Christmas. We complain they don't use it because they've not been trained. That's a mistake. We should design tech so the teacher is in charge, so it makes their life easier. Like the orchestration cards. They notice when we spoil their life. Have to design by working with teachers. We should design tech based on learning science, and on the logistics, the practical problems.

Stian: Love embedding collaboration, automated. This is what we must do in MOOCs. Practicality of working with edX, Coursera – somewhat closed platforms. Talked about apps, but don't see it coming. Trying to push them? Taking edX, hosting it yourself and developing? How can we move forward from just analysing data from courses to designing exptl environments?

If the idea sounds useful to them, they can steal the ideas and integrate them. edX is a bit more open than Coursera. It will come by Christmas – a block you can open. Online services – e.g. one running in Lausanne: I send you a list of people with features, you send back groups. Maybe charge one cent. Already online proctoring, translation services. Might not be everything in edX, Coursera, but lots of added-value services elsewhere.

Timothy: Like how this scales, replicating expert/master teachers. Identifying, based on eye gaze, whether they have attention or not. That's excellent. The concern is bringing them back to the classroom. Can these techs be used to teach teachers, train them to become attuned to attention, to orchestrating their classroom? Like a ladder for teachers. Or are you suggesting that this becomes a crutch, a replacement for the master teacher?

I don't think 1-to-1 is the optimal scale. That's the implicit assumption in education. Famous paper from Bloom. If you imitate at 30 the same relationship you have at 1-to-1 … but sometimes a scale of 10 is better than 1. The goal is to have the cognitive intensity reached at the large scale. I'm not sure. I don't mind teacher training; I'd invest more in that than LA. But still, easy to say we need to improve it. Would like to measure the cognitive load of a teacher. If you increase the cog load of the teacher, you make teaching more difficult. Hard to be constructivist. What if tech reduces the cog load of the teacher? E.g. one who doesn't notice losing students: their focus is on the content, no cog space for classroom management. Can we use a prosthesis to help with that? We can improve the usability at the classroom level, the classroom ecosystem. Then we don't have to improve teacher training.

Jenny: Have you begun to imagine orchestration in online courses? How can this concept of teacher intercession be designed into online courses? I can't see them bang on the desk.

Take forums as in Coursera, edX. Not bad from an orchestration point of view. Features help the teacher manage. But my answer is no. Basically, in a MOOC you have a script which is rigid; it's difficult to change anything. A graph is more like organic stuff: you can move it, elasticity; some things cannot be moved, some can. Would like a MOOC platform that can. The FutureLearn platform has introduced orchestration, but I have not seen it yet.


Panel: Overview of learning analytics initiatives – Open Learning Analytics, Learning Analytics Master’s Program, Learning Analytics Community Exchange

Josh Baron (Marist College and Apereo Foundation), George Siemens (University of Texas, Arlington and SoLAR), Ryan Baker (Columbia University and IEDMS), Doug Clow (Open University)

[Light to no liveblogging since I’m on this panel.]

Josh – Open Learning Analytics. OLA summit found four domains – standards and software, strategy and policy, learning design, research and data. Groups working on these towards a white paper.

Learning Analytics Initiative (LAI) from Apereo.

George – Learning Analytics Masters Program (LAMP). SoLAR, LAMP. Met at CMU, end of April. Course topics lists.

Ryan – Open Learning about Open Learning Analytics. Open for business, openly.

MS in Learning Analytics, Teachers College, Columbia University. Exists as focus in LA as part of Masters in Cognitive Studies.

Big Data and Education MOOTextbook.

Doug (me) – LACE.

Dragan: Journal of Learning Analytics. First issue out, next issue July 2014. Two special issue calls out. Self-Regulated Learning and Analytics, deadline August 15, 2014. The second on Learning Analytics and Learning Theory, deadline Nov 30, 2014.

JEDM has a special issue call out, EDM with longitudinal datasets, due Aug 15.

Questions

Q: OLA. Plan to collaborate with MOOC platform providers?

George: MOOCdb initiative, MIT. Others too. IMS. Producing a position paper.

Q: Will LAMP resources be available to everyone?

Ryan: Yes. I think.

George: Haven't seen a CC resource that said 'Attribution, except if not a student'. Maybe some NC. But Creative Commons.

Jane: Digital publishing of open resources. Wonderful community of openness. A spectrum of digital publishing quality. OER library, work in open textbooks. Digital publisher experience. Taking adaptable, flexible materials and turning them into beautiful materials. Approached people to give them a good design experience.

George: First view: with openness, I make open what I think is interesting; if anyone else wants to make it more valuable, have fun. We're not looking to work with publishers. But if someone wants to do that – develop it, skin it – they can do that. Would encourage them to share back: attribution-share-alike. Those are value layers I'm not interested in adding.

Josh: I’m taking George’s stuff and selling it for $50. Remember Flat World Knowledge. Models around bringing that improved design. Small charge. Interesting balance.

Q: How will OLA approach to privacy work? UWMass.

Josh: It’s all conceptual at this point. No specific answers. Creating a framework with tangible, practical examples. All under CC. Another part, not just policy documents, but also introducing them to constituents at your institutions. Provost vs student vs faculty.

Chris Brooks: Synergy between OLA, LAMP – problem-based learning. Is that something explicitly considered?

George: Short answer is, both initiatives have similar partners. I do agree there are interesting application possibilities. Difficulties in tying them together early. Apereo has tech expertise in large-scale software deployment; SoLAR doesn't. If tied together, the success of one might be impeded by the other. Down the road, more ties.

Internet questions, read out in the room: Shame to have multiple APIs – why not stick with IMS Caliper?

Josh: Deep connections there already. Trying to bring them together rather than create something new. Around implementation, proof-of-concept, reference implementations.

Dragan: Not just a tech effort. Orchestrate different algorithms. Another theme, learning design, learning science connection. All important. OLA not isolated on tech architecture, but part of a bigger architecture.

Josh: In Apereo, learned that when software engineers work in isolation, it’s not best used. So excited to be involved with SoLAR.

Q: Are there overlaps between Unizin and Apereo?

Josh: Some institutions in both. U Wisconsin. Engagement a two-way street.

Xavier: Another SoLAR activity. SIG call. Today and tomorrow, opportunity to discuss them, especially in the workshops.

Stian: Appreciate the attention paid towards community. Simon BS says there's a G+ community. Open learning resources, masters. Interested in supporting individual students. Huge amount of impostor syndrome here – people uncertain of stats, ML, learning theories. In addition to lecture material, have e.g. badges. If we have assessment – if I've earned this badge, it's like running a marathon. A significant achievement – you know it's something you can work towards, and you have a portfolio to show for jobs. Can talk about and share as a community.

George: Spent a lot of time thinking about networked architecture. A group of people who make decisions for others – it doesn't work. We want people to make themselves a part of it through their actions. Open invitation; you decide your own interest in being involved. Your actions are what's recognised. E.g. badges – Alyssa Wise is sharing that. Make things happen. We're not in an era where people say what you can and cannot do.

Q: NyanCat – how could a standards-based record store based on TinCan advance learning analytics? What's needed to measure?

Josh: Point to U Amsterdam. Connecting Open Academic Environment and Khan Academy, integrating TinCan / Experience API into a standard open source learning record store; will release code soon. They'll answer that better. Bringing data in from very different systems. One question – is this in the right form for learning analytics? By doing this reference implementation, we can get answers to that.
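For the unfamiliar: a minimal TinCan / Experience API (xAPI) statement, to show the kind of record a learning record store holds. The actor/verb/object structure and version header are from the xAPI spec; the endpoint, credentials and IDs here are placeholders:

```python
import requests

statement = {
    "actor": {"mbox": "mailto:learner@example.edu", "name": "A Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.edu/courses/la101/quiz-1",
               "definition": {"name": {"en-US": "Quiz 1"}}},
}

resp = requests.post(
    "https://lrs.example.edu/xAPI/statements",   # placeholder LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.1"},
    auth=("key", "secret"),                      # placeholder credentials
)
print(resp.status_code)
```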


This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.
