Liveblog notes from the afternoon session on Tuesday 1 March, the second full day of the Learning Analytics and Knowledge ’11 (LAK11) conference in Banff, Canada.
[Edit 3 Feb 2014: Comments disabled because of spam – do comment on other posts if you want to say something.]
(Previously: The Learning Analytics Cycle, liveblog notes from Pre-Conference Workshop morning and afternoon, from Monday morning and afternoon, and from Tuesday morning.)
It’s still bitterly cold, but it’s bright, sunny and clear, and the views are even more stunning than this morning. With this scenery and situation, I can understand why Banff Centre is a hub for creativity and inspiration – it is remarkable here.
(Note nearly frostbitten thumb in top left hand corner.)
Keynote: Caroline Haythornthwaite – Learning networks, crowds and communities
From UBC. Latest book on eLearning Theory and Practice.
Talks about the effect of a common event that brings people together on learning. Being delayed for four hours gives an excuse for communication. Created a learning network – getting phone messages, info about next flights. But also translation, helping people understand accents, and misunderstandings.
Background is looking at social networks, how people work, learn and socialise at a distance through computer media. What you can and can’t do; rediscovering what it means to maintain a tie and how you do it. How they affect information and knowledge sharing. A lot of work on elearners, distributed knowledge – how you co-create knowledge together, ubiquitous, societal-wide learning, JIT learning. Turn from contributory practices to working with strangers in online communities.
Networks – large scale – knowledge, communication, transportation, webs, all sorts. Talk about way they can represent understanding, learning and education.
Basics of what is a social network. General idea, don’t aggregate their behaviour, look at what happens within that network – who talks to whom about what – and what the learning outcome is. Idea of actors – nodes in a graph (people, organisations) – tied by relations, that form networks, analysed and displayed in graphs.
Many different terms. These networks are evident in our conversations, in our buying habits (Touchgraphs), in our organising (web links from IoE), in our organisation of knowledge (bibliometrics – probability of clicking between journals), in our travel options (you can fly anywhere says the diagram … but the departures are all to Copenhagen), in our collaborations (who names whom among teams of collaborating researchers).
How do we tie this SNA to the concept of learning? Relations and networks and what we can see from them.
What constitutes a learning tie? What is it in a network that shows learning? What makes a tie a learning tie? Looked at what ties people when they say they have a working tie, a collaboration. If these are the things that tie people, need to design to support those. Social support matters, have to design it in, makes for better collaborative working and learning.
What do we put in to the graph? One thing or multiple things? Could have a tie that means you taught someone. Or ask them what is in my tie when I say I have a good learning relationship. Fact, fiction (gossip), know-how, group processes (community of practice), group knowledge (who knows who knows what). The level of attention (from individual up to society). Educational relations, and community/society relations. Can use all these for network analysis.
The relational mix. Interdisciplinary teams learn different sorts of knowledge – patterns in the networks show different types of learning received. People who worked on method talked to people who worked on method; there were different communications channels at different levels. There are different relations at different levels – so is different if you ask people if they’re aware of someone, have contact, or have a collapsed contact network. They’re all stacked but give you variations. How to lie with networks – could do well!
Interesting thing about networks is you can discover networks that come out of a learning relation; learning itself as a learning outcome, see brand new networks and roles emerging. Early study of who collaborated with whom in a class. Found four groups, didn’t know that, but confirmed with the instructor. Can see cliques, how well information could travel across the class, see an isolated group. Can get a picture of how the class works together. Can see the network stars – network positions where people are situated to control knowledge within a network. They’re a cut point, or bridging a structural hole – an entrepreneur can fill that position to spread the learning. Graph tells you how well connected people are.
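(Aside: the cliques, stars and cut points she describes map straight onto standard SNA tooling. A minimal sketch, assuming an invented class collaboration edge list and using networkx – not her data or code:)

```python
# Invented "who collaborated with whom" ties from a class, for illustration only.
import networkx as nx

edges = [("ann", "bob"), ("bob", "cat"), ("cat", "ann"),   # one tight group
         ("cat", "dan"),                                   # dan bridges to...
         ("dan", "eve"), ("eve", "fay"), ("fay", "dan")]   # ...a second group

G = nx.Graph(edges)

print(list(nx.find_cliques(G)))         # the cliques (groups working together)
print(nx.betweenness_centrality(G))     # "network stars" brokering between groups
print(list(nx.articulation_points(G)))  # cut points: remove one and the class fragments
```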
Networked flow outside the network – distance learners (Kazmer), class network wasn’t the only network with knowledge. Went from class out to localities, then came in from there to the class. Different flows along those pathways.
Can use networks to interpret – ask people if it’s what they want. Also tie concepts together – concepts can be nodes as well as people. Left with a question: what does this mean? Can ask people that. The real work to do is, if we see a cluster, what does it mean? Is high density good or bad?
Asked people about the people they worked most often with. Dense network with lots of edges (people not asked to contribute). Found people in the core network who weren’t in the list of people on the team, but were critical to the function of the team. Found the network of people who were learning together that she wouldn’t have got from the list.
Emergent learning roles – we know a teacher, a learner. In online environments, new roles – students start taking over, answering beforehand. People who start shaping the argument – efacilitators, braiders, accomplished fellows; learner-leaders. Hard to see in a 600-person auditorium with only the teacher talking.
Can also discover effects from media – co-located computer scientists. Unscheduled meetings, scheduled meetings, email. Statistics show that these networks build on top of each other in a unidimensional scale. Another case study – IRC, webboards, email – all build on each other. When we design, we do so only for the strong ties – want to wave a flag for the people with weak ties, very important so people can start working with each other.
Media multiplexity – the stronger the tie, the more media people use. (Also more reciprocity, disclosure, etc)
Evolution over time – email and IRC start the same, but email connectivity drops off; IRC has weak ties growing, extending.
Wide connectivity with low frequency; selective connectivity with higher frequency – need both, have to have strong ties talk together as well as the weak ones. Always have people coming in and out.
Theory formulated – bringing a new medium in to a group creates latent ties, connections that are technically available but not yet activated socially. Example of this conference, lays weak ties, have opportunity to turn them in to strong ties, use other media later. Need that weak tie structure to start with.
Learning in a networked world – personal, community, crowd-sourced, community-maintained. Formal learning, informal/non-formal, elearning/networked/ubiquitous learning.
Two ways to look at this – at the centre of our universe, a personal ego-centric view; or a communal, networked view. (Self-directed heutagogy, Luckin). Social capital is not possessed by an individual, but by the network, connections – so look at what learning is held in a network. Learning can be distributed so the whole network knows. Put people in groups with different roles, think they are not all learning the same, but this is what goes on in the real world.
Lightweight to heavyweight peer interaction. Wikipedias – crowd-sourced communities that add information incrementally by people who don’t know each other. Why would they care? Ties back to idea of latent ties, network aspects in general. Crowd-based has a lightweight peer production. You don’t know anyone, can nip in and change something, and leave. Simple contributions – e.g. proofreading, don’t have to wait for others or know them. Contributions the same, lots of them, no connection. Is still a learning network, it’s just constituted differently. Many lightly-tied non-networked individuals. Has to be run by an outside authority, has to be something they come to, what’s their motivation? A coorientation to the overall purpose – e.g. a commitment to open source and to give away knowledge. Academics a prime example, coorientation to dissemination of knowledge, we give it away. Why? Because ideal of open access. On other side, have heavyweight, heavily tied community. If you don’t pay attention, group will fracture. It’s not a nip-in and leave. As much about the people connection as the contribution. Motivation is purpose and group interaction. Contrast of teaching group and, say, Open Street Map where people are motivated to the idea of mapping, free information, and representation of their own region. Motivation is important for learning. Most research focused on the tightly-coupled heavyweight community, look at the lightweight too.
Learning in a communal view – learning is a relation, a production, an outcome of relations, and spaces – physical, virtual, and so on.
Learning as a network perspective – can be a relation connects people, the characterisation of the tie, an input for design, an outcome of relations – and also as contact with ambient influence, e.g with the Internet, we can look things up all the time.
Terry: If you look at your definition of learning, does this apply to all kinds of learning? Conceptual, memorising.
It’s how you define it. It matters what you want to look at. The network is just the interactions between people, it’s a demonstration of connections, not learning.
Terry: Conceptual knowledge, physically how to do things e.g. sailboat
It applies to them all. Social network isn’t the end of the question. Not about what’s happening in the mind, is what’s happening socially. So sailing, you take lessons, build up relations with an instructor. Network perspective doesn’t preclude objects – I’ve stuck to people-to-people, but could put in say a book, so have ties between people because read same book. Motor learning isn’t in a network picture. Doesn’t tell you about how to tone your muscles. It tells you that the people who ski have friends who do; people who hold the knowledge about skiing and how to fit in to it, and who talks about it the whole time.
Griff: Networked learning, individual cognitive capacity. How important is it that an individual is aware of their status and social capital they’re creating in this sort of learning network?
Some people extremely aware, can take advantage, some ignorant, may lose the ability. Reflective learning as a practice of how to learn. Very important. Under-utilised and under-addressed. What do people know? Cognitive social structures, what do people think other people know? Knowing that is good for the community. Know you can hand off some things. An efficiency aspect, but have to know your position in the network. We don’t talk enough about group processes, Wenger’s Communities of Practice very clear about personal, group identity. Engestrom on identity constantly reforming. Not just awareness of the status quo but the changes. Need to learn how to learn at a distance. Learning how to be a traveller, a distributed organisation. Working on videoconferencing systems, learned to recognise how many seconds were an indication that the connection had dropped. How long is a frozen picture a sign – we’re learning to be distributed communicators.
Did I address your question or ramble?
Griff: Both!
Katrien Verbert et al – Dataset-driven research to improve TEL recommender systems
Postdoc at KU Leuven, working with Erik Duval.
Slides will be on her Slideshare.
dataTEL project – theme team of the STELLAR network of excellence (EU project). Two main Grand Challenges – connecting learners, and contextualisation.
Need to collect vast amounts of real-life data to validate how these algorithms perform. To enable collection and sharing of datasets, several core questions. Have to look in to protecting privacy of learners, preserving confidentiality, giving the learner control over what data they expose. Also preprocessing of results. New evaluation criteria beyond traditional metrics in recommender systems, see effect on learning processes.
Very important to convince organisations to free data so we can use it for research. We need large scales of real-life data to compare algorithms, but also to see what we can do to personalise recommendations, what similarity metrics are useful.
First challenge at a workshop, issued a call for datasets. Eight datasets submitted. Also discussed how to facilitate sharing of datasets. Can find the datasets on the TELeurope site.
Very impressive dataset from Mendeley – system for sharing papers among scientists. Several others from large EU projects, including MACE mentioned before.
First research experiment, tried to validate existing collaborative filtering using standard algorithms. Amazon collaborative filtering – uses other users’ opinions to suggest new items based on what you just bought. Uses similarity measures – cosine similarity, Pearson correlation, Tanimoto or extended Jaccard coefficient.
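(A minimal sketch of those three similarity measures on a pair of users’ rating vectors – invented numbers, purely to show the formulas, not the experiment’s actual code:)

```python
# Similarity measures on two hypothetical rating vectors (0 = unrated).
import numpy as np

def cosine_sim(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def pearson_sim(a, b):
    return np.corrcoef(a, b)[0, 1]

def tanimoto_sim(a, b):
    # Extended Jaccard: a.b / (|a|^2 + |b|^2 - a.b)
    dot = np.dot(a, b)
    return dot / (np.dot(a, a) + np.dot(b, b) - dot)

u1 = np.array([5.0, 3.0, 0.0, 1.0])
u2 = np.array([4.0, 0.0, 0.0, 1.0])
print(cosine_sim(u1, u2), pearson_sim(u1, u2), tanimoto_sim(u1, u2))
```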
Evaluation metrics traditionally used – accuracy (precision, recall, F1); predictive accuracy (MAE, RMSE) – e.g. split set in to 80% training set, 20% test set – mean absolute error; coverage too. If have very sparse datasets, items not rated so far can’t be recommended.
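(Sketch of that evaluation set-up – random 80/20 split and mean absolute error – with placeholder ratings and a deliberately naive predictor, just to show the mechanics:)

```python
# 80/20 train/test split and MAE on invented (user, item) -> rating data.
import random

random.seed(0)
ratings = [((u, i), random.choice([1, 2, 3, 4, 5]))
           for u in range(50) for i in range(20) if random.random() < 0.3]
random.shuffle(ratings)
split = int(0.8 * len(ratings))
train, test = ratings[:split], ratings[split:]

def predict(user, item, train):
    # Placeholder predictor: the global mean of the training ratings.
    return sum(r for _, r in train) / len(train)

mae = sum(abs(r - predict(u, i, train)) for (u, i), r in test) / len(test)
print("MAE:", mae)
```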
Two experiments – collaborative filtering based on ratings, and on implicit relevance data, because ratings often very sparse.
Can see influence of different similarity metrics – Travelwell, MACE, Movielens – Tanimoto similarity better than cosine and Pearson, because ratings sparse.
Compare algorithms – main conclusion is choice of algorithm is dataset-dependent, different algorithms did better for each dataset. Consistent with other findings. Need careful testing before deploying.
Main bottleneck is the sparseness of the ratings.
Second experiment using implicit relevance data. From Mendeley, took all available data – when added, read, cited paper. For Mendeley got very good results, but precision very low for MACE dataset. Some datasets are not suitable for this sort of filtering.
The key question remains – how can you customise these filtering techniques for learning. Surveyed recommender systems evaluated in learning settings, make an overview of what sort of data they use. Many other possibilities beyond what they have explored – activity, user characteristics, physical conditions, location, etc.
CEN WS-LT – standardising dataset characteristics. Also a framework to monitor performance of algorithms on different datasets – collaborate with other researchers and pool results. Would be a major step forward.
Evaluation criteria – accuracy, coverage, precision are valuable. But for learning, must go beyond those, combine them with e.g. effectiveness and efficiency of learning, drop out rate, satisfaction. Also reaction of learner, learning improvement, behaviour, results.
Asks for people with data that can be shared. Do you want to be involved in dataTEL research? Please get in touch.
Finally, dataTEL challenge at the i-Know conference in Austria. Track on dataset sharing and recommenders.
Dragan: Presenting three types, measures of similarity – cosine, Pearson, other. Which most effective?
On the TEL datasets, the extended Jaccard gives best results. Consistent with other results when ratings very sparse – consider only if rated in common, not level of rating.
Javier: Tried to rank algorithms by traditional methods, recall, precision – technical metrics. How do you plan to make jump to learning measures?
I also tried to cover that in a survey. How researchers covering that so far. Tried to measure effectiveness on a task, e.g. a simple assignment, measure how effectively learners complete a task. But can get very elaborate, e.g. effectiveness until graduation. Could be very hard to get insights in to effect of recommender.
Dragan: What happens in the beginning? Have datasets, but they are very generic.
Combine different techniques. Match what you liked before, then later, with collaborative filtering technique. Commonly used approach.
Dragan: Also useful when it’s not just one group
Sabine Graf, Cindy Ives et al – AAT – accessing and analysing students’ behaviour in learning systems
From Athabasca.
Cindy starts.
AAT is an Academic Analytics Tool – may be Learning Analytics Tool. Designed to work in Moodle, their LMS. Athabasca’s context is different – continuous registration, open, so not cohort-based. Not social networking, but how learners interact with individual objects. Moving more fully digital/online, so no direct way of observing how students interact with course components. This tool to help them do that in less obvious ways. Log data from LMS are inadequate for understanding. Want to understand students’ engagement and performance, then with other data, make predictions and interventions for better performance. Informed by formative evaluation. Design-based approach.
Sabine takes over.
The aim of the tool is to provide users (learning designers, teachers) with the ability to ask detailed questions about the data collected in the LMS – how students are learning and interacting with the course. Analyse the extracted data and store the results. Very important to allow users to specify what they’re interested in. Some statistical reports, want to give control to the users. Idea is to be flexible with respect to what courses they want to analyse – a subset of courses, or specific ones. Want it to be flexible across learning systems and versions – tool usable in different LMSs, not just the current version of Moodle.
Four design elements:
- learning objects;
- patterns (based on types of LOs), specifies what user is interested in – query or formula supported by query, e.g. list of students with more than ten postings a week; average postings within a week;
- templates – make it applicable in different LMSs/versions – specify where data resides – XML map file;
- profiles – experiments for extracting and analysing information – specify which system, how to connect to data, which courses to investigate, which patterns – use this to extract the data, see trends.
Architecture is based on profiles at the centre, linking the inputs (patterns, templates, LOs, dbs, datasets), generate outputs in database or e.g. CSV files.
Live demo!
Start by choosing what you’re looking for from drop down, shows you what’s in the database (e.g. list of courses). Then select that, and move on to the pattern wizard – pattern creator. Specify forums, quiz questions, whatever. Create lists – e.g. students fulfilling a condition. Perform arithmetic operations; complex patterns.
Example – quiz questions, how students do on particular questions, see the difficult ones. Select interested in question states, see what’s in the table, choose those – shows you a few rows from the table so you know it’s the right table. Then save that as a pattern. Then perform arithmetic, select pattern just created. Say want average of the grades, for every question id. Click calculate – gives you a table of the question ids and the average. Can put a condition – e.g. lower than 0.3 – click calculate again, shows all questions with average less than that. Can then investigate more.
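(That demoed pattern boils down to a group-by, an average and a filter; a sketch in pandas, with invented column names rather than AAT’s or Moodle’s actual schema:)

```python
# Average grade per quiz question, then keep the "difficult" ones (< 0.3).
import pandas as pd

attempts = pd.DataFrame({                 # invented quiz attempt data
    "question_id": [1, 1, 2, 2, 3, 3],
    "grade":       [0.9, 0.7, 0.2, 0.3, 1.0, 0.8],   # grades scaled 0..1
})

avg = attempts.groupby("question_id")["grade"].mean()
difficult = avg[avg < 0.3]                # the condition from the demo
print(difficult)
```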
Can manage the profiles, can see the history.
The tool is currently under evaluation by learning designers wrt usability and usefulness. Plan to use it for evaluating courses, identify success factors, automatic interventions. Plan to release as open source tool.
Someone: Can you do chain enquiries? E.g. give me all students who have hit the discussion less than 1 SD below the mean.
Yes, the idea is to chain them. Start with simple patterns, build up to more complex ones.
Griff: Approach to discriminate between good and bad questions, much research – are you integrating those concepts as well?
Cindy: Yes, haven’t got to that stage yet. We add more and more of these learning objects – quiz questions are new, they may be designed ineffectively. This will work as formative data to revise these questions.
Josep Grau-Valldosera and Julià Minguillón – Redefining dropping out in online HE
Josep talking. From U Oberta de Catalunya.
Dropping out traditionally a problem of brick-and-mortar universities – in Spain, 25.7%. Is also a problem in distance and online universities – 38.9% for the UOC. Is even bigger. Partly a matter of the official criteria and definitions. We are trying to find another definition.
Definitions – not taking final exam, not taking a course in certain periods, not gaining fixed % of credits over a period of semesters. Can have drop-out definition from degree point of view, but is usually to do with a specific course.
Official definition (in Spain) applies to all including UOC – doesn’t fit their reality and peculiarities. So goal to find new definition, based on reality.
Background on UOC – 100% virtual, 50k students, mainly adults, 90% with job, 60% with previous HE, 19 degrees plus Masters and PhD. No obligatory enrolment – students can take breaks. Age profile peaks 30-34, and many over 40. 50/50 M/F.
Official definitions are related to ‘obligatory’ enrolment – (continuous?); allowing breaks as UOC do makes such definitions unusable. Is the case in many distance adult ed institutions.
Graphic illustration of enrolment behaviour. A new student comes in, becomes active. Then may take another course, or take a break. They may drop out after a course, or after a break. May take break of several semesters.
One example student – took three semesters, then a one semester break, then 7 more semesters. From that information, generate a personal record. Plot 1 when enrolled in at least one subject in a semester, a 0 if took a break. Different patterns: graduating with a single break; three 1s, too early to say; three 1s then lots of 0s, looks like a drop out – depends on whether they come back or not.
Analysed the long breaks – goal to minimise error of affirming that someone has dropped out when they haven’t. Use threshold of 5% probability that they will come back after a series of 0 – i.e. risk of error of 5%. Have also tested 0.1 and 0.01.
Examples – Law studies – two students took a 19-semester break – 9.5 years! But a 5-semester break accumulates 3.77% error (a 4-semester break 5.12%). So when a law student takes a 5-semester break, there is less than a 3.77% chance they’ll come back.
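(A sketch of the threshold idea, assuming invented records of each student’s break length and whether they later re-enrolled; the paper’s actual calculation over the UOC records is more careful than this:)

```python
# Find the shortest break length N such that fewer than 5% of students who
# took a break of at least N semesters ever came back.
breaks = [   # (break length in semesters, came back afterwards) - invented
    (1, True), (1, True), (2, True), (2, False), (3, False),
    (3, True), (4, False), (5, False), (5, False), (6, False),
]

def return_probability(n, breaks):
    relevant = [came_back for length, came_back in breaks if length >= n]
    return sum(relevant) / len(relevant) if relevant else 0.0

threshold = next(n for n in range(1, 20) if return_probability(n, breaks) < 0.05)
print("Declare dropout after a break of", threshold, "semesters")
```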
Across most courses, N goes from 3 to 5 semesters, interesting variation. Total – 4 semester gap yields 4.35% error, gives 57.6% drop out rate – this definition is higher than the official value.
There are differences between degrees; not between different types of degree content. Short degrees seem to have shorter break periods ending in dropout – if you are on a short degree, you decide you’re dropping out faster.
Produced graph of enrolment behaviour – 3320 dropped out, 629 earned degree. Can see the importance of first semester dropout – half of all the dropout happens in the first semester.
Defining dropout in distance HE needs to take in to account differences between individual degrees, and degrees of different duration.
Future work to look at analysis of causes and predictive analysis, and designing specific actions to reduce dropout.
Chris: Really interesting point. We’ve been talking in different vocabularies about what we mean. As a community, we want to come to a point where we can share (stuff), need common definitions. I hadn’t thought about these issues when it comes to dropout. Really appreciated that as an example of problems we don’t even realise we face.
In UOC, had our own definition – said our students were sleeping. Problem is to find a more realistic definition.
Rita Kop, Hélène Fournier and Hanan Sitlia – Value of learning analytics to network learning on a PLE
Rita presenting. Work on a PLE project at NRC Canada, in Moncton, New Brunswick.
Changing learning environment
ICT has made major changes. Our life has become more complex. New learning opportunities, outside institutional structure. Informal and self-directed learning. The web has changed, and grown; the data universe has changed.
Researched PLENK2010 – major activities were aggregating, remixing, and sharing. It’s a 10-week Massive Open Online Course. Distributed across the net, four facilitators. 1641 participants. Moodle, Elluminate. gRSShopper software to aggregate.
Had to rethink research methods, the environment much bigger. Complexity. Many issues. For instance, ethical issues in collecting big data. Can’t just use data that people have given for another purpose. Need to gain informed consent, has new meaning.
Did qualitative and quantitative methods. Virtual ethnography, focus groups, surveys, data mining. Very crude analysis so far, thematic analysis/Nvivo; learner analytics and visualisation, stats on surveys.
Learner analytics was a new tool for them. Does it add to traditional methods? Clarifying? Does linking data enhance learning?
Hélène takes over to talk about what happened.
Many different people as participants – professors and researchers, designers, teachers. Got demographics, 55 and older group are a majority of participants. Spread across the globe.
What did they do? Used a variety of tools – high number of blog posts, even higher number of Twitter posts, increases steadily over the ten weeks. Aggregated posts via the tag. Elluminate, Moodle were steady but low throughout. Only 40-60 individuals participated actively on a regular basis and produced blog posts. Visible participation was much lower by the majority.
Traditional graphs – when facilitator participation goes up, so does the participants’ participation; activity highly correlated. Contrasted with analytics – in Moodle, key links are the facilitator, but a lot of connections between the participants among themselves. See PLE, MOOCs and education as linking topics – in just one week of activity.
Thematic analysis – learning is the central concept. Agency, the I/me ownership of the discussions were sub-concepts.
Student blogging experience to compare traffic on her blog, contrasting CritLit and PLENK – ‘you will be noticed only if you tweet’.
Twitter analysis – the reach of activity – shows a lot of links to and from participants. Also analysed hashtag co-occurrences and related communities.
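(Hashtag co-occurrence counting of that kind is simple to sketch – invented tweets here, not the PLENK2010 data:)

```python
# Count which pairs of hashtags appear together in the same tweet.
from collections import Counter
from itertools import combinations

tweets = [
    "Loving #PLENK2010, thinking about #PLE design",
    "#PLENK2010 week 3: #MOOC facilitation",
    "#PLE and #MOOC keep coming up together",
]

cooccurrence = Counter()
for tweet in tweets:
    tags = sorted({w.strip(",.:;!?").lower()
                   for w in tweet.split() if w.startswith("#")})
    cooccurrence.update(combinations(tags, 2))

print(cooccurrence.most_common(5))
```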
Found analytics helpful; visualisation did clarify things they didn’t notice from traditional qualitative methods. But still need those to capture depth. Ethics implications are there. Linking data could be used to enhance learning.
Griff: One Easter egg – facilitator said one thing, had a huge effect. Did you look at the types of post that really got people worked up?
Rita: The posts that get people going the most are controversial statements. Have to think how to word it to generate participation. Had huge number of participants, once ball is rolling people spark off each other. Twitter interesting in linking blog posts back to discussion forums. Twitter has been an important part in that.
Erik: Talked mostly about post-facto analysis; thoughts about how it could help during the course?
Rita: Yes, it will clarify, as you showed in your presentation. If have a dynamic model, it will give learners ideas on how they’re fitting in to the learning that’s going on. The most exciting part of analytics is eventually we may be able to get people to learn.
Erik: Pursuing that idea? Or bit early?
Rita: We will. Have used analytics as research methods, not as a learning tool. But in PLE have recommender systems, use data in a different way too.
Hélène: Participants also mentioned would have liked threaded discussions across weeks, but was split week by week. Some did continue across. If they tagged content/contribution, would have helped make connections across weeks. Feature of Moodle environment that could have been useful.
Simon: Big challenge, have an arbitrary number of systems – an SNA for Moodle, Flickr, Twitter – how do you merge those? Huge challenge.
Rita: Definitely. If you want analytics to work on a multi-platform, have to connect them in some shape or form.
Dragan: After a week, presentation was good. Were there pedagogical interventions? Understand it was open course.
Rita: Had discussion after the course. Large number participated, but not by producing material. Dropping out, not interested? Majority of learners – did surveys, want to know why people dropped out. The pattern on the internet of participation, like Nielsen. Also, about 54% said they were self-directed learners and didn’t need to communicate. Also a lot of novices who said they needed the time before they would participate and produce things.
Lori Lockyer and Shane Dawson – Learning designs and learning analytics
Lori presenting. From Wollongong (Shane from UBC).
Project based on using SNAPP. Looked at a project mainly in Australia, but other countries too. Looking at how people were using SNAPP. Wanted to look at the next step. Her background is in learning design research.
No numbers in this one; focusing on the teacher (= instructor = faculty = academic), in a traditional university. Designing their courses or units, and delivering. May work with an instructional designer, but idea focused on university teacher.
Learning Design – focused on resource that may have activity in it. About teaching and learning practice, the roles involved, who enacts different activities, what support comes in to the process. Has been going on for a decade. Similar to pedagogical models, pedagogical patterns. IMS LD – trying to develop a language to describe different kinds of learning interaction and who was involved. Different frameworks and models, sharing practice ideas amongst the teaching community, help teachers develop a range of designs in their own context. Two-volume handbook of research – also Gráinne’s book coming too.
Ten years ago, project looking at teaching with some technology – scope out what was going on, identify quality, created repository of learning designs, with a number of cases. Also some abstract, decontextualised cases too. Came up with their own framework to communicate a learning design. Capture in text and visual form.
Much work since then, language, tools, how teachers use and interpret, use for review/reflection. More fundamentally, look at how teachers design.
How does this work with learning analytics? Erik’s question a good lead-in. Often thought of post-implementation. We’d like to integrate learning analytics with different design ideas or guidance. So learning analytics comes in at all points in the learning design process – design, implementation, evaluation. Just-in-time analytics to understand learner activity, on-the-fly, make decisions about taking action – how to present those to a teacher to make it usable. Can we embed some ideas, intent, behind the learning design – recommenders for teachers. Finally, the post-implementation, to complete the cycle.
Someone: Consider a scholarship of teaching and learning approach?
Yes. Has been at heart of a lot of learning design work. A lot of people engaged, often an evidence base. LDs initially put in to repository had evidence base, use evaluation of teaching to create designs. Fact that they’re disseminating designs is evidence of scholarship. Scholarship of teaching has happened with this.
Someone: A success, that aspect? Or small scale and not scalable? We’re trying to encourage faculty to do this. Having faculty research online teaching. Is that an important part?
Partly it’s a sideshow, partly it’s contextual. In Australia, issues around demonstrating scholarship of teaching around the tenure process, feeds in to federal funding for universities. Probably less of a sideline, more a requirement of the job for an academic.
Dave: At the Oscars, could have prepared better if could see the Twitter stream. Could you look at this? JIT response, track what’s going on, JIT design, haven’t prepared, get in the middle.
We’re trying to figure this out. When a teacher designs something, how can design guidance provide opportunities for them to think through streams? And what analytics help change on-the-fly? And what will they be prepared to change.
Al: Talked a lot about learning analytics in the sense of tracking what students are doing, around learners. Implicit assumption that all teachers are equal. Just recently Bill Gates announced focus on teacher effectiveness. Are we thinking about teacher analytics? Thoughts on that.
Teacher effectiveness. Not sure I want to go down that track. We’re making sure analytics match teacher intention. If sociogram shows a pattern, does it match your intent? Outsider might think it bad, but if it is what you intended, because say you had some other activity, those issues are important before we get in to an analytics of the teacher. In Australia, the dashboard has been applied to research, many researchers upset about research analytic dashboards. Applying to teaching would be interesting.
Devan Rosen, Victor Miagkikh and Daniel Suthers – Social and semantic network analysis in VLEs
Dan talking. Yesterday showed really complicated stuff, this is very simple, just temporal proximity.
NSF grant. Previously worked on chat analysis. SNA of chat logs.
Challenge for chat is you don’t have structure of who’s replying to whom. Adjacency constraints loosened. Hypothesis – people who chat close together in time are co-present (see each other’s stuff), and chat is probably addressing something that happened recently. Can’t defend any given utterance, but aggregate the temporal proximities, get some info about who is talking to whom. Assumption: people may be responding to a recent contribution.
Move sliding time window across a transcript, build sociogram of tie strength. Every time they’re co-occurring, increment tie strength. Counts are the weights in a weighted, directed graph – runs in O(n) so tractable.
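(A minimal sketch of that sliding-window tie construction, on an invented toy transcript; the direction is chosen so that indegree counts people who chat soon after you, matching the description that follows. The 120-second window is the Tapped In figure mentioned in the questions.)

```python
# Build weighted, directed ties from temporal proximity in a chat log.
from collections import defaultdict

WINDOW = 120  # seconds

chat = [  # (timestamp in seconds, speaker) - invented transcript
    (0, "ann"), (30, "bob"), (90, "cat"), (200, "ann"), (260, "bob"),
]

weights = defaultdict(int)   # (later speaker, earlier speaker) -> tie strength
for i, (t, speaker) in enumerate(chat):
    for t_prev, prev in chat[:i]:
        if t - t_prev <= WINDOW and prev != speaker:
            # the later speaker may be responding to the earlier one
            weights[(speaker, prev)] += 1

print(dict(weights))
```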
Analysis of a Tapped In session on wikis, 62 people. Teacher-professional community, has various components. Large 1h session. Degree and betweenness suggest roles. Indegree, how many people chat after you, outdegree how many before you. Betweenness too. So one person with high in- and out-degree, but low betweenness because in middle of tight hairball. But other user still fairly high degree, but high betweenness because to reach others you have to go through them. Semantics of this is a big challenge. Or can have high betweenness but low degree – connecting cliques. The people who popped up are the moderators.
Tapped In have public transcripts, is available.
Another example – graph looks fairly balanced, no one person really central. Others often have the leader much more central.
Chat proximity provides first approximation to ties based on interaction. Can reveal structures – e.g. decentralised is egalitarian, centralised is leader directed.
Next steps, improve estimation of ties, tighten up – require round-trip chains (e.g. A-B-A, A-B-A-B); add other contingencies.
Can get their papers on LILT webserver.
Chris: Have you compared the temporal consistency of the structure and weights of the graph – e.g. use the first half hour to estimate the weights, then compare with the other half hour?
No, haven’t done that. Might tell us that the chat had changed. Can do a sliding window thing. Changing role development over time.
Someone: Tool familiarity. Involved in lots of IRC channels, each one has its own culture. People are either used to or not used to it. Like Twitter, some hashtag all over the place, others don’t know what they’re doing. May change what you’re doing. Do you think it’s homogeneous as a community?
Tapped In has a mixture of old timers and newbies. Some things people get more familiar, but others have people in from all over the world. The moderators help newbies with how to use it. But possibility for difference in participation.
Someone: Artifacts that show up in the visual representation?
When first applied to another community of kids, used a 60s window, had to expand to 120s for the Tapped In community because they are slower at responding.
Simon: Someone tweeted link to transcript, has copyright all rights reserved statement. May be standard footer?
Don’t know what that means. She told us we don’t have to worry about the usernames. Good question.
Dragan: Ethics?
Well, for this grant relied on SRI’s IRB, told ours and got their permission based on historical data, no possible harm.
Griff Richards and Irwin Devries – Revisiting formative evaluation
Irwin, from Thompson Rivers University, starts off.
Following up, different tack. Interested in improving courses on a continuing basis. In the online pre-conference, asked what analytics would help in this course. Interesting phenomenon – participants, co-creators, improvers. Quite new experiences. Sense of participation, need to give back to community. Many suggestions made, worry about popularity and groupthink, and tracking without a clear purpose. Sense of group ownership, own the whole process.
In distance education (DE), have done this for a frightening length of time. Development cycles have shrunk, smaller teams, less stringent evaluations (e.g. pilots before full presentation). More and more technologies to support faculty, faculty have to handle development. More dynamic learning environments, pedagogies of co-construction, remixing, etc. More outside LMS than inside. Increasing danger of irrelevance, e.g. 3/4 shoot off outside the LMS.
Heard at LAK11 – want environments that react to learners, richer feedback, meaningful and actionable data. Analytics for the learner, not to the learner (or of!), and – holy grail – analytics to optimise learning.
A simple feedback system: measuring credible learning, steady enrolments, happy learners; treat course as process, feedback loop. Currently, systems are very haphazard, come in at various systems. Want coordinated quality systems – strategic course design.
Griff takes over.
Trying to retrofit a whole university full of online courses to a way of analysing the quality. Also developing new courses, how do we know what we’re doing is being effective.
Conceptualise course as a sequence of learning activities. What would it look like to analyse each of those activities? Three layers – design, facilitation, learning – and for each, look at preparation, conduct, reflection.
After an assignment finishes, ask students simple quantitative questions – common ones across courses, plus custom ones. Get a quick, quantitative picture. Also asks for open comments – what would you change to make this activity work better? What would you keep? They do like to write all this, hard to process when there are lots of respondents. He gets 100% response!
Goal to build a micro-LAK system, outside the LMS, with generic and custom items for each learning activity. A shared toolset, work with others. Want to develop learning activity patterns, generic lessons to pass on to develop, deliver, and learn.
Goldilocks principle: not too much data, not too little, just right.
Erik: Could you say more about the danger of too much data?
Too much is when you don’t know how to deal with it. See too many patterns and can’t see what’s relevant. Data mining perspective.
Erik: Is that true in a world of algorithms, as long as they scale with the data. Attention metadata is like that.
What you have time to attend to, not a machine problem, it’s a human problem.
Erik: It’s like saying the web is too big
My mind is too small.
Chris: The difference is not the amount of data, but the diversity of schema of the data. If you had ten students using automated methods, or 10,000, it wouldn’t matter – unless you have extra data that is just noise or not at the right granularity. Clickstream data annoys me – would have to aggregate that just to start dealing with it. It’s a schematic problem, that’s when we get overwhelmed.
Start small and build more, or start big and whittle down to what you can track.
Someone: Asking people, why not using social system to go with technical system. Why not get them to rate how important their comment is? This is vital, or this is minor. Get them to tell you what to pay attention to.
Good idea. Building first iteration of this collection system.
Someone: I’m a teacher, the issue isn’t the size of the data, it’s that as a teacher, I don’t want to see the complexity I’ve seen today, it’s how the computer scientists turn that in to a simplified form.
How much do we want to look at. Quantitative – not just looking for narrowing in, but for some deviation-amplifying comments, to improve the learning activity.
Vassilios Protonotarios, Nikos Palavitsinis and Nikos Manouselis – Applying analytics for a learning portal
Nikos P presenting.
Was overwhelmed by LAK 2011, we are playing with ABCs. Used Google Analytics for the paper, when deployed didn’t have time to do something better. Originally deployed for reporting purposes. EU reporting, say we have that many visitors.
Organic.Edunet, 3y funding from EU eContentPlus – continuing. Aim to make learning content online, for range of stakeholders – teachers, students, professionals. Domain of organic agriculture and agro-ecology. Also develop educational scenarios for use in schools and universities.
Portal online in January 2010. 30k visits; 146k page views; 1,800 registered users – have had big increase in last 3 months. But resources stayed the same.
In parallel, organised validation events, called open days. Brought in people from external organisations, they played with portal, had predefined program, structured exercises, also unstructured interaction too.
Looked at stats for portal as a whole (visits, page views, bounces, most popular pages), and individual users (time on portal, page, depth of visit). Studied in three periods: pre-open day, during, post-open day. Two groups of open days. 160 participants in 13 open days.
People used the portal more on the open days. Over time, more visitors came from referrals, fewer direct. Over time, how to search dropped in popularity, but text-based search was the most used, not the semantic or tag search. Depth of visit remained the same, then dropped off after last open day. Loyalty peaks at second open day (? because people from previous open days came back?).
Text-based search prevails, even though they spent lots of time developing new ones. More users come to the portal over time, but they spend less time, view fewer pages, and more come back but more bounce off it.
Many variables – language, seasonality, and others. Many open issues too.
Hosting a summer school on TEL – www.jtelsummerschool.eu.
John: Why were the Hungarians so bad?
Funny thing, the Hungarians got most visits through the Hungarian version of Google – they Googled for the portal then visited. Don’t know why. Organised open days in a hurry, because they had to, followed the process very loosely. That’s our perception of why the statistics were not right. Maybe because resources are mainly images, so don’t spend time looking at them, just save them.
Dragan: Social features?
We have ratings. Users can create profile, some social features. Not groups, but some basic stuff. Will look in to those statistics now.
George Siemens – Final wrap-up
When I was 17 years old, had little fear, was in a big van with a sliding door. Going down the street at 50-60 km/h. Saw a friend on the side, immediately prompted to chat, thought could just jump out. Did a few cartwheels, hit a tree, realised had misjudged the pace at which I was moving. I think there’s a real sense in which the field around analytics is moving at a more rapid rate than anyone in HE recognises. Most fascinating – the way in which activity is happening in different fields, with opportunities for connections. Each new data element added to a network amplifies what already exists.
Thanks sponsors, keynotes, presenters.
One question to grapple with is what’s next – for analytics, as a conference. Steering Committee will talk about this. Will be a follow-up email with future directions and dates. Hope to have a wider gap between call for papers and deadline. Also groups set up for LAK11 open course.
Also CFP for Ed Tech & Soc special issue on learning analytics.
Will follow-up with post-conference links, videos.
–
This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it