Liveblog notes from the morning session on Tuesday 1 March, the second full day of the Learning Analytics and Knowledge ’11 (LAK11) conference in Banff, Canada.
(Previously: The Learning Analytics Cycle, liveblog notes from Pre-Conference Workshop morning and afternoon, and from Monday morning and afternoon.)
It’s fearsomely cold here – about -30C this morning – but stunningly beautiful. This is a quick snap out of the window of the restaurant where we had breakfast. There are views of the Rockies all around. If I’d brought a better camera – and was a better photographer – you’d get some staggering pictures.
Keynote: Erik and HannaH Duval – Attention Please
Erik is assisting his daughter, HannaH, with the presentation.
Learning analytics – don’t know what it means, but it sounds very interesting. Very grateful to be invited.
Slides will be online on Slideshare.
HannaH is circulating with bits of paper for feedback – mechanical Twitter backchannel. She will come round and collect them back in.
Erik’s team do work on TEL, and music information systems, and help people with their research. They work at K.U. Leuven.
On attention – trying to figure out what people pay attention to when they learn. Video clip of two teachers with an uninterested student (two children playing with a laptop and a dog). Teachers do intervention with technology, trying to get learner interested (kids setting up a video) – if she’s not paying attention, she’s not going to learn. They point the dog at the screen, and the technology takes over from there. The dog is watching and tipping her head from side to side.
Can we use what we capture to get better at what we do?
Many people use human-readable attention streams. Is teaching a course on HCI, students are blogging and tweeting, reading Twitter stream from the conference yesterday. This tells us what people are interested in. Uses Yammer with his team, gives him the pulse of what his team is doing. It’s human, it’s explicit – but it doesn’t scale very well. George’s LAK11 MOOC – Erik participating through the human-readable attention stream, but was overwhelming and hard to cope with. Maybe tools can help with this.
Theme is to use attention metadata to filter, suggest, provide awareness to you, and support social links.
We are not trying to be very directive on what we get out of learning analytics. Not ready to use them to say – you should do this. But make it available to help people steer their own efforts. Give them a dashboard, but they drive.
Examples from elsewhere – Amazon, last.fm. Also – Wakoopa. A plugin that tracks everything you do. Based on that, gives you a dashboard of what you’ve been doing over the last so many days, hours – applications, destinations, and so on. Kind of interesting. Because it tracks your software and usage, compares your use with others, and suggests new software that other people who act similarly are using. So it’s filtering and suggesting. It also gives you social interactions – here are some people who are similar to you, could talk to them about their experiences with software. Can follow other people, find out when your friends start using new software – that’s the most important information source to know of new software. It’s subtle, keeping each other informed about new tools. Another example – TripIt – follows your travel plans. Not by doing a lot of explicit filling in of forms, just email it the confirmation emails from the airline, and it gives you an overview and tells you who is going to be close, people coming nearby.
Could translate this to a learning example. E.g. going to Tuscany, want to learn Italian. Is there an Italian speaker here? (A hand goes up) This tool could point that out. Gráinne, you have learned Spanish and tweet about it. These tools transfer to learning.
Another source of inspiration is from the world of physical exercise, jogging and so on: capturing data automatically, using it to help get better at what you want to do. Idea well developed: Nike+, RunKeeper – you just run with the device (e.g. listening to music), captures where you ran, shares results with people in your social circle. Gives you nice overviews of whether you’re making progress, towards a goal. It’s quite a bit ahead of what we do with learning. If you start learning, it’s all fun, but you get a bit bored after a year, might not want to go running – and these tools will point that out, link you to others at a similar stage, encourage each other. Basic version free, pay more for more. Can set yourself goals, gives you a programme of what you need to do. Tracks how well you’re following the schedule, gives feedback. We could do similar things for learning – especially language learning.
Another tool – RescueTime. Gives you detailed overviews of what you are spending your time on. Gives you an indicator of how efficiently you’re working. There’s something in that idea worth pursuing. Detailed overviews. Can set yourself goals. I’m the only one here with a problem managing his time, I’m sure. Can set goals – e.g. no more than 2h/day email, no more than 1h/day Twitter – will tell you if you go over. You set the goal, not someone else. It doesn’t block it, just tells you that you said you only wanted to do so long and you’re running over, you decide. Gives you a dashboard, you do the steering and braking.
Google tracks his searching – could see he’s been searching for attention metadata since 2005! An awareness environment: confronting to go back to your earlier activities and see how you’ve evolved. Also, filtering searches through social circle – very few people do this. If search for attention, get some sites – then filter through social circle gets very different results.
Attention in learning – awareness tools
New project – ROLE – Responsive Open Learning Environments. Tracking everything that goes on in PLEs, keep track of it.
Store traces of activity in a database, have tools to visualise what goes on in the environment. Shows detailed dashboard data about how students interact with different widgets. Not just which ones, but what they do. From that, we figure out what works, what doesn’t, and why. Another group working similarly – Google Zeitgeist.
This isn’t a tool that helps you as a learner, or much as a teacher, to structure your activities. But we do build those tools.
One example from a PhD student – a graph showing lines for each student, indicating time spent on the course over time. Some students start early, some of those top off and are done. Can see periods when things are very intense. Many ways of displaying the data. As a student, your line shows up highlighted in a colour, can compare yourself to everyone else.
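(Aside: a toy sketch of what that kind of chart could look like – the student names, numbers and week granularity below are entirely made up, not the PhD student’s data.)

```python
# Illustrative sketch only: plot cumulative time-on-course per student,
# highlighting the current learner's line against everyone else's.
import matplotlib.pyplot as plt

# Hypothetical data: week number -> cumulative hours, per student.
weeks = list(range(1, 11))
cohort = {
    "student_a": [1, 3, 6, 10, 15, 18, 20, 21, 21, 21],  # early starter, tops off
    "student_b": [0, 0, 1, 2, 5, 9, 14, 20, 27, 35],     # late but intense
    "student_c": [1, 2, 4, 6, 8, 10, 12, 14, 16, 18],    # steady
}
me = "student_b"  # the learner viewing the dashboard

for name, hours in cohort.items():
    if name == me:
        plt.plot(weeks, hours, color="red", linewidth=2.5, label="you")
    else:
        plt.plot(weeks, hours, color="grey", alpha=0.5)

plt.xlabel("Week")
plt.ylabel("Cumulative hours on course")
plt.legend()
plt.show()
```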
There is a long way to go to get these tools right from a user perspective. Spend half to a third of effort on evaluation. Presents results as a Wordle – was usable, meaningful, straightforward – but also overwhelming. So build infrastructure, evaluation with students and teachers, different version, evaluate again.
A different way of doing it: look at what people post on their blogs – students’ blogs on their course. Can see students, and what they need to do on their course, show a dark square to illustrate complete, a light square in progress, an empty one not started – based on their blogs. Don’t tell students ‘you must do this task’, but show what they have or haven’t done and feed it back.
Recommenders
They have one that searches learning repositories – YouTube, Wikipedia, others – tracks what you like, don’t like, matches with others, and makes suggestions based on your previous interactions. Bit more geared towards learning context than what search engines are doing.
Evaluations to figure out how this benefits teachers or learners. Behind the scenes is interesting from a Computer Science perspective. It’s a federated search service, but the interaction is stored in a recommendation service that stores attention metadata in a database. User evaluation – attractive, simple, time-saving; confusing, inconsistent. Less predictable can be good, but can be a bit bad since you may feel lost. Also some performance issues, now solved.
Developing tools, testing with real students – is a big thing. We need to evaluate how things work in practice.
Dataset-driven research
A meta-reason why it’s interesting – are collecting serious datasets about how people interact with learning material. Help us to become serious, quantifiable about how we research learning. I think it would make a lot of sense to find recurrent patterns and find out what they mean, based on data, not on mere opinions.
Talk later today on this – Katrien Verbert.
Also a group on TELeurope.eu through STELLAR network of excellence – people working on dataset-driven research. Join the group, it’s lightweight and easy to join.
(Hannah has gone to pick up some feedback notes.)
Big problem
What do we track? What I’d like to do is wire up my students, measure what goes on in their brains and everything that goes on in their lives. I don’t. Here’s an idea for my next project proposal …
Can measure all sorts of things, if we only measure what’s easy, may miss what’s important, but if we don’t measure it, we won’t know if it’s important. We’re still trying things out.
We have a very open scheme for attention metadata. If want to capture what someone’s doing in a learning system, it fits in our structure.
Xavier talked about Learning Objects yesterday – we reuse a lot of that work, collaborate with ESPOL.
Much bigger movement of people measuring lots of things – from mundane to philosophical – the Quantified Self movement. First meetup in Brussels this week. People walking around with gadgets measuring all sorts of things. Could have a whole day on all these things. We now know that when someone doesn’t feel well physically, we measure their temperature – who came up with that idea that it’s useful? Maybe my toes grow longer when I’m feeling bad, or my hair curls – but someone selected temperature. But it’s a small range, 37C to 40C, less than 10%. But for learning, what’s the thing to measure?
Dangers – there are obvious dangers. Silly to ignore them. Big Brother, 1984. There are parallels. I sometimes feel uncomfortable about what we do. The idea that the University or the training department would own your attention metadata revolts me.
AttentionTrust.org – perhaps defunct – sets out four important principles if you want to track what I do:
- Property – they belong to me
- Mobility – let me move it on to another service or person
- Economy – if I give you my history, you can serve ads, make money – what do I get in return?
- Transparency – a big one. I spend 15 minutes at start of every class explaining what I’m going to do, may feel uncomfortable, explain reasons why. Give option – cannot opt out (though course is optional), but can use an alias, and can say don’t want to figure it out later. I know who you are, but I solemnly promise not to tell anyone.
In a bigger perspective – more people who track everything they do, building up a permanent memory of their life. Gordon Bell was a pioneer (book Total Recall) – every conversation, every place, every email. I haven’t deleted an email in six years.
Book coming by Jeff Jarvis on benefits of doing everything in public. ‘Public Parts’ (previously did ‘What Would Google Do?’)
Yes there are issues, but also great benefits. Need to follow these good principles. Just ignoring the issues is not a smart approach. Having more of this transparency – like e.g. Wikileaks – is how we can make this work for the benefit of people, rather than oppression.
We can learn from our students – tracking what they do, figuring out what works for them. We want to use attention in my team to filter stuff, suggest new stuff, make you aware of what you’re doing to steer your own process, to help you lead a socially-connected life with people who are relevant to you that you may not know about. Very different to the 20-25y ago idea of intelligent tutoring systems. Could be a bad idea or not possible.
Question: Are you researching what we should be measuring?
Apart from just superficial log in, log out. I’m all for openness – we are not doing it in a methodologically appropriate way. It’s like walking around, feel something, say just measure this. We’re not really researching that yet because I don’t know how. Would love to have a conversation about how to do that.
Someone: Motivation, self-efficacy for students – have you tracked that?
Have a whole different keynote on that topic! Yes, very interested. Strategy with my own students is open learning, people outside class see what they do, can be very motivating. Have some evidence that works. In the tracking, not sure I know how to track the motivation. Maybe you could do stuff with language technologies, but not sure you can really track it. Could ask people to self-report, but that may interfere with the experience. A big worry for me – a lack of confidence between students and professors: students will report what they think will get them the highest grade, which may not be reality.
Someone: Sounds like gaming. Motivation of how much you’re progressing, or your classmates have done this already. Is your work pulling in work from gamification?
Games tracking, move up to different levels, motivation, how they inter-related. Yes, we do some things, but not with the richness they do it in games. Very good suggestion.
Someone: The whole game layer is very interesting right now.
You can earn badges, comes out of that world. Some of these games are way more ahead of that idea, doing very sophisticated stuff. There’s another research project!
Simon: Recommendation engines. Take generic engines and apply to learning. Second step, algorithms change because learning is different. Where are we?
Should ask Katrien. My version is twofold: there’s a way in which learning is different, not to do with the activity, but with what we can track. You get sparser interactions, fewer people interact with each item. The brute-force approach of Amazon or Netflix becomes problematic. It’s nothing to do with the notion of learning, just an activity where there’s less cohesiveness in the group doing the activity. Then for technical reasons, need to work on the algorithm. But the activity of learning itself is different – that’s kind of true, but it’s true for other things. E.g. music, can look at emotional state and factor it in to music recommendations. We don’t really understand how music works, but can figure out the patterns. I can select music that makes you cry. I don’t know why it makes you cry, but I can select it. With learning it’s different. There’s a lot about the process I don’t understand. But we can identify patterns that help you learn.
Katja Niemann, Hans-Christian Schmitz, Maren Scheffel and Martin Wolpers – Usage contexts for object similarity
Katja, at Fraunhofer Institute for Applied Information Technology.
Very interested in recommendation, it’s different in learning from other areas. Not enough that user likes an object, has to fit the learning goal, and competence level, preferred learning style, and so on. A lot on semantic metadata – but often we don’t have it. Auto-generation works for text, but not pictures so much. Also social metadata, but is sparse, ambiguous, often faulty.
So try to find semantic similarities simply from usage – Contextualised Attention Metadata instances. Build similarity measures. Basic unit can be a chat message, song listened to. Use methods from linguistics to see if we can find similar objects.
First – paradigmatic relations – two words in similar contexts are assumed to be semantically similar. E.g. beer, wine. They don’t occur together, but occur in the same context. Can we do the same for learning objects? Build a usage context profile – what happened before, and after, each document. Compare similarity by comparing the usage context.
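(Aside: a minimal sketch of how such a usage-context comparison could work – the event streams and the cosine measure below are my own illustration, not the authors’ implementation.)

```python
# Illustrative sketch: build a usage-context profile for each object from
# what was accessed immediately before and after it in usage streams,
# then compare objects by the cosine similarity of those profiles.
from collections import Counter
from math import sqrt

# Hypothetical usage streams: each list is one user's sequence of object ids.
streams = [
    ["intro", "beer", "snacks", "quiz"],
    ["intro", "wine", "snacks", "exam"],
    ["notes", "beer", "snacks"],
]

def context_profile(obj, streams):
    """Count the objects seen just before and just after obj."""
    profile = Counter()
    for s in streams:
        for i, o in enumerate(s):
            if o == obj:
                if i > 0:
                    profile[s[i - 1]] += 1
                if i < len(s) - 1:
                    profile[s[i + 1]] += 1
    return profile

def cosine(p, q):
    dot = sum(p[k] * q[k] for k in p)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# 'beer' and 'wine' never co-occur, but share a context, so they score highly.
print(cosine(context_profile("beer", streams), context_profile("wine", streams)))
```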
MACE project (EU project with resources about architecture with LOM data) as a testbed. For each object, have LOM data. For each object pair, calculate the correlation between metadata similarity and usage-context similarity, then a manual comparison. A lot of objects are strongly related, even if not shown in the metadata – e.g. pictures of buildings by the same architect, where the information wasn’t in the metadata.
Second – higher-order co-occurrences. Also from linguistics. Look not just at what co-occurs directly, but at what co-occurs with the things that co-occur with it, and so on. Applied in linguistics with higher-order co-occurrences: after some iterations, the clusters stabilise. So ‘IBM’ at first order gives you computer manufacturer, global, etc. But by the 10th order, you have other computer manufacturers, e.g. Compaq, Microsoft, NEC, etc.
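(Aside: a rough sketch of the higher-order idea on a toy co-occurrence matrix – my own illustration, not the authors’ code. At each order you link items that share co-occurrents of the previous order, so items that never appear together can end up grouped.)

```python
# Illustrative sketch: higher-order co-occurrence.
# First-order: items that appear together in the same context.
# Order n+1: items that share order-n co-occurrents.
import numpy as np

items = ["IBM", "Compaq", "NEC", "computer", "global", "manufacturer"]
# Hypothetical first-order co-occurrence counts (symmetric matrix).
C1 = np.array([
    # IBM Cpq NEC cmp glb mnf
    [0,   0,  0,  3,  2,  2],   # IBM
    [0,   0,  0,  2,  0,  1],   # Compaq
    [0,   0,  0,  1,  1,  0],   # NEC
    [3,   2,  1,  0,  1,  2],   # computer
    [2,   0,  1,  1,  0,  0],   # global
    [2,   1,  0,  2,  0,  0],   # manufacturer
])

def next_order(C):
    """Two items co-occur at the next order if they share co-occurrents."""
    binary = (C > 0).astype(int)
    higher = binary @ binary
    np.fill_diagonal(higher, 0)
    return higher

C = C1
for _ in range(3):   # iterate a few orders; in practice until clusters stabilise
    C = next_order(C)

# IBM and Compaq never co-occur directly, but share 'computer'/'manufacturer',
# so they are strongly linked at higher orders.
print(C[items.index("IBM"), items.index("Compaq")])
```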
Can this work for learning objects?
Clustered about 4000 data objects. Found about 184 clusters. 70% of objects are in no more than 3, and none in more than 9. Statistical tests – the cluster means differ significantly from the overall mean.
Outlook – further examine the semantic relatedness. Application and evaluation to see if it is really helpful for the learner. Comparison/combination with other methods. Apply further methods.
John: Fascinating. Especially looking at the order in which things happen. Do you have any clues whether it changes according to the kind of object. Might be different in architecture and music.
Sure. We can use it in learning, because if student goes to MACE portal, they have a task, so it’s all related.
Paulo Blikstein – Students’ behavior in open-ended programming tasks
Couldn’t attend. Was planning to do a live presentation, but is pre-recorded video because of connection problems.
Looking at learning analytics to look at students’ behaviour in open-ended programming tasks.
Main motivation is that we’re pushing schools to teach C21st skills, but they don’t have means to teach or assess those. Hard to move away from the high-stakes testing; if don’t change the means, can’t expect change at the end.
Stanford Learning Analytics project started in 2009, student working on it, focus on project-based learning.
Open ended, f2f environments – schools, classrooms, after-school programs; especially when open-ended activities. Multimodal data – voice, clicks, gesture. Constructionist learning (Papert), learners make a shareable artifact, usually using a technological tool – movie, song, digital artifact.
Much work in related areas – text analytics, automatic assessment of reading proficiency, educational data mining. Review in 2009 by Baker and Yacef – focus on scripted/semi-scripted environments, cognitive tutors.
Problems with qualitative approaches – no persistent trace, can only see end product not construction process. Crucial learning moments get lost if no one is observing them. Easy to lose them, can last only seconds. Finally, it’s hard to scale for large groups of people.
Used NetLogo programming environment. 9 students, engineering, 3-week assignment. Logging software on laptops, kept a snapshot every time they saved or compiled. 158 log files, 1.5GB. Filtered logs with XQuery and regex, analysed with Mathematica.
Almost 9m data points! Just 4% were snapshots of code, 38,000 points of code. Plot of code size, time between compilations, and compile errors. Code size grows erratically over time, nearly monotonic; some sudden jumps, some steady. Show successful compilations as green dots, unsuccessful as red dots.
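(Aside: a toy sketch of that kind of plot – the log format and numbers below are invented, and the study itself used XQuery, regex and Mathematica rather than Python.)

```python
# Illustrative sketch: plot code size over time, marking successful
# compilations green and unsuccessful ones red.
import matplotlib.pyplot as plt

# Hypothetical parsed log: (minutes since start, code size in chars, compiled ok?)
events = [
    (0, 1200, True),   # pasted a library model
    (3, 300, True),    # deleted most of it as a starting point
    (10, 450, False),
    (15, 520, False),
    (22, 540, True),
    (40, 1100, True),  # duplicated first procedure as basis for second
    (55, 1150, True),  # beautification: renaming, no substantive change
]

times = [t for t, _, _ in events]
sizes = [s for _, s, _ in events]
colors = ["green" if ok else "red" for _, _, ok in events]

plt.plot(times, sizes, color="lightgrey", zorder=1)
plt.scatter(times, sizes, c=colors, zorder=2)
plt.xlabel("Minutes since start")
plt.ylabel("Code size (characters)")
plt.show()
```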
Can follow one student’s progress on a chart: to create the model, looked at the library, pasted it in, then deleted 75% of it and used that as their starting point, happened very quickly. Then a slow growth in code size over about half an hour, with many unsuccessful compilations. But then it stops growing, big plateau, for about 10-20 minutes, very frequent compilations. Were trying to do a lot of things, learning how to program – was first time they tried it. Then there’s a big jump, corresponds to them duplicating their first procedure as basis for second procedure – can also see opened many other samples for inspiration. Then some time making new procedure, frequency of compilation decreases, long plateau again, then another jump. Looking at logs, she went to the sample models, copied a piece of a sample in to her own code. Finally after a little plateau, a final stage where she’s beautifying the code, changing the variable names, but no substantive variations. You can identify a lot of moments.
Can extract patterns – stripping down examples, long plateaus of no activity (browsing for examples); student jumps when they paste in, final beautification phase. If in doubt about meaning, can go back to the logs.
Other students’ curves are different. One student shows lots of jumps where she uses some code and then gives up, then finds something she’s interested in, uses that as the base and you see the ladder pattern from before. Next student, he looks at other code, but never uses it as a base for his own code. You see spikes, but then no jump – more expert programmers look at external code (library) but they don’t copy and paste, just use it for inspiration. They open a model, then go back to their own, don’t copy and paste. Is confirmed in the logs. Can also see students who are in between – can see both behaviours.
Prototypical behaviours identified, group in to three types of programmers.
One behaviour is very similar – the moving average of unsuccessful compilations over time. Four students, the error rate curve follows an inverse parabola – peaks half-way through, then drops down close to zero. The compilation attempts are not linear – concentrate in the first half of the activity.
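(Aside: a minimal sketch of that moving-average measure – the window size and compilation sequence below are invented.)

```python
# Illustrative sketch: moving average of the unsuccessful-compilation rate.
def moving_error_rate(compile_ok, window=5):
    """compile_ok: list of booleans, one per compilation attempt, in order."""
    rates = []
    for i in range(len(compile_ok)):
        chunk = compile_ok[max(0, i - window + 1): i + 1]
        rates.append(sum(1 for ok in chunk if not ok) / len(chunk))
    return rates

# Hypothetical session: errors peak mid-way, then drop towards zero.
attempts = [True, True, False, False, False, False, True, False, True, True, True, True]
print(moving_error_rate(attempts))
```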
This is an initial step in defining metrics. Could be formative assessment, pattern-finding lenses in to students’ activity. Two implications for instructional designers: moments of greater difficulty should be identified; design to cater for diverse coding styles and profiles. Not just programming, but building robots, creating artistic objects, other technological objects. Will add other data sources – interviews, sensors, and so on. And try different tasks.
Questions
Simon: You opened with comment about tracking C21st skills. Didn’t pick up how that’s addressed in this. Can you say more?
One of the issues about C21st skills, problem-solving, things of that sort, we need to give students more opportunities to work in projects, could be a bigger program, or robotics. Engagement in those activities, more open-ended, students have a lot of space to exercise creativity. We’re trying to come up with more objective way to track students’ work, create formative and summative ways to assess it. Most of assessment is through either an artifact, after they are done with it. These activities for C21st skills are hard to assess, hard to scale, this may make it easier to manage.
Anna de Liddo, Simon Buckingham Shum, Ivana Quinto, Michelle Bachler, Lorella Cannavaciuolo – Discourse-centric learning analytics
Anna presenting.
Work developed with Simon, with Italian university.
Why discourse? Sociocultural theory – Mercer – key indicator of meaningful learning is the quality of contribution to discourse. Discourse and argumentation are the tools through which people compare thinking, explore, shape agreement, and identify or solve disagreements. Space where learning happens.
What kind of discourse? Online. Different from f2f. Platform is key to understand. Most established tools show chronologically, rather than thematically.
Research network investigating tools to go beyond threaded forum, designing online discourse environments that are more structured. Cohere is one of these technologies.
Demo. First thing is to create a question, share it with a group. Similar interface to a blog. They have to pick the category of contribution they’re making – a question, an answer, sharing data, or what. Then add description, tags, share it with group of learners. Post is saved in their personal notebook space, and shared on the group’s page. Can capture their idea also by annotating websites – have a sidebar, can see comments from other learners. Can highlight text in the webpage, add your observation, question, whatever; pick a category for your contribution, tag it, share with the group. When you save it, it appears in the sidebar so if another learner visits the same page, they can see previous learner’s comments. Clicking on comments jumps to the part of the page to which the comment relates. Can also build connections between posts. Three ways to explore: pick your own posts, those of others, or search. Pick a post, then you associate a semantic with the connection – why are you connecting this with something else. So e.g. this resource is consistent with another idea. Can create groups.
When you’ve created this network of posts, can visualise it in different ways. Follow not the timeline of the discussion, but the meaning, the structure given by the learners. Can see e.g. different networks which may indicate different topics. Can filter data, so see e.g. things consistent with this point. Easier to make sense.
What’s different compared to other discussions?
Asked them to classify what contribution they’re making – is it a question, an answer. And say how contributions are connected.
Can see which are the key posts, how other posts relate.
Can see post types – so for each learner, how they contributed. Was it mainly ideas, opinion, asking questions, whatever. Can interpret it differently depending on the task. Can also see link types for each learner – what rhetorical moves they make when they connect items. Can also see highlighted positive, negative and neutral link types through colour highlighting.
Can also see where someone sits in the discourse. Whether someone is an information broker.
Another statistic: discourse network statistics – a social-semantic network of discourse elements. It’s social because it includes authors, but it also has conceptual, semantic content because connections are all labelled. Analyse it as two superimposed networks – a concept network, and a social network. The concept network treats the nodes as the posts and the edges as the connections. Normal network analysis gives you hub topics and posts. In both case studies, the hub posts were questions. Use visualisation tools like NodeXL. Social network – can see three learners only interacting with each other; and another larger group. Classic SNA, outdegree and indegree, mean specific things in Cohere – outdegree means you created a lot of connections pointing to other people. Indegree measures something even more relevant – how many connections have been made to posts you authored. Example of a learner who’s less active, but attracted a lot of attention.
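(Aside: a small sketch of the kind of per-author in/out-degree measure described, using networkx rather than NodeXL – the posts, authors and link labels are invented.)

```python
# Illustrative sketch: a labelled directed network of posts, with in/out-degree
# aggregated per author. Out-degree ~ connections you created;
# in-degree ~ connections others made to posts you authored.
import networkx as nx

G = nx.DiGraph()
# Hypothetical posts: node -> author
posts = {"q1": "ana", "a1": "ben", "a2": "cara", "d1": "ben"}
for post, author in posts.items():
    G.add_node(post, author=author)

# Labelled connections: (from, to, semantic of the link)
G.add_edge("a1", "q1", label="answers")
G.add_edge("a2", "q1", label="challenges")
G.add_edge("d1", "a2", label="is consistent with")

indegree = {}
outdegree = {}
for post, author in posts.items():
    indegree[author] = indegree.get(author, 0) + G.in_degree(post)
    outdegree[author] = outdegree.get(author, 0) + G.out_degree(post)

# 'ana' wrote one post and no links, but attracts the most attention.
print("in:", indegree)
print("out:", outdegree)
```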
Focus on learner’s discourse. Can see conceptual and social dynamics.
Future work – three directions. First, embed within Cohere UI so learner can use these stats while they’re working, and tutor can monitor learner performance and intervene timelily. (That’s a word I just made up.) Second, work to investigate computational linguistics tools to detect rhetorical gestures. Third, set software agents to monitor the discourse networks.
Someone: Several of us did this in the 90s. Ran in to problem of reliability of learner’s categorisations – most common choice was the one at the top of the list. Common problem. Have you noticed that? Has something changed?
Self-evaluation from learners is something we’re interested in. It’s a difficult task. What you say is right, we’ve noticed learners often don’t classify correctly. For example, one of the hub posts was a question, but the author had classified it as an answer. In this sense, the redundancy and the semantic connection (you have to say it twice) is the solution. Even though the picture wasn’t a question mark, the other learners made sense of it.
Someone: Work on why groupwork fails. Learner needs to see benefit from classification.
Terry Anderson: Using these speech-act discourse tools, feeding back to students, is really important for self-generation of knowledge and meta-learning.
Simon: Dan, building on work you did, like others. Issue with students knowing the rhetorical role. That’s important in a learning context, these visual representations make that tangible. In a learning context, can show good and bad examples. If these analytics started to count, that would change things.
Rebecca Ferguson and Simon Buckingham Shum – Exploratory dialogue within synchronous text chat
Simon presenting.
Ask users to do more work than simply hitting ‘reply’ – that’s one strategy. The other strategy, what if highly unstructured, but the machines make sense of it.
Part of a bigger project at the OU called SocialLearn, does a whole bunch of things.
You have hours of material to catch up on, maybe didn’t get through it. Or maybe you’re a tutor, lots of stuff done by your students online, time is precious. Where do you put your attention? Plenty of replays, stuff in forums. Lots of stuff in the text chat. Text chat the focus here, is the most challenging – often not well-formed contributions. Challenging for computational linguistics tools.
Example source, an OU online conference over two days. Neil Mercer framework, has three kinds of talk. Disputational talk – disagreeing, individualised decision-making. Cumulative talk – uncritical building, confirming, elaborating. Exploratory talk – arguably the ideal type we try to scaffold in educational discourse. Making knowledge more publicly accountable, making your thinking visible. Read an essay, blog, paper, can recognise a thought-through piece of work because their argument is visible. More sedate than in Twitter or journalism. Framework from studying children in classrooms. Rebecca’s thesis shows can analyse online discourse using this lens.
Key phrases illustrate the sorts of talk. Mercer analysed large chunks, using human qualitative analysts. To what extent can this be done by a machine?
Have categories – challenge, critique, etc – and indicator phrases that might correspond to them – ‘but if’, ‘have to respond’. Evaluation – ‘good example’ – matches even if you said ‘that’s nothing like a good example’. Had 94 indicators – some promising ones are quite misleading. Were looking for dialogue about the conference topic. Question mark is too blunt – e.g. what are we going to do for virtual coffee? 24,500 word corpus.
Text is not well structured, not full sentences, noise effects. What we’re doing is very simple – compiling list of indicators and cue phrases.
Data from 60 minute section, 9 participants. Look at number of posts, word length, and percentage of posts with exploratory talk markers. This analysis can differentiate participants – can pick out people actively engaged in knowledge-building discussion.
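(Aside: a very simple sketch of that kind of marker count – the indicator list and chat lines below are made up; the real study used 94 indicators derived from the conference corpus.)

```python
# Illustrative sketch: flag chat posts containing exploratory-talk indicator
# phrases, then report the percentage of flagged posts per participant.
from collections import defaultdict

indicators = ["but if", "good example", "have to respond", "my view", "i agree because"]

# Hypothetical chat log: (participant, post text)
chat = [
    ("pat", "that's a good example of what Mercer means"),
    ("pat", "lol"),
    ("sam", "what are we doing for virtual coffee?"),
    ("sam", "but if the indicators are too blunt, we have to respond differently"),
]

posts = defaultdict(int)
flagged = defaultdict(int)
for who, text in chat:
    posts[who] += 1
    if any(phrase in text.lower() for phrase in indicators):
        flagged[who] += 1

for who in posts:
    print(who, f"{100 * flagged[who] / posts[who]:.0f}% posts with exploratory markers")
```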
Can also discriminate between sessions – source data is all online, can replay. Analyse by session – by time – posts per minute, words, exploratory words per minute, etc. Can see that are different. Keynote prompted 5.8 posts per minute; 67.5 words per minute, but compared to other sessions had much higher exploratory talk markers.
This is promising for more nuanced analytics than typical forum analytics. More nuanced than the conference timetable; important to think about context. Where a number of speakers were followed by an advertised discussion, there was much more discussion then. Unscheduled chat at the start is low, but at the end is much higher because people are talking about the ideas.
Future work – check reliability and validity. Differentiate exploratory talk about content, tools, process, people. Investigate relationship between chat and audio/video, and automating the process.
Martin: Shows an advantage of being open.
When you’re running open public conferences, have a public dataset. You can look at the source material now.
Someone: What was the size of the indicator tables? Hundreds, thousands?
We identified 94 indicators from going through the conference.
Someone: Different subsets to see if some do better in differentiating sessions?
Did we try different combinations of indicators, explore discriminatory power. No, not yet, but that’s part of refining the technique.
Someone: What do you expect to do with the analysis? Lot of factors that affect learning, participation in a chat environment.
Couldn’t make a strong claim that just because it looks like exploratory dialogue that people have learned. But that’s a competence we want people to display, that’s where they might learn most. Might learn more from a fragment of exchange with exploratory characteristics than one that doesn’t. Strong foundation from Mercer. Using analytics to detect where that’s going on to a greater or lesser degree. Theoretical claim that more learning opportunities – from contributors and readers. But no empirical evidence that people learned more.
Someone: Key concept, exploratory dialogue. Michael Baker’s stuff, other ideas about types of dialogue, different indicators.
Not familiar with that. Lot of work in CSCL around what dialogue makes sense. Compare and contrast indicators driven by different theoretical lenses.
Someone: Phases of learning too.
–
This work by Doug Clow is copyright but licensed under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.