LAK13: Monday afternoon

Liveblog notes from Monday afternoon session at LAK13 – second part of the 1st International Workshop on Discourse-Centric Learning Analytics.

BREAKFAST
(cc) malias on Flickr. NB This is not a totally accurate representation of the food at the conference.

Duygu Simsek et al: XIP Dashboard: Visual Analytics from Automated Rhetorical Parsing of Scientific Metadiscourse

(paper PDF)

Duygu Simsek, Simon Buckingham Shum, Ágnes Sándor, Anna De Liddo, Rebecca Ferguson (The Open University, UK and Xerox Research Centre Europe, FR)

Simon Buckingham Shum presents because UK immigration haven’t given Duygu’s passport back yet.

Technology can be a source of innovation; educators look at it and consider – can we use that? This work is slightly more technology-driven. We've spotted some interesting language technology, we think we should be able to bring it into the realm of learning, but it's early days.

Focus on a particular form of writing, trying to pick up signals people leave when they learn to write. Learning to make their thinking visible in a way the reader can follow. A research paper or essay isn’t a list of assertions, it has a shape, a narrative to it. It’s telling a scholarly story. Parsers that can pick up metadiscourse.

Metadiscourse

Signals important moves in educated/scholarly narrative. When it works well, that’s when your papers are accepted by reviewers and quoted by others. We teach it from school upwards to PhD and beyond.

Xerox Incremental Parser (XIP)

The parser identifies rhetorical functions of metadiscourse – background knowledge, summarizing, generalising, novelty, significance, surprise, open question, contrasting ideas.

There’s a model that breaks down the statements in to components. Can parse long documents in a second or two. Initial work looking at how a machine annotation tool can work with human annotation. Human analyst highlights significant passages with Word marginal notes; XIP highlights similar portions of text. It’s not always the case. Reading critically has different analysts reaching different conclusions – so hard to have a ‘gold standard’ human answer. Look at overlap. Humans read between the lines – e.g. that’s camouflage for the fact they didn’t do it properly; but the computers don’t. But if the machine can highlight interesting passages in a few seconds – to a human reader – that is potentially very interesting.

[Note to self: might be fun to use this against drafts of your own papers before submission!]

XIP’s output is fine for researchers (or machines), but it’s not learner/educator friendly. About six months ago, the output was a whole bunch of text files. Parsing for rhetorical moves is interesting, but have to make it more accessible. So took that raw output, consider how to present to ordinary humans.

First example: annotations on the full text using the OU's Cohere social sensemaking app (Firefox add-in). Second: XIP annotations visualised in Cohere as a network around the document. Or as a concept cloud around it – the machine generates it, but maybe a human spots connections and merges/retags/summarises.

Visual analytics v0.1 XIP Dashboard

But wanted a step before, a simple dashboard for the parser output. Aim to draw attention to patterns of potential significance. Not just domain concepts, but when used in interesting metadiscursive contexts. Also trends in that over time, with individuals, or within and between research communities over time. Have looked at the dataset: LAK’11, LAK’12, EDM conferences, special issue ET&S. Longer term goal to use as formative feedback on your own writing. [Aha!]

Worked up paper prototypes to elicit initial reactions, led to some changes. Now using Google Chart Gadgets to make outputs of LAK/EDM corpus visible. See frequencies and trends within and between the communities. Summary and Contrast statements are high and increasing. Density measure. Heatmap of matrix of concepts against metadiscursive markers.
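[Another aside: roughly what that concept × metadiscourse heatmap might look like in matplotlib. Only the rhetorical categories come from the talk; the concepts and counts below are invented for illustration:]

```python
# Invented data for illustration: a concept x metadiscourse-marker count matrix
# rendered as a heatmap with matplotlib. Only the rhetorical categories come
# from the talk; concepts and counts are made up.
import matplotlib.pyplot as plt
import numpy as np

markers = ["Background", "Summary", "Contrast", "Novelty", "Significance"]
concepts = ["learning analytics", "discourse", "assessment"]
counts = np.array([
    [12, 7, 3, 5, 4],   # learning analytics
    [ 6, 9, 8, 2, 3],   # discourse
    [ 4, 5, 6, 1, 2],   # assessment
])

fig, ax = plt.subplots()
im = ax.imshow(counts, cmap="YlOrRd")
ax.set_xticks(range(len(markers)))
ax.set_xticklabels(markers, rotation=45, ha="right")
ax.set_yticks(range(len(concepts)))
ax.set_yticklabels(concepts)
fig.colorbar(im, ax=ax, label="sentence count")
fig.tight_layout()
plt.show()
```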

Future

User scenarios – student, educator, researcher. Need to do some user evaluation.

Many open issues around this technology – and around any such technology. Is the signal:noise ratio worth it? Does the time it saves allow you to go more deeply, or does the fact that a parser is highlighting things for you lead to shallower reading? Do you get new insights, or just faster insights? Better writing, or just gaming the system?

It’s an interesting technology, validated in many contexts – though few that are directly educational. Intriguing, want to explore further. An important feature of educated writing is knowing how to signal substantive rhetorical moves. Natural Language Processing can detect this, and we can now generate rudimentary visual analytics. A promising language technology now has visual analytics we can deploy with stakeholders.

Q: Waiting to see, with bated breath, how it will be used, and how best used. You lose a certain point of metacognition if a machine spots what's new. If there's a gold standard for writing in a field, wouldn't it be nice to parse your own or your students' writing to see how you could evolve it further. Writing for one group is the wrong way for another. Good to know what the gold standard is, and how your writing differs. This is the Los Alamos project – it could bring a lot of benefit, or explode in our faces.

If I give you all the same paper and ask for the contribution, you might come up with a lot of different answers. Likewise for highlighting the key moves. If you couldn't find any – we may have reviewed some of those – what's the contribution? Imagine a teacher looking at low vs high achieving students, seeing many patterns – e.g. they don't make contrasts, or identify a weakness – there is a gold standard in the sense that you'd expect at least something. We're not sure yet. We could have a Los Alamos technology here, which we could shape into a tool. Not sure how to contain it into something a little less destructive. [Atoms for peace!]

Nicolae Nistor et al: Virtual Communities of Practice in Academia: An Automated Discourse Analysis

(paper PDF)

Nicolae Nistor, Beate Baltes, George Smeaton, Mihai Dascalu, Dan Mihaila, & Stefan Trausan-Matu (Ludwig-Maximilians-Universität, GER, Walden University, USA, Technical University “Politechnica” Bucharest, ROM)

An engineer originally, changed to learning sciences. Walden is one of four entirely online universities in the US. Interdisciplinary group on this paper.

Rationale

In learning analytics, we're searching for quantitative models. Virtual communities of practice (vCoPs) are increasingly used in academia. If we go from a classic CoP in a f2f setting to a virtual CoP, what is different? Probably the technology. So there's an issue of technology acceptance. If we're looking for models of vCoP, we should combine models of f2f CoP and models of technology acceptance. So the aim is to verify the combined model.

Theoretical background

Expert status – you won't get it instantly; to be recognised, you have to participate first. No direct effect of expertise – it's all mediated by participation. The role in the CoP also makes a difference. Social roles are different.

Educational technology acceptance – Unified Theory of Acceptance and Use of Technology (UTAUT): expectancies and social influence affect technology use intention, which drives tech use behaviour, which is also driven by facilitating conditions and anxiety.

The combination of the two models is large.

Methodology

Correlation study, N=129 academics at Walden (entirely online university) using a forum. Over two years. Variables: acceptance, expertise (reflected in the quality of interventions), expert status/centrality. Methods: UTAUT questionnaire for acceptance score; automated content analysis for CoP; and SNA for centrality measure. Automated content analysis of postings – semantic analysis, topic modelling.
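[Aside: they didn't show their pipeline, but "automated content analysis … topic modelling" is the sort of thing you could sketch in a few lines of scikit-learn. The postings below are placeholders, not their data:]

```python
# Placeholder sketch of topic modelling over forum postings with scikit-learn.
# Not the authors' pipeline; the postings below are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

postings = [
    "Assessment rubrics for online discussion forums",
    "Technology acceptance among faculty using the forum",
    "Evidence and argumentation quality in student postings",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(postings)   # document-term count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top_terms)}")
```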

Validation – manual content analysis compared to the automated one. Strong correlation: r = 0.79, p < .001.

Findings

Part of the UTAUT model was verified (the drivers of technology use intention), but the rest was not supported, or showed effects in the opposite direction.

The CoP model did much better. Very strong links between domain knowledge, participation and expert status – 0.98 of the variance! (Bit suspect, possible artifact – participation determined by number of posts, and more posts increases centrality.) But time in the CoP was not significant! Significant mediation effect – no direct effect of domain knowledge on centrality (expert status).

Discussion

Simplified model removing non-significant parts. Participation driven by Role in CoP, Expertise, Technology anxiety; and participation drives expert status.

CoP model confirmed. Acceptance models need reconceptualisation here. And for practice – we could develop assessment tools for collaboration in vCoP.

Q: Tell us a bit more about the model of expertise you're using, and what critical thinking is in this context. I notice you frame it as domain knowledge, then said you look at quality of argumentation – those aren't exactly the same thing. You could use good logic but not know much about what you're arguing about.

Yes. The automated analysis was directed at whether there are claims, and proper evidence for them. The manual analysis we called critical thinking. If we think critically, we care about argumentation and evidence. I don't think it's possible to use exactly the same analysis framework for automated and manual content analysis – the analysis pipeline is so complex. But it's the same direction.

Q: So it was essentially content-free? You can make a structurally well-formed argument, that is still nonsense.

Yes, the automated tool didn't look at the content itself.

Q Have I understood it right? Were you assuming that a high centrality score from SNA means expert status? Couldn’t that also be something else? I can imagine someone acting in that way that a human rater would not see as acting with expert status. This might be the same point as the previous questions.

We looked at in-degree centrality, as in the f2f setting. We noticed that in-degree centrality is correlated with the quantity of the dialogue. That is a problem indeed. That's why we replaced it with betweenness centrality, which is less affected by the quantity of interactions.
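[Aside: the in-degree vs betweenness point is easy to see on a toy reply graph with networkx – purely illustrative, not their analysis:]

```python
# Toy reply graph (edge A -> B means A replied to B) to illustrate the point:
# in-degree centrality rewards sheer volume of replies received, betweenness
# rewards sitting on paths between others. Purely illustrative, not their data.
import networkx as nx

replies = [("ann", "bob"), ("cat", "bob"), ("dan", "bob"),
           ("bob", "eve"), ("eve", "fay"), ("fay", "ann")]
G = nx.DiGraph(replies)

in_degree = nx.in_degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

for node in G:
    print(f"{node}: in-degree {in_degree[node]:.2f}, betweenness {betweenness[node]:.2f}")
```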

Nicolas Van Labeke et al: OpenEssayist: Extractive Summarisation & Formative Assessment of Free-Text Essays

(paper PDF)

Nicolas Van Labeke, Denise Whitelock, Debora Field, Stephen Pulman, John Richardson (The Open University and University of Oxford, UK)

Research questions – context is about automated assessments, how can they detect passages on which a human marker would usually give some feedback? And specifically whether existing methods can be adapted to select content for such feedback – and how, and when. And how effective that is, and what the effect is in future.

Two examples – a postgraduate course on accessible online learning (H810). They have several assignments. TMA1 (tutor-marked assignment) is 1500 words reporting on the main accessibility challenges in your own work. TMA2 is 3000 words critically evaluating your own learning resource. As a tutor, you'd expect some key issues and concepts, but the focus is on your own work or resource, so all the responses will be different. No gold standard. A good test ground for extractive techniques. Most writing (drafts) happens outside the system; tricky to time/scope feedback. Only limited possibility for 'mock' experiments – you can't get lots of people to write 1500 words just to test your system, so it all has to run on live material. This creates a tension from an administrative point of view – can we interfere, possibly privilege some students? Also issues about how it connects to the summative assessment by the tutor.

Working on a tool called OpenEssayist: a Python tool does the close-up analysis, then a RESTful API-driven web service processes it further.

Hypothesis: Quality and position of key phrases & sentences give idea of how complete and well-structured the essay is, which may provide a basis for building feedback models.

The plan is to start extractive summarisation with two small, simple strategies. First, key phrase extraction – the most suggestive words/phrases; second, whole key sentences. Want rapid implementation and testing. The Holy Grail of summarisation is generative summarisation – get the gist of a block of text and generate a much shorter, condensed version.

Pre-processing using NLTK (the Python-based Natural Language Toolkit). The target of many iterations – focus on the list of stop words. Want to find out whether a domain-independent list will do, or whether they need one generated from appropriate reference materials.
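[Aside: the sort of NLTK pre-processing described – tokenise, drop stop words – in miniature. It uses NLTK's stock English stop-word list; as they note, a domain-appropriate list may be needed. The sample sentence is invented:]

```python
# Miniature version of the pre-processing step: tokenise and drop stop words,
# using NLTK's stock English stop-word list (a domain-specific list may be
# needed, as noted above). The sample sentence is invented.
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

for resource in ("punkt", "punkt_tab", "stopwords"):
    try:
        nltk.download(resource, quiet=True)
    except Exception:
        pass

text = "Accessibility is one of the main challenges in my own work context."
stop_words = set(stopwords.words("english"))

tokens = [t.lower() for t in word_tokenize(text) if t.isalpha()]
content_tokens = [t for t in tokens if t not in stop_words]
print(content_tokens)  # e.g. ['accessibility', 'one', 'main', 'challenges', 'work', 'context']
```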

Essay Structure – restructure as paragraphs/sentences, identify role of each. Using decision trees developed through manual experimentation. Output good enough for first rounds of testing.

It’s very basic – so treats last paragraph as conclusion, where in his example it was the acknowledgements!

Next, using other techniques to extract key words, lemmas and phrases. Similar approach for extracting key sentences.
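[Aside: a toy version of frequency-based key-sentence extraction – score each sentence by the summed frequency of its content words. Not OpenEssayist's actual algorithm, just the general idea; the stop words and example text are invented:]

```python
# Toy frequency-based key-sentence extraction: score each sentence by the summed
# frequency of its content words and keep the top few. Not OpenEssayist's
# algorithm; stop words and example text are invented.
import re
from collections import Counter

def key_sentences(text: str, stop_words: set[str], top_n: int = 2) -> list[str]:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in stop_words]
    freq = Counter(words)

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower())
                   if w not in stop_words)

    return sorted(sentences, key=score, reverse=True)[:top_n]

essay = ("Accessibility is a key challenge. Many tools ignore accessibility. "
         "Cost is also a factor.")
print(key_sentences(essay, {"is", "a", "also", "many"}))
# ['Accessibility is a key challenge.', 'Many tools ignore accessibility.']
```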

Trying to think about how we can use the tools, having the student use them – want to bring all the elements into a single structure, and help them to reflect. E.g. are the keywords there representative of your text, your intention? Is the distribution of them across your text what you want? Exploration and discovery, hypothesis building, eliciting recommendations and heuristics.

In exploring the design space, want to move from research-centred design to learner-centred design. Looking at what kind of feedback. Not going in blind – have good support for authors. Also exploring what reflective activities can help – and how feedback can go to the system and the essay analyser. Also a big question around drafts, history, changes over time, and the question of 'quality' of output.

Future work: improving the analyser, analyses on the output, iterative design and testing of the tool. Want live evaluation in September, with new cohort of students.

Q: What kind of weighting algorithm are you using?

I can’t explain – talk to me later and I’ll pass it on.

Q: Most of the process happens outside of the paper. My thoughts went in a different direction. There’s interesting research on note-taking and construction of text based on notes and annotation. That could be for future research.

Good point, thanks. We haven't looked at that. We want to use the system within the OU course system, so we have to make sure we don't step on their toes. We don't want to give too much help in writing the essay. So long as it's structure-based, it's fine, but beyond that we might get into some trouble.

Incoherent
(cc) kevin dooley on Flickr

Discussion

There’s a Google doc organising the topics and notes. (Abandoned plan for breakout groups due to structure of room and general tiredness and general interest.)

Darren – Was intrigued by discourse for assessment (Topic 2 in the doc). In previous work, often found that the conversation itself can be more powerful, and yield more that's actionable, than the results that get generated. Analytics could help capture, describe, guide and improve those kinds of deliberative conversations about what we value. Interested if others have done or know of other work on that. Is anyone interested in those, as research or tool development?

Simon BS – would the things we heard today count?

Darren – With the exception of the vCoP of educators, most of what we've talked about today are conversations among students. Conversations among teachers about student work – how do you capture the knowledge there? Is it possible to transfer some of this thinking from conversations among students producing knowledge to conversations among faculty about that?

Simon BS – Informal professional development, peers.

Darren – Specific example. A project of the American Association of Colleges and Universities developing rubrics around key outcomes of general education programs. E.g. thinking critically, intercultural competence. A rubric with categories and levels of development. 15 universities pilot tested this. Those pilot tests produced numbers, with a certain reliability. But the consensus was that the numbers were not so useful – the conversation in getting there was. The conversation was important independent of the marks it generated. How do you capture the knowledge that's generated? How do you provide guidance that could help people involved in those conversations make them more productive?

David – Can’t answer that question in terms of teacher-facilitated discussion. But in general, there are a number of industries now where people who’ve been with a company for a long time, about to retire, there’s a camel or barbell distribution of knowledge. How to capture that is a related discussion. One question is how you normalise, what are you norming to when you have an analytic. If you take engineers, have them doing the simulation, would see what the experts were doing, then feed that back to others and say this is what the good engineers are doing. So here, find people who are doing a good job discussing those papers, feed back to those who are in study mode how that’s different. All these tools can capture a trace of discourse. If you know a priori it’s a good one, can use it as a model. Doesn’t have to be students or teachers, it’s an a priori empirical distinction between where you want someone to be and where they are now.

Denise – We’ve built a system called OpenMentor. We have thousands of students, hundreds of tutors, give the tutors training. System strips out tutors’ comments, compares to the mark. Make explicit how you give feedback to move people on. Being used at the Open University.

Simon ? – Could be generalised to mentors in a CoP helping their peers, not just tutor/students in a formal setting.

Denise – Capturing the expertise in the talk.

===

Simon BS – What does it mean to declare a DCLA system good? David has one idea. Have to find a model, experts who are doing it in an effective way, assess the delta between what they’re doing and everyone else. Then question is how to move them from there to where you want them. Have to make visible, quantify what expert performance looks like, then figure out how to get there. There might be another approach. Alyssa, the theory-driven predictive approach?

Alyssa – Perhaps it’s not a sufficient approach. Both of the other ideas – benchmark directly, this is what it looks like, we want it like that. Or the process. But what if you don’t have experts? This is what a theoretically good process should look like. But need to validate that by some other route, theory.

David – You have to either accept the theory as correct, or have some way to empirically verify it.

Greg – What do you mean by experts? We’re trying to get people to learn through discoursing, online talk. Either their task is to produce expert-level text, or to produce learning-level text. Which may not have anything to do with an expert, who’s demonstrating competence.

David – At a lower level of implementation, yes. But education is teleological, it's normative: we're trying to make someone better at X. We're leading someone. Somewhere in the system you have to say this is where we want someone to go. Two ways to do it – you could say 'be like this person who does a good job', or 'I want you to do this thing over here'. Without one of those two, there's no way to point anywhere.

Alyssa – You can use theory as a basis. We know what the standards of good dialogue look like. That’s what those models of argumentation are about. We can say we’re accepting the theoretical model – not as correct, but as value-ful.

David – What constitutes a value? It's an unwarranted assertion.

Simon K: Those things still have warrants.

David: Eventually you get back to God, the Big Bang, or Because I Said So. They don’t have an explicit warrant.

Simon K: Making explicit the underlying premise is important in moral education.

David: When someone makes a statement without feeling they need to give it a warrant, that's where they're expressing values.

===

Rebecca: People talk about talking, listening, speaking – but they're different activities to writing or reading online. If you use talk/speak/listen framings you miss things online. Example: in a forum dialogue, they weren't really challenging each other, pulling each others' ideas to pieces. But in their attachments, they were developing documents together. The attachments were 10x bigger. The talk in the forum was missing most of what was going on. In the attachments they were challenging, reworking. My work followed on from someone else who'd looked solely at the speech and quotes from the previous speaker. He was left with a conversation that made no sense, because he'd removed the 95% that 'didn't look like talk'. This is an issue that occurred to me.

Someone: I’m interested in best practices for more actionable things on educational online dialogues, residential dialogues, what are the overlaps, why.

Someone2: One difference between oral talk and online talk?

Rebecca: When writing, you can review before posting. You can divide into points, number them, separate them out as ideas, carry on several lines at the same time. When responding, you can read it through several times, take direct quotes and incorporate them. All of those you can't do very easily in speech. If I introduce one idea, you can follow, but if I introduce 'and now point 7' you may lose track.

– That’s right. And can skim online.

Alyssa: I use metaphor of speaking and listening, but there are differences, reading in online dialogue vs traditional. Different timelines, text production and text accessing are separated.

Greg: Online discourse as discursive essays, vs conversation online. Essays we know how to grade, what they look like. Conversation we don't know what good looks like. When forum participation is either a giant turn in a conversation, or a mini-essay, it's difficult to say how we can evaluate it.

Denise: We’re assuming we know what a good essay is. But why do we need third marking? I think that’s a false premise.

David: I’m going to dub this the modality question. Different discourses, in different modes. To analyse any mode you need a different coding system. One open question, having said that, is there a unifying underlying semantics. By coding them we can get back to an underlying model. Or do we have discrete models? So make sense of chat separately to dance moves, to the essay they wrote, because don’t measure them the same way. Having measured them with salient properties, can I use all of those in the same model so I can predict based on all of those, the kind of thing you might do next in any of those modalities. If always separate – chat can predict chat, dance understand dance, but no common platform. For theoretical and technical reasons I hope it’s not true. Those are all just expressions of an underlying meaning. The genres are different but the culture is the same underlying. If it’s not then all we have is a plethora of different modalities and no way of constructing a unified model. It’s a coding question rather than a modelling system. I’m going to make an epistemic frame model of what someone’s thinking. I’ll code different modes in different ways, but down to the same underlying idea.

Doug: This is an empirical question we can move forward on.

Darren: Clay Spinuzzi at UTexas has done interesting applications of theory to look at different genres together. Book – Tracing Genres through Organizations.

===

Paradata

Simon K: If learning analytics is about the learners, paradata is the equivalent data around learning objects. JISC in the UK have done work on gathering meaningful data on how learning objects are used, integrated, things like that. The context of MOOCs is quite interesting. It'd be interesting to disaggregate them. My blog post was about that. In this context, it's about a pragmatic element, about how people interact around artifacts. Whether paradata could inform the learning analytics we try to gather, and how we understand it.

Darren: Learning Registry Project. Middleware, exchange of metadata and paradata across repositories. Traditionally you’d only see contextual annotation in the home environment, this allows it to travel. Could transfer back to student learning contexts.

Simon BS: It’s a form of annotation, expressing a value or critique of a shared object. You might not care about the resource. The annotations become a form of DCLA subject matter. Margin notes, diagram annotations.

Darren: Rating, tagging. There’s a variety of mechanisms associated with social media. Collaborative annotation and tagging.

Simon BS: Semantic tagging – explicitly support, challenge etc. Light semantic annotation. (Work on our system.)

===

Keeping humans in the loop

Simon BS: Thinking about Alyssa giving people feedback to improve the discourse. Damage limitation, not trying to solve it. It's like biofeedback – here's your pulse rate, breathing rate etc. It's formative feedback. We'll give you visualisations but not make a judgement call. You could call this a cop-out. You could say we don't know enough about what good looks like. We'll leave you as a learner to make sense of it. Or even – if you are in this situation, you might want to do this to move yourself to that situation. There are judgement calls there.

Denise: If your goal is to have people self-reflect, this is a way of helping them keep a check on what they’re doing. Surely that’s our long-term goal. Whatever we produce now are scaffolds with a view to becoming an expert. Need internal checks to help you keep up. I think it’s essential.

Simon BS: You have to know how to read those analytics.

Denise: If you’re actually explaining, what it means, how to look at it. You can’t just put some graph up and expect them to understand.

Simon BS: It’s a knew form of literacy.

Alyssa: It’s beyond visual or other literacy. It’s a larger framework with respect to learning goals. One concern that I’ve heard about LA is for over-dependency, that students aren’t thinking critically because the analytics tells them. Or being watched over. A whole bunch of things that keeping the human in there, supporting metacognitive activity, giving them agency, by doing that they didn’t reject the analytics even if they were struggling to accept them. I worry about aiming to take people out of the loop. Even if they were good, feeling of being done to vs having a say in the process.

Greg: It is a cop-out we have to make. Interpreting the data to know whether we're doing right or wrong, then transforming it into something actionable, which will vary from student to student. No consensus on what that should be. Humans are quite good at interpreting data against their goals; we should use that.

Doug: Human raters are not a gold standard. Inter-rater reliability is not that great. This is a problem for our assessment.

Darren: There is good research that having human feedback, and a sense of that, is very helpful. Big resistance in the US at the moment to automated assessment. Communication is a human relationship. Don’t get that with automated assessment.

Greg: Alyssa, in your course you have them set goals?

Alyssa: wrt the guidelines, which the analytics are lined up with. One reason we talk about moving away is a bandwidth issue – we don’t have enough teachers. But we do have a lot of students. Ask students to do the heavy lifting, we have a nice 1:1 ratio there, no shortage there.

David: Didn’t Alan Turing solve this problem some time ago? Unless we’re fetishising the human, there’s either some property of what the human is doing, and we can either fake it or we can’t. But in between, if it’s a resource issue you address it as far as you have to. Those are all just pragmatics, not an issue on principle.  Noone’s arguing that AI’s that good now, but nobody’s arguing – or are they? – that there’s something you can’t do, even a human connection.

Simon K: Turns out it's hard to be a human too – the point about human grading being difficult. Maybe it's intractable. The discussion is about what we're using the assessment for. If we want to use machine learning to give a grade, that's a problem; perhaps the issue is more psychological. We can still use these tools for useful formative assessment, for reflective purposes. There's research on automated feedback that provokes students to improve. We have reasonably good proxies for what 'improve' means. But highly reliable assessment is hard.

Someone: I may be the most anti-AI person in the room. But I deal with people. 90% of people think they're above average. The idea that the gold standard for evaluating an essay is the human being is rather strange: if I look at the pedagogical knowledge and content knowledge of 70% of the teachers, at least in the NL, I don't see grounds for putting the human assessor on a pedestal. Strange statement – it doesn't matter if the person tells you bullshit, as long as it's a person it's good. But what if the person tells me wrong?

Darren: That’s in the context of teaching writing, as opposed to teaching domain knowledge. In say biology don’t care about the writing, it’s the ideas. But persuading an audience of something requires an audience to exist. Maybe some way we’ll be able to simulate a system, but we’re a long way away from that, to assess the quality of it. Teachers who are teaching areas they’re not experts in.

Someone again: The competency of the average writing teacher?

Darren: We have some very fine ones in the US.

Someone again: Not all of them have Pulitzer Prizes. Those who teach composition know the technical ideas but aren’t good writers. Idealising?

Darren: Don’t think we should make up for the fact that we don’t train teachers properly. Because we don’t do it well is not an excuse to give up on it and do a system that does is slightly better but pretty half-assed.

Alyssa: Restating the question of whether or not we should do this with a computer. The argument for why not is partially about rethinking LA as a scaffold to fade, or as a support system. I want students to be able to self-assess, to know how to evaluate their own essays. Want to keep the human in – this is an aspect of developing competency that LA can support, but we want them to be able to do it in contexts without the analytics.

David: I’m taking a snarky turn here. Is that saying we have spellcheckers because we should fade them out so students can spell themselves? Or they’re imperfect, but why not leave them in there? That probably means they’re not such good spellers, but if they produce better documents … why not do the same with argumentation, or self-reflection, or whatever. Don’t we want to move you to a higher level?

Someone again: Not give doctors diagnostic tools because they could do it with CT?

David: I’d like my doctor to go back to drinking my urine.

[laughter, turn-taking breakdown]

Alyssa: I agree.

Doug: With the doctor drinking urine?

Alyssa: This is one of those times where I have to quote back what you said in order to be sure that you know what bit I’m responding to. I get the argument you’re making. My original question – are they a scaffold, or a support system? Where are we going vs where we are. Should be clear.

Darren: It isn’t an either-or.

Greg: If don’t tell students what the goal is, they won’t do it. Any self-reflective practice, are we just putting it there to do anything better?

Alyssa: As a support system down the road, I imagine (indistinct) – people having a sense of power. I’m not against LA becoming a tool for use. But I don’t think we’re there.

Nicolas: I concur. It's not an either/or, it can work both ways. Think back to how the spellchecker was: at first it gave suggestions for replacements – that's like the scaffolding approach. Now it's just an indication of a mistake, so you go from the scaffolding to the performance stage. It's about which aspects of the analytics should fade away and be replaced by another aspect.

Simon K: I still click for suggestions! It's about deciding which things are important for development. Perhaps we're not worried about spelling as a lifelong skill, so long as it's not hindering time use. Back to values – which things we value and why, and what people are using it for.

Simon BS: Agnes, who developed XIP, ran an experienced researcher's writing through XIP and they found out things – e.g. they'd said three different things about what the paper was about, and hadn't realised it. These moves are hard to do well even for professionals. Maybe spellchecking is a red herring; it's clearly a lower-level skill. An argument checker is a tough problem – society needs people who think better, and that's not going away any time soon. That's what Vannevar Bush was obsessed about, improving critical thinking. His use cases are about scholars having better arguments.

===

What do we mean by discourse in DCLA?

Alyssa: My background is in CSCL, where discourse involves actions by multiple actors. But different people here today have different ideas – Simon's idea of a published piece, for instance. How would that fit, e.g., turns of talk? But it does have ideas of audience, of how it relates to past and future pieces. Some others seem different. Thinking of essays as part of discourse. If DCLA is something and not everything, maybe we're more concerned with linguistic analysis vs discourse. Does useful DA need to take features of discourse into account? I say maybe not.

Simon BS: I’d defend the idea that a research paper is a slower, different granularity form of discourse. Usually takes me far longer to take a turn. Nobody pretends a paper here is the final word to be said, people will respond to it, make further contributions.

Perhaps we can call it a candle.

David: (waves hand in the air like a lighter at a rock concert)

Someone new: What happens in human discourse is that a selective sampling is made of the whole transcript. That's different to here. It's a bias a human has, and it could be important. You could call it context. The structure of the matrices of what's in the conversation could be different. It's not just a moving-window issue, it's that different things are salient. Memory is selective in what it picks out.

Simon BS: So discourse technologies don't take into account the selective nature of human attention?

David: I don’t think so. We do all have biases. But any of the analytics schemes have biases in what they pay attention to. May not be the same as what humans do. Though could try to make them so.

Someone new: E.g. replying with a specific quote. Difficult to do across a couple but possible. If you analyse an email conversation using just the quotes, vs the whole thread, do you get the same results?

David: I'd propose an empirical test back. There's all kinds of information that the coding scheme from this morning ignores. Same way as when someone skims a newspaper article, or when I read a student essay from a whole pile of them – I'm picking out something specific. You're selecting some words or features to focus on. If you just picked out the turns of phrase identified by my system and gave them to a person, would they see the same as from the full transcript? No. They might even see better.

Paul (?): One closing monkey wrench that's been playing in my mind. What do we mean by discourse? I asked myself, what do we mean by learning? (laughter) When we talk about CoP, social learning, we're talking about people understanding each other better, their opinions converging. I used the word bandwidth, you took it over – it's a form of learning. CoP and things like that are more of a social learning process in which you're analysing (possibly) different things. The juxtapositions we see are not actually differences in definitions of discourse, but different definitions of learning. Metatagging learning objects – it's negotiation of meaning, approaching each other in terms of what you're thinking. That's a large goal of learning, especially in those types of communities. So not just what do we mean by discourse, but what do we mean by learning?

[Note: I got the names muddled up hopelessly here with too many Simons. Hope they are right – please correct if wrong!]

This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.
