Statistical Discourse Analysis of Online Discussions: Informal Cognition, Social Metacognition and Knowledge Creation. Ming Chiu, Nobuko Fujita (Full Paper)
Nobuko speaking. Ming Chiu is at Buffalo; she (Nobuko) is at U Windsor, and also runs a small business called Problemshift.
Rationale
We have data from online courses and forums, but analyses tend to produce summary stats, not time, order or sequences. So they use Statistical Discourse Analysis (SDA), which Ming Chiu invented. It works not at the individual note level, but on sequences of notes, with group attributes and recency effects.
Also informal cognition leading to formal cognition over time. Informal opinions are easier to access, intuitions rather than abstract theoretical knowledge. Fostering that development over time. Also how experiences ignite theoretical knowledge.
Knowledge-building theory: how students work with new information and turn it into good explanations or theories. Linking ideas from anecdotes into theories, and elaborating to theorise further.
Corollary is social metacognition: group members monitor and control one another's knowledge and actions. Most individuals are bad at metacognition, so the social version helps take control at the group level. Questions indicate knowledge gaps. Disagreement always provokes higher, critical thinking. (?!)
Interested in new information or facts, and how we theorise about them, pushing the process of students working with explanations, not just information. And expressing different opinions, engaging more substantively.
Hypotheses – explanatory variables – cognition, social metacognition vs several other bits.
Data description
Design-based research. Online grad education course using Knowledge Forum (KF). Designed to support knowledge building – radically constructivist approach. Creation and continual improvement of ideas of value to a community; a group-level thing.
KF designed to support this. Focus on idea improvement, not just knowledge management or accumulation. Client and web versions; it goes back years (to the 80s) and is now sophisticated, with a lot of analytics features.
Demo. Students log in, screen with icons to the left. Biography view, to establish a welcoming community before they do work. Set of folders, or Views, one for each week of the course. Week 1 or 2, instructor spends time facilitating discussion and moderating it. (Looks like standard threaded web forum things.) Model things like writing a summary note, with hyperlinks to other notes and contributions. And you can see a citation list for it. Can see who read which note how many times, and how many times it was edited.
N=17, grad students, 20-40yo, most working, PT, in Ed program. Survey course, different topics each week. After 2w, students took over moderation. Particular theme set, emphasising not just sharing but questioning, knowledge-building discourse. Readings, discourse for inquiry cards, and KF scaffolds.
Cards are on e.g. problem solving: a set of prompts aligned to commitments to progressive discourse. Notes contain KF scaffolds, which tell you what the writer was intending the readers to get.
1330 notes. 1012 notes in weekly discussion (not procedural). 907 by students, 105 by instructor and researcher.
Method – Statistical Discourse Analysis
Some hypothesis, some dataset. Four types of analytic difficulties – dataset, time, outcomes, explanatory variables.
Data difficulties – missing data, tree structure, robustness. To deal with these: Markov multiple imputation for the missing data, and storing each note's preceding message to capture the tree structure. Online discussion is asynchronous, so you get a nested structure; SDA deals with that. For robustness, run separate outcome models on each imputed dataset. (?)
Multilevel analysis (MLn, Hierarchical Linear Modeling); test with the I² index of Q-statistics; model with lagged outcomes.
Outcomes – discrete outcomes (e.g. does this have a justification y/n), also multiple outcomes (detail being skimmed over here).
Model different people, e.g. men vs women, by adding another level to the multilevel analysis – a three-level analysis. (She's talking to the presenter view rather than what we're seeing? Really hard to follow what she's talking about here. Possibly because it's not her area of expertise.)
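[To make the shape of this concrete – a minimal sketch of what such a model might look like in R, assuming the lme4 and mice packages. This is my gloss, not the authors' actual pipeline, and all variable names are invented.]

```r
# My gloss on the SDA pipeline, not the authors' code. Assumes a data
# frame 'notes' with one row per message: a 0/1 outcome 'theorise',
# 0/1 codes on the preceding message ('opinion_lag1', 'ask_use_lag1'),
# and 'thread' identifying which discussion each message sits in.
library(mice)  # multiple imputation for the missing codes
library(lme4)  # multilevel (mixed-effects) models

imp <- mice(notes, m = 5, printFlag = FALSE)  # 5 imputed datasets

# As described in the talk: fit a separate outcome model on each
# imputed dataset and check the results are robust across them.
for (i in 1:5) {
  d <- complete(imp, i)
  m <- glmer(theorise ~ opinion_lag1 + ask_use_lag1 + (1 | thread),
             data = d, family = binomial)
  print(fixef(m))  # fixed-effect estimates for this imputation
}
```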
Results
Look at the sequence of messages. Asking about the use of an idea precedes new information. Informal opinions lead to new information too. Male participants theorised more. An anecdote three messages before ends in theorising, as did asking about use, opinions, and different opinions.
Looking at informal cognition moving to formal cognition. Opinion sharing led to new information as a reply. Opinion also led to theorising. Anecdotes – they got a lot of those, since participants were practising teachers and talk about that – also led to theories, as did elaboration.
Social metacognition moving to cognition. Asking about use led to new information, and to theorising; so did different opinions and asking for explanations.
Educational Implications
Want to encourage students to use one another's informal ideas to create formal knowledge. Also wanted to encourage students to create subgoals with questions and wonderment: they take more of the cognitive responsibility you'd expect teachers to carry. They motivate themselves and build knowledge over time. Handing over collective cognitive responsibility. Consistent with Design mode teaching. (All Bereiter & Scardamalia stuff.) Doing it via prompts to aid discussions.
Methodologically
Participants coded their own messages themselves – we didn't need to do content analysis. If you scale that up, it might be applicable to a larger dataset like a MOOC. Talking about extracting that data from e.g. Moodle and Sakai.
Questions
(Phil Long says he’s doing the Oprah Winfrey thing with the microphone.)
Q: Interesting. I'm responsible for making research into applied tools at Purdue. What artefacts does your system generate that could be useful for students? We have an early warning system, looking to move to more of a v2.0, next-generation system that isn't just early warning but guidance. How could this apply in that domain?
Signals for EWS. This is more at the process level, at higher-level courses: guide the students further along rather than just "don't fail out". This data came from Knowledge Forum. Takes a few seconds to extract it into Excel for Statistical Discourse Analysis. Many posts had that coding applied by the participants themselves. We can extract data out of Moodle and Sakai. If we identify something we want to look at, we can run different kinds of analysis. Intensive analysis on this dataset, including by Nancy Law and UK researchers. SNA, LSA, all sorts. Extract in a format we can analyse.
Q2: Analytical tour de force. 2 part question. 1, sample size at each of the three levels, how much variance to be explained? Use an imputation level at the first level, building in structure there?
Terrific question, only Ming can answer. (laughter) I’m not a statistician. I know this dataset really well. Gives me confidence this analysis works. For SDA only need a small dataset, like 91 messages.
Phil Winne: Imagine a 2×2 table, rows are presence/absence of messages that lead to something else, columns are the presence/absence of the thing they may lead to. Statistical test tells you there’s a relationship between those. [This seems a lot simpler – and more robust – than what they’ve done. But I haven’t been able to understand it from the talk – need to read the actual paper!] Looked at relationship between other cells?
I’m sure Ming has a good complicated response. I was most interested in how students work with new information. Looked at the self-coding; can’t say caused, so much as preceded.
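[What Phil describes is essentially a contingency-table test; a toy version in R with invented counts, just to make the suggestion concrete.]

```r
# Toy 2x2 table: rows = preceding message contains an opinion (yes/no),
# columns = next message contains new information (yes/no). Counts invented.
tab <- matrix(c(30, 10,
                20, 40),
              nrow = 2, byrow = TRUE,
              dimnames = list(opinion  = c("yes", "no"),
                              new_info = c("yes", "no")))
chisq.test(tab)  # tests for association between the two codes
```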
Uncovering what matters: Analyzing sequential relations among contribution types in knowledge-building discourse. Bodong Chen, Monica Resendes (Short Paper)
Bodong talking, from U Toronto.
First, talking about cooking. If you cook, and buy cookery books, you have to buy good ingredients, cook them for the right time, and add them in the right order. Temporality is important in cooking, and in learning and teaching.
Neil Mercer – temporal aspects of T&L are extremely important. Also in LA research. Irregularity, critical moments, also in presentations at this LAK14, lots about temporality.
However, Barbera et al (2014) find time plays almost no role as a variable in ed research; Dan Suthers' critique of falling into coding-and-counting. So there's a challenge in taking temporality into account. Learning theories tend not to take it into consideration. Little guidance, and few tools.
Knowledge building – main context. Also suffer from this challenge. Scardamalia & Bereiter again. Continual idea improvement, emergent communal discourse, community responsibility.
Knowledge Forum again, but a different version – posts in a 2D space so students can see the relations between them. Used in primary schools. Metadiscourse in knowledge building: engage young students to take a meta perspective, metacognition about their own work. Two aspects: first, developing tools – a scaffold tracker that extracts log information about who used scaffolds and presents a bar chart to serve as a medium for discussion (a toy sketch below). Second, designing pedagogical interventions, here for grade 2 students: what's the problem in their discussion, to engage them – e.g. where are we stuck, how can we move forward.
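[A toy sketch of what such a scaffold tracker might compute, assuming the log data has already been extracted – invented data, not the actual tool.]

```r
# One row per scaffold use, extracted from the KF logs (invented data).
scaffold_use <- data.frame(
  student  = c("A", "A", "B", "C", "C"),
  scaffold = c("My theory", "New evidence", "My theory",
               "I need to understand", "New evidence"))
# Bar chart of scaffold use, to serve as a medium for class discussion.
barplot(table(scaffold_use$scaffold), las = 2, cex.names = 0.7)
```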
What do good dialogues look like? Focus on ways of contributing to explanation-building dialogues. Thousands of discussions, grounded theory approach, six different categories. [Is this like Neil Mercer's stuff?]
To make sense of lots of data: lay it out in a temporal, linear form, showing how different kinds of contribution unfold. Compared effective dialogues with improvable dialogues where students didn't make much progress.
Can we go further than coding and counting? What really matters in productive discourse?
Lag-sequential analysis. IDs behavioural contingencies (Sackett 1980!). Tracks lagged effects too. Many tools: Multiple Episode Protocol Analysis (MEPA), GSEQ analysis of interaction sequences, and old tools in Fortran, Basic, SAS, SPSS. A challenge for a researcher to do this.
So he wrote some code in R to do Lag-sequential Analysis. Easy to do, and it's one line of code to run. (Is it available as an R package?)
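[The core of it is simple enough to sketch: count how often each contribution type immediately follows each other type. This isn't Bodong's code – that's at rpubs.com/bodong/lak14 – just a minimal illustration.]

```r
# A coded thread as a sequence of contribution types (invented).
codes <- c("question", "theory", "evidence", "theory", "opinion", "evidence")
# Lag-1 transition counts: rows = code at time t, cols = code at time t+1.
transitions <- table(head(codes, -1), tail(codes, -1))
transitions
```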
Participants and data – Grade 1-6 students, 1 science unit, 1101 KF notes in total, about 200 for each grade.
Primary data analysis, coded as contribution types, inquiry threads, and productivity threads. About 10 threads in each dataset, some productive, some improvable – fewer improvable. (We’re N=2-9 here.)
Secondary data analysis – compare basic contribution measures. And lag-sequential analysis to look at transitional ones.
NSD (no significant difference) in basic contribution measures between productive and improvable dialogues.
LsA. Simple data table (in R). Feed it into the R program, which computes what's happening. Takes one thread, computes the transition matrix for that thread – e.g. if code 1 happens, what's the frequency of e.g. code 5 following. Base rate problem, though. Deal with it via adjusted residuals, or Yule's Q, which gives a good measurement, like a correlation score. "The code calculates that, which is just … magical."
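[Yule's Q for a single transition is easy to state: collapse the counts into a 2×2 table for the pair (A followed by B, vs everything else) and compute Q = (ad − bc)/(ad + bc). A sketch with invented counts.]

```r
# 2x2 cell counts for one transition (invented numbers):
a  <- 12  # A followed by B
b  <- 8   # A followed by something other than B
c_ <- 5   # something other than A, followed by B
d  <- 25  # neither A nor B involved
q <- (a * d - b * c_) / (a * d + b * c_)
q  # ranges -1..1 like a correlation; positive = B follows A above base rate
```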
Merge into one big table, 50×38. Simple t-test between types of dialogue on whether they differ in each transition. Run over all the data.
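[So the comparison step is presumably something like this – assuming a data frame with one row per thread, a Yule's Q column per transition, and a column marking the thread as productive or improvable; names invented.]

```r
# Does the question -> evidence transition differ between dialogue types?
t.test(q_question_evidence ~ dialogue_type, data = q_table)
```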
Found – in effective dialogues, after questioning and theorising, tend to get lots of obtaining evidence. Also when working with evidence, they tend to propose new theories. For not very effective dialogues, students respond by simply giving opinions.
Implications
Temporality matters. Temporal patterns distinguish productive from less productive dialogues.
Focus on community level, not individual or group level. Also, an R implementation of LsA, addressing the practice gap. Contact him to get it.
Limitations – LsA overestimates significant results, misses individual level. Data manipulation converted it into linear format. Other actions, like reading, are not considered.
So for future, design tools to engage students in better discourse. Connect levels with richer action, and refine the algorithm to be more effective.
rpubs.com/bodong/lak14
Questions
Q: Agree that methods matter in LA. Useful to see these two presentations, employing different methods. Statistical discourse analysis is new. What would a comparison look like? They both hit on sequential analysis. Would be great, come from same lab – considered a head-to-head methodological treatment? (laughter)
Ming Chiu's work is more advanced. A lot of the work in SDA is different. Big difference here: I compare two kinds of dialogues; they don't distinguish between effective and ineffective ones.
Nobuko: Focus on productive discussions, not non-productive. We looked at everything, but focused on things that led to provision of information and theories. For you, productive ones lead to theories.
I’m not trying to advance the methodology, I want to design tools for students. I’m trying to use a tool to explore possibilities.
Q: LsA is established and understood quite well – a useful baseline for understanding the new creature, SDA, which is complicated. Before we can put faith in it, we have to have some understanding.
Q2 (?Phil Winne): SDA can address the kind of question you did, like whether discussions vary, as an upper level in a multilevel model.

10B. Who we are and who we want to be (Grand 3)
Current State and Future Trends: a Citation Network Analysis of the Learning Analytics Field. Shane Dawson, Dragan Gasevic, George Siemens, Srecko Joksimovic (Full Paper, Best paper award candidate)
(While Shane was out of the room, George stuck a photo of a dog into Shane’s presentation.)
Shane talking. Thanks everyone for stamina. Thanks co-authors, except George. (I contributed! Says George.) I had the lowest impact, so I am up here.
The slide comes up, and Shane looked straight at George. Yes, you did contribute. (Manages to recover quickly.)
Goal – citation analysis and structural mapping to gain insight into influence and impact. Through LAK conferences and special issues – but missing the broader scope of literature.
Context – much potential and excitement: LA has served to identify a condition, but not yet dealt with more nuanced and integrated challenges.
Aim – to ID influential trends and hierarchies, a commencement point in Leah's terms. To bring in other voices, a foundation for future work.
LA has emerged as a field. (Strong claim!) Often mis-represented and poorly understood, confused with others.
Using bibliometrics – Garfield (1955), Ding (2011). Dataset: LAK11, 12, 13, ETS, JALN, ABS special issues. Straight citation counts, author/citation network analysis, contribution type, author disciplinary background (shaky data).
Many criticisms – buddy networks, self-citations, rich-get-richer. Gives us some great references (i.e. theirs). Real impact factor – cartoon from PhDComics. But broadly accepted.
Highly cited papers are predominantly conceptual and opinion papers, esp. Educause papers. Methods – Wasserman and Faust's SNA book. There were empirical studies mentioned, but few.
Citation/author networks. Count any link only once, not multiple times. Lovely SNA-type pictures. A few big ones. Moderate clustering – 0 is no connections, 1 is all connected; got about 0.4/0.5. Some papers were strong connection points, but degrees surprisingly low. We're drawing on diverse literature sets. Degrees were increasing from LAK11 to LAK13.
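[The measures being reported here are standard SNA quantities; a minimal sketch with igraph in R – invented edge list, not their data.]

```r
library(igraph)
# Tiny invented citation network, treated as undirected for clustering.
edges <- data.frame(from = c("paperA", "paperA", "paperB", "paperC"),
                    to   = c("paperB", "paperC", "paperC", "paperD"))
g <- graph_from_data_frame(edges, directed = FALSE)
transitivity(g, type = "global")  # clustering coefficient: 0 = none, 1 = all
degree(g)                         # number of connections per paper
```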
Author networks – a few strong nodes, but generally similar disciplines clustering. Small cliques, few highly connected; not a problem really. For an interdisciplinary field, still largely disciplinary clustered.
Paper classification – schema from information systems, 6 categories, added a 7th: evaluation research, validation research, solution proposal, conceptual proposal, opinion, experience, panel/workshop. Lots of solution proposals. A lot of evaluation research in the journals – the empirical studies are more there. LAK dominated by CS. More educational researchers in the journals – they prefer publishing in journals over conferences, where CS researchers will do conferences. Largely conceptual/opinion. Methods – "other" was by far the most common; quant not far behind.
Field is early, but maturing. Lots of opinion and definitional work. Need to grow empirical studies, more validation research, and critiques of studies. Would be great to see more arguments. Computer scientists dominate LAK proceedings; education researchers dominate the journals.
By understanding our field, we can better advance it. Or do all fields go through this process? Working on other techniques too.
Questions
Matt: We’ve noted this for a while, it’s maturing. Is there another field that we can look at, in its first 5-10 y, to see how our research compares.
That was a comment from reviewers, can we normalise this. I’m not sure. How do you do that?
George: One field we could look at is the EDM community; there is some overlap. Talked about that, talking to Ryan. Point at the end: the location of a citation is more important than its existence.
Shane: Love to do the EDM work. Still argue it’s not as interdisciplinary as LA, so direct comparison very difficult.
Adam: Irony that, for an analytics topic, there's not much quant. Looking at history, from the early days of ed tech – could we use that as a benchmark?
Shane: Yes. Look at where authors have come from, go out multiple steps. Largely they’re from ed tech, that brings in other literature.
Q: How can we use this insight to develop? Look at what communities are being cited but not well represented at the conference, approach for next LAK.
Shane: Great idea, thanks.
Hendrik: LAK data challenge, visualised the dataset of EDM and LAK, with ACM full proceedings. 12 submissions analysing differences between LAK and EDM. How could we team up for that for next LAK. Focused track, with focused tasks, where people have specific questions, compare how questions work on the datasets.
Shane: We did, Dragan chatted to Chris Brooks about the data challenge, would be great to get involved more.
Bodong: Analysing tweets since 2012, this is my first LAK but have been tracking it for a long time. And attendees who did not have papers. So Twitter could augment this. Data challenge next year, include tweets? Another idea.
Shane: Really interested in that – who's tweeting, what's being picked up and what their backgrounds are. The comment about the group we are missing – that'd be another area of interest: people who aren't publishing.
Q: Not just conference, but alternative mappings, where published work is mentioned and by whom. Lots of different communities, educators, administrators. Social media may reveal some of those trends.
Maria: Discussing on Twitter the idea of an Edupedia. We can do things: a LAKopedia, brief summaries of how research builds on itself. Every article gets a summary, bottom line, strengths and weaknesses, links to things that build on them. Have a taxonomy mapped out – the field is new so we don't have to go back a long way. I'm not volunteering!
Establishing an Ethical Literacy for Learning Analytics. Jenni Swenson (Short Paper)
Dean of Business and Industry at Lake Superior College, MN. I'm not a data scientist, I'm a soil scientist – might be the only discipline nerdier than data science. Then to a department of rhetoric, looking at ethical literacies. Became interested in 2008, via Al Essa. And Matt Pistilli at Purdue.
Rhetorical theory, concerned with ethics. Crisis of credibility, focus on ethics really helped. So have a lot of ethical frameworks. Our focus is to teach ethical communication in the classroom, raise awareness, to give skills to make ethical choices. Best message possible, analyse artifacts – recurrent pieces, written, spoken, visual. Always outcomes of a group’s work. Better understand consequences, intended and otherwise.
So taken these frameworks, three so far, and applied to LA. Looking for questions around purpose from an ethical framework.
Data visualisations – Laura Gurak: speed, reach, anonymity, interactivity. Early internet stuff. With the ease of posting online, the ethical issues for visualisation artefacts become apparent. We have software that allows anyone – NodeXL and SNAPP are intended to be simple to use. Realtime analysis can be posted by people with no expertise, and can be put out without any context. When viewed as a fact, we get questions such as: is there a lack of context, no history? Who's accountable for predictions, accuracy, completeness? Target snafu: private data can become public quickly. What if it's inadvertently made public, e.g. through implied relationships? Interactivity and feedback: there isn't as much with the people who might be part of the vis.
Dashboards – Paul Dombrowski. Three characteristics of ethical technology – design, use and shape of technology. Think about first impressions, with things like Signals. We all judge people in 8-9 seconds, and it sticks with you. Visual elements could be expressive. Meaning is created beyond the dashboard label of at-risk, and how does the student respond without the context? Finally, how does it function as an ethical document? Many questions! Is there a way to elevate the student over the data, rather than viewing the student as data?
Then Effects, of the process. Stuart Selber's framework: institutional regularisation – required use leading to social inequity. We have an economic reality: Pell-eligible students, no access to a computer or transportation, have jobs. Different from 4-year schools. Need to be sure we're not doing harm. At any point, 5% of our student population can be homeless (!). Crazy what these students are doing to get the 2-year degree. The ethics of these projects could be different between two schools. Transparency about benefits: in rhetoric, if an ulterior motive is uncovered, your message has failed and you've lost credibility. So transparency is needed that the school will benefit. Do intervention strategies favour on-campus over online? We want them available to all. Bernadette Longo: power marginalising voices, often unintentionally. Who makes decisions about the LA model and data elements? Legitimising some knowledge and not other knowledge. If we do give them a voice, are they all given that consideration? Bowker and Star – most concerning. We are really trained to accept classification systems as facts. We know there are problems, and we're stereotyping to fit people into categories. There could be existing categories we're covering up. Making sure that conversation is there – again, transparency. Real kicker – at-risk: "people will bend and twist reality to fit a more desirable category". But people will also socialise to their category, so there's a danger that it may feel like defeat.
How does institutional assignment of at-risk match the student? Do they reinforce status? Can LA be harmful if categories wrong? We know it is, we have to have the conversation.
Return to Selber. Three literacies – functional, critical and rhetorical. Non-practitioners are put to this test: understand, analyse, and produce knowledge, to reach ethical literacy. Under the rhetorical side, four areas – persuasion, deliberation, reflection, social action. Who has control and power over the conversation, who doesn't, and why? Are we following a code of ethics?
Central question: Who generates these artifacts, how, and for what purpose, and are these artifacts produced and presented ethically?
We could get a big step up; it took tech comm 10-15 years. These questions are a jumping-off point.
Questions
Q: Have you thought about the fact that many of us get funding from agencies and foundations, and how this compromises ethical principles – empirical findings with another agenda in the background?
Any time you go after money, there’s ethical implications. For me, in rhetoric, as long as you’re having the conversation, being open and transparent, that’s all you can do.
James: In ethical models with Aristotelian ethics, utilitarianism – possibility of a new ethical model? Because dealing with technology?
I do think there is time. There is a lot of different codes of ethics, different models. This was just one discipline. There might be parallels or best fits. I’m hoping for that. People have papers in the hopper on ethics of analytics. Broader conversation, the Wikipedia idea was great, discuss what this code of ethics is.
Q2: A real barrier to LA is the 'creepy factor' – people don't realise you're doing it to them. Could a more mature ethical future overcome that affective feeling?
It is creepy! (laughter) I think young people don't care about how much information is being collected about them, and older people have no idea. Everything is behind the scenes and we aren't being told. We have to have trust about collection and use. Thinking more of industry: there isn't transparency, and we feel betrayed. There's no transparency right now and that gives us the creepy feeling. The opt in/out thing contributes to the creepy feeling.
Teaching the Unteachable: On The Compatibility of Learning Analytics and Humane Education. Timothy D. Harfield (Short paper)
Cicero – eloquence as copious speech. Will try to be as ineloquent as possible.
Timothy is at Emory U. A little bit challenging, involves unfamiliar language – the paper has philosophical language. Best way to approach it is with stories: the motivation for thinking about what is teachable and what isn't.
Context
Driving concerns – getting faculty buy-in, especially from the humanities. In STEM it's an easy sell; in the humanities, unfortunate responses. Profhacker blog: the LA space is interesting, but is a little icky. Want to generate a reason to engage in this space. Secondly, understanding student success. Problem at Emory: we already have high retention and student success. How do you improve? And an opportunity – think about what success looks like as a learner. So changed conceptions of student success. Retention and grade performance are easy; others more challenging. And thirdly, understanding learning as a knowledge domain. More often than not, learning is not defined, or if it is, it's defined as basically experience that changes your behaviour or pattern of thought [what else could it be?]. Have concerns about the language of optimisation constraining what learning might mean. Any time you see 'optimisation', think of a technique. Some types of learning don't lend themselves to enhancement through techniques.
Teachable and unteachable
Going back to Aristotle. Five forms of knowledge; the fifth – wisdom – is a combination. Episteme, nous, techne, phronesis: what is the case, grasp of first principles, how do I do it, what should I do. Episteme and techne are training; phronesis is education – practical wisdom. We cannot teach this (!), and it includes moral behaviour. Cannot teach them the right way to act with the right motivation; it's only acquired through experience (?!). Training vs education, from another philosopher.
Humane education (Renaissance humanism)
Approaches we see – three features: ingenium, eloquence, self-knowledge. Ingenium, the essence of human knowledge – deliberation, disparate situations drawn together. Not technical competence; imaginative, creative, can't be machine-learned. (!) Eloquence – copious speech. Educators mediate between students and knowledge. Students come from different places and values. In LA, not all students respond the same way. Example at Emory: dashboards like Signals expect everyone will respond the same way and aim to perform at a higher level. For some it's effective, for others not. Already high-performing students end up pursuing proxies. So, with respect to eloquence, meet students where they are. Self-knowledge: cultivating ingenious behaviours, making students responsible for learning, developing capacity.
Humane analytics
Real concerns. Potential for analytics to not recognise the responsibility of the learner, undermine their capacity.
Really fruitful if reflective approaches are used, like in Alyssa Wise's work, and Dragan and Dawson's. At Georgia State, predictive models: part of the funding for this project is not just the model, but also purchasing a second monitor for every student adviser. At others, it's an algorithm where students are automatically jettisoned and have to choose a different path. Here, students are confronted with their data, asked to reflect on it in light of their personal goals, troubles, where they want to be. These reflective opportunities are powerful: they encourage responsibility, and also the ability to make prudential decisions about how they are going to act in this context.
Another fruitful space: social network analysis. Humanists will be frustrated by online education because they miss the f2f, working with each individual student to eloquently meet them where they're at. They end up as the sage on the stage; humanists end up that way too, but like to think of themselves otherwise. SNA has the possibility of getting the face of the student back. Can ID nodes of interest, opportunities to intervene, to lead and guide in a particular direction.
Conclusion
Thinking carefully about what learning is, and the types of learning we want to pursue. Some of our techniques may contribute to some types and distract from others. Need to be sensitive to needs of all students, and not all learning is teachable.
Questions
Q: Slide referring to higher performers, seeing their dashboard, undesired outcome?
We've had dashboards piloted in two classes. Told students to stop looking, because they're seeing a decrease in performance. The competitive nature is coming out to the detriment of learning outcomes. Students really frustrated, really anxious. So instructors told them to stop.
Alyssa: It’s not always necessarily positive. How students act on analytics. Goal not to just predict, but to positively impact and change that. Students optimising to the metrics at the expense of the broader goal. How do we support students in the action taken based on the analytics?
We're learning you can't just throw a tool at an instructor and students. There's responsibility in the use of these tools. Instructors need to be careful in framing the dashboard, engaging in discussion in an ongoing way about whether it is an accurate reflection. Levels of engagement measured in clicks or minutes are not correlated with performance at all. Bring that up in class. When you do ID students at risk, or in a class determine these metrics are not useful – say the best use of analytics here is to not use them. Or not use them as measures, but as a text in itself, an opportunity to reflect on the nature of their data and performance – digital citizenship, etc. Cultivate practical wisdom in the digital age.
Maria: We have to be careful. Just because we have dashboards doesn't mean this is the first time students got negative feedback, or took negative actions. It's a bit more transparent now. One thing is a cautionary note: this stuff has been happening all the time; this is just a new way for it to happen.
Hadn’t thought about it that way. Discussions about responsibility might end up leading to larger conversation about instructor responsibility in their pedagogy.
Maria: Tends to be the case in tech: it creates new problems, and we create tech to solve those. Phones where we text; people text while driving; now cars that drive themselves. We've created dashboards, new unintended effects; need to iterate to find ways to have the intended effects.
Situate it in a sound pedagogy. A talk like this is really preaching to the choir. Wonderful to enter a conversation about these issues.
Caroline: Combining this with the earlier talk – apply it to our development processes within the community. Technology is a theory to be tested. Apply the ideas about reflection to the development of the tools so we don't get into those issues.
Q2: Interested in effectiveness of high-level things vs just showing the raw activities that make up the score.
A significant chunk of the paper discusses the relationship between what's teachable and what's measurable. If it's teachable, it's optimisable: performance compared to a standard, skill mastery. These things are teachable and measurable. But things like creativity, imagination, even engagement don't have a standard for comparison; we're forced to produce an artificial one, which is maybe to lose what it is in the first place. Is more effort required to make distinctions about the types of learning? LA applied differently.
Closing Plenary
Kim gathers people together. Glad you joined us here, looking forward to connections.
Few final things.
- Best paper (Xavier Ochoa), poster (Cindy Chen), demo (Duygu Simsek).
- If you presented, get it up at bit.ly/lak14preso – or view it there!
- Twitter archive will be there too. Conference evaluation to be sent next week, please complete.
Hands the torch over to Josh Baron, from Marist, for 2015. A bit frustrated: you've set the bar too high. (laughter) Had an awesome time. (applause) An advance welcome from Dr Dennis Murray (sp?) who's looking forward to it. Not everyone knows where Marist College is: it's in Poughkeepsie, NY, an hour up the Hudson from New York City. Focused on technology. $3m cloud computing and analytics center. Many faculty excited.
LAK15.solaresearch.org
Safe travels. See you next year!
–
This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.