LAK13: Friday mid-day (13) Analytic architectures

Final liveblogging from LAK13 conference – Friday midday.

Analytic architectures

Image: the town hall at Leuven, Belgium – (cc) Scorpions and Centaurs on Flickr

From Micro to Macro – Analyzing Activity in the ROLE Sandbox

Dominik Renzel, Ralf Klamma

Dominik presenting. Motivation: increasing adoption of distributed learning services – PLEs, hybrids. Responsive Open Learning Environments. Thrilling that you can get unprecedented insight into learner behaviour on a really large scale. Some proprietary techniques are biased/have limitations, so move towards standards. But learner behaviour is evolving and inherently hard to model. Their approach is to step back to a proven standard – simple web logs. Because: standardised byproduct, wide adoption – every web server admin has these logs. Lowest possible degree of bias, low-level protocol data. Generic and low level, but liftable to higher-level semantics – e.g. via RESTful interfaces. Analysis on multiple ecosystem levels, from micro to macro. Also the highest level of data interoperability, which allows cross-service analysis.

Focus and level of analysis from Ecological Systems Theory (Bronfenbrenner): the student at the centre with five nested and interrelated ecosystems around them – microsystem, mesosystem, exosystem, macrosystem, chronosystem. Existing work often focuses on only one specific system or subsystem. A comprehensive LA framework should allow analysis on all levels – not just learners and teachers but operators and politicians too. Must never neglect micro data; always apply aggregation carefully – danger of washing out anomalies, and the anomalies are interesting and useful hints. It should work with historical and real-time data so you can have targeted and timely intervention.

Data interoperability was a concern. Could create/use proprietary data ‘standards’ for monitoring learner activity in LA. Convenient analysis, but danger of losing anomalies. Standardisation processes are long, with no guarantee of adoption – hard to combine data. Use of incompatible formats makes cross-service analysis complex. Why not build on existing web standards? Web logs and page tagging – profit from things that have worked for years on the web.

Example web log entry, from a widget-based PLE. Combine widgets for a learning task, move the widgets around – all covered by RESTful web services. Creates logs:

IP address (can get a rough geolocation), date, operation, URLs (use the linking structures); can also get other things. Can pull in external data – like geolocation from the IP address; from the widget, can extract widget metadata. In ROLE they have widget metadata, which gives more insight into what learning is going on.

Real-time processing of web logs: simple shell scripts process the data, clean it, and pull in other data.
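A minimal sketch of that kind of pipeline in Python rather than shell, assuming Apache-style combined access logs; the regex, field names and enrichment hooks are illustrative, not the ROLE scripts:

```python
import re
from datetime import datetime

# Pattern for one Apache-style access log line (an assumption; adjust to
# whatever the actual server writes).
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) [^"]*" (?P<status>\d+)'
)

def parse_entry(line):
    """Parse one raw log line into a dict of fields; None if malformed."""
    match = LOG_PATTERN.match(line)
    if match is None:
        return None  # 'cleaning': drop lines that don't parse
    entry = match.groupdict()
    entry["timestamp"] = datetime.strptime(
        entry["timestamp"], "%d/%b/%Y:%H:%M:%S %z"
    )
    return entry

def enrich(entry, geolocate, widget_metadata):
    """Pull in external data: a rough geolocation from the IP address, and
    widget metadata keyed on the requested URL (both passed in as hooks)."""
    entry["location"] = geolocate(entry["ip"])
    entry["widget"] = widget_metadata.get(entry["url"])
    return entry
```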

Analysis techniques. Single operations give the micro level. Careful aggregations can give you higher levels. Relations – this IP address called that URL; set up paths, get sequences of activities. Set up actor networks – people using tools, or tools used together – and use SNA on people or arbitrary actors. Temporal information can be used for time series analysis. Semantic enrichment gives you much more. Benefits to learning stakeholders (broadly considered) – community awareness by visualisation, recommendations of e.g. tools, people, resources.
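To make the actor-network idea concrete, a sketch using the networkx library and the parsed-entry fields from the sketch above (all names illustrative):

```python
import networkx as nx

def actor_tool_network(entries):
    """Bipartite 'actor uses tool' network: nodes are IP addresses (a crude
    proxy for actors) and requested URLs (a proxy for tools/widgets)."""
    g = nx.Graph()
    for e in entries:
        g.add_node(e["ip"], kind="actor")
        g.add_node(e["url"], kind="tool")
        g.add_edge(e["ip"], e["url"])
    return g

def tools_used_together(g):
    """Project onto tools: two tools are linked if some actor used both –
    one reading of the 'tools used together' network mentioned above."""
    tools = {n for n, d in g.nodes(data=True) if d["kind"] == "tool"}
    return nx.bipartite.projected_graph(g, tools)
```

Standard SNA measures (degree centrality, components, and so on) then apply to either side of the projection.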

Main argument: don’t neglect web logs! They have limitations, but they are always available without further instrumentation. Simple to set up. Multiple techniques possible.

Next steps: More serious analysis on ROLE Sandbox dataset. >8m API requests from 3000 IPs, 500 widgets (!!), 1300 PLE spaces, 3700 shared resources. Transfer approach to other contexts.

Questions

Q Maybe I missed it – how do you identify the user, the session, from web logs?

That’s really a problem. With IP addresses, you cannot do it precisely. Many people use the same machines every day. There are other techniques that help with this – e.g. page tagging as used in web analytics – which make the problem less serious. You cannot fully differentiate users. In MOOCs, people can have different IDs – come in with one ID, try again with a different one. Other approaches can fight this problem better.
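One common heuristic (not, as far as the talk said, what ROLE does) is to approximate sessions by grouping requests per IP and splitting on idle gaps; a sketch, with the caveats above still applying:

```python
from datetime import timedelta

def sessionise(entries, gap=timedelta(minutes=30)):
    """Group parsed entries into per-IP 'sessions', splitting whenever the
    idle time exceeds a threshold. This cannot separate different people
    sharing one IP address - exactly the limitation discussed above."""
    sessions = {}
    for e in sorted(entries, key=lambda e: e["timestamp"]):
        per_ip = sessions.setdefault(e["ip"], [[]])
        last = per_ip[-1]
        if last and e["timestamp"] - last[-1]["timestamp"] > gap:
            per_ip.append([e])
        else:
            last.append(e)
    return sessions
```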

Dan: In many of our logs, a single action shows up in three entries – bringing up the page, the edit round, posting – so you have to aggregate into meaningful actions. That’s where you do the work to move to a specific application: a level of representation that abstracts to something where the analysis is re-usable. Do you do that? Or analyse directly?

A little bit, yes. The data is usable for many scenarios; you can lift from this data for different scenarios. RESTful web services have their own semantics – they use HTTP to model higher-level semantics. If you have documentation for the service, you can model it.
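A sketch of that kind of ‘lifting’, assuming documented RESTful routes; the route patterns and action names here are invented for illustration, not the actual ROLE Sandbox API:

```python
import re

# Map (HTTP method, URL pattern) pairs to higher-level semantic actions.
# These routes are hypothetical.
SEMANTIC_MAP = [
    ("POST",   re.compile(r"/spaces/[^/]+/widgets/?$"), "added widget to PLE space"),
    ("DELETE", re.compile(r"/spaces/[^/]+/widgets/.+"), "removed widget"),
    ("GET",    re.compile(r"/resources/.+"),            "viewed shared resource"),
]

def lift(entry):
    """Turn one low-level log entry into a higher-level learning action,
    where the service documentation tells us what the route means."""
    for method, pattern, action in SEMANTIC_MAP:
        if entry["method"] == method and pattern.match(entry["url"]):
            return action
    return None
```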

Dan: If you write an analytics solution, can that be reused because there’s an abstraction layer from the web log?

We put that in a database – we do not do the analysis on the plain text from the log, we put it in a db schema. Then use visualisation tools to query the db in a more convenient way.
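A minimal sketch of that step with SQLite; the table and columns follow the fields discussed earlier and are illustrative, not the actual schema:

```python
import sqlite3

conn = sqlite3.connect("weblog.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS requests (
        id        INTEGER PRIMARY KEY,
        ip        TEXT,
        timestamp TEXT,
        method    TEXT,
        url       TEXT,
        status    INTEGER,
        location  TEXT,  -- enrichment: rough geolocation from the IP
        widget    TEXT   -- enrichment: widget metadata from the URL
    )
""")

def store(entry):
    """Insert one parsed-and-enriched entry, so that later analysis and
    visualisation tools query the database rather than raw log text."""
    conn.execute(
        "INSERT INTO requests (ip, timestamp, method, url, status, location, widget)"
        " VALUES (?, ?, ?, ?, ?, ?, ?)",
        (entry["ip"], entry["timestamp"].isoformat(), entry["method"],
         entry["url"], entry["status"], entry.get("location"), entry.get("widget")),
    )
    conn.commit()
```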

Analytics of collaborative planning in Metafora – architecture, data, and analytic methods

Andreas Harrer

Also ‘tales of a “tooler” about an interdisciplinary approach to reach the middle space’. ‘Confessions’, perhaps.

Co-designed with pedagogical partners a system to support collaborative learning. Build a system, report, reflect, feed back to hypotheses. Also developed analytics components they wanted to present to the students. Ongoing process.

PlaTO – an analytics agent for planning behaviour.

Pedagogical background of Metafora – self-regulated and collaborative learning. Secondary school, small groups (3-6). Complex learning challenges, extensive lessons – 20h. Planning as a key competence to learn to learn together (L2L2). System supports learners and teachers with intelligent analysis.

Physics microworld, constructivist approach – students manipulate their own model. Tool instances can interoperate – it’s a toolbox, but with a semantic level exposing this into a discussion environment, so it can be explored by the learners, who then come back to the tool.

 

Picture of the tool in use.

The main tool is the Planning Tool – learners self-organise activities using visual elements: shapes, cards that represent activities, attitudes (critical, open, reflective), roles in the organisation, and connectors. Learners can create and document their progress and status. The tool is a bit like LAMS or similar tools. Can click through from a card to the tool itself. Elements have state – e.g. ‘finalise’ on the model when you’re finished.
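A guess at the kind of data structure behind those elements – purely illustrative, not the Metafora data model:

```python
from dataclasses import dataclass

@dataclass
class Card:
    """One visual element on a plan: an activity, attitude or role card.
    Field names and state values are assumptions for illustration."""
    kind: str            # "activity", "attitude", "role"
    label: str           # e.g. "critical", "open", "reflective"
    tool_url: str = ""   # cards can click through to the tool itself
    state: str = "in progress"   # e.g. set to "finalised" when finished

@dataclass
class Connector:
    """Link between two cards, joining them into the group's plan."""
    source: Card
    target: Card
```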

Use a wide range of web technologies; flexible architecture to extend the system with AI components. Want analysis across several tools – do plans and actions fit together? So need comprehensible data formats at an abstraction level.

Analytic architecture diagram. The components don’t interact directly but via channels. PlaTO takes data from the Planning Tool (via a logging channel) and passes its output on.

Range of indicators from PlaTO – create/delete plans, open/change maps, etc.
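A toy sketch of that channel-based wiring – a pub/sub channel plus a PlaTO-like agent counting indicators; the names and message shapes are invented for illustration, not the Metafora code:

```python
from collections import defaultdict

class Channel:
    """Components never call each other directly; they exchange messages
    over named topics (a sketch of the pattern described in the talk)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

class PlaTOSketch:
    """Listens on the logging channel for Planning Tool actions, counts
    simple indicators (create/delete plan, open/change map, ...) and
    publishes the running totals onward."""
    def __init__(self, channel):
        self.counts = defaultdict(int)
        self.channel = channel
        channel.subscribe("logging", self.on_action)

    def on_action(self, action):
        self.counts[action["type"]] += 1   # e.g. "create_plan"
        self.channel.publish("indicators", dict(self.counts))
```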

Dataset – 7 school challenges, in Israel and China, with more planned in Brazil. 1200 users, 550 groups, 1000 planning maps, 279k user actions.

Don’t want to push feedback to the students untested. So first do an observation procedure: replay log files, an expert watches the activities of the user, like on TV, and can then get the analysis component’s feedback and see whether it’s useful or not – which produces a coding of the usefulness of these indicators.
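A sketch of the replay idea: step through logged actions at an accelerated but proportionate pace so an expert can watch them unfold (timing logic only; the viewer itself is not shown, and this is not the Metafora tool):

```python
import time

def replay(entries, speedup=10.0):
    """Yield logged actions in order, sleeping between them so that the
    relative timing is preserved at 1/speedup of real time."""
    ordered = sorted(entries, key=lambda e: e["timestamp"])
    previous = None
    for entry in ordered:
        if previous is not None:
            gap = (entry["timestamp"] - previous["timestamp"]).total_seconds()
            time.sleep(max(gap, 0) / speedup)
        yield entry
        previous = entry
```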

What to do with these results? Connects to Alyssa’s talk. Small tools informing students and teachers – an embedded tool, folded away most of the time; the teacher can invite students to fold it down and do some reflection. Inspiration for the design came from visualisations summarising football games.

The dataset is available, as is the research method.

Current activities – eye-tracking study of the effects of feedback messages. Contingency Analyser based on Suthers et al’s contingency and uptake analysis.

Questions

Dan: With both talks, to what extent are you offering a solution, an approach or platform for doing LA? Is it at the level of implementation you have, or at the architectural level? What should others take home?

Here it was mainly demand-driven by the pedagogical partners, who were interested in being aware of what’s going on. That influenced the definition of our indicators. Had good experience with legacy systems, also with newly developed tools. Now integrating tools semantically into our architecture. Interesting to hook that up with other systems, in the spirit of having an analysis toolbox or analytics app store.

Dan: That’d be the role of yet another standard?

This format pre-existed from a European Network of Excellence; it’s not very complicated. Some XML transformations are not that critical. Could be some interaction with web logs; we don’t use them at the moment.

Q More on the contextualisation. Challenge: you may hear that it’s not really learning analytics, it’s intelligently supported CSCL from 15 years ago. I’d say that constructively. We have to build on our evolution. Open learner modelling for collaborative learning connects in well with this and with LAK in general.

I would follow you – I buy that about intelligently supported CSCL. Here we have a heterogeneous environment with specialised tools. Open learner modelling is implicit – students get feedback from what the system collected about them. You can’t manipulate or correct the model; that might be interesting too. It’s not very skill-based or -oriented. It’s relatively unbiased still – whether you started or finished an activity, or created an artifact. The loop inside the head is more important than the loop in the system.

Dan: To what extent are you building on the plan recognition literature?

Maybe the terms overlap. We don’t use the regular techniques from that literature; we use workflow engineering techniques for analysis.

This work by Doug Clow is copyright but licensed under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.

