Liveblogs from Wednesday 2 July 2014 at LASI2014: LASI Locals report and Tiffany Barnes keynote.
LASI Locals report
Dragan welcomes everyone. Local sites reporting in: Cairo, Egypt; Hong Kong; Madison, University of Wisconsin.
Hong Kong
Xiao Hu talking … but we can’t hear her. Then we can. It’s evening in Hong Kong, so the others are having some rest. They are going to start on Friday, scheduled as a one-day event. The morning session is presentations of research, with presenters from different organisations – Hong Kong U, and researchers from other universities, including HK U of Science and Technology, plus a research institute. First part from an engineering perspective, then a second part from a pedagogical perspective, then concluding with discussions and sharing. More open in the afternoon, with participation from teachers and school administrators. Lots of materials from LASI at Harvard: the organisers will review the video from Harvard and make a digest of the contents, shortening them to cover more topics.
Showed PowerPoint slides – LASI-Hong Kong 2014 @ University of Hong Kong. They have lecturers from education, computer science, engineering, and also people from industry. Prof Nancy Law is the driving force for this. They have a few research themes; interdisciplinary. Had a Summer Fest on the Science of Learning, a one-week event with speakers from around the world.
Cairo, Egypt
Have been there for two days, holding 2-3h sessions before these LASI sessions. In those two days, we’re just starting to uncover local expertise across all the multidisciplinary areas. People from engineering, data, IT and education participating. Presentation on the data they’ve collected over 25 years – lots of data, looking at how to link it to data in other ministries and other sources. After six months, hopefully we’ll have an event to share information. Have started a wiki.
Dragan: Any feedback or questions? Which themes resonated best in Cairo?
We’ll have lots of questions in the next six months!
Madison
Kyle says hello from UW-Madison. Yesterday, after the live streams from Harvard: five different sessions, four universities from the UW system. Keynote panel of four people – research, administration, leadership, technical team – talking about building out capacities for learning analytics: tech infrastructure, policies and workflows/project management, values and skills, questions of culture and behaviour. Then concurrent sessions. John Thompson and someone else talking about a student success pilot, a predictive system built by D2L, used over 2-3 years. A talk on the use of a workflow visualisation tool using Moodle data – if you Google ‘Moodle workflow visualization’ you’ll get his work. Design-based research. An interactive session on student privacy and autonomy: four narrow problems and a wide one – privacy about what, about whom, balancing benefits & harms, student autonomy; and the wider one, LA in relation to the wider goals of HE, thinking about students participating in a democratic education process. Another talk from a grad student working with the Games, Learning and Society group: epistemic frame theory and epistemic network analysis using game simulations.
Today, we’re gathering ourselves from the loss to Belgium yesterday. Watching the Harvard livestreams.
Questions
Dragan: We have many presenters here.
Madrid came on the line – Antonio Roblez-Gomez. But very distorted audio, and dropped off.
Dragan: Which of the themes we discussed here, and workshops, attracted attention locally?
Madrid came back again … but we couldn’t establish communication.
Jelte from the Netherlands, interested in early literacy, high school, exploring interest for a special interest group – if so, please join me in the coffee break. Literacy analytics.
Another lunch SIG hosted by Caroline Haythornthwaite and Anatoliy for early-career academics.
Keynote: Making a meaningful difference: Leveraging data to improve learning for most of the people most of the time
Tiffany Barnes (North Carolina State University)
Making a difference for most of the students, most of the time.
Start with guiding principles for research. How many of you have applied for IRB? (Most people.) Is it fun? Not really, but the principles are important: respect for persons, beneficence, justice. Is what I’m doing worth spending all this time on? In EDM, we often make the mistake of trying to get a better model, one that’s statistically significantly better than previous ones – but not asking whether it makes things better for the learners. Will it make a difference to how many exercises you have to do for your homework? Will it make a difference? I think about these principles in choosing my research. When we think about adapting systems, there are many levels of analysis – systems, institutions, countries.
Most of my work is at the individual student level. Here’s how we would apply those principles. Respect: making changes that make a difference for individuals. What difference does it make for me? Need personalised models. Don’t want ed software that doesn’t care who you are. Also recognise that people have a lot to offer to environments and to each other. Most of my work takes interaction data to give feedback or hints to other students. A criticism is that this couldn’t be useful. I feel I’m constantly learning as a teacher – I’m a learner as well. So I don’t want to say students have nothing to offer. My other response is: don’t you ever give any As? I could take the data from those students and feed it back to others. We forget that there’s expertise being built in our learning environments. Expert blind spots: once you know how to do something, it’s hard to explain how to do it, because the knowledge is bound up together in our brains. You may not know the best path to take new learners down. But more recent learners can help. So leverage current learners’ experience. The data from the middle of someone’s learning is great data to give someone else who’s coming along.
Beneficence: What difference does it make? Practical effect sizes. A trend lately – not only statistical significance, but will a letter grade change, or homework time? Not arguing about models, but just pick some and use each other’s advances. Maximise the potential for positive changes.
Justice: I use this a lot. If I think I’ve made a new system, or a new method that’s going to be better, it’s not fair for me to do controlled experiments where one classroom gets it and another doesn’t. Mostly I do switching replications or crossover studies. That turns out to be a nice method – you have a controlled study for the first half, and the second half tells you if there’s an ordering effect. Most of my stuff is homework studies. Considering equity in developing and deploying systems. Most of us work in digital spaces; a few are on the other side of the digital divide. Lots of ways to measure proficiency and mastery. We’d all like to do stealth assessment: if you can predict test scores, why do you need to have the test?
The future of learning. Was asked to think about what might win the best paper award in 2023. Need different ways of recognising and promoting excellence in teaching and learning. Non-intrusive ways to identify people who can help others, recognising mastery. I do an intervention in a discrete math class: invite students to become peer tutors if they’ve done well in the class. They provide 10 h of service in return for not having to do the exam. Students were elated, and so was I. 35 out of 65 volunteers completed it. 350 free hours of work from students who were excited to do it, and they said they learned the material better. When someone asks a question, they have to think a different way and learn more. Real-time support for effective culture – identifying potential collaborators working on similar tasks, pairing according to maximum likelihood of a good relationship, suggesting joint activities. Blurring boundaries between teachers and learners. Reputation management, knowledge modelling. How many play games? (about 1/3) How many play games with user-generated content? (much fewer) Lots of examples, like Little Big Planet. It’s hard to do. The key is to ensure they are learning the content when they’re making it.
There are things we need to do to get there. Have a system where people can keep track of their achievements. Like Xbox achievements. Tracking what you’re good at. Like a credit score. Computer scientists in the room? (about half) But we’re all terrible educators. We just say, go do it yourself, because that’s what we did. All of us fell in love with something, figured it out by ourselves, and that’s what you think everybody needs to do to learn something.
Photos of her students. Research areas – games, broadening participation, AI for learning. Two centers – digital games center, and Center for Educational Informatics. CAREER award from 2009 for EDM for student support in interactive learning environments.
Vision is individualised instruction, shift in roles. Trying to make domain-independent systems for learning. Students are interacting with the computer, we could use data mining for intelligent feedback and control.
Approach – a data-derived model tracer. Lots of work under intelligent tutoring systems. Cognitive tutors have a model tracer and a knowledge tracer. The knowledge tracer tells you what the student knows. The model tracer: when they’re solving a problem, you have a model, and you trace what they’re doing on that model. If we measure a student’s performance, we have an expert solution, and trace what they’re doing on top of that. At first there were just correct models; now there are incorrect ones too, in order to diagnose errors. Goal is to have data-derived model tracers. Most work is in discrete math and logic proofs. They’re open-ended. Think back to geometry – given premises that are true, derive this result. Most students hate that; they have to figure something out. The activities they can do are constrained – there’s a fixed list of axioms. Can’t just draw a flower. So it’s open-ended, but I can understand all of what students are doing. Looking at their proofs. Produce a graph of it – state at the start, take a snapshot of the next, that’s the basics of a graph. Combine each of those into a big graph – same screenshot = same state; if not, a new state. Then calculate transition probabilities based on frequencies. Then label some states as good solutions, reward those, penalise errors. Then use a Markov decision process, using value iteration to assign credit to states based on whether they get to good states. We know where someone ought to go, see what’s connected to that, and then back-propagate value to those states. An example – a big spidery state transition diagram. Most of it is spaghetti, but there’s a lot in the middle with a lot of overlap, many students doing the same thing. If you grade papers, you get déjà vu. Students do the same thing a lot of the time – so we can use that to give hints. Get a value for each state, which is important; we can tell the best one and use that as a hint.
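A minimal sketch of that pipeline (my own illustration, not Barnes’s actual code): build a state transition graph from logged student attempts, then run value iteration so every observed state gets a value reflecting how reliably past students reached a good solution from it. The reward values, discount factor and data layout are illustrative assumptions.

```python
from collections import defaultdict

def build_graph(attempts):
    """attempts: one state sequence per student attempt, e.g. [['s0', 's1', 'goal'], ...].
    Returns transition counts between observed states."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in attempts:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def value_iteration(counts, goal_states, error_states,
                    goal_reward=100.0, error_penalty=-10.0,
                    step_cost=-1.0, gamma=0.9, sweeps=100):
    """Assign a value to every observed state; higher = more reliably leads to a goal."""
    states = set(counts) | {t for nxt in counts.values() for t in nxt}
    V = {s: (goal_reward if s in goal_states else 0.0) for s in states}
    for _ in range(sweeps):
        for s in states:
            if s in goal_states or s not in counts:
                continue  # goals and dead ends keep their current value
            # Treat each observed transition as an available action and take the
            # best successor; the transition counts are kept for ranking hints later.
            best_next = max(V[t] for t in counts[s])
            penalty = error_penalty if s in error_states else 0.0
            V[s] = step_cost + penalty + gamma * best_next
    return V
```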
Useful in any domain with open-ended but structured problem-solving, with multiple steps. Math – algebra, geometry, logic. But also STEM.
State features describe what they see on the screen; here, the state is the premises and what they’ve derived so far and by what rules.
Over 200 students/year. Discrete Math course, Logic and Algorithms, Philosophy. Students have difficulty developing strategies to solve proofs. Use the Deep Thought logic prover. Workspace, premises, conclusions, and rules they can use. Problem details. Goal – use the rules so that there are no question marks left. Actions are the rules they can click in this environment. State is what they can see. Set the goal state to a high value, errors to negative. All others start at zero, with a small cost added to each step. If there’s an error, mark that as a side state, then move back.
Did a study looking at log data, to see whether they could generate enough hints to be useful – say 50% of the time. Looked at all states, to see if a hint could be generated there. Look at the next state with the highest value. Generate hints from the state features of that state. For our domain, we generate a hint sequence of four steps: here’s what you should derive, and from what statements. Nice to hint to the next step – but isn’t that cheating? Looked at how often they do it. On average, they use 4 hints. Either they get one free answer, or four nudges. Can’t you give hints more like what a teacher says? I don’t always stay at a high abstract level either.
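As an illustration of that lookup (my own sketch, building on the graph and values above; the four placeholder hint texts and the ProofStep structure are assumptions, not the system’s actual wording):

```python
from typing import NamedTuple, Optional

class ProofStep(NamedTuple):
    derives: str    # statement the hint points towards
    premises: str   # statements it is derived from
    rule: str       # rule/axiom to apply

def next_step_hint(state, counts, V) -> Optional[str]:
    """Point at the recorded successor of `state` with the highest value.
    The ProofStep shown to the student would be read off that successor's state features."""
    if state not in counts or not counts[state]:
        return None  # no past student went anywhere from here
    return max(counts[state], key=lambda t: V.get(t, float('-inf')))

def hint_sequence(step: ProofStep):
    # Each successive request reveals a little more, ending with the full step.
    return [
        f"Try deriving {step.derives}.",
        f"Work from the statements {step.premises}.",
        f"Apply the rule {step.rule}.",
        f"Derive {step.derives} from {step.premises} using {step.rule}.",
    ]
```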
Took 4 semesters of past data, 523 student attempts, 381 complete. Did cross-validation: 1 semester of data to predict the other 3, then 2 to predict the other 2, and so on. Different kinds of hints. A person exactly like me at exactly the same step – like the ZPD – is likely to give a helpful hint. That’s the first kind of hint, called ‘ordered’. Unordered: we can’t find someone who did the steps in the same order, but the state is the same, so they may have a similar model. For the last two kinds, often the student has already tried something that didn’t work, possibly because they’ve just done something they shouldn’t, so maybe try going back one step and look for a hint from there. With 3 semesters of data, got a 94% ability to give hints.
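A rough sketch of that matching cascade (my reading of the description, reusing next_step_hint from the earlier sketch; the ordered-match index is an assumed structure):

```python
def find_hint(history, ordered_index, counts, V):
    """history: the student's states so far, most recent last."""
    key = tuple(history)
    if key in ordered_index:
        return ordered_index[key]          # ordered: same path, same step
    hint = next_step_hint(history[-1], counts, V)
    if hint is not None:
        return hint                        # unordered: same state, any path
    if len(history) > 1:                   # last resort: back up one step and retry
        return next_step_hint(history[-2], counts, V)
    return None
```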
How long will it take us to get these hints? A whole semester, or can we add them piecewise? So took the whole dataset, randomly add one student, then the next, and see if we can hint for the next one. Do that 10,000 times. The ‘cold start’ problem. Only need 8 students for a 50% chance of hints.
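A rough sketch of that cold-start simulation (reusing build_graph from above; the hintable test and data layout are assumptions): add prior students’ attempts one at a time in random order, and after each addition check how often the next student’s states would have had a hint available; averaging over many random orderings gives the coverage curve.

```python
import random

def hintable(state, counts):
    return state in counts and len(counts[state]) > 0

def cold_start_curve(attempts, trials=10000):
    coverage = [0.0] * len(attempts)
    for _ in range(trials):                       # use far fewer trials in practice for speed
        order = random.sample(attempts, len(attempts))
        for k in range(1, len(order)):
            counts = build_graph(order[:k])       # graph built from the first k students
            probe = order[k]                      # the next student's attempt
            states = probe[:-1] or probe          # states where they might ask for a hint
            hits = sum(hintable(s, counts) for s in states)
            coverage[k] += hits / len(states)
    return [c / trials for c in coverage]         # average hint availability vs. pool size
```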
Pilot study. Only had 16-26 attempts, so hints were only available for 45% of states. 40 students. 90% of hint requests were successful! Not only can we often generate hints, we can give one 90% of the time when someone asks. Students have a lot of overlap in where they need help – a couple of hard spots that are the same for lots of students. Students were able to do more problems when they had hints: 3.5x more likely to finish the whole problem set. No time difference – but the control group dropped out. Looking closer at the data, the control group spent more time than the others and then gave up, having spent lots of time without hints.
Also looked at whether we can add expert data to need less student data – expert seeding means you can start at over 50% hint availability if you have just a few problems worked by experts in the system. But seeding could create the side effect of reinforcing specific paths – you’re not comparing students with successful students any more. So now we alternate which problems have hints, so some of the data is collected separately. There is a trail-blazing effect: we do observe that students do what the hints suggest. If they do something and don’t get a hint, they delete it and go back. The interface shows whether a hint is available, and they use that as [an informational scent] to see if anyone has gone this way before. The road less travelled can be cool, but not when you’re doing your Discrete Math homework.
We labelled the goals. Just because you’re finished doesn’t mean every step was good. Like writing a conference paper – lots of stuff gets cut out of the final version, and there’s a reason we delete it. So wanted to see how many steps are useless, figuring that out recursively. A computationally intensive process.
Happy to find that frequency was a predictor of whether something was useful. If you take a problem, make a list of all actions, and count how many students did each one. If more than 7 people did it, it’s useful. So we didn’t need the computationally intensive process – just say that if lots of people do it, it’s important. Only works if 70% of people got the answers correct. Use that to assign hints – looked at how different they were.
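A minimal sketch of that frequency heuristic (my own illustration; the data layout is assumed, the threshold of 7 comes from the talk): count, per problem, how many students took each action, and treat an action as useful if enough students took it.

```python
from collections import Counter

def useful_actions(attempts, threshold=7):
    """attempts: one state sequence per student for a single problem."""
    seen_by = Counter()
    for seq in attempts:
        # Count each (state, next_state) action once per student.
        for action in set(zip(seq, seq[1:])):
            seen_by[action] += 1
    return {a for a, n in seen_by.items() if n >= threshold}
```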
384 states could possibly have hints; 204 had only one next action, 180 had more than one choice. Compared the two methods: 91% agreement. So we didn’t need to know what the goal was, just that lots of people did the same thing. So this could be good in MOOCs where we don’t know what the right answer is. If nobody’s doing it, it’s probably not right. [A useful heuristic for life. But not an ironclad rule.]
Used idea in another game, a BeadLoom game. They have a picture, have to plot points to draw the picture. Made out of fat pixels. Start with e.g. blank square, add a red dot. Can generate state transition diagrams.
Prototype – InVis, a tool for exploring state transition diagrams. Example of a larger network – from the Deep Thought logic tutor. Have algorithms to simplify them – e.g. remove all frequency-1 states – so you can see the main features. Not just visualising it, but giving e.g. the top 3 graphs, and what the errors look like.
Did a study using experts to predict students’ actions. Hypothesis that some students were guessing. Found a whole bunch of students doing that, and that they had a lower success rate.
Also looked at graphs of students’ solutions at different times. Graph on day 1: all over the place. Day 3: a lot fewer states; they’d learned what they were doing.
Did clustering of the graphs based on network analysis. Used this to generate more abstract hints, based on edge betweenness. So instead of the next step, the hint says you need to get to another island, and here’s the step towards it. Collapse the clusters, see what the islands mean and how people traverse the problem space. Students can work from the top down, or the bottom up. More successful when working forward. Maybe we need to teach them that. [I’m guessing a selection effect – better students work forward and don’t need to try working backward.]
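A sketch of finding those “islands” with an edge-betweenness method (Girvan–Newman as implemented in networkx; graph construction follows the earlier sketches and is my assumption about the setup, not the actual InVis code):

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

def island_clusters(counts):
    """counts: transition counts from build_graph. Returns clusters of states."""
    G = nx.DiGraph()
    for s, successors in counts.items():
        for t, n in successors.items():
            G.add_edge(s, t, weight=n)   # keep frequencies on the edges
    # Girvan-Newman repeatedly removes the edge with highest betweenness;
    # take the first split into communities as the islands.
    communities = next(girvan_newman(G.to_undirected()))
    return [set(c) for c in communities]
```

An abstract hint would then point the student from their current island towards the island containing the goal, rather than at a single next step.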
Now doing survival analysis, to see what time in the tutor means. Comes from the medical literature – how long people survive on e.g. a drug. Here, looking at what time spent in the tutor means, especially when so many people drop out of the tutor. So make curves to predict how long people will spend. People in the hint condition take 45% less time.
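An illustrative survival-analysis setup (my own sketch, not the speaker’s code, using the lifelines package): treat time spent in the tutor as the survival time and dropping out without finishing as the event of interest, with finishers right-censored. The column layout is an assumption.

```python
from lifelines import KaplanMeierFitter

def dropout_curve(sessions):
    """sessions: list of (minutes_in_tutor, dropped_out) pairs; dropped_out is True/False."""
    durations = [m for m, _ in sessions]
    dropped = [d for _, d in sessions]
    kmf = KaplanMeierFitter()
    kmf.fit(durations, event_observed=dropped, label="still working")
    # Probability of still being in the tutor as a function of time.
    return kmf.survival_function_
```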
Newest thing: data-driven mastery learning. Don’t move on until you know the previous element. I do know which students have finished. Break their data into intervals, then take a profile of those students at each interval and make that the target profile. Compare new students; if they’re not hitting the same threshold, make them do more remediation before they move on to the next element. When the tutor was re-done that way, a much higher percentage of students finished it. Original system: less than 30% finish. With hints: more like 45%. With mastery: more like 55%.
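A minimal sketch of those data-driven mastery thresholds (my own illustration; the profile contents, averaging, and tolerance are assumptions): snapshot the finishers’ skill profiles at fixed intervals, average them into target profiles, and hold a new student at an interval until they meet the target.

```python
def target_profiles(finishers, intervals):
    """finishers: list of dicts mapping interval -> {skill: score} for students who finished."""
    targets = {}
    for i in intervals:
        skills = {}
        for student in finishers:
            for skill, score in student[i].items():
                skills.setdefault(skill, []).append(score)
        targets[i] = {s: sum(v) / len(v) for s, v in skills.items()}
    return targets

def needs_remediation(student_profile, target, tolerance=0.9):
    # Hold the student back if any skill falls well below the finishers' profile.
    return any(student_profile.get(s, 0.0) < tolerance * t
               for s, t in target.items())
```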
Questions
Piotr Mitros: Logic has a small state space. Have you thought of applying this to more open-ended domains, e.g. programming, circuit design? Similar systems but larger state spaces?
We’re working on building hint systems for programming for novices. What should the state representation be? We’ve done hand analysis of Java programs and can build a good representation. Two approaches. One is a linkage graph (John Stamper), tracing all the variables and what happens to them. Haven’t built that yet, but a grant to do it has been recommended for funding.
Mykola: Your guiding principles. The justice principle. How does the experimental and development work relate to that?
Justice is about equity of access. Most of my work is in college discrete math classes, already an elite group. Not pushing that angle with this work. I have other work – another grant to have college students do outreach service to bring more people in. Also invite undergraduate students to work with me, making new games and projects to expand to more STEM work for more students. Not using my EDM work to get at that. But on equity: we’re not saying that if you didn’t get an A you’re no good. Our hints are modulated by frequency – a C student might match a C student and so might help more. The idea is that every student can achieve the A level, but might not have to take the same path.
Xavier: If I have my own view of how to solve the problem, how do you avoid the system saying it’s wrong?
Our hint system is on demand, so if someone is doing something creative or new, we don’t say it’s wrong. Students who don’t ask for hints and are doing something strange usually have a creative urge; students who do wonky stuff but keep asking for hints are usually just trying something random, so we’re not necessarily helping them. We can capture that creativity, and a hint along that path will be available later.
–
This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.