LASI14 Tuesday (2): Panels on Learning Sciences, professionals

More liveblogging from LASI 2014 at Harvard, on Tuesday after coffee.
[Image: John Harvard's Foot]

Panel: Learning Analytics and Learning Sciences

Chris Dede (Harvard University), Janice Gobert (Worcester Polytechnic Institute), Taylor Martin (Utah State University), Carolyn Rosé (Carnegie Mellon University)

Mykola introduces the panel.

Janice Gobert: Using data mining on log files for real time assessment and tutoring of science inquiry skills

Apprendis, LLC – a small startup she's working on, alongside WPI.

Coming from a very psychological perspective. We're getting into the black box of learning; LA and EDM give us tools to work at a fine grain size. Intersection of learning sciences and computer sciences.

US students continue to fall behind in STEM, 21st in international comparisons. Why? Students differ on many dimensions – e.g. prior content knowledge, skills for science inquiry, affective characteristics. Standardised tests don't measure the right stuff, so teachers don't have the right info to give feedback. Typically multiple choice, with feedback a long time later – often into the next year. Skill assessment is very limited. Educators can't know who needs help. Students struggle in silence.

Leverage tech & data mining to help. Working in this area, assessment, real-time tutoring.

Interactive labs, microworlds. Greater authenticity and validity than MCQs, but often not used to assess the skills they're designed to hone. They produce rich log files that could be used for assessment via EDM methods. They can scale and give immediate feedback – scale is really important. Inquiry learning might not be the whole story, but it's a good start. We need to get beyond multiple-choice tests.

Challenges to inquiry assessment. Complex tasks take longer, which affects reliability. There's more than one way to do a task – hard to handle and get metrics. Sub-tasks are not independent: if you start with a hypothesis, subsequent tasks are related to it. Another complexity that EDM can handle but traditional IRT methods can't, since IRT is tough to apply when skill level is changing all the time. Theory is needed to distill and aggregate the data a priori, otherwise it's a GIGO model. The theoretical framework informs the data.

Our solution is an ITS called Inq-ITS. Assessment for middle school science using microworlds. Rich log files, real time metrics for skill assessment, tutoring. Pedagogical agent too, but not talking about that.

Microworlds leverage affordances (Don Norman): hypothesise, experiment, analyse, warrant claims, communicate.

Virtual lab – not doing NLP.

Student works in the virtual lab, algorithms run on the data (patent pending!), generating two reports – a teacher report, and real-time alerting for teachers. Can see if e.g. lots of students don't know what an independent variable is.

Can we develop valid metrics to autoscore students' skills? Do the algorithms transfer? The goal is to scale, and to transfer to complex causal systems, like life sciences.

Findings – detectors for state change can work >90% of the time. Can spot designing controlled experiments 94% of the time. These generalise to Density, a different environment – 82% match with human scoring there, using the same features on the log files. When tutored in real time, skills transfer to a new domain. More rigorous than an RCT.
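As a rough illustration of that kind of validation – the Inq-ITS detectors themselves are patent-pending and weren't shown – agreement between a detector and human scoring might be computed like this, on invented labels:

```python
# Hypothetical sketch: comparing a detector's labels with human coding.
# The real Inq-ITS detectors are not public; these labels are invented.
from sklearn.metrics import accuracy_score, cohen_kappa_score

human    = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]  # 1 = designed a controlled experiment
detector = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]

print("agreement:", accuracy_score(human, detector))  # 0.9 here
print("kappa:", cohen_kappa_score(human, detector))   # chance-corrected agreement
```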

Scalability – a view into the black box of learners. Rigorous measures to get at what students are doing. Automatic, rigorous scoring of inquiry processes. Generalises to new students not used to build the model, and without re-training the ML model. Potential to inform design of new experiments.

Students get real-time feedback, and we can measure learning over time. Digital production and revision lowers costs.

Science Learning By Inquiry (SLINQ). Working on this – three domains, real-time scaffolding, eye tracking, more. Info on YouTube under Janice Gobert, and papers at slinq.org.

Chris Dede: Diagnostic Feedback based on learning analytics

There are a lot of models trying to understand how to design 21st-century education. Many of them have the same fundamental characteristics: a focus on outstanding classroom instruction, a complement involving apprenticeships or internships in the real world, and a community dimension of experts and peers. Of those three, the least studied and articulated is richly contextualised real-world learning. Pre-college is particularly problematic. Even a field trip is hard to do.

Studying immersive learning – virtual reality, virtual environments. If you can't get out of the classroom, you sit there but are psychologically immersed in a virtual world – or walk home through an augmented reality. More easily done than a full-fledged internship.

The central theme, as with Janice's work, is inquiry. Immersive environments built to teach inquiry skills. Well-defined area; lots of subskills that have to be organised to study inquiry. Built the River City interface, used widely – diseases in an 1870s town. EcoMUVE too, ecosystems where students understand phenomena. More recently EcoMOBILE, augmented reality – 'magic eyes' seeing things imposed on the real world. They all generate big data for which analytics is important.

These are all very open-ended environments. Not like a highly-constrained ITS or a partly-constrained microworld; these are very unstructured, so it's extraordinarily difficult to interpret the log files. This here is the low-hanging fruit.

Four behaviours – exploration, data collection and analysis, communication, constructing evidence-based arguments about causation.

Timestamped logfiles have lots of different activities all jumbled together – complex to disentangle. Keep the environment open-ended, but look at the parts where the actions are constrained, and provide diagnostic feedback that draws on those constrained parts.

So, for River City and EcoMUVE, collect path information and reflect back the learner's paths, their team's paths, and expert paths. No interpretation, just showing – but strong gains in exploration behaviours. Used heat maps, outside the game world, contrasting green representations (lots of time spent) with the red representations of low-performing learners. If you show that to students, without a lot of interpretation, the message gets across.
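A minimal sketch of how such a heat map might be built from timestamped position logs – the actual River City pipeline wasn't described, and the data here is synthetic:

```python
# Hypothetical sketch: turning logged (x, y) positions into a heat map of
# where a learner spent time. Real logs would replace the random data.
import numpy as np
import matplotlib.pyplot as plt

x = np.random.uniform(0, 100, 5000)  # stand-in for logged positions
y = np.random.uniform(0, 100, 5000)

heat, _, _ = np.histogram2d(x, y, bins=25)
plt.imshow(heat.T, origin="lower", cmap="RdYlGn")  # green = more time spent
plt.colorbar(label="time spent")
plt.title("Learner path heat map (synthetic data)")
plt.show()
```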

Another relatively easy form of feedback: if you have a hint system, you can see whether people activate it and how deeply they go. The students already know what they've done, but the teacher doesn't – so flag which learners are struggling, where, and where patterns suggest the system isn't enough yet.

Another thing, without interrupting the flow: embed animated pedagogical agents. They ask the students questions, or students ask them questions – both of those are diagnostic, in ways the teacher wouldn't otherwise get to see. A simple example is keyword pattern-matching with a big FAQ database behind it. A NASA scientist would outperform the animated pedagogical agent – but if they weren't motivated or didn't know how to talk to kids, our agent outperforms them.
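A toy version of that kind of keyword pattern-matching – the FAQ entries and the matching rule here are invented, not the actual system:

```python
# Hypothetical sketch: answer a student question by keyword overlap with
# a FAQ database. Entries and phrasing are invented for illustration.
import string

FAQ = {
    frozenset({"water", "quality", "test"}):
        "Try the water sampling station near the river.",
    frozenset({"disease", "spread", "sick"}):
        "Look at where sick residents live, and what they have in common.",
}

def answer(question: str) -> str:
    words = set(question.lower()
                .translate(str.maketrans("", "", string.punctuation))
                .split())
    best = max(FAQ, key=lambda keys: len(keys & words))
    return FAQ[best] if best & words else "Can you rephrase that?"

print(answer("How do I test the water quality?"))
```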

Having developed multiple immersive ecosystems, we can do transfer studies: note students' progress in transfer – take one ecosystem, learn stuff, see if they can apply it in another. Borrowing from games: gain powers, level up based on accomplishments. In River City there's a mysterious mansion you can only access via special tasks. Can track that, and it feeds back into the diagnostics.

Fundamentally, there are some things you can do to take advantage of constrained subsets of a very unconstrained environment, and provide useful diagnostic feedback. Parallel to what a mentor might do in e.g. an internship situation. It’s a way of managing this third dimension beyond classroom and community – real-world immersion that seems to be important.

Taylor Martin: All you need is data?

Interesting flow. A lot of things motivated by unconstrained learning environments. I spend a lot of time in our world, and in the world of big business. Different models for thinking about how we deal with data. Not dichotomous.

Peter Norvig: theory is dead, all you need is data?

OK, some of the time. Even ML for Hackers: humans are good at finding patterns, e.g. in clouds. It's like saying the microscope has been invented, but I don't know what's going on in the soup, so I'm not going to look.

Our data are changing. Pilot study last fall: ten tech products for personalised learning. Now, based on the pilot results, 130K licence requests for one of the products. Suddenly we might be getting to big data.

Microgenetic research – we can do a lot more observations of learning behaviour now, and analyse them on an instance-by-instance basis. It used to be 11 interviews over 11 weeks; in our games, we have far more fine-grained data.

Example game, based on the concept of splitting. You put little splitters on a beam to make fractions – e.g. half of a half of a half makes an eighth. Lets us study theoretically-motivated ideas about learning by splitting.

One feature is node depth. Another is how many unique states – whether they explore lots, or not. Interesting results about how people spend time, pathways, development – lots of great stuff; theoretical analysis done. But now throw those into models to predict outcomes – and they don't predict. So much is going on: spatial stuff, time on task, and so on. So what variables are predicting success?

Some theoretical variables do pop up near the top. But total amount of time spent and number of correct targets hit are more important. So reanalyse based on those.
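A minimal sketch of that kind of check – fit a model and rank the features by importance. The feature names follow the talk, but the data, model choice, and labels are invented:

```python
# Hypothetical sketch: which features actually predict success?
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["node_depth", "unique_states", "total_time", "targets_hit"]
X = rng.random((200, len(features)))
y = (X[:, 2] + X[:, 3] > 1.0).astype(int)  # success driven by time + targets

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:15s} {imp:.2f}")  # the simple features dominate here
```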

IPRO environment – I can program – not talking about that!

Some dimensions: theory-driven vs discovery-driven – like DBR for data analysis – and a data-size continuum. With really big data, we don't know what's important; we can use ML feature extraction to get to that place.

Carolyn Rosé: Mining Conversational Data

Most of my work is with groups of learners working together, where the interactions are important. That brings new complications: it's not just knowing the NLP tech – the social dimension introduced intertwines with the cognitive level, and can inhibit engagement, so you have to be aware of it. Want to convey not 'here are great things we've done', but to raise some questions. There's a lot we can do with simple techniques – but with some real caveats.

Learning about learning by listening to learners. Long history in the learning sciences – think-aloud protocols, mental models; later, thinking processes and self-monitoring. Tutoring interactions involve two parties with distinctive roles. In collaborative settings with larger groups, roles are diverse and complex. Think about how careful the methodology around think-aloud protocols was – Ericsson & Simon's theory set out the circumstances under which you could get a realistic model, what to look for, what questions to ask; very careful. As the settings get more complex, we've got less careful. So we're going back to verbalisation theory to see what it looks like for collaborative processes – that's what we're working out.

Data from realistic settings – e.g. classroom discussions. What are the constructs representing the important processes we want to detect? Then we try to automate the capture, and take the findings to inform the design of collaborative environments, automated support for students, and facilitation by teachers. Can detect that through the analysis.

Tutorial dialogue research: I used to believe in shallow measures, low-level features. They didn't generalise well. Conclusion: shallow indicators are not what it's about. They happen to predict learning because they correlate with something important, but it's misguided to rely on them. Seeing people fall back on these in MOOC research makes me sad.

I'm thinking about social processes that can support learning, and sometimes hold learners back. We have to operationalise those. Think on two dimensions – authority, identity – signals that people emit. The theory behind this comes from multiple fields, an interdisciplinary collaboration: it starts with psychology and social psychology, then sociolinguistics, then language technologies – techniques for modelling.

Work developing a three-dimensional coding scheme for collaborative discourse: transactive knowledge integration, authority, and engagement – the SouFLé framework (Howley et al 2013). We get ideas that inform how we do our models.

Structural equation modelling diagrams. Conversational roles within subcommunities – a model of conversational roles. The way we talk starts with our personal style, which defines our tendencies. Roles fit together in groups, so they're affected by others; conversational roles are reflected in the way people talk in the conversation. Can build those into the models.

Can learn things bottom-up about what's going on in a MOOC environment. Students who take on the roles associated with higher attrition present themselves as disengaged and not satisfied. Students with lower attrition are more engaged, supportive, working on problems together.

Introducing a workshop later – a tool called LightSIDE.
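LightSIDE itself is a Java workbench for text mining; as a rough Python analogue of the kind of model it builds, here is a bag-of-words classifier over conversation turns – all the data and labels below are invented:

```python
# Hypothetical sketch: classify conversation turns as engaged/disengaged
# from word features, roughly what a LightSIDE-style model automates.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

turns = ["this course is a waste of my time",
         "let's work through problem two together",
         "i give up, nothing here makes sense",
         "great point, can you share your solution?"]
engaged = [0, 1, 0, 1]  # invented engagement labels

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(turns, engaged)
print(clf.predict(["shall we try the next problem together?"]))  # likely [1]
```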

The most important thing about separating a bowl of candies by colour is not to bleach the candies white first. If we take an over-simplistic view of analysis, we won't be able to do our task, because we've thrown away the important information.

Panel discussion and questions

Xavier: The effort. Normally we create new technologies, then ask how to assess them. What do these methodologies mean for the teacher – more work, or less work?

Janice: We work closely with teachers in designing the tools. The teacher report was designed with teachers, around what they wanted to know – who's in trouble, who's getting it – aggregated over months, per topic. The teacher has a metric ready for tests, or the end-of-term report. We worked with teachers in close collaboration. We find teachers say this adds a great dimension to their work and saves an enormous amount of time. A bit like a virtual lab book.

Chris: Step back from the individual teacher and look at the system as a whole; there's an analogy to health. Let's say your physician got diagnostic info about your health from one day to the next. Initially that's more work for them, but over time people would be healthier. Specialists who spend lots of time and money treating pathologies that could've been caught early wouldn't be spending that. So total time goes down, but it's redistributed and front-loaded. There's a parallel to what would happen in education.

Carolyn: Amount of effort isn’t the only thing. With our work, it takes a long time to get payoff. The work on modelling raises new questions, teaches us things.

Taylor: What teachers want. Report last week, they want to see their own data, but also not just another dashboard, but integrating across tools so they can go to just one place.

Someone: How does your research inform education in low-tech environments? All these presentations very often rely on cellphones and computers.

Chris: Phones are quite widespread, but often not allowed in schools. It's a two-fold issue: how do we make schools more permeable to tech, but also how do we apply learning techs throughout life, not just in formal settings – they can apply outside school. The tech access problem is solving itself. The real question is whether we'll have things to put on the devices.

Taylor: I do design-based problem solving in hands-on maker stuff – low tech. In professional development, teachers have trouble figuring out what their students are learning. Workshop last week with teachers, coding open-ended responses. That is technology, but most people have it – e.g. Google's ngram analyser. There are lots of ways we can work in low- or no-tech environments.

Janice: The techs we're using can work with one computer and the teacher up the front. My metrics are based on one computer per child, but they don't have to be. A teacher could integrate regular practices to handle a class discussion guided by the tech, with the whole class participating. South American schools are interested in that.

Carolyn: Two goals: one is to develop tech to help learning, the other is to learn about how learning works. Most exciting findings: in CSCL environments, we could see the connection between the small-group experience and the whole-class environment. It's important to have that small-group experience before the whole class. Tech helps to organise and manage the groups, but it can happen without it.

Jenny: How well do these newer systems fit? I love inquiry-based, immersive, virtual – wonderful – but based on the opening slides, it's out of step with society's measurements of what learning means. Standardised testing: we rank 21st. The gatekeeper for college is the SAT; for grad study, the GRE. All standardised tests. Does all this lead to improved scores within the system we have to deal with?

Janice: As a learning scientist: if you develop tech that helps support rich mental model-building, students can take that knowledge and do an MCQ. But the current model – instruction akin to rote learning of what the test is based on – can't generate rich knowledge: no hypothesis generation, no transfer. The hope is that more standardised tests will be developed differently. An MCQ typically has 4 options, and often 2 are obvious distractors, so students have a 50/50 chance by guessing. Not a good metric: reliable, but not very valid. That's my optimistic answer. Less optimistically: there's a huge investment in MCQs, and they will be around for some time. But there's a lot of movement – communities like LA and EDM are pushing hard, at the intersection of things that can inform the design of new tests.

Taylor: That's a driving factor for what I do. We work with maker spaces and Scratch. A principal used to say: hey, how does that help with what we're accountable for? Stealth assessment – but we're not there yet. Make magazine started a large project in California: get data, do the analysis to say, for the standards you're responsible for, did they learn something. Doing it with Scratch too. Aligning with standards, making learning visible where it wasn't traditionally. It's a big driving force for us.

Carolyn: There is a huge international effort to add collaborative assessments to PISA.

Chris: In the long run, for high-stakes items, predictive validity really matters. The predictive validity of tests is high for future tests; for performance in life it's quite low, and well documented to be so. At some point, the businesses that rely on graduates are going to figure out that the cost of the low predictive validity of school-leaving tests is enormous. If we could offer more unstructured, high-predictive-validity performance assessments, they'd use those. First, we have to have those performance assessments and demonstrate their predictive validity.

Panel: Learning analytics professionals – what should they look like, what should they offer and what can they get?

Alfred Essa (McGraw Hill), Jared Stein (Instructure), Negin Mirriahi (University of New South Wales), Charles Lang (Harvard University), Ani Aghababyan (Utah State University)

Charles Lang

What do you want? I'm at the end of my doctoral career, so I can get philosophical. Education is about asking what kinds of people we want, and how we make that happen. Education is an adaptive problem. Say a projector and computer aren't communicating: there is a technical fix that will make it work. An adaptive problem is incredibly complex; it involves people and behavioural change. Education is not necessarily something we can fix with a technical change.

Learning analytics presents a strange space in which we can maybe bring technical solutions into this adaptive space, by feeding technical information back in. That's what being a professional in this world means: it's a space that deals in both – technical solutions and behavioural change.

Adaptive problems create anxiety. People have learning anxiety: am I good enough? Does anyone have the knowledge? These are insecure circumstances. People are cautious about how to move in this space, not sure what problem they're trying to solve. We need to be cognisant of that, and think in terms of adaptive, behavioural change.

The idea is that we need to develop people who don't just have the technical skills and can deal with the anxiety, but who can really translate between the decision-makers, the computer science side, and the traditional research side – and allay the anxiety. Translation capacity in the first instance.

Al Essa

I'm VP of R&D and Analytics at McGraw Hill Education. I've put together a team of data scientists and engineers. Here are the attributes I've been looking for in putting it together.

Primarily character, in three senses. People who come to McGraw Hill are technically brilliant; they have lots of options. What would attract them to us, and us to them? Sense 1 is a sense of purpose and mission; then a sense of the moment; then a sense of community.

Sense of mission and purpose: we're all educators. I firmly believe in providing quality education at very low cost to millions of learners. That's the DNA at McGraw Hill – working with authors for decades. It's not easily transferable: you can find people with tech expertise, but really believing in the mission of education is not transferable.

Second, sense of the moment. When I go to work, every day is like playing in the World Cup – it's exciting. If you're in the World Cup, it's a once-in-a-lifetime opportunity; you don't want to squander the moment. The tech is ripe for prime time. This is a moment in history where we can really change how learning reaches out to the world. Let's not waste time.

Third, more football metaphors. With technically brilliant people, our training is: score a goal. People are trained to kick the ball into the net, but we don't teach them to pass the ball. So this is learning how to pass the ball. Total Football, in the Netherlands: the idea is that there's no fixed position, you do what it takes. Passing is a difficult art – don't just think about kicking the ball into the net, think about expanding the space. A sense of the team: the team is a collaboration. We're going to advance LA and the learning sciences by collaborating with the academic community. Having a sense of the community – not the individual, but the team.

Ani Aghababyan

Why did you choose learning analytics? Or – Al's question – why are you selling out? [Al: Ani is joining McGraw Hill – why go industry rather than academia?]

Carolyn: I'm a cognitive scientist with a bunch of questions I couldn't answer with pre/post tests. I haven't sold out – I'm still waiting for someone to pay enough.

Chris: Companies are the ones with the data. Things are moving to the cloud – Coursera and edX have the MOOC data. If you want to do interesting LA, you might have to sell out.

Janice: There's a really great opportunity to design better assessments now. We have the tech; we just need to do it. Students graduate every year without knowing science. We could talk about this for 10–15 years – don't let the perfect prevent the good; we have the good. Secondly, partnerships like yours with researchers are needed. Many academics design good products that aren't picked up because they aren't marketed; a publisher is needed to implement those products. There's a lack of infrastructure and community, which funders like NSF are realising. For products to be distributed widely, a partnership is needed.

Ani again.

What would you want your grad student to know with regard to LA/EDM methods, if you want them to be successful? What do you want them to learn? My professor wants me to pick up as many CS classes as I can while I'm still a student, because later I'll be too busy.

Phil Winne: I don't know much about the methods. For me and my students, it's the capability to ask questions of experts in learning analytics and data mining, so we can collaborate in a community.

Carolyn: What's important is how to validate your measures. And be very skeptical of the modelling – use it as a lens: what can it tell you, and what can't it?

Janice: When you address a deep research question about how people learn, it's not a trivial thing. People don't learn French like they learn science. There's a lot of theoretical knowledge that needs to be taken into account. That rich intersection is really deep. Some are skeptical that you can have a field called data science – can you be a domain-general data scientist, or statistician? Those fields will grow as sub-disciplines.

Someone: One thing I really like about the LA environment is curiosity – learning from everybody's diverse backgrounds, and from the environments where learning takes place. Being curious about different methods. Can I learn from other participants or experts?

For the companies: what do you expect to see in a new graduate's resume to consider them a valuable asset for your company?

Al: One skill that’s bedrock – if you’re going to be a scientist: know Python, and R.

[applause]

Al: It goes back to the football analogy. LA is inherently interdisciplinary. Have the ability to cross boundaries – think from a computational perspective, then switch to stats, visualisation, cog sci. Curiosity and blend. It's very hard, but also fun.

Shady, D2L: In addition to e.g. Python and R, I'd like to see the problems they're working on, and deep analysis of those problems. It's not about specific technologies; it's about how deep they reach in the analysis.

Mike: Communication skills, super-important. We have to communicate with lots of stakeholders – clients, teachers.

Do you think there's a discrepancy between what you're expected to know as a grad student in this field and what your graduate program offers you? The only tech thing I had was Photoshop – it's not enough. The program isn't designed to make you ready for postdocs or further jobs. Do you see a discrepancy between what programs offer and what the job requires?

Taylor: Learn Python and R! I told you.

Yes, I did, but the program isn't offering it. There's only a limited number of electives, and you don't know the requirements. I had to take statistics, but it's simpler than what the stats department offers. Not necessarily CS classes, but guidelines: if you're planning to work in this field, these are some of the things you should look into.

Chris Brooks: We live in a MOOC world. More students are applying to Masters and PhD programs saying: I don't have a CS background, but I took this class. It shows students gaining knowledge, and also that they're driven, even though they're coming from a different epistemological background.

Ani: I agree, and I've been doing that – I've taken Ryan's course. But as grad students there are credits you have to get, and MOOCs don't give you those. You can sign up for them as independent study, but there are only 2 or 3 of those.

April: It's a mistake to believe you'll learn everything you need in a classroom. It should prepare you to teach yourself. That's what graduate education is for.

Carolyn: Almost exactly what I was going to say. I got my PhD 16 years ago, but I still try to learn new stuff. As academics, and in industry, we keep our eyes on the horizon. If our programs were designed around exactly what we had to take, we'd be cookie-cutter images of each other. We carve out our own space to be successful.

Ani: Guidelines, not set in stone.

Taylor: This isn't an established field yet. Coming up with the right curriculum for a data science or learning analytics program is a different step. You're a PhD; you want to go learn this stuff.

Negin Mirriahi: Learning Analytics Professionals

We've seen LA research over the last few days. We also need to action it – how is the main thing. Put tools in the hands of academics, tutors, and students, so change will happen. As much as we can hire academics to do research, we can't put it into action without LA professionals. What should they look like?

I've seen lots of job descriptions – the position descriptions are the same, but with different titles. What should they offer? There's a hiring blitz at UNSW: working on position descriptions, and on how to find the right skills and the right people to make an impact. At a high level: big data analysis, extraction, data viz, staff capacity-building, management.

Also knowledge of the bigger picture of how LA can inform and enhance HE. We often hire people from industry – BI specialists, marketing – who have the skills, but what's missing is the bigger educational picture. What does learning and teaching really mean? People say: I have these skills, I need a strategy, I have no idea where to start. To minimise that learning curve, we're looking for people with that bigger picture.

An example: a couple of job descriptions, one looking for a data analyst. If you're hiring an LA professional, how do you know they can apply the skills in practice? We asked our candidates to do a couple of exercises before coming in. First, a data viz task from MOOCs: "prepare a data viz of the key elements of the MIT/HarvardX open dataset". That was it. Then a data analysis task: extract, prepare, and analyse raw log data from a course, then develop and present a succinct data viz report suitable for psychology instructors.
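The shape of an answer to that second task might look something like this – the file name and columns are invented, and a real submission would go much further:

```python
# Hypothetical sketch: aggregate raw course logs and chart weekly activity.
import pandas as pd
import matplotlib.pyplot as plt

logs = pd.read_csv("course_logs.csv", parse_dates=["timestamp"])  # assumed file
weekly = (logs.set_index("timestamp")
              .resample("W")["user_id"]
              .nunique())  # unique active students per week

weekly.plot(kind="bar", title="Active students per week")
plt.ylabel("unique students")
plt.tight_layout()
plt.show()
```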

Are these the right tasks? Would they give us the info we’d need to make the right hiring decision?

Example from log data – visualisation, log report.

What can they get? Jobs! Where are they? Australia! [laughter] But also everywhere.

Questions at the end: how do we hire LA professionals with the right skills and awareness of T&L issues? Do the tasks mentioned get to the heart of it? What was missing?

How do we make this easy for them to do – everyone is time poor. We have a job posting out!

Jared Stein (via web), VP of Research & Education at Instructure (Canvas)

Sharing some powerpoints. Learning Analytics professionals.

Aaron Johnson is there in the room with you – an analyst on my team. I'm an educator, a technologist, a manager, but not a data scientist. Aaron is! Ask him about the joys and frustrations.

A bit about our company: Instructure makes the LMS called Canvas, and is growing fast. How we're different: we're a true multi-tenant cloud on AWS. Everyone's on the latest version, and all the data is consistent and uniform from account to account, hosted around the world. From an LA view, that gives us the ability to look at the micro level and the macro level – at the use of the platform across institutions.

How are we using educational data? Throughout the company; in education specifically, in three ways. First, to inform the community – e.g. how does course design correlate with student participation, and so what? That leans ed tech, in the marketing and ed research teams. Second, to inform the product, Canvas: what can we learn from how the platform is used to tell the product team what to do next? Are users benefitting from feature X – why, or why not? That leans data science, in the R&D team. Third, in the product itself, to answer questions like: can we build a tool that helps our users understand how online behaviour predicts learning outcomes? That leans developer, on the product or engineering team.
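A trivial sketch of that first kind of question – correlating a course-design measure with a participation measure across courses; the column names and numbers are invented:

```python
# Hypothetical sketch: does course design correlate with participation?
import pandas as pd

courses = pd.DataFrame({
    "modules_count": [5, 12, 8, 20, 3],    # stand-in design measure
    "median_logins": [14, 30, 22, 41, 9],  # stand-in participation measure
})
print(courses["modules_count"].corr(courses["median_logins"]))  # Pearson r
```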

My list of 10 educational data analyst skills. This is the new webmaster: in the late 90s that title was a catch-all, and as the web developed, was more broadly adopted, and needs changed, the role dissolved and was replaced by teams with specialised skills. LA professionals will be like the webmaster – a broad range of skills – but I expect that to change as LA becomes more popular and effective.

Skills:
- Simple project management – can you answer: can you get this done in a certain period of time?
- Agile / rapid prototyping – a bit of a mental adjustment: release early, release often; try something, see if it has an impact, iterate if needed.
- Communication – important for any job but particularly for a data analyst, with lots of stakeholders; 'don't make me think'.
- Education theory – important, but the depth depends on the role sought.
- Statistics.
- Data retrieval and standards – SQL, CSV, JSON formats.
- Scripting, programming, machine learning – we can only go so far with basic queries and scripting, especially with really large data sets.
- Visualisation – make stuff pretty (Photoshop); nice to have, but we do have access to graphic designers.
- Data storage and management – so you don't have to reinvent; will be increasingly important.
- Knowledge management – track your practices; institutional memory.

Questions

Andy from King’s College: How customisable is it for a school or a teacher? How does it compare to open source? [directed to Jared]

Jared: Canvas is commercial open source. We take an approach to customisation that doesn't require touching the core code: customise through APIs, plugging in new tools through the LTI spec. On the LA side, we have a lot of people who do more or different things with the Canvas data than we provide through our reports. They create new tools that use the Canvas API for the data, and make that part of the interface using the LTI spec.
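A minimal sketch of pulling data through the Canvas REST API – the endpoint is from the public Canvas API docs, while the domain and token are placeholders:

```python
# Hypothetical sketch: list courses via the Canvas REST API.
import requests

BASE = "https://yourschool.instructure.com/api/v1"   # placeholder domain
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder token

resp = requests.get(f"{BASE}/courses", headers=HEADERS, timeout=10)
resp.raise_for_status()
for course in resp.json():
    print(course["id"], course.get("name"))
```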

Chris Brooks: For all the panellists. As academics, we're not just training potential employees but potential researchers, and actively doing research. At an event like this, it's great to see interactions between corporate partners and universities. What do you see as the opportunity for collaboration around PhD students? They spend many years before they get their degree – longer-term relationships, not just skills training, but coming to academic research of their own.

Al: We're really looking for two primary areas of collaboration. One: our interest on the industry side is which areas of research have the greatest potential for product development – help identify that, and collaborate with researchers on moving to the next stage. Of the research underway, a subset has the potential to lead to product development. It's a good thing for PhD students to begin to think in that mode: how do I think about my research, and what's the next step to turning it into a product, whether commercially or used by the institution? I haven't seen much training in that.

Chris: Entrepreneurial spirit

Al: Yes. How can I think as an entrepreneur? If I were to start my own company, what’s involved in that?

Someone from Columbia Teachers College: On academia and industry – great question; the discrepancy could be the best opportunity, because that's where innovation is. We didn't have EDM before. Within the MOOC research community, without the creation of MOOCs we wouldn't have the opportunity to work with a professor of English literature. That's pretty amazing. In terms of industry, I wanted to ask Canvas: when you look at candidates – you listed 10 skillsets and mentioned knowledge management – can you elaborate? For my own understanding: does it refer to how you can personally transfer what you did before into planning your future work?

Jared: Knowledge management is a challenge for all companies. The knowledge that individual teams have should be captured in some way and be referenceable. On my team, analysts use IPython Notebook as a tool to document their work – the results and the process that got them there. So when, in future, I say "remember that analysis?", they can go back and see what we actually did. It's a tool that adds work on the front end but saves a tremendous amount on the back end.


This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.
