Seeing how people really use online technology with the Tobii eye-tracker

Liveblog notes from an IET Technology Coffee Morning, 16 December 2009.

Presenters: Patrick McAndrew, Graham Healing, with input from Elpida Makriyannis and Anne Adams

We have a Tobii eye-tracker in the labs, which isn’t used as heavily as it could be. The aim for today is to show how easy it is to use, and to explore some of the ways it could be developed.

History – a few years ago, Graham and Patrick were trying to improve the OpenLearn website. It was hard to know whether their worries about its effectiveness were real. They used an eye tracker to do a brief study of what actually happens. It was amazingly revealing, and very efficient – just a few minutes of recording and playback showed them the interactions on the site. That then grew into quite a study, with analysis. At that point, the technology was hard work – it took many extra hours. But they have now bought more recent kit, with software upgrades, which make it very easy to use.

There is some classic research into how people read pages – like Jakob Nielsen’s classic (2001) work showing an F-shaped reading pattern for web pages. But is it still true?

Eye tracking has been around since the 1950s! Now more up to date – see e.g. this presentation, which is about the machine we have:

One big win is that the technology is almost invisible – it’s built into the monitor, so the effect of the instrument is minimised.

Graham sets off to do a live demo, and Keren Mills volunteers to use it. She’s sitting in front of the eye-tracking machine, which looks very nearly like an ordinary monitor. On the second display, it shows her eye movements – a red circle appears where the eye stops for a moment; it’s bigger the longer the eyes rest in place. It joins these dots up with a red line to track the movements of the eyes (saccades).
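(For the technically curious: the overlay is essentially just fixation circles joined up by saccade lines. A minimal sketch in Python of that kind of drawing, assuming fixations come out as simple (x, y, duration) tuples in screen coordinates – a made-up format for illustration, not Tobii’s actual export or API.)

```python
# Sketch of a gaze-plot overlay: circles for fixations (sized by duration),
# lines for the saccades joining them. Fixation format is hypothetical.
import matplotlib.pyplot as plt

fixations = [(312, 140, 220), (350, 160, 480), (120, 420, 900), (600, 430, 150)]

xs = [f[0] for f in fixations]
ys = [f[1] for f in fixations]
sizes = [f[2] for f in fixations]  # longer fixations -> bigger circles

plt.plot(xs, ys, color="red", linewidth=1)            # saccade lines joining the dots
plt.scatter(xs, ys, s=sizes, color="red", alpha=0.5)  # fixation circles
plt.gca().invert_yaxis()  # screen coordinates: y grows downwards
plt.show()
```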

The software has three tabs – Design and Record (capturing data), Display (simple replay), Visualisation (analysis tools).

Clicking ‘record’ gets things going. She registers as a new participant, types in her name (or alias!) and clicks continue. Then it goes through calibration. It turns out that most people’s eyes are very slightly crooked. The calibration is very straightforward – just tracking large red blobs with your eyes. Oops – the software crashed. Cue a slight pause to fix it, then the final serious IT solution of turning it off and then on again. The joys of live demos!
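(Aside: calibration is, in essence, fitting a correction from where the tracker thinks you are looking to where the known targets – the red blobs – actually are. A rough sketch of that idea as a simple affine least-squares fit; Tobii’s real algorithm will be considerably more sophisticated, so treat this purely as an illustration with made-up numbers.)

```python
# Rough sketch of the idea behind calibration (not Tobii's algorithm):
# fit an affine correction from raw gaze estimates to the known target positions.
import numpy as np

# Known calibration target positions (screen px) and the raw gaze reported for each
targets = np.array([[100, 100], [960, 100], [1820, 100],
                    [100, 980], [960, 980], [1820, 980]], dtype=float)
raw     = np.array([[112,  95], [970, 108], [1833,  97],
                    [ 95, 990], [955, 975], [1808, 985]], dtype=float)

# Solve raw_aug @ A ~= targets in the least-squares sense (affine fit)
raw_aug = np.hstack([raw, np.ones((len(raw), 1))])
A, *_ = np.linalg.lstsq(raw_aug, targets, rcond=None)

def correct(point):
    """Apply the fitted correction to a raw gaze sample."""
    x, y = point
    return np.array([x, y, 1.0]) @ A

print(correct((500, 500)))  # corrected gaze estimate for a raw sample
```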

Jonathan San Diego and others have used this technology quite heavily for looking at learning. It’s hard to tell whether someone’s learning directly from a page using this, but you can tell whether people are engaging or not – whether they’re reading or skimming.
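(A crude illustration of the reading-versus-skimming signal: lots of long fixations suggest reading, mostly short scattered ones suggest skimming. The threshold and rule below are invented for the sketch – this is not how Jonathan’s work or the Tobii software classifies engagement.)

```python
# Toy heuristic for reading vs skimming, given fixation durations in milliseconds.
# Purely illustrative; thresholds are made up.
def engagement(durations_ms, threshold_ms=200, reading_ratio=0.6):
    """Call it 'reading' if most fixations are long enough to take in words."""
    if not durations_ms:
        return "no data"
    long_fixations = sum(1 for d in durations_ms if d >= threshold_ms)
    return "reading" if long_fixations / len(durations_ms) >= reading_ratio else "skimming"

print(engagement([250, 310, 180, 400, 275]))  # -> reading
print(engagement([90, 110, 130, 80, 150]))    # -> skimming
```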

Back to the demo! We also have a video camera, which can be pointed at the user (Keren here) to show their expression, or at the context more broadly.

The calibration worked this time. It’s much more advanced than it used to be – previously, glasses or vision problems would make it impossible. But now the device can cope very easily.

Graham sets Keren a task – to go to the OU’s Platform website and explore it as she would any new site. She does so – but feels a bit self-conscious. The software overlays the red blobs and lines on the website (on the display that we, but not Keren, can see), showing where her eyes are fixating. Keren clicks ‘close’.

One slightly annoying feature of the software is that on startup, and on finishing a recording, it processes away without giving any indication that it’s working.

We could see (as the audience) that Keren was skimming a lot, and reading one bit – which she confirmed.

Now we go to the second stage – the playback (‘Display’). On the left it shows the recording sessions, the URLs that have been visited within a recording, and the video from the camera. At the bottom are fairly standard video playback controls, including a hare to speed up and a tortoise to slow down. The timeline clearly shows ‘events’, which are things like moving from one URL to another.

We can see that Keren noticed the ‘Install Google Chrome’ banner on the top-right of the page. And that the eye tracking isn’t affected by head movements, or reasonable ambient illumination – i.e. you can do it in an ordinary room rather than a darkened one.

The third stage is the visualisations. You get a list of URLs, and it shows you which of your recordings looked at each URL. Then for each of those, you can get several views. First is the ‘gaze plot’, which shows where people looked, and in what order – the size of the circles shows you the length of fixation, with each recording shown in a different colour. You can also look at how this elapses over time – so e.g. only show the fixations from the first five seconds, or two minutes, or whatever. You can pick which individuals to look at, and compare.
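(A sketch of the time-window filtering idea, assuming each recording is just a list of time-stamped fixations keyed by participant – a hypothetical data layout, purely for illustration of what “only the first N seconds” amounts to.)

```python
# Filter fixations to the first N seconds, per participant.
# Data layout (timestamp_s, x, y, duration_ms) is invented for the sketch.
recordings = {
    "keren": [(0.4, 300, 120, 250), (2.1, 640, 300, 400), (7.8, 200, 500, 300)],
    "p2":    [(0.9, 310, 130, 200), (3.5, 900, 220, 350), (12.0, 450, 480, 500)],
}

def fixations_in_window(recordings, first_seconds, participants=None):
    """Keep only the fixations that started within the first N seconds."""
    participants = participants or recordings.keys()
    return {p: [f for f in recordings[p] if f[0] <= first_seconds]
            for p in participants}

print(fixations_in_window(recordings, first_seconds=5))
```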

Top tip: when doing testing, it’s very good to have a fairly stable site. If the content changes a lot, it’s hard to compare – and the eye-tracking software treats large changes as different pages (to some degree – needs checking). And rotating/changing components are hidden in the Gaze Plot: it only shows a still image of the underlying web page. But those are visible in the playback. Another problem – with the web option, the software doesn’t capture video playback within pages (?well). You can capture video, and e.g. SecondLife.

Next view is the Heat Map, which combines the fixations to show ‘hotter’ (more red) for the areas that got more attention. Some classic findings e.g. people fixate on faces.
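(The heat map is conceptually simple: each fixation contributes a blurred blob weighted by its duration, and the blobs add up, so areas that got more attention come out ‘hotter’. A minimal sketch, assuming (x, y, duration) fixations and a Gaussian blur – the real software will do something more refined.)

```python
# Build a heat map by adding a duration-weighted Gaussian 'blob' per fixation.
import numpy as np

def heat_map(fixations, width=1920, height=1080, sigma=40):
    yy, xx = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width))
    for x, y, duration in fixations:
        heat += duration * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
    return heat  # higher values = 'hotter', i.e. more attention

hm = heat_map([(300, 200, 400), (320, 210, 600), (900, 700, 150)])
print(hm.max(), hm.argmax())  # the hottest spot is near the two overlapping fixations
```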

(There are more expensive analysis facilities which we don’t have – including Cluster and Bee Swarm.) There’s an Area of Interest tool, so you can measure the amount of time that people look at individual areas. It’s quite easy – you draw areas of interest on the screen, then you can see the Statistics for each area. So, e.g., you can see how long people spent looking at each area, or how many times they looked at it.
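(The Area of Interest statistics boil down to counting which fixations fall inside which rectangles, and summing their durations. A minimal sketch, with made-up AOIs and fixation data – not the tool’s actual data structures.)

```python
# Dwell time and fixation count per rectangular Area of Interest.
aois = {
    "banner":  (1400, 0, 1920, 120),    # (left, top, right, bottom) in px
    "article": (200, 150, 1200, 900),
}
fixations = [(1500, 60, 300), (400, 300, 800), (450, 320, 600), (1600, 80, 200)]

def aoi_stats(aois, fixations):
    stats = {name: {"dwell_ms": 0, "fixations": 0} for name in aois}
    for x, y, duration in fixations:
        for name, (l, t, r, b) in aois.items():
            if l <= x <= r and t <= y <= b:
                stats[name]["dwell_ms"] += duration
                stats[name]["fixations"] += 1
    return stats

print(aoi_stats(aois, fixations))
# {'banner': {'dwell_ms': 500, 'fixations': 2}, 'article': {'dwell_ms': 1400, 'fixations': 2}}
```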

Testing can show up very annoying problems very quickly – simple things jump out, and you then have evidence to show others in the development team. Here, Graham used Platform just as a quick example – the testing was three people who spent about five minutes each on it, plus Keren live in the session.

You can do more serious, heavyweight analysis – exporting to SPSS, or using dedicated video analysis tools.
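(Exporting for SPSS is essentially flattening the per-participant statistics into a table. A sketch writing a CSV that SPSS – or R, or Excel – could import; the column names and values are invented for the example.)

```python
# Flatten per-participant AOI statistics into a CSV for import into SPSS etc.
import csv

results = [
    {"participant": "p1", "aoi": "banner",  "dwell_ms": 500,  "fixations": 2},
    {"participant": "p1", "aoi": "article", "dwell_ms": 1400, "fixations": 2},
]

with open("aoi_stats.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["participant", "aoi", "dwell_ms", "fixations"])
    writer.writeheader()
    writer.writerows(results)
```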


This work by Doug Clow is copyright but licenced under a Creative Commons BY Licence.
No further permission needed to reuse or remix (with attribution), but it’s nice to be notified if you do use it.
