Getting away from screens

After yesterday’s session on multi-touch surfaces, I saw that Rhodri Thomas tweeted:

v interesting demo earlier on use of ‘Surface’-like multitouch table – but are we ever going to get away from interacting with screens?

Which got me thinking about the degree to which we already interact with computers without screens.  I was also reminded of a rather staggering (but believable on exploration) claim I heard on the radio last week from a guy from Intel, who reckoned that more microprocessors would be manufactured in the next year or two than currently exist in the world.  The overwhelming majority of these are not in computers-as-we-know-them: they’re buried away in embedded applications.  So this morning I thought I’d try to note all the microprocessors I’d interacted with other than by traditional screens, from getting up to sitting down at my first traditional computer screen to type this.  Some of these are slight cheats since they do have displays (e.g. the central heating timeswitch), but they’re not the sort we usually think of.  (We do need some leeway here: if by ‘display’ you mean any way in which a processor can make its state known to a human and/or vice versa, then by definition no interaction can occur without one.)  Anyway – a rough quick list:

  • central heating system – the timeswitch programmer to turn it on, and more processors in the boiler itself to run the system
  • bedside clock
  • fridge/freezer (for milk) – thermostatic and frost-free control working away
  • microwave
  • kettle – not certain, since older models are purely electro-mechanical, but this one’s brand new and I strongly suspect there’s at least one processor in there managing overheating/boil-dry protection and possibly actively optimising the heating process
  • radio
  • umpteen electronic toys used by the kids
  • electric shower – controlling the flow and heating rates

Then I left home and got in the car:

  • car – engine management system, and possibly other subsystems I don’t really know about, oh, and another radio
  • streetlights – some were still on suggesting they’re individually controlled (time? light?) rather than centrally switched – must have passed hundreds of these or more
  • SID – Speed Indicating Device – measured my speed, flashed it up on a display, then a smiley face to say it was under the limit
  • Pelican crossing with lights
  • level crossing with lights

And then I got to campus and towards my building:

  • More lighting
  • Security barriers
  • CCTV cameras
  • RFID security card entry system
  • automatic doors
  • heating blower behind the door
  • building management system controlling temperature and ventilation – this does have a traditional screen view but I don’t interact with it that way
  • lighting controllers
  • coffee machine

… a pretty large haul, and that’s not taking into account any of the processors helping deliver the utilities I used (gas, electricity, water).  It rather swamps the number of traditional screens I’ll be interacting with today: phone, iPod touch, laptop, desktop.  And of course those themselves rely on a large number of less visible processors running the network and power systems, and on the hundreds of computers (or more) I’ll interact with more directly online today.


ERA: Enabling remote geology fieldwork by transient wireless networking

Trevor Collins (KMI) and Mark Gaved (IET)

The Enabling Remote Activity (ERA) project provides opportunities for mobility-impaired students to participate fully in fieldwork learning activities. Over the last three years ERA has developed a rapidly deployable, lightweight, battery-powered wireless network that can be used to transmit video, audio, and high-resolution still images between a field geologist and a nearby student. This is used on The Open University’s ‘Ancient Mountains’ residential course (SXR339), a one-week series of field trips in locations throughout the Scottish Highlands. In this course our students get as close as they can, ideally within sight of the field location, and use the wireless network to work remotely with a field geologist. The technology is used as an enabler to facilitate the inclusion of students in field trips without compromising the learning objectives of the course. In this seminar we will present an overview of the technology, the actions it can support, the use made of it within the ‘Ancient Mountains’ course, and the feedback received from the students and tutors involved.

Getting out there with real geologists was key – your ideas (as an informed technologist) about what they might want are different from what they actually want in order to do the teaching. Hazards and difficulties include trees and huge rocks in the way. There are lots of different levels of mobility, from no restriction, to walking short distances with a stick, to negligible unassisted walking. So the response varies as appropriate – if the rocks are right next to the road, you may not need to set up the kit; if it’s a hard walk away, more people stay at the car park ‘driving’ a field geologist.

Geology fieldwork is fundamental to the learning experience.

SXR339 Ancient Mountains, a residential school in the Scottish Highlands. Visits to lots of rocks. Drive for 3h, on the rocks for 5-6h, 30 min at each site; moving around, drive some more, etc.  Also variability of individual students on different days. Jessica Bartlett (Course Manager) asked about an alternative learning experience for mobility-impaired students. The previous tech was binoculars and walkie-talkies – not fantastic.  Geologists want to see things in the large (context and layout of the rock formations), down to the tiny (in close with a hand lens at the crystal structure of the rocks), and switch between the two. (Scale from individual mineral grains to continents!)

Standard geologist tech is cameras and GPS/GIS tools. But we don’t allow photos in field reports, in order to get them to do the sketches which do the abstraction.

Much previous work on virtual field trips. RAFT – remote accessible field trips – similar project from OUNL.

Approach: transient wireless networking – a rapidly deployable, lightweight, battery-powered wireless network. (Contrast with the Ambient Wood project, which had a lot of infrastructure in it.) The sites are all away from mobile signal, and satellite costs a bomb.  Mark’s background is in grassroots community networking (the Pringles cantenna).

Wireless routers (Linksys WRT54G/GL), chosen because the firmware has been opened up (Freifunk, OpenWrt, DD-WRT), plus custom-made batteries (12V 2.8Ah lead-acid) and external antennae (8/18dB omni, 14dB panel) with telescopic stands. Also Eee PCs at either end, and a 21″ monitor driven by a car battery recharger.

Runs in parallel with the main student body – the student is by the roadside with a dedicated tutor; the field geologist reports from the locations, alongside the other students, carrying the network kit, ‘driven’ by the student in/near the car.  (They needed hi-vis jackets to see each other at the distances required.) An extra tech bod (Trevor or Mark) was there to help too.

Sense-of-presence video, to contextualise features; detailed stills. Two-way audio to direct field geologist.

Everything lightweight … and then the geologists walk down the hill with 20kg of rock in their rucksack. Issues: very sunny (visibility), very wet.

Web interface, very simple: live remote video, image bank, local video (useful diagnostic tool).

Feedback – one student was very focused on the collective student experience, with a strong group identity, and physical presence was important. The other was more about grounded (!) understanding. Side benefit – you can take the images away with you. (The other students wanted the pictures too.) The quality of student sketches varied, not really dependent on mobility impairment or not.  But … you’re not actually there (3D, kinaesthetic experience?) and fundamentally it’s a 2D representation of a 3D thing. Hand samples are also crucial.  Measurement was generally done by the field geologist, not taken off the picture, because picture-taking is tricky physically. It was slower – it takes longer to fix irrelevancies. A lot of communication/prep between the tutors about how to teach the specific outcrops/settings.

Suggestion: stereoscopic camera(s) with eye headset.

Weather a challenge – used drysacks to protect the kit. Fog reduced signal strength but they never lost it entirely.

More/future stuff:

  • ERA fieldwork kit
  • Personal Inquiry project (EPSRC/ESRC TEL, mobile tech to support evidence-based inquiry learning).
  • ‘Portable’ WLANs (JANET / Cumbria mountain rescue – Lancaster University).
  • New OU geology residential course 2010.

It’s low-powered, so you don’t need a licence to turn it on. Cheap kit is good because it’s easily replaceable, and it tends to be low-power too.

Sheep Dalton: How to make a multitouch surface for less than £500

IET Tech Coffee Morning: Sheep (Nick) Dalton from the Computing Dept.

The change to ubiquitous/pervasive computing: from thousands of users per computer, through one PC per desk, to thousands of computers per person.

So multi-touch surface allowing many people to interact at the same time. (Which is actually several users to one computer … interesting!)

Biggest ontology to visualise: all human knowledge. Amazon is the modern Library of Alexandria.  So he’s captured Amazon’s taxonomy, can zoom in from top-level categories down to groups of ten books.  Using his nifty surface to zoom with multi-touch.

Nice video demo, using a vertically-mounted multi-touch surface. (Very large.) Jeff Han has some ace demos, including a TED talk from 2006. He created a homebrew craze of postgrads building their own tables (and then running away) – but not much software. (The Minority Report interface was actually based on MIT research, not the other way around.) Fiddling with photos is the obligatory demo.

There’s a problem of ‘gorilla arms’ if you’re waving your arms around in the air at a vertical surface for a long time; it’s easier if you have something to press against, and easier still if the table is horizontal. Microsoft have announced Surface, a tabletop machine, but they have technological problems – you can’t use it in a room with windows (so it’s OK to use in our shiny new JLB labs).

A parallel to the homebrew computing craze in the 1980s – people build things, and if you’re good you keep building things. There hasn’t been a similar movement in between in which people were so excited about building hardware.

Hardware: surface, projector, camera. You want to be able to distinguish a touching finger from a hovering one. The strongest image is what’s projected on the screen, so the tech challenge is to remove the projected gubbins. The secret is to look without looking (!). Jeff Han’s big contribution: start with a big piece of glass or acrylic. Fire infrared LEDs down the edge: the light goes through the acrylic and – as in fibre-optic cables – does the total internal reflection thing at the surface, bouncing back into the glass/acrylic until it’s absorbed (or emitted out of the end). So you get 450 IR LEDs and point them into the edge. 50 LEDs wasn’t bright enough (in about 1m x 60cm) so they did 500. Just a battery and a bunch of lights. When you touch it with a finger, that interrupts the acrylic/air interface, and … it glows in the IR.
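
A quick gloss from me, not from the talk: the physics here is (frustrated) total internal reflection. Light hitting the acrylic/air boundary at more than the critical angle stays inside the sheet; taking acrylic’s refractive index as roughly 1.49,

    \theta_c = \arcsin\left(\frac{n_{\mathrm{air}}}{n_{\mathrm{acrylic}}}\right) \approx \arcsin\left(\frac{1}{1.49}\right) \approx 42^\circ

so anything striking the surface at more than about 42° from the normal just bounces along inside. A fingertip pressed against the surface frustrates that condition at the point of contact, which is exactly why it glows there in the IR.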

So need 1-200 IR LEDs and a big chunk of acrylic.

You can still see what’s on the projector (when you turn it on), which drenches the IR signal with the optical signal. So get an IR filter and stick that over the camera. (Possibly one sold for astronomical use?)  He’s using a Guppy camera (about £1000), but you can use a normal webcam. Or take a piece of exposed film (from a trad film camera), which will do the same sort of job as the £30-50 filter. Most cameras are sensitive in the IR anyway; the ‘night vision’ mode on many consumer cameras simply switches off the filter-out-the-IR tech.
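
Once the camera sees only IR, the vision problem reduces to finding bright blobs in each frame. The toolkits mentioned below do this (plus tracking) for you, but the core idea is simple enough to sketch. This is my illustration in Python/OpenCV, not anything shown in the talk, and the threshold and blob-size numbers are guesses you’d calibrate against the ambient IR in your own room:

    # Minimal FTIR touch-detection sketch (OpenCV 4): fingertips touching
    # the acrylic are the brightest things the IR-filtered camera sees,
    # so threshold each frame and report the surviving blobs as touches.
    import cv2

    cam = cv2.VideoCapture(0)  # the webcam with the IR filter over it
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        grey = cv2.GaussianBlur(grey, (11, 11), 0)  # smooth sensor noise
        # 200 is a made-up threshold: bright enough to keep glowing
        # fingertips, dark enough to drop the projected image.
        _, mask = cv2.threshold(grey, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 30:  # ignore specks
                continue
            x, y, w, h = cv2.boundingRect(c)
            print("touch at", x + w // 2, y + h // 2)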

You can hack it out of an IKEA table, but getting the tech labs to do it makes it look much more scientific.

You can use the acrylic alone, but it’s transparent (so the projected image isn’t very clear), so use a layer of Rosco grey (makes the back projection better), then a conformal layer. Polish the acrylic with Brasso, then …, then … to brighten the finger points. Or use a squidgy layer of latex – which does work – available in any good fetish shop. But a sheet of silicone works well.

The software he uses is called Reactivision. It’s not designed for multitouch, but you can do that, and it can respond to any arbitrary touch on the surface (which is cool and allows for interesting appropriations). Reactivision just does the detection and tracking.  There are many other technologies, on different surfaces. OpenTouch works really well too. They output Open Sound Control (OSC) – a UDP packet – which you can then interpret.  There are toolkits for the image processing … but beyond that you’re on your own: this is the level the software development is at right now.
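
To make the ‘interpret the UDP packets’ step concrete: Reactivision speaks TUIO, a small protocol layered on OSC, on UDP port 3333 by default. Here’s a minimal sketch of listening for finger-cursor events using the python-osc library – my illustration, not code from the talk; the message layout follows the TUIO 1.x spec:

    # Minimal TUIO listener sketch: print finger-cursor events sent by
    # Reactivision (or similar) as OSC over UDP. A real client would also
    # watch the 'alive' and 'fseq' messages to track touches across frames.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def on_cursor(address, *args):
        # TUIO 1.x /tuio/2Dcur 'set' messages carry (session_id, x, y,
        # x_velocity, y_velocity, acceleration), with x and y in 0..1.
        if args and args[0] == "set":
            sid, x, y = args[1], args[2], args[3]
            print(f"cursor {sid}: x={x:.3f} y={y:.3f}")

    dispatcher = Dispatcher()
    dispatcher.map("/tuio/2Dcur", on_cursor)
    BlockingOSCUDPServer(("0.0.0.0", 3333), dispatcher).serve_forever()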

Java Swing, MFC, all of the frameworks – all assume a single user at a deep level. Text needs to go in all sorts of directions.  Whole GUI paradigm needs to change.  Oops!

Fiducials – like a barcode for multitouch. Reactivision software can recognise what they are, and (importantly) what orientation they’re in.

Or you can make a very simple table by just shining LEDs straight up on to the fiducials and picking them up from the reflection – which makes for tangible technologies, using cards with fiducials on the back but everyday pictures on the top. Eva Hornecker observed in a museum context that with a digital exhibit driven by a computer and mouse, the kids will play but the adults go and read the paper on the wall; the multitouch exhibit is more inclusive as an interface.

Need: acrylic, LEDs, a cheap webcam, a piece of exposed film, a computer (assume one’s lying around), and a video projector – LED-based ones are nice (you can leave them on, the bulbs don’t blow, and they don’t make noise) and only £100. (You need a low-intensity projector so as not to blind the users above!)

Table they designed was deliberately too big to reach over – embedded physical affordance – encourages multi-person interaction rather than one person driving.

Reactivision was originally designed to help Björk look good at Glastonbury – you can reconfigure sounds during a performance.  OpenTouch is another multitouch project littering the Internet in response to Jeff Han – but a good one.  Touché on the Mac is a Mac-ised version where you don’t need to mess around with XML and so on.  Open Sound Control is the lingua franca, reflecting the wacky sound-project history – a bit like Electroplankton and so on.

Alternative technology: capacitance-based, with projection from above. Underneath there’s a weft and warp of wires. When you touch, you form a circuit; it can detect fingers by capacitance – and can actually tell WHO is touching, which gives you a lot of fun interface possibilities. (I can move my pieces and you can’t; it’s also good for tracking in research trials.) You can buy one for £10k at the moment, but that will possibly come down as it commercialises. The main problem is getting the projection from directly above.
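
The weft-and-warp idea is easy to picture in code. This sketch is purely illustrative – the drive/read functions are hypothetical stand-ins for the table’s firmware – but it shows the core loop: energise one row wire at a time and see which column wires pick up the signal. (The who-is-touching trick in systems like MERL’s DiamondTouch works by coupling the signal through each user’s body to a receiver in their own chair.)

    # Illustrative scan loop for a capacitive wire-grid surface. The
    # hardware calls are hypothetical stubs; real tables do this in
    # firmware, far faster and with proper calibration.
    ROWS, COLS = 64, 48
    BASELINE = 512   # untouched reading, in made-up ADC units
    THRESHOLD = 40   # deviation from baseline that counts as a touch

    def drive_row(r):
        """Hypothetical: put the probe signal on row wire r."""
        pass

    def read_column(c):
        """Hypothetical: measure the signal coupled onto column wire c.
        Stubbed to 'no touch'; real hardware would read an ADC here."""
        return BASELINE

    def scan():
        touches = []
        for r in range(ROWS):
            drive_row(r)
            for c in range(COLS):
                # A fingertip at crossing (r, c) changes the row-to-column
                # coupling, pulling the reading away from the baseline.
                if abs(read_column(c) - BASELINE) > THRESHOLD:
                    touches.append((r, c))
        return touches

    print(scan())  # [] on the stubs; (row, col) pairs on real hardware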

Can get the same sort of effect with multiple mice – less whizzy-looking but does do the job.

Asus have a new Eee Top – £500 with a touch screen, running a touch version of Windows, like the iPod Touch. These often only have a bounding-box interface: it looks like bimanual input but it’s faked. The video-projectors-for-schools people are interested too. There’s a challenge of backwards compatibility versus doing something actually new. (Like the command line lying beneath windowing systems.) Smart are bringing this to schools in less than six months.  Microsoft see it in hotel lobbies and mobile phone shops.  Others see it in getting stakeholders together to negotiate – a group in the Netherlands is discussing e.g. getting people to agree about relocating a river.

NATS – National Air Traffic Services – ran a consultation about rerouting air corridors, with multiple constraints and stakeholders (safety, fuel usage, noise, etc.). Yvonne Rogers showed these interfaces are good at picking up people with different levels of skill.

They’re exploring which situations this works best in. It’s good for small groups.  What’s coming in the next five years? “The next Microsoft is sitting in a garage at the moment fiddling with a multitouch surface.”

They used the capacitance-based table to help the Computing Dept decide who sits where in the new building, in small groups of three. The solutions they came out with were fabulous … but were ignored.  A social failure, not a technological one.

What’s the point of it all?

My colleagues Chris Jones and Gráinne Conole are at a learning design workshop hosted by Peter Goodyear, listening to contrasting talks from John Sweller and Roger Saljo.  Chris tweeted that he didn’t like the Information Processing view:

John Sweller argued “the purpose of education is to get information into long-term memory”. I just don’t buy that at all!

but he’s

Much happier with Roger Saljo’s position “the ability to transform and recontextualise in manners that are relevant to local needs”

I think both are right, in some senses, and both are wrong in others.  It’s a question of what you mean by purpose, and what level of description you’re talking about.  I strongly suspect that you need to get information into long-term memory in order to gain the ability to transform and recontextualise in manners that are relevant to local needs.

This problem of levels of description has a long history: Aristotle argued that there were four sorts of causes of any change: the material cause, the formal cause, the efficient cause, and the final cause. (It gets a bit confusing since the Greek word Aristotle was using isn’t quite the same as ‘cause’.)  The last two are the ones we’d think of as causes: the efficient cause is what makes a change happen, and the final cause is the purpose.

In more modern times, Systems Thinking embodies these distinctions in the notion of a root definition of a system, which goes “A system to do X by means of Y in order to Z”, and implicit in any definition is the possibility of considering a system one level up or down.

So to come back to education: it could be a system to get information into long-term memory by means of (something we need to work out) in order to gain the ability to transform and recontextualise in manners that are relevant to local needs.

For me, though, the top-level purpose, goal, or point of education is to make people better people.

This might well – at several levels of description down – require changes in the bonding between molecules in synapses in their brains, but that’s not (yet) a level of description that’ll help you much as an educator. And focusing on that level of description as your goal could easily distract you from better ways of achieving your aims.  So, for instance, the purpose of a carved wooden table is not chipping away at a block of wood with a chisel.

The focus on making people better people is an important one. It makes it clear that education is a social, political and fundamentally moral enterprise.

Appropriation and the real function of artefacts

What’s the educational use of an iPod?

Al Briggs points to an article in the Independent about the Teaching Awards last month.  (Incidentally, these were an idea of Lord Puttnam’s, who’s now the OU’s Chancellor.) Shawlands Academy in Glasgow won an award for a scheme where the kids got iPods.  Al comments:

On seeing the headline I thought this would be an education related story but the iPod prizes are used as an incentive for the students to eat more healthily.

The overall story is one that we already know – iPods are extremely attractive to students.

I am left with the thought – wouldn’t it be great if they were being used not just to encourage healthy eating but to encourage and increase engagement in education.

I’ve even heard that you can put educational materials on them!

It reminds me of that old barometer anecdote (dissected here by Snopes and Wikipedia) in which a mythical physics student is asked how they would determine the height of a skyscraper using a barometer.  The student dodges the ‘obvious’ answer (measuring the air pressure at the top and bottom) by reeling off a list of imaginative ways it could be done (throwing it over the side and timing its descent, various trigonometric shenanigans, using it as a pendulum bob at the top and bottom, etc.), before finally suggesting going up to the building’s caretaker and saying “I’ll give you this shiny new barometer if you’ll tell me how tall your building is.”

The ‘obvious’ answer is not always the best one.  The obvious use of an iPod to promote learning is to put educational podcasts on it.  That’s not a wrong answer by any means: I’m a huge fan of Melvyn Bragg’s In Our Time podcast and of course our own OU presence on iTunesU – as plugged by my colleague Peter Scott in the Guardian recently.  But a significant revealed social function of an iPod is (and always has been) to be a desirable consumer good.  Harnessing that function rather than the MP3-playing one may sometimes be the better option.  “I’ll give you this shiny new iPod if you show me how much you’ve learned.”

Though there is, of course, the danger of hamfisted attempts at extrinsic motivation undermining learners’ intrinsic motivation.