Sheep Dalton: How to make a multitouch surface for less than £500

IET Tech Coffee Morning: Sheep (Nick) Dalton from Computing Dept.

The shift to ubiquitous/pervasive computing: from one computer shared by thousands of users, through one PC per desk, to thousands of computers per person.

So: a multi-touch surface allowing many people to interact at the same time. (Which is actually back to several users per computer … interesting!)

Biggest ontology to visualise: all human knowledge. Amazon is the modern Library of Alexandria. So he’s captured Amazon’s taxonomy and can zoom from top-level categories down to groups of ten books, using his nifty surface’s multi-touch to drive the zoom.
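(As an aside on how that zoom gesture typically works: pinch-to-zoom can be implemented by scaling the view by the ratio of the finger separations between frames. A minimal sketch – the function and the numbers are mine, not from the talk:)

```python
import math

def pinch_scale(p1_old, p2_old, p1_new, p2_new):
    """Zoom factor implied by two fingers moving between frames.

    Each argument is an (x, y) tuple; a result > 1 means zoom in,
    < 1 means zoom out.  This is the usual trick, not necessarily
    what Dalton's software does.
    """
    old_sep = math.dist(p1_old, p2_old)
    new_sep = math.dist(p1_new, p2_new)
    return new_sep / old_sep if old_sep else 1.0

# e.g. fingers moving apart from 100px to 150px => 1.5x zoom
print(pinch_scale((0, 0), (100, 0), (-25, 0), (125, 0)))
```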

Nice video demo, using a (very large) vertically-mounted multi-touch surface. Jeff Han – some ace demos, including a TED talk from 2006. Created a homebrew craze of postgrads building their own tables (and then running away) – not much software. (The Minority Report interface was actually based on MIT research, not the other way around.) Fiddling with photos is the obligatory demo.

Problem of ‘gorilla arm’ if you’re waving your arms around in the air at a vertical surface for a long time; easier if you have something to press against; even easier if the table is horizontal. Microsoft have announced Surface, a tabletop machine, but they have technological problems – you can’t use it in a room with windows (so it’s OK to use in our shiny new JLB labs).

Parallel to the homebrew computing craze in the 1980s – people build things, and if you’re good you keep building things. There hasn’t been a similar movement in between in which people were so excited about building hardware.

Hardware: surface, projector, camera. You want to be able to distinguish a touching finger from a hovering one. The strongest image is whatever’s projected on the screen, so the tech challenge is to remove the projected gubbins. The secret is to look without looking (!). Jeff Han’s big contribution: start with a big piece of glass or acrylic. Fire infrared LEDs in at the edge: the light goes through the acrylic and – as in fibre-optic cables – does the total internal reflection thing at the surfaces, bouncing back into the glass/acrylic until it’s absorbed (or emitted out the end). So you get 450 IR LEDs and point them into the edge. 50 LEDs wasn’t bright enough (in about 1m × 60cm) so they did 500. Just a battery and a bunch of lights. When you touch it with a finger, that interrupts the acrylic/air interface, and … it glows in the IR.
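(A quick back-of-envelope on why the light stays trapped – my numbers, not the talk’s. Acrylic’s refractive index is about 1.49, so the critical angle is

```latex
\theta_c = \arcsin\left(\frac{n_{\mathrm{air}}}{n_{\mathrm{acrylic}}}\right)
         = \arcsin\left(\frac{1}{1.49}\right) \approx 42^\circ
```

Rays hitting the surface at more than about 42° from the normal reflect totally back into the sheet; a fingertip pressed against the surface is optically denser than air, so it frustrates the total internal reflection and scatters IR out towards the camera.)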

So you need 1–200 IR LEDs and a big chunk of acrylic.

You can still see what’s on the projector (when you turn it on), which drenches the IR signal with the optical signal. So get an IR-pass filter and stick that over the camera. (Possibly one sold for astronomical use?) He’s using a Guppy camera (about £1,000), but you can use a normal webcam. Or take a piece of exposed film (from a traditional film camera), which will do the same sort of job as the £30–50 filter. Most camera sensors are sensitive in the IR anyway; the ‘night vision’ mode on many consumer cameras simply switches off the filter-out-the-IR tech.

You can hack it out of an IKEA table, but getting the tech labs to build it makes it look much more scientific.

You can use the acrylic alone, but that’s transparent (so the projected image isn’t very clear), so add a layer of Rosco grey (makes the back-projection better), then a compliant layer. Polish the acrylic with Brasso, then …, then … to brighten the finger points. Or use a squidgy layer of latex – which does work – available in any good fetish shop. But a sheet of silicone works well.

The software he uses is called reacTIVision. It wasn’t designed for multitouch, but you can do that, and it can respond to any arbitrary touch on the surface (which is cool and allows for interesting appropriations). reacTIVision just does the detection and tracking. There are many other technologies, on different surfaces; OpenTouch works really well too. They output Open Sound Control (OSC) messages as UDP packets, which you can then interpret. There are toolkits for the image processing … but then you’re on your own: this is the level the software development is at right now.
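(To make “interpret the UDP packets” concrete, here’s a minimal sketch of listening to reacTIVision’s TUIO output. It assumes the third-party python-osc library and reacTIVision’s default port of 3333; the handler logic is mine, not something shown in the talk.)

```python
# pip install python-osc -- assumed, not mentioned in the talk
from pythonosc import dispatcher, osc_server

def on_cursor(address, *args):
    # TUIO multiplexes "alive", "set" and "fseq" messages on one OSC
    # address; "set" messages carry the state of one tracked finger.
    if args and args[0] == "set":
        session_id, x, y = args[1], args[2], args[3]  # x, y in 0..1
        print(f"finger {session_id}: x={x:.3f} y={y:.3f}")

disp = dispatcher.Dispatcher()
disp.map("/tuio/2Dcur", on_cursor)   # finger blobs

# reacTIVision sends TUIO over UDP to port 3333 by default
osc_server.BlockingOSCUDPServer(("127.0.0.1", 3333), disp).serve_forever()
```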

Java Swing, MFC, all of the frameworks assume a single user at a deep level. Text needs to go in all sorts of directions. The whole GUI paradigm needs to change. Oops!

Fiducials – like a barcode for multitouch. The reacTIVision software can recognise which marker is which and (importantly) what orientation it’s in.
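(Extending the listener sketch above: fiducial markers arrive on a separate TUIO address, and the “set” message adds a class ID – which marker it is – and an angle in radians for its orientation. Again a sketch, not code from the talk.)

```python
def on_object(address, *args):
    # /tuio/2Dobj "set": session id, class id, x, y, angle, ...
    if args and args[0] == "set":
        session_id, class_id, x, y, angle = args[1:6]
        print(f"marker {class_id} at ({x:.2f}, {y:.2f}), angle {angle:.2f} rad")

disp.map("/tuio/2Dobj", on_object)   # fiducial markers
```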

Or you can do a very simple table by just shining LEDs straight up onto the fiducials and picking them up from the reflection – makes for tangible technologies – using cards with fiducials on the back but everyday pictures on the top. Eva Hornecker observed in a museum context: with a digital exhibit built around a computer and mouse, the kids will play but the adults go and read the paper on the wall; the multitouch exhibit is more inclusive as an interface.

Need: acrylic, LEDs, a cheap webcam, a piece of exposed film, a computer (assume you have one lying around), and a video projector – LED-based ones are nice (you can leave them on, the bulbs don’t blow and they don’t make noise) and only £100. (You need a low-intensity projector so as not to blind the users above!)

The table they designed was deliberately too big to reach across – an embedded physical affordance that encourages multi-person interaction rather than one person driving.

reacTIVision was originally designed to help Björk look good at Glastonbury – you can reconfigure sounds during a performance. OpenTouch is another multitouch project littering the Internet in response to Jeff Han – but a good one. Touché on the Mac is a Mac-ised version where you don’t need to mess around with XML and so on. Open Sound Control is the lingua franca, reflecting the wacky sound-project history – a bit like Electroplankton and so on.

Alternative technology: capacitance-based, with projection from above. Underneath there’s a weft and warp of wires. When you touch, you form a circuit; it can detect fingers by capacitance – and can actually tell WHO is touching, which gives you a lot of fun interface possibilities. (I can move my pieces and you can’t; it’s also good for tracking who did what in research trials.) You can buy one for £10k at the moment, but that may come down as it commercialises. The main problem is getting the projection from directly above.
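(A toy sketch of how such a wire-grid readout typically works – drive each row, measure the coupling onto each column, and flag cells that deviate from a calibrated baseline. Everything here, names and numbers included, is hypothetical; real systems do this in firmware.)

```python
ROWS, COLS = 16, 24      # hypothetical grid size
THRESHOLD = 5.0          # change from baseline that counts as a touch

def scan_grid(read_capacitance, baseline):
    """Return a list of (row, col) cells that look touched.

    read_capacitance(r, c) is a stand-in for the hardware measurement
    at one row/column crossing; baseline holds the untouched readings.
    """
    touches = []
    for r in range(ROWS):
        for c in range(COLS):
            if abs(read_capacitance(r, c) - baseline[r][c]) > THRESHOLD:
                touches.append((r, c))
    return touches
```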

Can get the same sort of effect with multiple mice – less whizzy-looking but does do the job.

Asus have a new Eee Top – £500 with a touch screen, running a touch version of Windows, like the iPod Touch. These often only have a bounding-box interface: it looks like bimanual input but it’s faked. The video-projectors-for-schools people are interested too. There’s a challenge of backwards compatibility versus doing something actually new (like the command line lying beneath windowing systems). Smart are bringing this to schools in less than six months. Microsoft see it in hotel lobbies and mobile phone shops. Others see it in getting stakeholders together to negotiate – a group in the Netherlands is discussing, e.g., getting people to agree about relocating a river.

NATS – National Air Traffic Services – ran a consultation about rerouting air corridors, with multiple constraints and stakeholders (safety, fuel usage, noise, etc.). Yvonne Rogers showed these interfaces are good at drawing in people with different levels of skill.

They’re exploring which situations this works best in. Good for small groups. What’s coming in the next five years? “The next Microsoft is sitting in a garage at the moment, fiddling with a multitouch surface.”

They used the capacitance-based table to help the Computing Dept decide who sits where in the new building, in small groups of three. The solutions they came up with were fabulous … but were ignored. A social failure, not a technological one.