Liveblog from Technology Coffee Morning, Jennie Lee Labs, 9 September 2009, given by Eileen Scanlon (IET) and D K Arvind, Director of the Speckled Computing research consortium at Edinburgh University.
Context
For Eileen, it’s the Personal Inquiry project – large collaboration with Nottingham. Inquiry learning in science; a little over halfway through a 3y project.
Hardware is all reasonably off-the-shelf equipment for scientific data capture. Literature-grounded method of supporting the inquiry process by involving young people in empirical work; technology is a way of enabling them to work through an actual cycle of focused investigation, rather than a simulation. Exemplar topics: microclimates, urban heat islands.
A lot of previous work to support inquiry learning is about modelling phenomena and processes, often using simulations.
Student feedback says they appreciate real data collection. Project is not tackling issues of modelling and immediacy of feedback.
SensVest, developed at Birmingham as part of Lab of Tomorrow project – vest with accelerometers. Results from pilot trials not very positive. Hypothesis was that this would be better than looking at readymade or simulated graphs; but not clear that it was. Thought could be because of delay in feedback.
So conversation here is about comparing predictions of a model with data collected in a real-time sense.
Speckled Computing
D.K. Arvind – a high-level overview. Funded by EPSRC. Not going into the technological detail. Work by concentrating on the underlying science and technology to realise the specks, and networks of them – specknets – working very closely with domain experts to see how the specks can be used in applications.
Internet has 1 billion hosts today. IPv6 will support >35 trillion separate subnets, and each one in turn can connect millions of devices. Potential capacity to name/connect every grain of sand. Smart objects – smart meaning objects know something about their environment, and location-aware – not necessarily absolute, but relative: who are my neighbours.
Vision: endow persons/objects with sensing, processing and wireless networking capabilities. Aim to bridge the physical and virtual worlds. (Just what I’m interested in!)
Sensor intelligence as a telecom service – plural services, access agnostic.
Specks: miniature programmable devices which can sense, compute and network wirelessly. Autonomous, rechargeable, energy scavenging (e.g. photovoltaic cells tuned to internal lighting – focus on built environment). Specks are non-static and unreliable – design protocols for expected failure and intermittent connectivity.
Tens/hundreds of specks collaborate as a dense programmable network – a Specknet. Fine-grained distributed computation – the resources (energy, bandwidth, computing) are scarce here. Thirty years ago (or more!) the integrated microprocessor replaced a box of different electronics with a single unit, and led to a revolution. So here, encapsulate sensing, processing, and networking in a single ‘device’. If these are unobtrusive, lightweight … This is an enabling technology for Ubiquitous Computing.
Family of devices: the 32-bit (large) microserver came first, then an 8-bit (medium) client which can connect up to four sensors, miniaturising towards an 8-bit 5mm-cube client. Freespace optics as comms – useful when devices are stationary. Would love to put sensors in e.g. the Jennie Lee Labs – because they’re static, can have line-of-sight. Very small, low-power lasers. When on people, need radio – but that’s wasteful of energy because you radiate in all directions rather than directionally.
Next device: ‘Orient’ – 3-axis gyroscopes, accelerometer, temperature – attach to the limbs, calculate orientation on the devices themselves: leads to real-time capture of 3D motion – liberated from the studio. Lots of applications.
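(He didn’t say how orientation is actually computed on the device itself; a standard trick is a complementary filter – integrate the gyro for smooth short-term tracking and pull the estimate back towards the accelerometer’s gravity reading to cancel drift. The sketch below is my own illustration, not their on-speck code; the function name, axis conventions and blend factor are assumptions, and the 256 Hz rate comes from the Q&A later.)

```python
import numpy as np

def complementary_filter(gyro, accel, dt=1.0 / 256, alpha=0.98):
    """Estimate pitch and roll over time from 3-axis gyro (rad/s) and
    accelerometer (m/s^2) samples. gyro and accel are (N, 3) arrays.
    Axis/sign conventions depend on how the device is mounted."""
    pitch, roll = 0.0, 0.0
    out = []
    for (gx, gy, gz), (ax, ay, az) in zip(gyro, accel):
        # Integrate gyro rates: smooth in the short term, but drifts.
        pitch += gx * dt
        roll += gy * dt
        # Tilt from the gravity vector: noisy, but drift-free.
        accel_pitch = np.arctan2(ay, np.sqrt(ax**2 + az**2))
        accel_roll = np.arctan2(-ax, az)
        # Blend: trust the gyro at high frequency, the accelerometer long-term.
        pitch = alpha * pitch + (1 - alpha) * accel_pitch
        roll = alpha * roll + (1 - alpha) * accel_roll
        out.append((pitch, roll))
    return np.array(out)
```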
Also: Energy Neutral (EN) platform – capturing energy from photovoltaic cells.
Current motion capture methods: 1. Studio-based with many cameras and reflective markers attached to the person; grab info from 6-8 cameras, stitch together to get a 3D view – computationally/memory-intensive post-processing. Not real time unless very high-end. Expensive – £30k (?per hour). Occlusion is a problem when capturing multiple subjects – need more cameras, which means more post-processing.
2. Motion-capture suits. Wired suits, lycra, with a bulky base station/backpack which routes the sensor data to a high-end machine to do the processing (like Gollum).
3. Joint angle sensors. Bulky exoskeleton, cumbersome, hinders movement – not widely used.
So want: fully wireless, real-time and interactive, easy to use, ‘banalise the technology’, democratise its usage. Parallel with desktop publishing.
Orient Motion Capture system – currently sensors are about 30mm, need to miniaturise. (Video using Motion Builder for capture at http://www.specknet.org)
Can use real avatars: telepresence; bipedal robots operating in a harsh environment – use entire body as interface. Also in games. Unobtrusive participation in simulations combining real and virtual players – ‘serious games’.
Applications – lots – Digital media (motion capture, games, sports); Health – with Lothian – looking at:
- Chronic Obstructive Pulmonary Disease (COPD): non-invasive monitoring of breathing (devices on the chest wall) – can do analysis/monitoring remotely, with patient at home (a rough sketch of the idea follows this list);
- Intensive care
- Clinical gait analysis – not just a few minutes in the hospital, but captured over, say, a week – is there variation over the day, different surfaces, slopes and so on. Much richer information for diagnosis.
- Physiotherapy. Program them with ideal movement, track improvement over time. Transfer data. Can see how well they’re doing.
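(The breathing-rate sketch promised above: they didn’t describe their algorithm, but one obvious way to turn a chest-wall accelerometer trace into a rate is to take the dominant frequency in the normal respiratory band. The function name, band limits and sample rate are my assumptions, not theirs.)

```python
import numpy as np

def breathing_rate_bpm(chest_accel, fs=256.0):
    """Estimate breaths per minute from one axis of a chest-wall
    accelerometer trace, by finding the dominant frequency in the
    typical respiratory band (roughly 0.1-0.7 Hz). Assumes the trace
    is at least a minute or so long."""
    x = chest_accel - np.mean(chest_accel)          # remove the gravity offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.7)          # ~6-42 breaths per minute
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0
```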
Videos/applications
Showed avatar control to Linden Labs (Second Life). Not keen because would flood their network.
Edinburgh Science Festival 2006 – learning in informal settings. Put sensors on break dancers (8-10 year olds), give them ideas about physics e.g. angular momentum, centripetal forces and so on, based on their breakdances. Competition – who can spin on their head fastest. Not saying you’re teaching – surreptitiously getting them to do things.
Golf swing analysis – challenging, limited bandwidth, 2-3 hour tour round club. Data coming in to mobile phone. Modelled as double pendulum – arms are one pendulum, connected to club which is the other. Equation of motion for double pendulum using Newton’s Laws. Get visual feedback of swinging club in the plane – angles between parts of the arm and so on. Applied sports science unit with biomechanics people helping interpret.
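(He didn’t write the model out, but the ‘double pendulum’ is the textbook one: arm as the upper pendulum, club as the lower, with the usual Newtonian equations of motion. A rough numerical sketch, purely as illustration – the masses, lengths and starting angles below are invented, not their calibration.)

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: "arm" and "club" as two point-mass pendulums.
m1, m2 = 7.0, 0.3      # kg (made-up values)
l1, l2 = 0.6, 1.0      # m
g = 9.81

def double_pendulum(t, y):
    """Textbook equations of motion for a planar double pendulum.
    y = [theta1, omega1, theta2, omega2], angles measured from the vertical."""
    th1, w1, th2, w2 = y
    delta = th1 - th2
    den = 2 * m1 + m2 - m2 * np.cos(2 * th1 - 2 * th2)
    dw1 = (-g * (2 * m1 + m2) * np.sin(th1)
           - m2 * g * np.sin(th1 - 2 * th2)
           - 2 * np.sin(delta) * m2 * (w2**2 * l2 + w1**2 * l1 * np.cos(delta))
           ) / (l1 * den)
    dw2 = (2 * np.sin(delta)
           * (w1**2 * l1 * (m1 + m2) + g * (m1 + m2) * np.cos(th1)
              + w2**2 * l2 * m2 * np.cos(delta))
           ) / (l2 * den)
    return [w1, dw1, w2, dw2]

# Swing from a raised start, no initial angular velocity.
sol = solve_ivp(double_pendulum, (0, 0.5), [2.0, 0.0, 2.5, 0.0], max_step=1e-3)
```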
Interacting with robots – Trying to program behaviour, especially standing on one leg, walking etc, is done with heuristics and an army of programmers over weeks. Can we capture human motion, analyse it, run it on a simulator with a physics engine, then select candidates and run on a real robot? (Extend life of robot by being selective in which gaits to use!) Get training data from a human, segment it into phases. Fantastic videos of arm swinging, standing on one leg, sit-ups: and a great walk by a robot, with no human intervention in the learning algorithm.
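(Again my own crude illustration of ‘segment into phases’, not their method: for a roughly periodic trace such as a joint angle during walking, you can cut at the peaks to get one segment per cycle. The signal choice and minimum period are assumptions.)

```python
import numpy as np
from scipy.signal import find_peaks

def segment_into_phases(joint_angle, fs=256.0, min_period_s=0.4):
    """Split a roughly periodic joint-angle trace (e.g. hip angle during
    walking) into per-cycle segments, using its peaks as phase boundaries."""
    peaks, _ = find_peaks(joint_angle, distance=int(min_period_s * fs))
    return [joint_angle[a:b] for a, b in zip(peaks[:-1], peaks[1:])]
```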
“You need to demonstrate before anyone will start adopting these things” – very true.
Health scenarios – Need to validate data. Breathing rate during ventilation – breathing rate validated all the time. Can capture coughs, overall activity – e.g. go to sleep, turn right/left etc. Prosthetic limb adjustment – done by eye at the moment; with their capture data, can make it much closer to the normal/optimal setup. One example – couldn’t do it for climbing up a slope, but can now.
Speckled Computing Applications Centre (SPACE)
Exists to evangelise! Encourage people to experiment with the technology. About fifteen applications projects; very keen – due to funding! – to see the technology applied, and making a difference.
Example: projectile motion. Take a soft ball with Orient device inside. Instrument thrower with three devices. Thrower throws ball, can detect instant when ball leaves the hand, so only acceleration due to gravity thereafter. Expect an arc defined by good old equation of motion. Study in inquiry learning: try using tangible interface to support learning the laws of projectile motion. Masters student had a first attempt at this.
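(One way to do the detection: inside the ball the accelerometer reads roughly zero once it’s in free flight, because only gravity is acting, so release is the moment the measured acceleration magnitude drops towards zero; after that x(t) = x0 + vx*t and y(t) = y0 + vy*t − g*t²/2. The sketch below is my own illustration of that idea, not the Masters student’s implementation; names and the threshold are assumptions.)

```python
import numpy as np

G = 9.81

def detect_release(accel_mag, fs=256.0, threshold=2.0):
    """Return the sample index where the ball's accelerometer magnitude
    (m/s^2) first drops near zero - i.e. free fall, so the ball has left
    the hand. threshold is an assumed cut-off well below 1 g."""
    below = np.where(accel_mag < threshold)[0]
    return int(below[0]) if below.size else None

def predicted_arc(p0, v0, t):
    """Projectile position at times t, given release position p0 = (x0, y0)
    and release velocity v0 = (vx0, vy0): the 'good old equation of motion'."""
    x = p0[0] + v0[0] * t
    y = p0[1] + v0[1] * t - 0.5 * G * t**2
    return x, y
```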
(The research question here – for me and people like us – is what can you do if motion capture is cheap, easy and near-ubiquitous? Exciting!)
Questions
Don’t detect physical location, but can infer it. Treat the human body as an articulated system of rods. A marker system requires precise placing of markers on parts of the body – here they can be anywhere. Camera-based gives you position information, but you have to infer orientation and acceleration.
Capture at 256 samples per second. Do-able because the processing is done on the devices themselves, so it can happen in real time. Base station 33g, sensors 13g. Sensors can talk to each other, but here they all talk to the base station.
Feedback not just visual, but audio – a tone – good e.g. in physiotherapy or golf swing. Give audio feedback on how close it is.
Visual feedback on phone for golf – haven’t done any evaluation. They demonstrate they can do it, then work with end users to evaluate it. They work on the speck inside, improving, miniaturising. Applications are collaborations.
Ball-throwing example: very interesting question as to whether the embodied action of doing it makes a difference versus looking at graphs/models. You find the literature says more about the confusion in dealing with messy data. The physics education literature believes in immediacy and theory-building – but it’s not proven that this is better. Could be that the finding is that you learn better without going near real things! Some research finds not much difference – or more difficulties with real-world data. Lots of rhetoric about real, authentic experiences as important for learning … but it needs to be explored, and can be now. The motivational argument is stronger than the representational one.
iPhone would’ve been a good bet; have worked on WindowsCE and don’t much enjoy it. Using those in experiments with NHS Lothian. You need a load of software, it’s messy. Happy to work with people to do stuff on more phones, but that’s not their zone. Delighted to work with people to port it – can give you the hooks etc.
Possible applications in Formula 1, Nokia open lab network.
Dance also – tango dancers. Can get metrics about e.g. coupling of motion between leader and follower.
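(They didn’t say what the coupling metrics are; one simple possibility, purely as an illustration, is a normalised cross-correlation between a leader trace and a follower trace, giving a coupling strength and the timing offset at which it peaks – everything in the sketch is my assumption.)

```python
import numpy as np

def coupling_metric(leader, follower, fs=256.0):
    """Cross-correlate two motion traces (e.g. trunk angular velocity of
    leader and follower) and return (peak correlation, offset in seconds):
    how tightly coupled the pair is, and the relative timing at the peak."""
    a = (leader - leader.mean()) / leader.std()
    b = (follower - follower.mean()) / follower.std()
    corr = np.correlate(a, b, mode="full") / len(a)
    offset = np.argmax(corr) - (len(a) - 1)
    return corr.max(), offset / fs
```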
Separate centre for applications, several students, is geared up to use stable versions of their platforms, very open to collaboration.
Motor-control skills development in pre-school children. Ten-week study in a nursery, currently analysing data. Longitudinal study, exploring whether you can spot developmental difficulties. Previously only possible in very expensive, constrained environment of a lab. Now can do in ecologically-sound environment – where they normally play.
Wii only gives you acceleration; here you get the biomechanics of it.