Future of the Net

Liveblog from a seminar on The Future Of The Net (Jonathan Zittrain's book, The Future of the Internet and How to Stop It), 20 March 2009, by John Naughton.

Update: Listen to the MP3 and see the useful concept map from John Naughton himself.

Audience small but quite high-powered (eight, including Tony Walton, Paul Clark, Andy Lane). OU Strategy Unit trying to reach out to academic units and others.

[Image: train tracks with points set to go off a cliff]

John lost his physical copy … but rightly guessed it'd be available online as Creative Commons-licensed text.

Jonathan Zittrain was employed sight-unseen as a Unix sysadmin at 13, then by some process (probably involving Larry Lessig) became a lawyer.

Part of an emerging canon – Lessig's Code 2.0, Benkler's Wealth of Networks – heavyweight academic stuff. Two sorts of people – trailblazers and roadbuilders; Lessig is the former. Our role in the OU (including the Relevant Knowledge Programme) is to follow and be roadbuilders, which is an honourable activity.

Core argument of book: Internet’s generative characteristics primed it for success, and now position it for failure. Response to failure will most likely be sterile tethered appliances.

Transformation of the Internet in the blink of an eye from being dismissed as just "CB de nos jours" to being taken for granted. John's message is: don't take this for granted.

Three parts: (1) the rise and stall of the generative network; (2) after the stall (including a long and good analysis of Wikipedia); (3) solutions.

Conjunction of the open PC and the open Internet created the explosion of creativity, but contains within it the seeds of its own destruction. Parallel with T171 You, Your Computer and the Net (Martin did the PC, John did the net) – but that didn't study what happens when you put them together, which Zittrain does here. Not about proprietary versus open source – the PC was an open device: if you could write code, you could program it.

John says people don’t understand what we’ve got in the current Net. Knowing the history helps. Design problem (Vint Cerf, IETF etc) – design for apps that haven’t yet been dreamed of, given distributed ownership. If you’re designing for the future, you don’t optimise for the present. Architectural solution has two key points: anyone can join (permissiveness); dumb network, clever apps (end-to-end principle). The openness is a feature, not a bug. Contrast with the case of the Hush-a-Phone.
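
(A minimal sketch of my own, not from the seminar, of what "dumb network, clever apps" means in practice: a brand-new application protocol is nothing more than ordinary code running at the two endpoints – the network in between just moves bytes, and nobody has to ask its permission first. Python standard library only; the host, port and "protocol" are made up for illustration.)

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 9999   # hypothetical local endpoint, for illustration only
ready = threading.Event()

def server():
    # A made-up one-shot application protocol: shout back whatever the client sends.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        ready.set()                     # now listening; safe for the client to connect
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data.upper())  # all the application "cleverness" lives at the edge

def client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello, generative net")
        print(cli.recv(1024).decode())  # -> HELLO, GENERATIVE NET

threading.Thread(target=server, daemon=True).start()
ready.wait()
client()
```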

Zittrain equation: Open PC + surprise generator = generative system

Thought experiments from James Boyle – he gave two talks recently, at the RSA and at John's Cambridge programme. Almost everybody has a bias against openness: when something free and unconstrained is proposed, we see the downsides. (Because you can imagine those, whereas you by definition can't imagine what hasn't been invented yet.) Imagine it's 1992 and you have to choose between: approved sites with terminals at the end (like teletext/Minitel); or a dumb, unfiltered, permissive network (the Internet) with general-purpose computers at the end. Who would invest in the latter? Second question, still in 1992: you have to design an encyclopedia better than Britannica – broader coverage, more current. Options: 1 – strong content, vast sums of money, strong editorial control, DRM. 2 – I'd like to put up a website and anyone can post stuff. Who'd pick the latter?

Posits tension – or indeed tradeoff – between generativity and security. Consumers will become so worried about this that they’ll (be encouraged to) favour tethered appliances and heavyweight regulation.

(I wonder if the reason I can't bring myself to believe in the Net being locked down out of all recognition is that I've always had it around in my adult life. It's probably easier for people who really knew a world without it to imagine it going away.)

Part 2 explores our likely response to these problems, then Wikipedia. “With tethered appliances, the dangers of excess come not from rogue third-party code, but from […] interventions by regulators into the devices themselves.”

Criticism of the book – it underestimates the impact of governments on the problem. Remembering 9/11 is like remembering the JFK assassination. (John was on the phone to a friend who was there at the time!) John wrote in his blog on that day that this was the end of civil liberties as we knew them, and in many ways he was right. (My memory is that it was the first huge news story that I got almost entirely from the web.) But – one day the bad guys will get their act together and we'll see a major incident. Dry runs with what happened to Estonia. But there will be something huge and coordinated, and that'll evoke the same sort of response.

Rise of tethered appliances significantly reduces the number and variety of people and institutions required to apply the state’s power on a mass scale. John thinks it’s like the contrast between Orwell and Huxley – likelihood of being destroyed by things we fear and hate, or things we know and love.

Dangers of Web 2.0, services in the cloud – software built on APIs that can be withdrawn is much more precarious than software built under the old PC model.  Mashups work (except they’re always breaking – see Tony Hirst’s stuff, just like links rot). Key move to watch: Lock down the device, and network censorship and control can be extraordinarily reinforced.
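
(A small sketch of my own, not from the talk, of why this is precarious: a tiny "mashup" that depends on a third-party endpoint – the URL below is made up – and has to degrade gracefully because that API can be withdrawn or changed under it at any time.)

```python
import json
import urllib.error
import urllib.request

FEED_URL = "https://api.example.com/v1/items"  # hypothetical third-party API endpoint

def fetch_items():
    """Return a list of items from the remote API, or [] if the API has gone away."""
    try:
        with urllib.request.urlopen(FEED_URL, timeout=5) as resp:
            return json.load(resp)
    except (urllib.error.URLError, json.JSONDecodeError):
        # The API was withdrawn, moved, or changed shape – the mashup quietly degrades.
        return []

for item in fetch_items():
    print(item)
```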

The iPhone is the iconic thing: it puts you in Steve Jobs' hands. It's the first device that does all sorts of good things and could be open but isn't. (What about other mobile phones?) The Pew Internet & American Life survey – Future of the Internet III – predicted that the mobile device will be the primary connection tool to the Internet for most people in the world in 2020. So this could be a big issue.

Wikipedia analysis in the book is extensive.  Looks at how it handles vandalism and disputes – best treatment John’s seen. How it happens is not widely understood. Discussion about whether Wikipedia or Linux is the more amazing phenomenon. (My argument is that Linux is in some ways less startling, because you have some semi-independent arbitration/qualification mechanism for agreeing who’s a competent contributor and which code works.)

Part 3 – solutions to preserve the benefits of generativity without its downsides. "This is easier said than done." The way Wikipedia manages itself provides a model for what we might do. (I think not – I think Wikipedia works because it can afford to piss off and exclude perfectly good and competent contributors.) Create and demonstrate the tools and practices by which relevant people and institutions can help secure the Net themselves instead of waiting for someone else to do it – badwarebusters.org.

Barriers – failure to realise the problem; collective action problem; sense that system is supposed to work like any other consumer device.

Nate Anderson's review in Ars Technica – three principles – the IT ecosystem works best with generative tech; generativity instigates a pattern; ignore the downsides at your peril.

Criticisms: too focused on security issues and not on commercial pressures; not enough on control-freakery of governments; too Manichean – mixed economies; too pessimistic about frailties (and intelligence and adaptability) of human beings; over-estimates security ‘advantages’ of tethered appliances.

Discussion

Parallel with the introduction of metalled roads. Crucial to economic development: move people and stuff around as a productive system. Early days were a free-for-all – anyone could buy a car (if rich enough) and drive it, no need for a test. Then increased regulation and control. (The same applies to cars themselves – originally fairly easy to tinker with, now not: proprietary engine management systems.) An issue about equity, as much as openness/closedness.

Lessons of Wikipedia and the creators of malware. Malware creators only need to be small in number. To take down Wikipedia and make it undependable would take too much effort and coordination. (I disagree – a smart enough distributed bot attack would do it.)

I can't imagine no Internet/generative/smart programmable devices because I've never been without them. Grew up on the ZX81 onwards, which had the CPU pinout on the edge connector. Helps to have smart people around who have known the world before that.

South Korea got taken out by SQL Slammer but bounced back – the system is pretty resilient.

Manhattan Project perhaps a bad parallel for an effort to help here – it was the ultimate in top-down command-and-control project, with a clearly-defined outcome. And it was constrained and planned so tightly that it couldn’t actually work until people like Feynman loosened things up a bit to allow some degree of decentralisation.

How do you sign people up? People won't do anything about, say, climate change – until their gas bills shoot up. Science-and-society stuff: it's well known that people only become engaged when it becomes real to them. A liberal is a conservative who's been falsely arrested; a conservative is a liberal who's been mugged.

Surveillance – the likelihood of major public outrage leading to a reaction is small, because most people don't realise their clickstream is monitored. It's only if something happened that made people realise it that they'd say no. Hard to imagine that scale of community engagement happening.

Case a few months ago – Wikipedia vs the Internet Watch Foundation. A ready-made community leapt into action immediately. But that's less likely where you don't have such an articulate, existing community. Also the photographer crackdown – but photographers do have access to the media. Danger of the Niemöller scenario where they come for small groups one at a time.

It's an argument about the mass of technology users, not the small cadre of techies – the iPhone can be jailbroken if you know what you're doing. And there are more, not fewer, opportunities for techies, and more techies than ever before. Most PC users in the 80s only used what they were given. In 1992 I could write an app for the PC and send it to anyone on the Internet. Except hardly anyone was on the Internet then, and even though most techies were among them, most Internet users then couldn't write their own stuff – or even install software off the net. Techies are still a small proportion (even though bigger in number than before), so we're still vulnerable to this sort of attack.

Mobile devices are key here, consumerism. People just want stuff that works, generally.

Google as another example – they build very attractive services, but on the basis of sucking up all our data. Harness the amoral self-interest of large corporations in this direction. Also the (enlightened?) interest of Western governments in promoting openness.

John uses the example of a bread mix and a recipe to illustrate open source. Parallels with the introduction of the car (wow, I can go anywhere); the PC (wow, I don't have to ask people for more disk quota) and the Net (wow, I don't have to ask for more mail quota). These things have an impact on society, and can damage it. So for instance, if you have an open machine, it could damage other people's computers, hence the need to regulate ownership and operation. With the car, there's an annual check that you have road tax, insurance, an MOT; with a PC the surveillance needs to be continuous.

The 9/11 disaster scenario is instructive: why didn't we have the same response to the Troubles? Because those weren't transnational, non-state actors. The Provisional IRA had tangible, comprehensible political objectives that could be taken on. Whereas 9/11 terrorism is more vague. And malware is different. It wasn't a problem when it had no business model … but now it has one. Can it now be taken on?

Is the Internet just (!) an extension of civil society and how you should regulate it, or is it something quite different? Motor traffic law introduced absolute offences (no mens rea – it's an offence to drive over the speed limit regardless of whether you know you're going that fast, or what the limit is) because it was a quite different threat. The Internet is at least as new, so it's likely to spur at least as revolutionary – and shocking – a change to our legal system. OK, now I'm scared, so that's a result.

But we're only eighteen (nineteen?) years into the web. It's idiotic for us to imagine we understand what its implications are. So the only honest answer is we don't know. John argues we're not taking a long enough view. Imagine it's 1455, eighteen years after the introduction of the printing press, and a MORI pollster asks: do you think the invention of printing will undermine the authority of the Catholic Church, spur the Reformation, science, whole new classes, a change in the concept of childhood? The web is a complex and sophisticated space, so regulating it right can't be done overnight. There's a tendency for people to make linear extrapolations from the last two years' trends.

In the long run, this won’t look like such a huge deal in the history of humanity. It’ll be a bit like what happened with steam. It looks like the biggest deal ever to us only because we’re in the middle of it.

So what do you do when you know that on a 20-year horizon you’re blind?

My answer: get moving now, plan to change and update regularly.  Expect to have to fiddle with it, throw great chunks of things away because they’re no longer relevant. Challenge to OU course production model! (Actually, I’m wrong to say throw away – more expect that things will become eclipsed and superseded – old technologies die very hard.)

We’ve become more open/diverse in our offer to bring in enough people. Which is hard – costs and scale versus personalisation.