In the run-up to the Web 2.0 Summit later this month, Tim O’Reilly and John Battelle have been outlining their vision of what comes after Web 2.0. Their answer: Web Squared. They’ve set this out in a white paper (also available as a 1.3MB PDF), a webcast, and a Slideshare presentation:
Ever since we first introduced the term “Web 2.0,” people have been asking, “What’s next?” Assuming that Web 2.0 was meant to be a kind of software version number (rather than a statement about the second coming of the Web after the dotcom bust), we’re constantly asked about “Web 3.0.” Is it the semantic web? The sentient web? Is it the social web? The mobile web? Is it some form of virtual reality?
It is all of those, and more.
They set out a vision in some detail – it’s well worth a read if you’re interested in what the leading lights of Web 2.0 think happens next. In a nutshell (as you’d expect from O’Reilly), it’s ‘Web 2.0 meets the world’. The boundary between the web and the real, physical world is in some ways clear, but in others very blurred, and the transition across it is one that fascinates me.
As with Web 2.0, of course, lots of the things they proclaim as part of Web Squared can be seen going on right now. As William Gibson said, the future is already here, it’s just not evenly distributed.
There are smarter algorithms to infer collective intelligence from the ‘information shadow’ of real-world objects, cast in space and time by more sensors and more input routes; and smarter ways of visualising and exploring the outputs, and of delivering them to people in more contexts and situations. And all of this happening ever closer to real time.
The ‘information shadow’ and ‘new sensory inputs’ are exactly the potential that Speckled Computing is mining and looking into (and that I’m very interested in pursuing for learning). And building collective intelligence from many individuals collaborating with low effort, via those increased sensors and input routes, is the sort of thing that iSpot is doing – using geolocations and photos from a wide range of individuals to build a bigger picture.
(As a bit of an aside, one ‘key takeaway’ is that ‘A key competency of the Web 2.0 era is discovering implied metadata, and then building a database to capture that metadata and/or foster an ecosystem around it.’ – I’m certainly convinced that’s a more scalable system than one where humans do the hard work of marking data up semantically by hand.)
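To make the ‘implied metadata’ idea a little more concrete, here’s a minimal sketch in Python (the record fields and species are entirely invented for illustration – this isn’t iSpot’s actual data model) of how a service might capture metadata that devices supply for free – timestamps and GPS coordinates – and fold it into a queryable index, with no human doing semantic markup at any point:

```python
from collections import defaultdict

# Hypothetical observation records, as an iSpot-like service might
# receive them: each upload carries implied metadata (a timestamp and
# GPS coordinates) captured by the device, not typed in by a person.
observations = [
    {"species": "robin", "lat": 51.75, "lon": -1.26, "ts": "2009-07-01T08:30"},
    {"species": "robin", "lat": 51.76, "lon": -1.25, "ts": "2009-07-01T09:10"},
    {"species": "swift", "lat": 55.95, "lon": -3.19, "ts": "2009-07-02T18:00"},
]

def build_index(records):
    """Capture the implied metadata in a queryable 'database':
    species -> list of (lat, lon) sighting locations."""
    index = defaultdict(list)
    for rec in records:
        index[rec["species"]].append((rec["lat"], rec["lon"]))
    return dict(index)

index = build_index(observations)
print(len(index["robin"]))  # prints 2 – two robin sightings aggregated
```

The point of the sketch is that the ‘semantics’ (where and when each species was seen) emerge from aggregating metadata the devices generated anyway, in the course of ordinary activity.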
The potential for the web to learn more and better about the world is huge – and as the web learns more, we too learn more. As they say, we are meeting the Internet, and it is us. And we’re getting smarter.
3 thoughts on “Web Squared”
“I’m certainly convinced that’s a more scalable system than one where humans do the hard work of marking data up semantically by hand.”
More reliably scalable, anyway. Wikipedia and OpenStreetMap do okay.
Oh, absolutely, Wikipedia, OpenStreetMap, any number of projects do extraordinarily well – spectacularly, wonderfully well – by harnessing the hard work of many thousands or millions of humans.
But the people are not (very often) doing (much) semantic markup, was my point.
My skepticism is more about those visions of the Semantic Web where (nearly) everything gets marked up by everybody using an all-singing, all-dancing ontology or ten. Where that markup happens automatically, with your devices and things inferring semantics from data and contexts generated in the course of ordinary human activity – that’s the hugely scalable and exciting vision, I think.
The Semantic Web is one of those ideas that computer scientists have pushed for the last fifteen years and that actual humans have repeatedly failed to give a damn about.
Humans think in tag soup. Maintaining sensible hierarchies is only for obsessives.
The only way semantic stuff works is when it’s abstracted by machine from human behaviour. Even then, it’s prone to epic fails.