As Technology Review explains, demonstrations at the 2010 Semantic Technology conference show directions the collaborative encyclopedia may pursue in future development. The idea behind the semantic web is that links and document structure shouldn't be merely mechanical, tying topics together and allowing for attractive, readable presentation. Meaning would also be encoded into links: why two pieces of information are related, and how. Adding that same layer to a document's structure would clarify standalone content: which parts are the table of contents, which are front matter, and so on.
The first targets for Möller and Parscal are the “infoboxes” that appear as summaries on many Wikipedia pages, and the tables in entries, such as this one showing the gross national product of all the countries in the world.
That’s Erik Möller, a deputy director at the Wikimedia Foundation, and his colleague Trevor Parscal, a user-experience developer working for the foundation. This low-hanging fruit makes sense: infobox and table content already has an organic structure that meshes well with making it accessible to programs looking to parse, transform, and otherwise make novel use of Wikipedia’s content.
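To make that appeal concrete, here is a minimal sketch (all names, fields, and the infobox text are hypothetical, and real infobox markup is far messier) of why infoboxes are attractive to parsers: their key = value lines map almost directly onto subject-predicate-object triples of the kind semantic web tools consume.

```python
import re

def infobox_to_triples(title, wikitext):
    """Extract (subject, predicate, object) triples from a simple
    {{Infobox ...}} template. A toy illustration only; real infoboxes
    involve nested templates, references, and unit handling."""
    triples = []
    # Match "| key = value" parameter lines inside the template body.
    for key, value in re.findall(r"\|\s*([\w ]+?)\s*=\s*([^\n|}]+)", wikitext):
        triples.append((title, key.strip(), value.strip()))
    return triples

# Hypothetical infobox for a fictional country.
sample = """{{Infobox country
| name = Freedonia
| capital = Fredville
| gdp_nominal = $42 billion
}}"""

for triple in infobox_to_triples("Freedonia", sample):
    print(triple)
# ('Freedonia', 'name', 'Freedonia')
# ('Freedonia', 'capital', 'Fredville')
# ('Freedonia', 'gdp_nominal', '$42 billion')
```

Once the data is in triple form, the same facts can be queried, merged across articles, or re-rendered, which is exactly the kind of reuse the semantic layer is meant to enable.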
On the whole, bringing semantically meaningful structure into such a dense information source makes more sense to me than just about any application of so-called Web 3.0 principles I’ve seen so far. Wikipedia also has the kind of human processing power that would be required to drive truly useful markup of its content, beyond the easy-to-mark-up infoboxes and tables. I would even suggest that adding a semantic layer could attract, or re-attract, a whole crop of contributors, enriching the steadier-state work that, by all accounts, has characterized the project more recently.