Monday, February 14, 2011

Hackers, transparency, and the zen of failure


This week I picked up The Hacker Ethic by Pekka Himanen from my library (remember them?), with an intro by Linus Torvalds and an outro by Manuel Castells. It was hard to resist with names like that splashed across the cover.

The book was written in 2001, back when the browser wars were in full swing and streamed video was still a bit of a novelty (so nothing's changed that much). But the theme addressed by Himanen is anything but dated - and contains some key threads which I want to think about and blog about more.

A lot of the text deals with the idea of what makes a "hacker" tick - and not just geek hackers, but anyone with a passion for what they do, rather than a bitter feeling that they work because they have to. Central to this hackerness is a passionate creativity, and a desire to share knowledge - including the results of that knowledge, such as code.

The truth and untruth of progress

This act of sharing knowledge is partially a form of status, true. But when you read articles like this about incorrect data being published, you start to notice what else open knowledge (including data) is about - social learning.

Hackers and open government are both (now) keen on sharing data - knowledge, code, ideas. But the real difference is in how they learn. For the hacker, openness brings about learning and improvement through public failure: there is an assumption that what you create can be improved, and an attitude that anyone else is welcome to improve it.

Now compare this to CLG's response to the LGC article above - a response filled with defensive language and finger-pointing. There is something scientific - or rather, legal - about this discourse: claims are made by one party and refuted by another. Slowly the "truth" is "sculpted" from what is left.

But for the hacker, the truth is only what is created - not what is undisputed. Hackers fork code, create new communities, start new websites, run unconferences. If "truth" exists, then it is what emerges, not what is discovered, or what remains.

What do hackers sit on?

Can a highly hierarchical structure such as our democracy adapt to be creative rather than competitive? The open data movement is driven by both impulses - data for transparency can be thought of as "evidence" in a legal bid to justify an organisation's existence, while data for new apps only needs to be useful in a creative context.

This split in attitude is key when considering efforts like the recent consultation on local data transparency, which clearly puts "open data" into the evidential context:

The Government wants to place more power into people’s hands to increase transparency by seeing how their money is spent.

[Image: "Transparent screen 1" by AMagill]
My fear is that this inherently makes "open data" useless to the hacker crowd - an essential crowd to interest when the data is being released as CSV files, or in other formats that require some parsing. If hackers can't create something with the data, they won't do anything with it. The idea of an "army of armchair auditors" becomes a functional paradox, as the people the Government has in mind for the data apparently sit in armchairs, while the hackers sit in cafes, meet in pubs, and generally find comfy chairs far too comfy to code in.
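To make "some parsing" concrete, here's a minimal sketch of the sort of thing a hacker might do with a council spending release. The filename and column names are my own assumptions - real releases vary wildly in layout, which is part of the problem:

    import csv
    from collections import defaultdict

    # Hypothetical local spending file and column names - real releases differ.
    totals = defaultdict(float)

    with open("council_spending_jan2011.csv", newline="") as f:
        for row in csv.DictReader(f):
            supplier = row.get("Supplier", "").strip() or "Unknown"
            # Amounts often arrive as "£1,234.56" - strip the decoration first.
            raw_amount = row.get("Amount", "0").replace("£", "").replace(",", "")
            try:
                totals[supplier] += float(raw_amount)
            except ValueError:
                pass  # malformed rows are common; a real script would log them

    # Top ten suppliers by total spend.
    for supplier, total in sorted(totals.items(), key=lambda kv: -kv[1])[:10]:
        print(f"{supplier}: £{total:,.2f}")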

To return to this post's title, what role will failure and learning play in this paradox? Looking at the draft code, we can see a desire to use the "many eyes" approach to fixing data:

18. Data should be as accurate as possible at first publication. While errors may occur the publication of information should not be unduly delayed to rectify mistakes. Instead, publication and use of the data should be used to help address any imperfections and deficiencies.

The hacker approach agrees with this - fix things as we go along. But does this fit with the idea of "armchair auditors"? As we saw in the LGC article, how can an auditor tell the difference between what is incorrect, and what is wholly disagreeable? And if they can't, why should they trust any of it?

(Maybe we need a "stable release" system like that of open source projects? Maybe, like the Linux kernel or desktop distributions, data could be released with an "unstable/testing" tag, then marked as "stable/trustable" after enough testing has been done on it.)
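As a very rough sketch of that idea - the status names and the "three independent checks" promotion rule below are my own invented assumptions, not anything from the consultation or the draft code:

    from dataclasses import dataclass, field

    @dataclass
    class DataRelease:
        name: str
        version: str
        status: str = "unstable"          # published straight away, flagged as untested
        checks_passed: set = field(default_factory=set)

        def record_check(self, auditor: str) -> None:
            """An independent reviewer confirms the release looks sound."""
            self.checks_passed.add(auditor)
            # Promote once enough independent eyes have looked at it;
            # the threshold of three is arbitrary.
            if self.status == "unstable" and len(self.checks_passed) >= 3:
                self.status = "stable"

    release = DataRelease(name="spend-over-500", version="2011-01")
    for auditor in ("armchair_auditor", "local_hack_day", "council_finance_team"):
        release.record_check(auditor)
    print(release.status)  # -> "stable"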

Transparency is a lovely thing - but everyone has different uses for it. If it's used for creativity, then there is, perhaps, an implicit assumption that things can and will change. If, on the other hand, it's to be used for accountability, then there needs to be trust in it.

1 comment:

Ben said...

You might be interested in Kelty's _Two Bits_ as well, if you haven't already seen it.