Monday 19 February 2018

Unintended Consequences

This post isn't as exciting as that heading makes it sound. Which is to say that I'm not about to report some hideous unintended consequences of my research. Rather, I'm thinking about the Collingridge dilemma.

This was brought to my attention during my work on the AHRC Tracking People Network. The dilemma is this: during the early stages of developing a new technology, its design is easy to change - but its social consequences are hard to predict. Once the technology is developed and in widespread use, its consequences become apparent - but by then, the technology is hard to change.

This clearly applies to tracking technology: the recent issue of Strava revealing the locations of military bases is a good example. Likewise, social media's role in bullying and fake news: a lot of early internet utopianism looks quite naive now. However well-intentioned, technology can be adapted and misused in many ways. After all, we're all designers: we combine various aspects of our environment to achieve our goals. A desktop computer can be a doorstop, a pint glass a weapon, a telephone a paperweight, and so on. You can see how Deleuze and Guattari had a point about assemblages and territorialisation. It's a process that happens all the time.

The problem for the designer or engineer is that their products will be territorialised by other people, used for new purposes, for good or ill. This dovetails with the problem of engineering failure highlighted by Petroski: new things are more likely to go wrong, and we accept that engineers can't foresee every problem. For physical failure, there are well-established methods: prototyping, simulation, test runs. For social consequences, we have no real equivalent. So what are engineers to do - if anything?

This is particularly on my mind because of SUITCEYES, especially in setting up the ethics advisory board and preparing ethics applications. We cover the research ethics - not exploiting participants, safeguarding their data, considering potential benefit. Yet we also find ourselves in the process of developing a new system - what ethical concerns does that raise? What are its social consequences, if any? Suddenly, we face the Collingridge dilemma head on. Should we even be developing this?

Which is hyperbole, of course. I don't have any special reason to believe that we are about to open a far-reaching chapter in history in the way of, say, Tim Berners-Lee and the World Wide Web or Mark Zuckerberg and Facebook. Then again, I don't imagine they thought they would have the impact they did. Though hey - I could be wrong.

It's always interesting to be on the other side of a problem, particularly in academia, where so much is about theorising what others should do while frequently failing to apply it to oneself. I believe this is what sociologists call reflexivity. I've often thought about this in a decision-making context: during my PhD I became fascinated by human decision-making, and by how design decision-making in particular diverges from the ideals espoused in textbooks. I do a lot of design, yet I critically reflect on virtually none of my decision-making processes. Maybe that's something I should be doing more of.

And having espoused thoughts on the Engineering Imagination and tracking and Collingridge and ethics - here I am again, faced with the question: what do we do? How do we address these questions in a messy, practical situation? "Reflect" doesn't seem like much of an answer (indeed, it raises the question of how one actually reflects), but perhaps it's the only one I have.

So maybe that's it. All I can do is try to capture the issues that crop up: a case study in the Engineering Imagination. That, and actually read up on the Collingridge dilemma. It's not like there's a shortage of people who've written on it...
