Sunday 13 November 2016

2 w/c 31st October: Peak Teach again!

One of the ironies of blogging is that at the times when you are doing your most interesting stuff, you just don't have time to blog about it. So it has been for the past couple of weeks. This is the period we know as "peak teach" - that mid-term point where all the stuff you set in the early part of term comes back for marking, and all the stuff for the last part of term needs finalising. Of course, one always knows it's coming, so after a decade, it no longer catches me off guard.

Nor does it mean that research has fallen completely by the wayside: far from it, in fact. The last two weeks have been busy on the research front for a good few reasons, as you will know if you've taken a look at my Twitter feed (one of many reasons why I like micro-blogging!).

First up, part of the Augmenting the Body team (Stuart Murray, Sophie Jones, Amelia de Falco and myself) took a trip to visit Tony Prescott and Michael Szollosy at Sheffield Robotics. It was particularly interesting because they do a different type of research from my colleagues at Leeds - they are firmly focussed on Human-Robot Interaction rather than novel technologies, and are using systems such as iCub and NAO, both of which were exciting to see (see Figures 1 and 2, below). There is lots of information about these systems available, so I won't go into detail here. Suffice to say it was a very informative and thought-provoking trip.

Figure 1: iCub, learning what a toy chicken is.


Figure 2: Stuart Murray with NAO.

Of course, part of the reason for doing that was the second Augmenting the Body seminar, this time on Redesigning the Human, featuring Tony Prescott and Andrew Cook (from Dundee). I'll try to write this up in more detail, but there were some really good questions raised about the future of technology and how it fits with humanity and society.

On a related note, I then gave my seminar at the IDEA (Interdisciplinary Ethics Applied) centre on the subject of Who are we engineering for? Again, it generated a lot of discussion, which was really useful - you can see the detail of my arguments in my previous blog post, and I'll try to write up some post-seminar thoughts in the future (time, as ever, permitting).

I attended a Centre for Disability Studies reading group on some of Margrit Shildrick's work. It was good fun, with some lively discussion, though I'm not sure how far we got through the ideas in the paper. On the plus side, at least everyone seemed equally nonplussed by some of the philosophical language: I was a bit worried that everyone else would be off into complex discussions that I couldn't follow, but not a bit of it. I've now been tasked with reading Deleuze from an engineer's perspective. Just the tricky question of which Deleuze that should be...

Finally, a red letter day for the Postural Sway Assessment Tool: first build of the new design is up and running! Just got to get it cased, now, and then we're off gathering data! Exciting times, as long as the inevitable bugs aren't too onerous!

Anyway - onwards! Still lots to do. It may be a while before my next "non-update" post, but I'll do my best!

Tuesday 1 November 2016

Who Are We Engineering For?

I've been invited to give a presentation to the Interdisciplinary Ethics Applied Centre on engineering ethics on the 9th of November. I know the centre well, as I have co-taught undergraduates and co-supervised a PhD student with them (and that PhD student is now part of their staff). So it's not exactly a complete step in the dark, but it's still nerve-wracking, because I'm not an ethicist and I know very little about the topic. To make matters worse, I'm going to bring in some of C. Wright Mills' Sociological Imagination, so even the material is outside my comfort zone, and the potential for things to go wrong is always there. But if you're afraid of things going wrong or looking foolish, research really, really isn't the game for you. After all, if I'm wrong, this is a great opportunity to find out, isn't it?

Anyway, I wanted to use this post to try out a few of the ideas I'm going to present. I picked the natty title: "The Engineering Imagination: Who are We Engineering For?", because that was one of the main things I wanted to discuss in this blog (hence the name!).  You can find the abstract here: I won't repeat it. Let's just blast into the ideas - with apologies if this seems a little off the top of my head.

Caveat Lector

Two things worth noting - once again, I'm skirting around disciplines that are not my own, and I may be wrong about one or more things here. Feel free to correct me. Also, I'm going to use the term "engineers" with slightly cavalier abandon, as if it were just engineers who make a lot of the decisions about our systems. In reality, a lot of what I'm discussing is shaped equally by many other players (designers, managers, therapists, clinicians, etc), and many of the same issues arise. But it was engineering I was asked to talk about, so that's where I'll focus, with the significant caveat that I realise it doesn't happen in a vacuum.

Who are We Engineering For?

There's been a growing interest in engineering ethics recently. That's not too surprising. Engineers play an increasingly significant role in most people's lives as our use of - and dependence on - technology grows. This has always been the case, by the by: new technologies have been disrupting jobs and killing and injuring people, as well as making life better and easier, for centuries, probably millennia. But the rate of technological advance, and in particular the spread of automation, AI and autonomy, has made this very visible in the last few decades. So it's not surprising that more and more emphasis is being placed on engineering ethics.


Engineers are one of several professions that get the privilege – and by extension the responsibility – of making decisions that can have a massive impact on everyone's lives. That doesn't just apply to your top brass – the bosses at Google and Apple or NASA, for example. Even a junior engineer can make an error or a typo that introduces a catastrophic bug.

Decisions taken by engineers therefore have a huge impact on our lives, for better or worse. This impact is increasingly being recognised by engineering's professional bodies, such as the Engineering Council and the Royal Academy of Engineering, which require that engineers abide by principles of accuracy and rigour; honesty and integrity; respect for life, law and the public good; and responsible leadership. Indeed, if you want to be a Chartered Engineer, you have to provide a statement as part of your application that you have read and agree to abide by your body's Code of Conduct. It's the nearest thing engineers in the UK have to the Hippocratic Oath taken by doctors, albeit much less famous.

These principles are pretty uncontroversial, and I think it's fair to say that we expect much the same of everyone we interact with. I also think the first two are straightforward, since they basically revolve around not withholding service or giving inadequate service, whether deliberately (to encourage a bribe, for example) or inadvertently (by pretending greater competence than you possess). The last two, though, are interesting, because both relate to the public good and to socio-economic impacts. Among the bullet points expanding upon them are:

* Minimise and justify any adverse effect on society or on the natural environment for their own and succeeding generations; and

* Be aware of the issues that engineering and technology raise for society, and listen to the aspirations and concerns of others;

This requirement to act in the public good, and to be aware of social impact, creates an interesting area not often explored by engineers, one where there are no objectively “correct” answers. I wanted to use the seminar as an opportunity to explore my thinking on these areas, and to review some of the work I'm aware of that addresses them.

What is an Engineer?


Perhaps a good place to start is by reviewing what constitutes an engineer. For most people, I suspect, the assumption is that engineers are people who work with engines (hence the name!) and machinery - fixing boilers, cars and the like. Indeed, I remember a Headteacher saying that GCSE maths might be important if you were going to be a mathematician, but not if you wanted to be an engineer, because you would never need to know trigonometry or calculus. FOR SHAME! Mathematics - and the numerical modelling and analysis it enables - is the very bedrock of engineering!

The term engineer is not derived from the modern word engine (as in steam engine or combustion engine), but from the same root as ingenious - an engineer is therefore one who is ingenious, who develops new solutions to problems, new ways of doing things [1]. Within the parlance of the Engineering Council in the UK, we can distinguish three types of engineer [2]:

Engineering Technicians "apply proven techniques and procedures to the solution of practical engineering problems";

Incorporated Engineers "maintain and manage applications of current and developing technology, and may undertake engineering design, development, manufacture, construction and operation"; and

Chartered Engineers "develop solutions to engineering problems using new or existing technologies, through innovation, creativity and change and/or they may have technical accountability for complex systems with significant levels of risk."

It's worth noting that this isn't a hierarchy: each represents a very different and important set of skills. You wouldn't ask me to fix your car or operate a lathe. Those tasks take a lot of skill (don't believe me? Come and have a chat with the technicians in our workshop some time - they certainly fit the requirement for being ingenious!). The key thing to note is that most of what engineers (as defined by the Engineering Council!) do is solve problems, and for Chartered Engineers these tend to be problems we haven't seen before and/or don't understand very well. That suddenly makes the issue of what impacts those solutions might have a pretty serious matter - and it's what I want to explore with this whole "Engineering Imagination" business.

To Engineer is Human

I've started compiling a reading list for the Engineering Imagination - the various books and texts that are great food for thought for engineers in this kind of situation. There are lots, by the way: I'm a long way from being the only engineer or academic thinking about this, and I'll try to put the list together in a blog post at some point. Anyway, two that make really good points are Louis Bucciarelli's Designing Engineers [3] and Henry Petroski's To Engineer is Human: The Role of Failure in Successful Design [4].

I'm an enormous admirer of Bucciarelli's work, and witnessing his keynote speech on the social nature of engineering design at a conference in the first year of my PhD fundamentally changed the orientation of my research, from models and numbers to how people used them, and basically set me off on the multidisciplinary path I've adopted. He introduced me to the dual nature of technical artefacts - the fact that objects have both an objective physical nature and a subjective, intentional nature. How well the physical nature meets the intentional nature determines how "good" the design is - but that clearly depends upon what you intend the design to do, and Designing Engineers is replete with examples of engineers working towards slightly different intents, and of the difficulty of aligning their understanding of what they're trying to achieve. Bucciarelli's point is that while engineering science (the stuff we tend to teach at universities) is objective, rigorous and methodical, design (and engineering practice generally) is a social process, full of communication, miscommunication, fuzziness and competing social demands.

When we have a clearly defined goal, when we've been using a technology for a while and understand what people might want to do with it, this is less of a problem. We understand, broadly speaking, what makes a good clock or watch. That understanding has shifted over the years, as the need for more precise timekeeping has grown (particularly with railroads and navigation), but we have a pretty good grasp of what is needed - the challenge lies in delivering it. When we have a brand new technology, we don't have a good grasp of what people might want to do with it. We may have a view on how we think they'll use it, but we can't be sure.

This brings us neatly to Petroski's book, which deals explicitly with engineering failures, and the fact that by definition a lot of engineering design deals with the unknown. We are constantly trying to make things lighter, faster, stronger, larger, taller, to push the boundaries. That means that the first time you do something, you build in a big safety margin, just in case. If you can't do that - in flight (especially space flight), most obviously, where extra weight is a huge problem - then testing the first build becomes extremely dangerous. Not because the engineers are careless or incompetent (any flight would be dangerous where that is the case), but because there is no way of knowing for sure that things will behave as expected, that you haven't overlooked a critical factor, or that you haven't just crossed the line where your simplifying assumptions break down. That's one of the reasons why there are a lot of test flights before you put paying passengers onto a new model of aircraft.
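
To make the idea of a safety margin slightly more concrete, here's a minimal sketch in Python (the numbers are entirely made up, and a real design code works with partial factors, load cases and statistical characterisations rather than a single ratio):

    # Toy safety-factor check: hypothetical numbers only, not any real design code.
    yield_strength = 250.0      # MPa - nominal strength of the material
    predicted_stress = 100.0    # MPa - stress predicted by the design model
    safety_factor = yield_strength / predicted_stress
    print(f"Nominal safety factor: {safety_factor:.1f}")    # 2.5

    # If the model under-predicts the real stress by 50% (an overlooked load
    # path, a simplifying assumption that quietly broke down), the margin
    # absorbs it - the part survives, just with less in reserve.
    actual_stress = predicted_stress * 1.5
    print(f"Margin remaining: {yield_strength / actual_stress:.2f}")    # ~1.67

The factor isn't there because the sums are sloppy; it's there because we know the sums can't capture everything.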

This is a really fundamental point - to what extent do engineers need to foresee the potential dangers involved in their designs, particularly when boundaries are being pushed? There is an expectation that every eventuality must be accounted for, but sometimes you just can't predict what's going to happen.

The collapse of the original Tacoma Narrows bridge ("Galloping Gertie") is often used in dynamics lectures as an example of why it is so important to consider dynamic behaviour even in the design of seemingly static systems, such as bridges. The Tacoma Narrows bridge shook itself to pieces because of the way it caught the wind. Nowadays, every engineer knows (or should know) about this - but was this an engineering failure that should have been foreseen? Or was it just that this was the third longest suspension bridge in the world, and the methods used for controlling vibration had always worked before - why assume that they would fail now? That failure has paved the way for research that has allowed us to address the problem, and build better bridges.
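
As a minimal sketch of why dynamic loading matters, here's the steady-state response of a single driven, damped oscillator in Python. To be clear, this illustrates resonance in general, not the Tacoma Narrows failure itself (which involved aeroelastic flutter), and every number is invented purely for illustration:

    import math

    # Steady-state response of m*x'' + c*x' + k*x = F0*cos(w*t).
    # Illustrative parameters only - nothing to do with any real bridge.
    m, c, k = 1.0, 0.5, 100.0    # mass, damping, stiffness (arbitrary units)
    F0 = 1.0                     # forcing amplitude
    wn = math.sqrt(k / m)        # natural frequency

    def amplitude(w):
        """Steady-state amplitude at forcing frequency w."""
        return F0 / math.sqrt((k - m * w**2) ** 2 + (c * w) ** 2)

    static_deflection = F0 / k   # the same load applied statically
    for ratio in (0.5, 0.9, 1.0, 1.1, 2.0):
        w = ratio * wn
        print(f"w/wn = {ratio:.1f}: dynamic/static amplification = "
              f"{amplitude(w) / static_deflection:.1f}")

Even this toy model shows the response near the natural frequency dwarfing the static deflection - which is why treating a bridge as a purely static problem can be a very expensive simplification.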

A similar question arises with more modern buildings - the wind effects around Bridgewater Place here in Leeds (which have unfortunately led to a death and several injuries), or the solar glare problems of 20 Fenchurch Street (the "Walkie Talkie"). Looking back, the problems seem obvious - but should they have been foreseen up front? I don't know the details of the design process, so I'm not going to comment on that aspect. Did no one think about it? Did it cross someone's mind, only to be forgotten? Did someone mention it in a meeting, only to be told "Don't be silly..."? Of course, such questions will be in people's minds the next time.

This raises a slightly different question - to what extent do engineers have a responsibility to learn from history? An unexpected failure when you're trying to build the longest bridge, the tallest building or an unusual shape is one thing. To repeat someone else's error is another, though this in turn might depend on how well publicised the original error was. Of course, these issues relate to physical problems - the problem can be modelled and predicted from the physical structure of the system. What we haven't discussed so far is the intentional equivalent: what happens when people use a product or system in a way we didn't intend? If I design a chair, do I need to consider that it might be used as a weapon? Or a ladder? To what extent might an engineer reasonably be expected to foresee not just the immediate physical consequences of their work, but the longer-term social consequences?

The former are matters of honesty and integrity - recognising the limits of one's competence and when you are near the edge of what you can be sure about. The two points I highlighted earlier from the Engineering Council's ethical principles - "minimise and justify any adverse effect on society" and "be aware of the issues that engineering and technology raise for society" - take us beyond this, into matters of human behaviour and the intentional nature of engineering design. This is where the subtitle of this piece comes in: who are we engineering for? Whose intentions count?

Engineering and Society: Who are We Engineering For?

So, the Engineering Council have determined that engineers have some obligation to society, and that this goes beyond just being honest, competent and not abusing your position. It's not just about doing things right, but about doing the right thing (as the saying goes). Doing something harmful in an honest and efficient way doesn't stop it being harmful. Of course, this leads to debates about what constitutes "harmful". Richard Bowen discusses this eloquently in his Engineering Ethics: Outline of an Aspirational Approach [5], where he highlights the discrepancy between the large proportion of engineers working on military projects and those working on problems such as providing clean water, which might prevent conflict.

Weapons vs. water is an extreme example of the dichotomy facing engineers, and groups such as Engineers Without Borders and the Campaign to Stop Killer Robots are addressing precisely these issues. Weapons, like armies, are tolerated as a necessity: they can clearly cause harm, but defence is considered necessary for security. Things become more complicated when robots are involved, as the use of a robot to deliver a lethal bomb in a police operation in Dallas this summer demonstrated. However, the same dichotomy appears in less extreme forms across lots of the things engineers develop: autonomous cars or surgical robots? New smartphones or renewable energy? Bill Gates gave an interesting video introduction to the Royal Academy of Engineering's Engineering a Better World conference in September, where he notes that if engineers only solve the problems dictated by the market, we won't fix problems like malaria. It's an interesting point: market forces tend to drive us towards solving the problems of those best able to pay, and those best able to pay tend to have the fewest problems.

Disability is a good example of this. The traditional medical model of disability tends to treat disability as a deviation from some norm (two arms, two legs, height in a certain range, two eyes, a certain amount of vision, hearing, strength, dexterity, etc.) that needs to be corrected in some way. By contrast, the social model of disability views disability as arising from society's failure to accommodate an individual's given impairments. Under this model, being unable to use your legs isn't disabling as long as you can get around in, for example, a wheelchair; stairs are disabling because they prevent this. This is a massive simplification, but it captures the nub of the issue: if a person is unable to participate because of environmental barriers, then this is because at some point an engineer has taken a decision that excluded rather than enabled them. That may or may not have been a conscious decision. Being disabled makes it more likely that you will be on a low income, and disabled people represent a relatively small share of the market, so it may not seem financially rewarding to make the required devices or adaptations. It's also difficult to know what other people's affordances for an object will be, or even to realise what problems might exist in actions that you take for granted. This raises the question of to what extent engineers have a duty to ensure the accessibility of the systems they develop. That's not simple, and the Inclusive Design team down at the Cambridge EDC have done a lot of really helpful work on this. It is accepted that it's difficult to avoid all exclusion, but when do you decide you've done enough?



Related to this is the term "assistive technology" - which, taken literally, is a tautology, since I can't think of any technology that doesn't assist someone to do something they couldn't do without it. The term is generally reserved for devices designed specifically to ameliorate the effects of an impairment. Yet, apart from being a neat pun, this rather highlights an interesting point. The function of technology is to enable - to compensate for some current lack, to afford an action that isn't available in the natural world until it has been reshaped to provide it. Yes, this can be compensating for some perceived deviation from a norm - a prosthetic arm serving to "fix" a person by replacing a missing limb - but it can also be addressing something that even the most "normal" human (insofar as such a person exists) lacks.

Let's take an example that struck me after attending Margrit Shildrick's lecture on Rethinking Prosthetics: clothes as prosthetic fur. We aren't hairless, and given the existence of things like the hermit crab, we're not the only animal to use protective coverings that aren't part of our natural bodies. Yet we use clothes all the time - not just for decency or fashion, but for warmth and protection. We use ladders to allow us to reach places that even the tallest human couldn't. We use mobile phones to project our voices to places we couldn't even shout to. So technology - all technology - is enabling in some way: either it allows you to do something you couldn't, or to do something you could do faster or with fewer errors, or it's pointless.

That's interesting, because it raises two questions: what should we enable, and who should we enable to do it? It makes me wonder (perhaps wrongly - I'm thinking aloud here) whether in this context we could reframe disabling (as defined by the social model) as "not enabling" or "selective enabling". Engineers make decisions that determine (by accident and by design) who can and can't access technologies and the new actions they afford.


The Transhuman and The Posthuman

This raises an interesting issue. The interplay between technology, person and ability has given rise to some of the notions around cyborgs and the way humanity develops as technology does. I recently read an interesting critique of the notion of our becoming 'hybrid beings' - not because we aren't, but because we always have been (or at least, have been since long before recorded history). Nevertheless, more and more attention is being given to the fact that as technology develops, so too do our capabilities - and, by extension, the gap between those who have access to them and those who do not. Physical or cognitive capabilities aside, there are also cost and access issues in terms of who gets to use given bits of technology. So some technologies, and their benefits, are restricted to a small group of people.

The intertwining of person and technology, and whether technological development forms part of an accelerated human evolution, is the domain of transhumanism. I need to be careful about these definitions, as there is a lot of philosophical and ethical work on this that is outside my field, so I'm in danger of getting out of my depth with the terminology - so again, feel free to correct me. Most of what I know comes from perusing the UK Transhumanist Party website or reading the work of Nick Bostrom. I don't know that these are the best sources, but they're the most reliable ones that come readily to hand.

Posthumanism is the step beyond this - some people seem to regard transhumanism as a step towards posthumanism (Bostrom, for example - see his paper on Transhumanist Values), while others seem to see posthumanism as more of a philosophical view of the world (as in post-humanist rather than post-human, I think?). Either way, the argument is more or less that at some point it will become difficult to separate humans, animals and machines, and this will have quite a profound effect on our way of thinking. Wowsers.

In this worldview, engineers aren't just responsible for designing handy knick-knacks and infrastructure, or even for deciding who gets to participate and who is excluded. They (among others, as I noted before!) are effectively shaping human evolution. No pressure there, eh?

There's a twin danger here. One is going down the route of addressing the ever more abstract goals of a small number of people - immortality for people in Silicon Valley, for example - rather than "wasting" resources on addressing the needs of less important people. Why worry about the disabled? We need to make sure that no asteroids destroy the Earth before we achieve immortality! We need to get into space! The future of the species depends on investing everything in the needs of the best and brightest! Here, of course, we rub up against the needs of the individual versus the needs of society. (NB: Bostrom cites "wide access" as one of his transhumanist values, so it is a little unfair to suggest that transhumanists argue we should focus on the needs of a select few; I just highlight this as a danger that can crop up from this line of thinking when deciding the priorities for engineering resources.)

The other is the risk of effectively colonising disability by extending it to those who don't get access to transhuman capabilities - focussing efforts on ensuring that more people get to participate in the transhuman future, at the expense of focussing on those who don't get to participate in the human present.

In short, here are some of the questions that arise from considering all this:

1) To what extent should engineers be held accountable for the "selective enabling" of the systems and technologies they devise?

2) To what extent do engineers have a responsibility to ensure the continuation of the species by, for example, preventing asteroid strikes or ensuring that humanity is able to colonise other planets?

3) What are the responsibilities of engineers in terms of steering human evolution, if the transhumanist view is correct?

4) How do we prioritise which problems engineers spend their time solving? Market forces? Equity? Maximising probability of humanity surviving into the posthuman future?

I don't provide any answers, but there are clear issues about to whom engineers have responsibility, and how that should be decided. This is particularly problematic given that (as any student of Arrow's Impossibility Theorem will attest), except under very specific circumstances, there is no way of aggregating the preferences of multiple individuals that guarantees a rational, consistent outcome.

D'oh!
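
To make the aggregation problem concrete, here's a toy Python example - hypothetical stakeholders and options, and a Condorcet cycle rather than a full statement of Arrow's theorem, but it shows how perfectly sensible individual preferences can refuse to add up:

    from itertools import combinations

    # Three hypothetical stakeholders ranking three design options,
    # most preferred first.
    rankings = [
        ["A", "B", "C"],   # stakeholder 1
        ["B", "C", "A"],   # stakeholder 2
        ["C", "A", "B"],   # stakeholder 3
    ]

    def prefers(ranking, x, y):
        """True if this ranking places option x above option y."""
        return ranking.index(x) < ranking.index(y)

    # Pairwise majority votes: A beats B, B beats C, and C beats A,
    # so there is no consistent "group preference" to act on.
    for x, y in combinations("ABC", 2):
        votes_for_x = sum(prefers(r, x, y) for r in rankings)
        winner = x if votes_for_x > len(rankings) / 2 else y
        print(f"{x} vs {y}: majority prefers {winner}")

Each individual ranking is perfectly coherent; it's only the aggregate that goes round in circles.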

Of course, this perhaps highlights the importance of professional bodies in acting as the interface between the profession and society. But this leads me on to something else - the whole issue that I started this blog around: the Engineering Imagination. How do we give engineers the skills to address and think through some of these complicated issues?

The Engineering Imagination

The idea of the "Engineering Imagination" stemmed from reading C Wright Mills' Sociological Imagination [6] at the suggestion of Angharad Beckett (Sociologist and Co-Investigator on the Together Through Play project) back in 2012. This was a very influential book, I gather, and its ideas have taken on something of a life of their own (as most academic ideas do if they're still around half a century after their inception!). The book is largely about the craft and philosophy of sociological research, and one of Mills' key concepts was the relationship between "private troubles and public issues": in other words, that many of an individual's problems cannot be understood or solved purely from an individual perspective, but are influenced by the society around them, its history and norms. This requires "imagination" - the ability to step away from one's own norms, values and history and so examine the world anew.

Does this have any relevance to engineering? Engineers are problem solvers, so certainly the question of how people's problems relate to public issues is important. Clearly, regulation, policy and behavioural norms need to be recognised and considered. Yet there's something a bit deeper than this. Just as individuals' problems need to be understood in terms of history and social norms, the same applies to technologies. It's that dual nature of technical artefacts again: while the physical nature may be objective, timeless and fixed, the intentional nature can shift with time. How we expect an object to be used will change, and developing a weapon today may have different implications from developing a weapon tomorrow. So perhaps an Engineering Imagination is a useful tool - being able to look critically at a system being designed, from the outside, to view its relationship with the norms, culture and history it will be deployed in.

The question in my mind is: how do we help engineers develop it?

References
[1] See https://en.oxforddictionaries.com/definition/engineer
[2] See http://www.engc.org.uk/professional-registration/the-professional-titles/
[3] Bucciarelli LL (1994) Designing Engineers, MIT Press: Cambridge, MA 
[4] Petroski H (1985) To Engineer is Human: The Role of Failure in Successful Design, MacMillan: London
[5] Bowen WR (2009) Engineering Ethics: Outline of an Aspirational Approach, Springer-Verlag: London
[6] Mills CW (1959) The Sociological Imagination, Oxford University Press: Oxford