Wednesday, 16 August 2017

Humaniteering - What, If Anything, Can Engineers Gain from Working with the Humanities and Sociology?

I'm working on a conference paper around breaking down barriers between disciplines, and (as always) I thought the blog seemed like a good place to think out loud on the subject before getting down to brass tacks in the paper itself. So bear with me: I'm literally thinking out loud here (if we understand "out loud" to mean the clack of my keyboard - my inner pedant feels compelled to point out!).

Anyway, I work with people from a variety of disciplines. I keep meaning to sit down and draw up a diagram of all the people of different disciplines that I've worked with in a research context over my twelve years in academia - either by writing a paper or a grant (successful or otherwise) or co-supervising a PhD student. Perhaps I will, but that's a little bit by-the-by. For now, let me just see if I can rattle off a quick list of my common collaborators outside engineering here at Leeds University:

English/Medical Humanities: Stuart Murray (Augmenting the Body), Amelia de Falco (Augmenting the Body)
Ethics: Chris Megone (PhD supervision), Rob Lawlor (PhD supervision), Kevin Macnish (Tracking People)
Health Sciences: Justin Keen (Tracking People)
Law: Anthea Hucklesby (Tracking People)
Psychology: Mark Mon-Williams (too many links to count!),  Richard Wilkie (ditto)
Rehabilitation Medicine: Bipin Bhakta (MyPAM, iPAM, PhD supervision), Rory O'Connor (MyPAM, PhD supervision), Nick Preston (MyPAM)
Sociology and Social Policy: Angharad Beckett (Together Through Play, LUDI)
Transport Studies: Bryan Matthews (WHISPER)

The list grows longer if you include the engineers, those who work outside Leeds Uni (Andrew Wilson and the cognitive archaeology crew being the most obvious) and the PhD students who have been involved in these various streams. The links to Psychology (motor learning, neuroscience), Health Sciences (health economics and governance), Rehabilitation Medicine (rehabilitating people) and Transport Studies (Assistive Technology for navigation) should be pretty obvious. At the end of the day, these represent the "professional customers" (as distinct from the end users - also an important group, but not one that can easily be captured as an academic discipline!) of the technology that we're building, and engaging with these disciplines is important if we want to be designing the right things, and verifying that our devices actually work (think of the V-model of systems engineering - we want to make sure we're converting needs into technical requirements properly, and properly verifying the end result). Ethics and Law might also seem obvious - we don't want unethical or illegal technology (that's a massive oversimplification, but engineering ethics and the challenge of keeping the law up-to-date with technological development are a big deal at the moment, and you can see why engineering researchers might want to get involved with ethicists to discuss the ethical implications of what they do). Why, though, engage with people from English or Sociology, other than a monomaniacal desire to collect cross-disciplinary links? Where does Engineering cross over with these disciplines?

Caveat Lector
As ever, let's have the customary warning: I'm an engineer, and speak from an engineer's perspective (an academic, mechanical engineer's perspective at that), so I can only tell you about my perception of these disciplines. I may be mistaken; I'm certainly speaking from only a partial exposure to these disciplines. With that out of the way, let's move forwards.

An Engineering Imagination
Of course, the fact that I named my blog (fully four years ago!) after "The Sociological Imagination" by C. Wright Mills  perhaps suggests some rationale for this interest. In the very first post on this blog, I set out my stall by saying:
"Mills was interested in the relationship between individual and society, and noted that social science wasn't just a matter of getting better data, or analysing it more effectively. Measurements are filtered through a whole set of social norms, and individual assumptions and biases. They colour the way we look at the world, and are often deeply embedded in the methods that we used... Certainly it applies to engineering: at a fundamental level, what engineers choose to devote their time and energy to (or sell their skills for)... It's not just about what we engineer, but about the way we engineer it: decisions made in developing products and systems have huge implications for their accessibility, use and consequences (both intended and unintended)."

And I revisited it last year in my post on Who Are We Engineering For? noting the challenge of helping engineers to address four key questions:
  1. To what extent should engineers be held accountable for the "selective enabling" of the systems and technologies they devise?
  2. To what extent do engineers have a responsibility to ensure the continuation of the species by, for example, preventing asteroid strikes or ensuring that humanity is able to colonise other planets?
  3. What are the responsibilities of engineers in terms of steering human evolution, if the transhumanist view is correct?
  4. How do we prioritise which problems engineers spend their time solving? Market forces? Equity? Maximising probability of humanity surviving into the posthuman future?
And I concluded that:
"perhaps an Engineering Imagination is a useful tool - being able to look at a system being designed critically, from the outside, to view its relationship with the norms and culture and history it will be deployed in."
That, I think, is the key issue. There are technical things that can be learned from sociology - rigour in analysing qualitative data, for example - but there's something more significant. One of the problems in engineering is a focus on engineering science rather than engineering practice: learning the technicalities of how to model the behaviour of the world without any thought to what that means in practice. The challenge is that it's easy to say that engineers should be more aware of the implications of their technology - the big question is how we do that. How do you put a finger on the Engineering Imagination? What does it mean in practice? That, I think, is where Sociology and the Humanities come in.

The Tricky Business of Being Human
Reading up on the Posthuman (I finished Braidotti, by the by - more on that in the next month or so!) makes me a little cagey about using the term human, but it's a convenient shorthand and I don't want to get caught up in lengthy discussions around self-organising systems, the limits of the body, anthropocentrism and humanism. Anyway, the point about the Humanities and Sociology is that they deal with people and their positions within complex social relationships - that link between the personal and the broader "milieu", as the Sociological Imagination puts it. This applies in two ways in engineering: to the stakeholders (users, most obviously, but they can be a multiplicity) and to the engineers themselves. The stakeholders in an engineering project are neither independent "vitruvian" individuals, detached from the world around them, nor an amorphous statistical mass of aggregate data. And the same goes for the engineers - who in turn have tastes, backgrounds, personalities, histories, families, and find themselves enmeshed in the social processes of an organisation and project management process. They may not all be "engineers" either: a huge range of people are involved in product development, and even the boundaries of what is being developed can be porous. I don't think that's a controversial argument: I have yet to hear anyone claim that engineering is a pure, objective process that leads to a single unquestionably right answer. Most of the challenge in engineering lies in mapping complex, messy real world problems into forms that you (as the engineer) have the capability to solve. The "fuzzy front end" and "wicked problems" are well-recognised challenges. And the dynamic nature of engineering problems means that these don't just apply at the start of the process.
You don't just characterise once and have done with it - you're perpetually having to loop back, adjust expectations, update requirements, work with what you have. It's like user centred design - you don't just ask the user what they want and then go on and make it. You have to keep checking and course-correcting. Sometimes, people don't want what they say they want. Or a product takes so long to develop that it's solving a problem everyone used to have five years ago, but not any more.

This is like Donald Schön's Reflective Practitioner - constantly proposing moves, reflecting on the outcome and reframing the problem in light of the result. It's this process that I hope working with the Humanities and Sociology can help with in engineering terms. It's partly about having the concepts and frameworks to process this; partly about methodological tools that help incorporate that into the process. Engineers are people, with all the frailties and limits that implies - Michael Davis in his essay "Explaining Wrongdoing" talks of microscopic vision (a concept that my one-time PhD student Helen Morley highlighted to me): that expertise encourages a narrow focus (knowing more and more about less and less...) at the expense of a broader view of consequences. This dovetails beautifully with the notion of selective enabling and design exclusion, but also the Collingridge Dilemma: the difficulty of foreseeing the side effects of new technologies until it's too late.

Which isn't to say that we should be abandoning rational judgement and analysis - just that we need to make sure that we're framing problems correctly, and aware of the limits of what we bring to the analysis. I don't know how all this is going to help - that's one for the humaniteers (as I like to call them) and sociologists to answer.

Monday, 31 July 2017

Month in review: July 2017

I've shifted to monthly, rather than weekly reviews, now: that (hopefully!) allows me to hit my two-posts-a-month target, with one "non-review" post each month. Hence,  today is the day to review July.

July is a funny month. I've mentioned before the problem of "doing much but getting nothing done", and July often exhibits this. You work furiously, but nothing of substance gets ticked off the To Do list: there's just a load of half-finished tasks. Which, while frustrating, isn't a real problem: I make a point of trying to run multiple tasks in parallel, so that teaching prep continues over the summer, freeing up time for more research in term time.

I've lately become a fan of the Pomodone app, which is a Web-based implementation of the Pomodoro technique. This means working on a task for 25 minutes at a time. The nice thing about this is that not only does it sync with Wunderlist, where I keep my To Do list, but it logs how much time you've spent on each task, so you can at least see progress. Granted, time spent isn't an exact proxy for progress on a task, but unless you want to micromanage your To Do list to the nth degree, it's at least a satisfying alternative to actually being able to tick the big tasks off.

So, what tasks have been underway this month? Well, I have been reviewing Elizabeth Petrick's Making Computers Accessible for Disability and Society (reading done; writing underway); I've prepared a presentation on Humans, Puppets and Robots for the Superposition to be delivered on the 2nd of August. I've been calibrating PSAT and looking at its development into reach-to-grasp movements; I've rejigged my reach-to-grasp model to accommodate time variations in dynamic behaviour; I'm rewriting a marking scheme for my Level 2 undergraduate module; I've been preparing my handouts for next year (I like to get them printed for the end of July, just to avoid the temptation to keep revising them as term approaches); I've been supervising two undergraduate students who are working on the MagOne force sensor for Grip purposes (first iteration due this week); preparing for the Tracking People network finale in November; developing the PACLab technology roadmap; attending graduations; attending the EPS Conference; supervising PhD students and an MSc student; prepping a paper for submission and working on a grant proposal.

Yeah, that'll keep me busy, alright! Thankfully, August will see things getting ticked off the list. Good fun!

Tuesday, 25 July 2017

Robots, Humans, Puppets: What's the Difference?

I've agreed to give a talk for the Superposition on the 2nd of August on the subject of "Humans, Puppets and Robots: What's the Difference?". This is part of their ASMbly talks and events, which bring together an Artist, a Scientist and a Maker to discuss a particular topic (you can get tickets via Eventbrite, if you so wish!). In this case, the Artist is the excellent Anzir Boodoo (puppeteer of Samuel L. Foxton), the Scientist is Samit Chakrabarty (who specialises in motor control via the spine)  and the Maker is... well, me. Unsurprisingly, Anzir will be talking Puppets, Samit will be talking Humans and it falls to me to talk Robots. As is my custom, I thought I'd use the blog as a handy place to work out my thoughts before putting the presentation together.

The main question that we're looking at is how Puppets, Humans and Robots are similar and how they are different: the clue's in the title of the talk. This is a really interesting question. I've often thought about the human vs robot link. It's something that crops up a lot in my line of work, especially when you're looking at how humans and robots interact and when the robot has to use force feedback to help guide human movement. Samit is particularly interesting in this area, because of his modelling of human motor control as an electrical circuit. The link between robots and puppets, though, has been particularly interesting to reflect on, as it ties in with some of my recent thoughts about the Tracking People project, and algorithms as a set of pre-made decisions. I mean, what is a computer program but a kind of time-delayed puppetry? By that token, a robot is just a specialised type of puppet: at least until Strong AI actually turns up.

I thought I'd break the talk down into four sections:

1) What is a Robot?
2) How do Robots Act?
3) How do Robots Sense?
4) Algorithms: Robots are Stupid

Let's take them in turn.

What is a Robot?

For all that we hear a lot about them, we don't really have a good definition of what constitutes a robot. I work in robotics, so I see a lot of robots, and I'm not sure I have a good sense of what the average person thinks of as a robot. iCub probably comes pretty close (seen here learning to recognise a chicken from our Augmenting the Body visit to Sheffield Robotics last year) to what I imagine most people think of as a robot:

A sort of mechanical person, though the requirement to look like a human probably isn't there - I mean, most people would recognise R2-D2 (full disclosure - R2-D2 remains what I consider the ideal helper robot; which may say as much about my age as its design) or, more recently, BB-8 as a robot just as much as C-3PO. Perhaps the key feature is that it's a machine that can interact with its environment, and has a mind of its own? That's not a bad definition, really. The question is: how much interaction and how much autonomy are required for an item to become a robot? To all intents and purposes, the Star Wars droids are electromechanical people, with a personality and the ability to adapt to their environment.

The term Robot originates from the Czech playwright Karel Čapek's play Rossum's Universal Robots (apparently from the Czech word for forced labour, robota). In this play, robots are not electromechanical, but biomechanical - though still assembled. This draws an interesting link to Samit's view of the human body, of course. Perhaps one day we will have robots using biological components: for the time being, at least, robots are electromechanical machines. Yet, there are lots of machines that do things, and we don't consider them robots. A washing machine, for example. A computer. What sets a robot apart?

Well, for starters we have movement and the ability to act upon the environment. A computer, for example, is pretty complex, but its moving parts (fans and disc drives, mostly) are fairly limited, and don't do much externally. It doesn't act upon its environment, beyond the need to pull in electricity and vent heat. So, we might take movement as being a key criterion for a robot. We might wish to specify complex movement - so a washing machine, for example, that just spins a drum wouldn't count. No, there needs to be some substantial interaction - movement, or gripping.

We can also differentiate an automaton from a robot - something that provides complex movements, but only in a pre-specified order. It carries on doing the same thing regardless of what happens around it. A wind-up toy might provide complex movement, for example, but it wouldn't be a robot. We expect a robot to react and adapt to its environment in some way.

This brings up four handy conditions that we can talk through:

1) A robot is an artefact - it has been designed and constructed by humans;
2) A robot can act upon its environment - it possesses actuators of some form;
3) A robot can sense its environment - it has sensors of some form;
4) A robot can adapt its actions based upon what it senses from its environment - it has an algorithm of some form that allows it to adapt what its actuators do based upon its sensors.

The first of these doesn't require further discussion (except in so far as we might note that a puppet is also an artefact, whereas a human is not), but let's take a look at each of the others in turn.

Actuators  - how does a robot act upon its environment?

Actuators imply movement - a motor of some form. A robot could also have loudspeakers to produce sound, or LEDs to produce light, all of which can be intelligently controlled - but so can those of any computer or mobile phone. So I'll focus on actuators that produce movement.

It's worth noting that actuation has two characteristics - some power source which it will convert into mechanical power in the form of movement; and some control signal that tells it how much output to produce. These actuators can be linear (producing movement in a straight line) or rotary (spinning round in a circle). The power source is often electricity, but can be pneumatic (using compressed air) or hydraulic (using liquid).
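To give a flavour of that control-signal side in software terms, here's a toy sketch (the function name and scaling are invented for illustration, not any real motor driver's API) of turning a desired output into a bounded command such as a PWM duty cycle:

```python
def pwm_duty(desired_speed, max_speed):
    """Turn a desired output into a PWM duty cycle in [0, 1].

    The power source supplies full power; the control signal tells the
    actuator what fraction of that power to deliver. (Illustrative only -
    names and numbers are made up, not taken from real hardware.)
    """
    fraction = desired_speed / max_speed
    return max(0.0, min(1.0, fraction))  # clamp: the actuator has hard limits
```

The power electronics then deliver that fraction of the supply to the motor - everything interesting is in how the desired value gets chosen.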

The mechanical output can then be adapted using all kinds of mechanisms - attached to wheels or propellers to provide propulsion; attached to a four-bar linkage to get more complex oscillations, such as moving parallel grip surfaces so that a robot can grip; attached to cables to drive more complex kinematic chains (for example, the Open Bionics ADA Hand).

Apart from the complex mechanism design this is fairly straightforward (thanks, it is worth saying, to all the efforts of those who put the hard work into developing those actuators). The challenge lies in getting the right control signal. That's what differentiates robots from automata. Automata have actuators, but the control signal is pre-determined. In a robot, that control signal adapts to its environment. For that, we need the other two elements: sensors and a decision-making algorithm to decide how the system should respond.

Sensors - how does a robot sense its environment?

So, a robot has to have some way of detecting its environment. Sensors can take a huge variety of forms, but as electrical signals are communicated as voltages, anything that produces a voltage or changes its resistance (which, thanks to the magic of the potential divider, can be used to change a voltage) can be measured electronically, and a huge array of sensors are available for this purpose. A few of the most obvious:

Sense of Balance - Accelerometer/Gyroscope: An accelerometer gives a voltage proportional to linear acceleration along a given axis. Since gravity produces a downward acceleration, this can be used in a stationary object to detect orientation, though it will get confused if other accelerations are involved (for example, if the accelerometer is moved linearly). A gyroscope, on the other hand, detects changes in orientation. Between the two, the robot immediately has some sense of balance and inertia - akin, I guess, to the use of fluid in the inner ear.

Proprioception - Potentiometers: Linear and rotary potentiometers change their resistance as a function of linear or angular position, allowing the robot to detect the actual position of any joints to which they are attached (as opposed to where they are supposed to be). In this way, the robot can know when something has perturbed its movement (for example, one of its joints has been knocked, or bumped into a wall). Encoders are a more advanced version of this.

Touch - Force Dependent Resistors: As their name suggests, Force Dependent Resistors change their resistance based on the amount of force or pressure they experience. This is useful for telling when an object or barrier has been encountered - but even a simple switch could do that. The benefit of a force dependent resistor is that it gives some indication of how hard the object is being touched. That's important for grip applications, where too little force means the object will slip from the grasp, and too much will damage the object.

Temperature - Thermistors: A thermistor changes its resistance according to the temperature applied to it, providing a way of measuring temperature.

Light - Light Dependent Resistors: A light dependent resistor will change its resistance according to how much light reaches it. In this way, a robot can know whether it is in darkness or light.

Distances - Ultrasonic/Infrared Sensors: Ultrasonic or infrared distance sensors return a voltage based on how long an ultrasonic or infrared signal takes to bounce off an object in front of it. In this way, the robot can be given a sense of how much space is around it, albeit not what is filling that space - enough, for example, to stop it from bumping into objects around it.

Hearing - Microphones: Microphones are a bit more complex, but they produce a voltage based on soundwaves that arrive. This is the basis of telephones and recording mics, and can be used for simple applications (move towards or away from a noise, for example) or more complex applications (speech recognition is the latest big thing for Google, Apple and Amazon).

Vision - Cameras: Computer vision is a big area, and one that is currently developing at a rapid pace. Object recognition is tricky, but can be done - face recognition has become extremely well developed.  In this way, a robot can recognise whether it is pointing towards a face, for example, or can be trained to keep a particular object in the centre of its vision.

There are a wealth of others (magnetometers to detect magnetic fields; GPS location tracking; EMG to trigger prosthetics from muscle signals) but these are the most common. Putting these together, a robot can gather quite complex information about its environment, and the position of its constituent parts within it. The challenge, of course, is in making sense of this information. All the sensor provides is a voltage that tells you something about a property near that sensor.
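To give a flavour of that "making sense" step, here's a sketch (invented for illustration, not taken from any particular robot) of turning two raw accelerometer readings into something meaningful - a tilt angle:

```python
import math

def tilt_from_accelerometer(ax, az):
    """Estimate a tilt angle (in degrees) from two accelerometer axes.

    When the robot is stationary, the only acceleration is gravity, so the
    ratio of the axis readings reveals its orientation. Illustrative sketch:
    a real IMU needs filtering, and linear accelerations confuse the estimate.
    """
    return math.degrees(math.atan2(ax, az))

# Flat and level: gravity entirely on the z axis -> 0 degrees of tilt.
# Tipped on its side: gravity entirely on the x axis -> 90 degrees.
```

The raw voltages say nothing about "tilt" by themselves; the meaning only appears once the designer decides how to combine them.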

You can get some basic stimulus-response type behaviour with simple circuitry - a potential divider that turns on a light when it gets dark outside, for example. The real challenge is in how to integrate all this information, and respond to it in the form of an algorithm.
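That dark-detector example can be sketched in a few lines. The resistor values and threshold here are invented for illustration rather than taken from any datasheet, but the principle is exactly the potential divider mentioned above:

```python
def divider_voltage(v_in, r_fixed, r_sensor):
    """Voltage measured across the fixed resistor in a potential divider.

    As the sensor's resistance changes, so does this voltage - which is all
    that 'reading a sensor' means electrically.
    """
    return v_in * r_fixed / (r_fixed + r_sensor)

def light_should_turn_on(v_measured, threshold=1.0):
    """Stimulus-response: an LDR's resistance rises in the dark, pulling the
    measured voltage down, so a low voltage means 'dark - turn the light on'."""
    return v_measured < threshold

# With invented values: a 5 V supply, a 10 kOhm fixed resistor, and an LDR
# swinging from about 1 kOhm (bright) to 100 kOhm (dark).
dark_reading = divider_voltage(5.0, 10_000, 100_000)   # roughly 0.45 V
bright_reading = divider_voltage(5.0, 10_000, 1_000)   # roughly 4.5 V
```

The circuit version does the same thing without a processor at all; the algorithmic version earns its keep once several such readings have to be weighed against each other.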

Algorithms: Robots are Stupid

Although we hear a lot about artificial intelligence and the singularity, robots are actually stupid and literal-minded: they do exactly what you tell them, and nothing more. They don't improvise, they don't reinterpret, they don't imagine: they mechanically follow a flowchart of decisions that basically take the form "If the sensor says this, then the actuator should do that". I'm simplifying a lot there, but the basic principle stands. They can go through those flowcharts really quickly, if designed properly, and perform calculations in a fraction of the time it would take a human. They might even be able to tune the thresholds in the flowchart to "learn" which actuator responses best fit a given situation: but that "learning" process has to be built into the flowchart. By a human. The robot, itself, doesn't adapt.
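In software terms, that flowchart is nothing more exotic than a chain of if-statements. A made-up sketch (the sensor names and thresholds are invented purely for illustration):

```python
def choose_action(distance_cm, grip_force):
    """A robot 'decision': a flowchart written entirely in advance.

    Every branch below, including the default, was chosen by a human before
    the robot ever ran - the robot just works out which one applies.
    """
    if grip_force > 0.8:
        return "release grip"   # squeezing too hard - risk of damaging the object
    if distance_cm < 10:
        return "stop"           # obstacle directly ahead
    if distance_cm < 30:
        return "slow down"      # obstacle approaching
    return "drive forward"      # nothing demands attention
```

The robot evaluates this very fast and very reliably - but it never adds a branch of its own.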

Now, AI research is advancing all the time, and one day we may have genuinely strong Artificial General Intelligence that can adapt itself to anything, or at least a huge range of situations. Right now, even the best AI we have is specialised, and has to be designed. Machine Learning means that the designer may not know exactly what thresholds or weights in a neural network are being used to, say, recognise a given object in a photograph. But they had to design the process by which that neural network was tuned. As we increasingly assemble systems from black-box libraries rather than writing everything from scratch, perhaps one day robots will be able to do some extremely impressive self-assembly of code. For now, robots learn and can change their behaviour - but only in the specific ways that they have been told to.

So, an algorithm is basically a set of pre-made decisions: the robot doesn't decide, the designer has already made the decisions, and then told the robot what to do in each situation. Robots only do what you tell them to. Granted, there can be a language barrier: sometimes you aren't telling the robot what you thought you were, and that's when bugs arise and you get unexpected behaviour. But that's not the robot experimenting or learning: it's the robot following literally what you told it to do. This also means that as a designer or programmer of robots, you need to have foreseen every eventuality. You need to have identified all the decisions that the robot will need to make, and what to do in every eventuality - including what to do when it can't work out what to do.

Of course, robots can have different degrees of autonomy. For example, a quadcopter is not autonomous. It makes lots of local decisions - how to adjust its motors to keep itself level, so that the user can focus on where they want it to go, rather than how to keep it in the air - but left to its own devices, it does nothing. By contrast, a self-driving car is required to have a much greater level of autonomy, and therefore has to cover a much broader range of eventualities.

Thus, there is always a slightly Wizard-of-Oz type situation: a human behind the robot. In this sense, robots are like puppets - there is always a puppeteer. It's just that the puppeteer has decided all the responses in advance, rather than making those decisions in real-time. What's left to the robot is to read its sensors and determine which of its pre-selected responses it's been asked to give.

There is a side issue here. I mentioned that robots can do calculations much faster than humans - but a given robot still has a finite capacity, represented by its processor speed and memory. It can only run through the flowchart at a given speed. For a simple flowchart, that doesn't matter too much. As the flowchart gets more complex, and more sensors and actuators need to be managed, the rate at which the robot can work through it slows down. Just to complicate matters further, sensors and actuators don't necessarily respond at the same speed as the robot can process its flowchart. Even a robot with the fastest processor and masses of memory will be limited by the inertia of its actuators, or the speed at which its sensors can sample reliably.

One response to this is more local control: devolving parts of the flowchart to the robot's sensors and actuators themselves. A good example of this is the servomotor, where you have a sensor attached to the motor so that it knows its position and speed, and will try to maintain the position or speed specified by a central controller. This is handy because it frees the designer from having to implement steps in their flowchart to provide this control, which has the benefit of freeing up capacity for other decisions, as well as meaning that if something happens to perturb the actuator, it responds immediately, rather than waiting for the robot to work through to the relevant part of its flowchart.
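That local loop can be caricatured in a few lines. This is a deliberately oversimplified proportional controller (real servos typically use fuller PID control), with made-up numbers:

```python
def servo_step(position, target, gain=0.5):
    """One update of a (much simplified) servo's local control loop: move a
    fraction of the remaining error each step. The principle - local sensing,
    local correction, no help from the central flowchart - is the point here;
    the gain and target values are invented for illustration.
    """
    error = target - position       # the attached sensor supplies `position`
    return position + gain * error

# Commanded to 90 degrees, the local loop closes in on the target by itself,
# leaving the central controller free to worry about other things:
position = 0.0
for _ in range(20):
    position = servo_step(position, target=90.0)
```

If something knocks the output shaft, `position` changes at the next reading and the loop corrects it immediately, without the central flowchart being involved.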

Humans, Puppets and Robots: What's the Difference?

Let's return to the motivating question, then. How is a robot similar to or different from a human or puppet?

There are some obvious similarities to humans, even if the robot is not itself humanoid. It has bones (in the form of its rigid members). It has actuators which are effectively equivalent to muscles. It has sensors which respond to stimuli (analogous to the variety of receptors in the human body). It has a computer (brain) which runs an algorithm to decide how to respond to given stimuli. Finally, it sends signals between sensors, actuators and computer as electrical impulses. All of these are similar to a human. The difference, I guess, is that a robot is an artefact, lacks the self-repairing capacities of a human body, and its brain lacks the flexibility of human thought, since a human has had to pre-make all of the robot's decisions, whereas a human, in person, can make decisions in real time.

There are some obvious similarities between a robot and a puppet as well. Both are artefacts, and in both cases the decisions about how to act are taken by a human (in advance in the case of the robot, in real-time in the case of the puppet). Both lack the self-organising/self-repairing/automatically learning nature of a human.

To summarise:

                           Human           Puppet            Robot
Decisions are made by…     Self            External human    External human
Decisions are made in…     Real time*      Real time         Advance
Learning occurs…           Automatically   Not at all        As dictated by programmer

* At least, conscious decisions are. There are lots of decisions that are, as I understand it, "pre-programmed", so we might argue that on this front these decisions are made in advance, and query whether the "self" is really making them.

Earlier, I said that robots can perform calculations much faster than humans, but actually, now that I think about it, I don't know if that's true. Maybe their flowcharts are just a lot simpler, meaning they can work through the steps faster? Simpler, but less flexible. Perhaps Samit will enlighten us.

Also, is a puppet a robot with a human brain? We can't really talk about a puppet without an operator, so is a puppet an extension of the operator? A sort of prosthetic?

I don't know - but I'm looking forward to exploring these issues on the night!

Thursday, 29 June 2017

2017.5: Mid-Year Review

It's halfway through the year, and this seems as good a time as any to take stock of where I've got up to as regards the goals I've set myself for 2017.

Let's take them in turn...

On the Blog 

* At least 24 posts - the same as last year: This is technically post 13, although I did have a slightly naughty "placeholder" post in April. Even so, that'd still make me on course for 24 posts this year if I keep up my current rate.

* At least 2 posts per month:
 Well, I failed that in the first month, though I have at least managed to average 2 posts a month, and 2 is my median and mode for posts per month!

* At least 4 non-review posts:
I've made three, so far: with reflections on Deleuze and Guattari, Braidotti's The Posthuman (at least, the first half!), and some design challenges in tracking people.

* Deliver the Tracking People and Augmenting the Body projects: Basically, done!
* Submit at least five grant applications as either PI or Co-I: I'm on three at the moment, with at least two more in the pipeline, so I should be good to go here.
* Submit at least two papers to high quality journals. Ouch. Haven't done this one yet. Need to actually start writing things up over the summer!
* Get the new iteration of FATKAT into experimental use: Done - or at least doing. My PhD student Latif is preparing to get some experiments underway, and there are plans to look at using it to instrument experimental tools.
* Get PSAT (the postural sway assessment tool) finished and field tested: Done! Two PSAT units have been in the field, and used to conduct over 280 trials now, with more planned. Hurrah!
* Adapt our grip model to address feedback and corrections: This is for the summer, I guess!

* Get an "Engineering Imagination" discussion group up and running for postgraduate students in iDRO. 
* Make some inventions.
* Formulate a reading list for the Engineering Imagination.
Yeah, none of these have happened. I've got a big list of potential books to put together on my Engineering Imagination list, so perhaps this will be a blog post for the second half of the year.

So, What Next?
Well, over the summer, aside from all the usual admin and prep for next year, I have a few concrete things to do, particularly on the CPD front!

1) Get started with AnyBody: I've been talking with Claire Brockett for ages about linking kinetic measures to postural data from the biomechanics simulator AnyBody, so I've finally taken the plunge and got it installed on my PC in anticipation of potential undergraduate projects. I'll be working through the tutorials this summer.

2) Learn I2C programming: I need this for the MagOne force sensor projects I'm supervising this summer (since the MagOne uses I2C to communicate). I understand the theory behind I2C communication, but I've never actually used it in anger. Time to change that!
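I won't pretend to have working code yet, but the shape of it is roughly this: read a block of registers from the device, then reassemble the bytes into a signed value. Everything specific below (register address, byte order, calibration factor) is invented for illustration, not taken from the MagOne's actual protocol:

```python
# Hypothetical sketch of reading a force sample over I2C.
# The register address and calibration factor are made up.

FORCE_REG = 0x02          # made-up register address
COUNTS_PER_NEWTON = 100   # made-up calibration factor


def decode_counts(msb, lsb):
    """Combine two bytes (big-endian) into a signed 16-bit integer."""
    value = (msb << 8) | lsb
    return value - 0x10000 if value >= 0x8000 else value


def read_force(bus, address):
    """Read one force sample from a device at the given I2C address.

    `bus` is anything with smbus2's read_i2c_block_data signature;
    on a Raspberry Pi you would pass smbus2.SMBus(1).
    """
    msb, lsb = bus.read_i2c_block_data(address, FORCE_REG, 2)
    return decode_counts(msb, lsb) / COUNTS_PER_NEWTON
```

The nice thing about separating the decoding from the bus access is that the fiddly bit (two's-complement reassembly) can be tested on the desk, before any hardware is wired up.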

3) Programme a Neural Network: Again, I "know" what a neural network is, and I know (in theory) how they work. What I've never done is actually worked through the process of implementing one, so time permitting, I'll be trying to get a simple one up and running on some example projects.
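For the record, the sort of thing I have in mind is a from-scratch two-layer network trained by backpropagation, along these lines: a numpy sketch learning XOR, where the layer sizes, learning rate and epoch count are just arbitrary choices for the exercise:

```python
# Minimal two-layer neural network learning XOR with plain numpy.
# Sizes and hyperparameters are illustrative, not tuned.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.standard_normal((2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.standard_normal((4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 1.0
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # backward pass: chain rule, worked by hand
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)
```

Working through the backward pass by hand is, I suspect, where most of the learning happens, which is exactly why I want to do it rather than reach straight for a library.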

4) Get PSAT up and running: Two PSATs are in the field, which is great, but there are three more awaiting finalising and PAT testing and calibration. That'll certainly keep me going.

Let's see how it goes, eh?

Saturday, 24 June 2017

4 Weeks Commencing the 23rd of May

This is always a busy time of year. Dissertations have to be read, vivas conducted, coursework marked, and there can be no (or very little) kicking the can: marks must be ready for the external examiner sign-off at the final exam board. Almost every other deadline has some flexibility, but this one is absolute. So one way or another, the end of May and most of June is pedal to the metal. Such are the rhythms of the academic year.

In amongst this, as you will know if you follow me on Twitter, we have had the grand finale of the Augmenting the Body Sadler Seminar Series, with Margrit Shildrick returning for the second time in just under twelve months, this time talking about the Human Biome; and the third and final seminar for the Tracking People network, at least prior to the conference on the 9th of November which represents its culmination. Not to mention the Cognitive Archaeology meet-up I mentioned in my last post, and a Robotics at Leeds Away Day.

Plus, PSAT has now survived PAT testing, and two have been out into schools this last week. We can officially claim to have a fleet! And two undergraduate students have started work with me for the summer investigating applications for Pete Culmer et al's MagOne sensor, in FATKAT and Seating. I'm really excited about the potential there.

So what I'm saying is that it's been even busier than usual this year, albeit also more exciting (in a good way). And for the second year running, someone decided to throw a major vote into the mix. Never a dull moment, eh?

Saturday, 10 June 2017

Methodological Challenges in (the Design of Devices for) Tracking People

I'll be giving a talk at a workshop for the AHRC Tracking People Network next week, so as ever, I thought I'd sketch it out up here first and get my thoughts down. Whereas the first two workshops focused on scoping the landscape and legal issues respectively, this one concentrates on technological and methodological challenges. So, we'll be hearing about the technologies used and how errors can arise, and some of the methodological challenges with doing research in this area.

I'm taking a slightly different angle: the methodological challenges in designing tracking devices. It's the key difference between a technology that works in the lab, and a product that is successful in the field. I apologise if this comes off as a bit of a brain dump: my aim is to get the thoughts down here, and then trim them back for the presentation.

Let's start with the complexity of tracking systems; the problems this presents to designers and engineers; and then three design tools that might help: user-centered design; sociotechnical systems analysis; and critical design. It's worth saying that I don't think any of the issues discussed here are unique to tracking devices: many will apply to almost any product. But by their nature, tracking devices are complex systems with multiple, and sometimes unwilling, stakeholders.

Tracking Systems as... well, Systems

Perhaps the best way to explain this is with a systems view of the problem. Any tracking technology is a system - GPS, for example. You have an electronic system that sits and listens out for the time signals from the GPS satellites, and by comparing the differences works out how far it is from each satellite, then uses trilateration to identify a location. So the little GPS unit that sits in your phone or atop your Arduino (to show my colours!) is already just a subsystem in a larger system. If those satellites go down, you've got a problem.
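That "comparing distances" step can be sketched numerically. In 2D, with three known beacon positions and three measured ranges, subtracting one distance equation from the others cancels the quadratic terms and leaves a linear system you can solve directly. (Real GPS works in 3D, with receiver clock bias as a fourth unknown, so treat this as a toy version.)

```python
import numpy as np

def trilaterate_2d(beacons, distances):
    """Locate a point from three known beacon positions and ranges.

    Subtracting the first range equation from the other two removes
    the x^2 and y^2 terms, leaving a 2x2 linear system.
    """
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = distances
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

# A point at (3, 4), seen from beacons at three corners:
pos = trilaterate_2d([(0, 0), (10, 0), (0, 10)],
                     [5.0, 65 ** 0.5, 45 ** 0.5])
```

In practice you have more satellites than unknowns and noisy ranges, so the real computation is a least-squares fit rather than an exact solve, but the geometry is the same.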

Moreover, the GPS unit just returns co-ordinates (more or less). It has no idea what they mean and doesn't transmit them anywhere except to its output pins. So you need a microcontroller to interface with it, perform operations on this data and decide what to do with it. This will, of course, depend on the application. Maybe you want co-ordinates broadcast continuously. Maybe you want to offload them once a week via USB. Maybe all you want is an alarm broadcast if the device's co-ordinates are outside a given range. And if you're broadcasting, then there needs to be something to broadcast *to*: some system to receive, and store the data. And unless you happen to be broadcasting to a system that is also in your direct possession, then you're relying on communications infrastructure to transfer that data for you.
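Of those options, the "alarm if outside a given range" logic is the simplest to sketch: a geofence check against a bounding box. The names and co-ordinates here are invented for illustration; a deployed system would likely use polygon fences and geodesic distances rather than a simple rectangle:

```python
def outside_geofence(lat, lon, fence):
    """True if (lat, lon) falls outside a rectangular fence.

    `fence` is (min_lat, max_lat, min_lon, max_lon). A real system
    would probably add hysteresis so that GPS jitter at the boundary
    doesn't trigger a stream of spurious alarms.
    """
    min_lat, max_lat, min_lon, max_lon = fence
    return not (min_lat <= lat <= max_lat and min_lon <= lon <= max_lon)

WARD = (53.80, 53.81, -1.56, -1.55)  # made-up bounding box

def handle_fix(lat, lon, alarms):
    """Process one GPS fix; record an alarm if it leaves the fence."""
    if outside_geofence(lat, lon, WARD):
        alarms.append((lat, lon))  # stand-in for broadcasting an alert
```

Even this toy version makes the point: the interesting decisions (what counts as "outside", what happens next, who gets told) live in the surrounding system, not in the GPS chip.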

Yet there's more: this device will need power. A battery, probably - for tracking applications, I doubt you'd want to plug into the mains -  and batteries can run down, and need to be charged up. So the unit isn't going to be entirely self-contained and independent of its user. That's not necessarily bad, but it does mean that you've got to worry about battery life, whether the user can be relied on to do the charging, and what happens if they don't. 

Which brings us to another issue: you need to have some sort of system outside the device itself to do something in response to the data. I mean, unless you're a homebrew hacker who is just playing around to learn how to use GPS, then you're tracking for a reason, and you generally want behaviours to change in relation to tracking - either so that people will contain themselves within an area; or that support can go out if they stop moving around or end up somewhere unexpected; or that they will exercise more because they know they haven't walked far enough, and so on and so forth. So, the success of a device doesn't just depend on the tracking technology, but on the user's behaviour, and the wider systems it fits into.

This applies to almost any product, of course, but it poses a particular problem in this case because for applications in criminal justice or health-related tracking (particularly of people with dementia), the user may not be a willing part of the system. Here the Deleuzian concept of assemblages and the language of "territorialisation" is particularly apt. Being tracked means being colonised by a system, whose broader elements you don't control, whether you like it or not.

It gets worse: who's the user? The person or organisation doing the tracking? The person being tracked? And if you're not paying out of your own pocket, then what about the funder? What about family, friends, and relations who might immediately be affected by the presence of tracking - for better or for worse? And then you run into the classic problem of user-centered design: designing a bespoke system for one set of users is challenging, but you can potentially sit down with them and thrash out an agreeable solution. But bespoke design is expensive, and there is no guarantee that it will be a great solution for anyone else. In most cases, you want economies of scale, and that means trying to grasp preferences across populations and demographics and even borders.

The issue here is that tracking technologies inevitably entail a complex network of systems interacting, and they all raise the potential for things to go wrong. Designing a tracking device has a lot of the characteristics of a "Wicked Problem": lots of complicated interacting parts and parties whose interests don't necessarily align. Which is an uncontroversial conclusion: I mean, we wouldn't be running this network if these issues didn't crop up. But it does give us a particular handle on the challenges faced by engineers and designers. So let's dig into those a little.

Bounded Rationality: An Awful Lot to Think About

There are a few useful perspectives we can use to think through this issue.

First up, we have the Dual Nature of Technical Artefacts: that artefacts have an objective, measurable physical nature and a subjective intentional nature. It is the fit between this physical nature and the design's environment that determines how well it meets the intentional nature - and therefore whether it is a "successful" design. Of course, with the intentional nature being subjective, the same object may be a great design for one person, and a terrible design for another, even when used in the same environment. And this is one of the reasons why we have so many different makes of car, or mobile phone, or computer: different people have different needs and different priorities. It also means that a design that looks great to you as a designer or engineer may be awful for your intended users. And of course, when you have multiple users, each with their own priorities and points of view, you may find that a design that is great for one of them is awful for another.

Inevitably, you have to make trade-offs, and not just between stakeholders, either. For any given person, the ideal system will probably be lighter, stronger, more comfortable, more beautiful, more functional and cheaper than can be achieved in real life. Sometimes you're lucky, and you can get a Pareto improvement: make every important characteristic better for everyone, and only give up on some of the less important characteristics. But that's the exception, not the rule. Usually, you have to decide which takes priority. Will you sacrifice functions for low price? Will you pay more to keep some functions in? And what happens when the priorities of different users conflict?

This brings us on to the next problem: specialisation. The days of the artisan in product development - where one person worked with the end user and crafted a product from start to finish - are long gone. This may be the norm in the Maker community, but most product development involves discipline specialists who each work on their own aspects of the design. This is something highlighted beautifully by Louis Bucciarelli in Designing Engineers, when he points out that in the design of solar panels, the electrical engineers view the product as a series of flows and components, with no mass or physical existence, while the structural engineers view it purely as blocks of material with mass, needing to be held in a given position against the forces due to gravity, the wind, movement to track the sun, and so forth - with no consideration of the flows between them or the electrical considerations. This isn't a bad thing - it's an inevitable part of developing complex products and allows people to play to their strengths. Yet it also means that the physical and the intentional nature are ever more fragmented. You can see this in the V-model of systems engineering: start with the overall needs, and translate these into requirements; divvy these requirements up amongst the relevant systems; and once you've designed the subsystems, start to combine and test them.

This means that each subsystem is designed with only a subset of the overall intentional nature in mind. That's not necessarily a problem, but it does make it difficult to trace through the potential consequences of decisions.

The decisions made early on in design impact everything downstream in the product's life - manufacturing, assembly and distribution costs and processes; environmental impact; ease of disposal; ease of use for different demographics; ease of maintenance and repair; robustness and resilience to changes in other systems - and these impacts are often uncertain; and changes get harder and harder to make (or at least, more and more expensive) the further you get into the process:

Ethically, this manifests itself in the Collingridge Dilemma: that the consequences of a new technology are difficult to foresee until it has become widely-used; by which stage it is very difficult (not to mention costly) to change because it has become entrenched. This is true with tracking technologies: you won't know how they will be used or misused until they're in widespread use; by which time it is very difficult to put the genie back in the bottle. Moreover, there are interactions here: the very complexity of the systems that contribute to the "success" of the tracking device mean that they may change long after the initial design is complete, and it's very difficult to predict how they will move on.

Finally, there is the problem of Bounded Rationality: individuals only have limited mental processing power and can only attend properly to so much information at any given time. The idea of performing optimal trade-offs in your head between hundreds of competing requirements is naive at best. And asking designers and engineers to just think about more things makes this worse, particularly when you're dealing with a complicated network of interacting systems being designed by different people, with different experiences and priorities, trying to balance the conflicting needs of multiple stakeholders.

So, there are a lot of challenges here. How do designers address them? Well, there are a few tools in the designer's arsenal.

User-Centered Design

User-Centered Design (UCD) is one approach to this. UCD is more of a philosophy and a set of tools than a single approach, but it emphasises understanding the user and placing them (rather than, say, the technology) at the heart of the design process. It *isn't* asking users what they want: that might be part of it, though users often don't know what they want, or can only give responses by reference to existing products. In most cases, users don't have the technical skills to develop the product themselves (especially given all of the issues raised above).

Rather, UCD is about developing an understanding of your range of users - their habits, tastes, aspirations, environment - and how whatever is being designed will fit into it. At one end, this can be forming fictional personas and use cases representing typical scenarios based on interviews, surveys and direct observation. This gives the designers something logical to think through: "how would this user respond if the design does this?" At the other end of the scale, it can be participatory design: actually involving users in design decisions or ideation. Somewhere in the middle is consulting users for analysis purposes - getting feedback on ideas. It's generally represented as an iterative loop, as specified in ISO 9241-210:

Ideally, users will be directly involved in every stage: observed and interviewed to get requirements, involved in discussions to represent trade-offs. This creates its own challenges: recruiting and getting time with users can be time-consuming and expensive, especially if the design keeps changing. Plus, as we noted above, "users" are a diverse bunch. By "users" we really mean stakeholders, and the opinions of different individuals may conflict even when they represent the same "class" of stakeholder (in this case: tracker, tracked, outside user of data, funder, and so on).

This dovetails very easily with the V-model of Systems Engineering we saw above (which, after all, also involves identifying needs, specifying requirements, generating designs and testing them as they are integrated), though as you can imagine, with an iterative loop for every subsystem in the architecture and for the system as a whole, this can get very cumbersome. Of course, with a good grasp of the users' needs, you don't need their input to evaluate every requirement. Provided you've broken down the requirements among the architecture correctly, you're sorted.

It's even more challenging when working with vulnerable populations such as children or dementia patients. It's one thing to work with users on the design of a new mainstream health tracking app, where the target users are mobile, able to come to you, and generally have no communication difficulties. It's quite another when they aren't.

Our own experiences engaging children in the Together Through Play and MyPAM projects highlighted this issue. Children's designs were strongly anchored in existing devices, and feedback was generally very positive - they didn't like to be too negative, and in general they expect the adult designer to know the answer. People like Janet Read have spent a lot of time dealing with this sort of issue, and developing methods for engaging children, which are well worth looking into.

Personas are a way of addressing this, though since the people they represent aren't present to make their points or argue their case, it's easy to "fudge" your assumptions to get your favourite idea through. Proxy users are another approach - asking parents or carers to get involved, though even they may not be able to give a direct answer.

Of course, what we're trying to do is iterate early, when change is cheap, rather than waiting until we've got 10,000 units in assembly to suddenly have to make changes, so some user involvement is better than no user involvement. You also need to recognise that users may not know what problems will arise: something that looks great and feels comfortable in a 2 hour focus group might be excruciating after being worn for twelve hours, for example.

Sociotechnical Systems Analysis
Given the challenges of involving users, you want to make sure that you're getting valuable information from them. That means ironing out the problems you could work out in other ways (through anthropometrics, for example), but it also means trying to make sure that you discuss all the important angles with them. One approach I value for this is Clegg and Challenger's sociotechnical framework:

This was intended for analysing organisations, but applies pretty well to any sociotechnical system. It identifies six pillars, each of which can affect the way a system behaves - as can the interactions between them. For example, in the context of tracking, say, a patient with dementia:

In this case, we can see that the system has some technical goals: identifying the patient's location at any given time; and alerting the caregiver if they move away from the area they should be in (a hospital ward, for example). Of course, the Goals will differ for different stakeholders. For the patient's family, the goal may be to ensure the patient's safety, or to reassure themselves of it; the organisation housing the patient (be it a hospital or a care home) may wish to minimise the cost of care or the risk of embarrassing headlines; the police may wish to reduce the resources spent searching for missing patients; the patient's goal may be to walk as often and as freely as they like.

Of course, this still depends on having access to stakeholders: you can use these six "pillars" and still come up with findings that are based on erroneous assumptions, but it provides a useful structure for checking that you're capturing all the main issues when setting your requirements. After all, setting the requirements is in many ways the most important part of product development - develop to the wrong set of requirements, and you'll only have a product that "works" by fluke. And remember that what "works" is defined by that subjective, intentional nature of the product, and may be different for different people.

Critical Design
I'll finish up with a tangent, partly because it's a thought that keeps coming back to me, and partly because it relates to the workshop that I'll be running on Thursday. One of the problems in thinking through user needs and the possible impacts of technologies is that we get bogged down in the practicalities of what can be achieved, or how easy something will be to design, and so on and so forth. An interesting approach to design has been developed by Dunne and Raby in the form of "Critical Design" - designs intended to evoke debate and discussion, rather than for sale. Similar approaches are Design Fictions or Speculative Design. These provide an interesting way of exploring issues. Hence, one of the ways to understand requirements is to generate an "idealised" object - and then try to understand what makes it "ideal". Equally, you can use these approaches to look at what might happen - something that often occurs with science fiction. We thought we'd give this a go, by asking attendees to come up with their ideal tracking device - not to specifically design it, but to at least conceptualise what would make such a device ideal for them. Hopefully, it'll encourage some interesting discussion. I'm keen to see how it goes.

In Summary
So, what does all this mean? Well, to recap: like many complex products, tracking devices present challenges beyond the purely technological. They demand managing the diverse (and sometimes conflicting) requirements of multiple stakeholders, making trade-offs between them difficult, particularly given the need to manage those across multiple subsystems and potentially multiple teams or suppliers, each with their own view of the product and what it needs to achieve. This is made all the harder when the users themselves are difficult to access or engage fully in the process: sometimes the very features that make them hard to engage are the reasons that we wish to track these people (for example, those with dementia). And we'll find out on Thursday whether we can use some "Design Fictions" to explore people's concerns and interests in tracking.

Wednesday, 31 May 2017

What do I do, exactly?

I attended a Cognitive Archaeology meeting at Leeds Beckett last week. No, really! This was at the invitation of Andrew Wilson of @PsychScientists fame. It was a great day, thanks, and I learned a lot, but it did get me thinking: why would an engineer want to get involved in archaeology? And not the "Let's build Stone Henge" or "Let's build a siege engine" kind of archaeology, where interesting mechanisms might need designing and making. No, this is all about spheroidal rocks, and at the moment, mainly about throwing them. Hence the "cognitive" - as in Perception-Action-Cognition. Actually, there is a lot of interesting engineering in this, and having learned more about things like knapping, I'm excited to see where we can take it. There's a fascinating blog post there: but not this one.

No, this blog post is titled after a question asked of me about five years ago by one of my PhD students, jointly supervised with our ethics centre, IDEA-CETL. We were discussing engineering decision-making and ethnographic studies of engineering, as well as my work on iPAM and MyPAM (to which I often made reference when discussing engineering decisions, since it was the most direct experience I had of engineering a system), Together Through Play and Postural Assessment. I was working with clinicians and physiotherapists on rehabilitation robots; with sociologists on matters of inclusive play and social participation; with psychologists and movement scientists on posture and later prehension; all while supervising a number of PhD students doing decision support in supply chains. Which raised the question: what did I do in all this? It's a good question, which is why it stuck with me, and I realised at that point that I really needed to have a sharp answer. I've since added Professors of English (Stuart Murray), Law (Anthea Hucklesby) and Transport Studies (Bryan Matthews) - among others! - to my list of collaborators through the Augmenting the Body, Tracking People and WHISPER projects over the last two years, and now here I am getting involved with archaeologists. When it came to writing up a blog post about this, that same question presented itself again: "What do you do, exactly?"

Allow me to explain. 

At heart, I'm interested in design. How the world around us takes shape, and how we can do that more effectively. I'm most interested in how we can make better design decisions, but that's a very, very nebulous area - and if I'm honest, it's the goal of almost all engineering research from Finite Element Analysis and Computational Fluid Dynamics, to Sociotechnical Systems Analysis and Robotics. I mean, there's no point in any of this research if it isn't going to make someone's design decisions better, either by giving them better options, or helping them to identify good options more efficiently. I'll spare you a lengthy discussion on decision support and decision analysis (you can find a good account of it in my thesis), but suffice to say that design tools remain one of my key interests. I just think that generic decision-making tools aren't terribly useful - or at least, the ones that we've got are as useful as they're going to be.

No, if you want to develop tools to help engineers and designers make better decisions, you need to dig into some specific areas, and the area I decided to focus on is the link between task demands and person capabilities, and how we can improve the fit between them. That's where my interest in inclusive design and rehabilitation comes from, and it drives a lot of my bread and butter research - which is largely on finding ways of measuring motor skills and ways of improving them (hence my being embedded in PACLab here at Leeds). Which is all well and good, but if you're going to design assistive devices, or rehabilitation technology, or otherwise assess matters related to disability and inclusivity (rather than just study the underlying science), you're immediately getting into user-centred design, and questions of trade-offs and ethics. Hence my links with the Centre for Disability Studies, with Tracking People and with Augmenting the Body. These things don't exist in a vacuum, and assuming you're interested in having stuff that actually works and is actually useful to people, you need to take these perspectives into account. It's not easy, and I don't necessarily get it right, but still - you try.

So, how does Cognitive Archaeology fit into this? Well, archaeologist Ian Stanistreet of Liverpool University hit the nail on the head when he noted that (and I'm paraphrasing): you have the objects, and you want to know what they're good for, but you don't have access to the users. And it struck me as a very good analogy for design. It's a little bit different, of course: designers generally can get access to their users, but in my experience it's rarely as much as they'd like, and if you're working with people who have specific impairments, then testing time is extremely precious, and you don't want to spend it finding problems that you could have found another way. One of the goals of design tools is to find problems faster, and fix them. So, a lot of the work I want to do in relation to task demands and capabilities falls into exactly this category. I have a design: now I want to know if my intended users can make use of it before I go anywhere near evaluating it with them. Unless they happen to be an exact match for me, then I can't do this just by trying it out myself. The problem is, at some level, the same - Andrew's solution of affordance mapping, which he is applying to the Cognitive Archaeology work, could help to address that issue, and it's something I've been champing at the bit to apply to design. The Cognitive Archaeology work provides a great avenue for exploring that, and developing and getting my head around the methods involved: I'm really looking forward to getting stuck into it.