Tuesday, 31 October 2017

Month in Review: October 2017

If the end of September sees the pre-teaching rush easing off, the end of October sees teaching very much in full swing. There's been a lot going on, a lot of new things starting, but not a lot finished. Unless you count lectures and tutorials and project meetings delivered. I've taken to adding them to my task list, so that every lecture can be ticked off. Otherwise, you do a day of solid teaching and think: "I've got nothing done!". I'd hope the students - whose fees are paying for those tasks - don't see it that way, and it makes no sense to manage your time as if teaching were a drain that got in the way of the real work.

Anyway, I've got three undergraduate team projects for MEng/MDes students, and seven dissertation projects under way. I had industrial visitors in to sponsor an undergraduate project for our level 2 students.

Otherwise, three proposals have been on the go - one died in the internal sift - and I've been hard at work trying to map out the PACLab technology roadmap, so we can keep on top of the tech we need, particularly as VR is becoming a bigger part of our work.

Speaking of which, some of our work with Dubit (driven particularly by Faisal Mushtaq) on the health and safety of VR has now been published, and attracted some notice from the media.

Also excitingly, the Apex project I'm doing with Stuart Murray on "Engineering the Imagination" has now been officially announced. We've known about it for a month or two, but had to keep it under wraps! Now we can announce to the world that we'll be doing some critical design of our own, exploring how engineers respond to cultural theory by designing a prosthetic hand to communicate empathy. Really exciting stuff.

November promises to be just as exciting: the grand finale of the Tracking People AHRC network is taking place on the 9th. I'm really looking forward to it: stay tuned!

Monday, 30 October 2017

Present in Absence, pt 2

I've been trying to think a bit more about this whole issue of proximity, and projecting the self that I addressed last month. I feel like there's something in it, but I haven't really figured it out yet, and I may well be retreading old ground. Still, I wanted to dedicate some time to thinking through Virtual Reality, Augmented Reality, Companion Robots and the self, and what better place to think aloud than on the blog? It is, after all, what it's here for. So, buckle in - once more I will be thinking off the top of my head and straying dangerously outside my discipline. Feel free to correct me.

I mooted that there might be five levels of proximity at which something can be found (slightly reworded, you'll note!):


1) Integral: Inside; built into you. Not removable without surgery. By definition, always in contact with you unless drastic action is taken.

2) Contact: Anything attached to the exterior of the body. Attached, but detachable. Always in contact with you, but removable without permanent damage.

3) Reachable: Not attached and normally not in contact with you, but easy to bring into contact. Within arm's reach.

4) Proximity: Not in arm's reach, but within sight or sound. The same room - no interposing barriers.

5) Remote: Not in sight or sound; barriers prevent interaction except through some third party or device.
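
Since I'm an engineer, perhaps it helps to pin that taxonomy down in something more concrete than prose. Here's a minimal sketch in Python - the predicates and the 0.8 m "arm's reach" are my own illustrative assumptions, not measured values:

    from enum import IntEnum

    class Proximity(IntEnum):
        """The five levels, ordered nearest to furthest."""
        INTEGRAL = 1   # inside the body; surgery needed to remove
        CONTACT = 2    # attached to the body, but detachable
        REACHABLE = 3  # unattached, but within arm's reach
        PROXIMATE = 4  # beyond reach, but within sight or sound
        REMOTE = 5     # beyond sight and sound; needs an intermediary

    def classify(implanted: bool, attached: bool,
                 distance_m: float, perceivable: bool) -> Proximity:
        """A toy classifier: rough spatial facts in, level out."""
        if implanted:
            return Proximity.INTEGRAL
        if attached:
            return Proximity.CONTACT
        if distance_m <= 0.8:    # a nominal arm's reach
            return Proximity.REACHABLE
        if perceivable:          # can be seen or heard from here
            return Proximity.PROXIMATE
        return Proximity.REMOTE

    print(classify(False, False, 0.5, True))    # Proximity.REACHABLE
    print(classify(False, False, 5000, False))  # Proximity.REMOTE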

Now, one thing that occurs to me is that this refers to spatial proximity. But is that the most relevant form of proximity when "projecting the self"? It sort of makes sense: projecting myself from "here" to "there" inevitably involves a spatial dimension. But it feels like there's a disconnect between integral/contact and reachable/proximity/remote. As if they deal with different types of proximity.

Spatially, at least, they make sense as part of the same scale. From "inside" to "next to" to "can be made next to through greater and greater effort" - reaching out an arm; walking across the room; walking out of the room and if necessary off over the horizon. "Projection" in a sense brings things closer along this scale. Reaching out my arm brings a reachable object into contact; walking across the room brings a proximate object into reach and then into contact. Would ingesting something then be the next level of projection? It doesn't seem quite right.

And yet, remote interfaces sort of follow this pattern - I can project my sense of sight through not just space but potentially (backwards) through time through the use of a camera and a connected display. A microphone and speaker (and amplifier, I'll grant!) will do the same for sound. Thanks to the internet, light and sound from halfway round the world can be projected to my eyes and ears. Arguably, I am the recipient  of the projection (it is projected to me): would we regard this as projecting the senses?

This brings us to the question of immersion. If I watch TV, am I projecting myself or is it (to quote St Vincent) "just like a window"? It brings something distant closer, but it doesn't project "me" there. VR is different, because it cuts out some of the immediate senses. If I put on a VR headset, I not only receive visual information from a remote (possibly virtual) environment, but I also lose visual information about my immediate environment. Headphones do the same for sound; haptic feedback the same (in theory!) for touch. That, I guess, is the projecting element: the extent to which I sacrifice information about my immediate environment for information about a remote environment. Projecting myself "there" rather than projecting "there" to "here".

So far, this has all discussed projecting senses from one place to another, but what about projecting actions? The partner to telepresence is teleoperation - the ability to perform an action in a remote space. Of course, the microphone and speaker example works in reverse - I can have a webchat with a colleague in Sweden almost as easily as speaking to them in the same room, our voices and images projected across the continent. In teleoperation, though, it feels like we tend to mean the projection of actuation: of force and movement. Of course, remote control has existed for a long time, and the idea of pressing a switch in one place and an action being performed elsewhere is hardly new.

Based on the ideas discussed by Andrew Wilson et al. at the Cognitive Archaeology meet-up, it looks like humans are uniquely well-adapted to throwing things, and this was obviously an important step in our development. For an animal with limited natural weapons or defences, the ability to hit something from a distance at which it can't easily hit back is a huge boon, and perhaps the earliest example of telepresence...

Crossbows and bows caused such concern in the middle ages that they were banned for use against Christians: "29. We prohibit under anathema that murderous art of crossbowmen and archers, which is hateful to God, to be employed against Christians and Catholics from now on." The ability to kill from a distance without putting yourself at close risk was very much the drone strike of its day.  

Anyway, this is a little off the point: I'm just trying to demonstrate that remote action is nothing new. So how does this fit with our model? If we don't want to look at how close something is to us, but how its proximity is shifted through technology, does the same model still work?

I mean, could we do away with "reachable"? Is reaching just a way of moving something from "proximity" to "contact"? Of course, then "proximity" would run out at the limit of reach. Whether something is across the room or across the world makes no difference once it's out of reach. This then raises the question: is walking a form of projection? For me, walking to something just out of reach is trivial, whereas walking to another room is more effort. There again, that effort will increase the further away something is. I can go and put my hand on an object a mile away; it just takes a lot more time and energy.

This makes me think a few things:

1) That the categories (integral to remote) classify where a device must be for one to operate it, but make less sense in mapping out "projection" of skills. For example, some devices must be implanted to work correctly (a pacemaker is no use in your pocket); some must be in contact (a heart rate monitor; a VR headset); most must be reachable to be useful - contact is required to operate them, but it need not be constant (a kettle, for example - I need to get within arm's reach to turn it on); then we get to voice activation (Alexa, Siri, etc). This is only about projection insofar as it determines how near I need to be to an object to form an assemblage with it.

2) That these will vary from person to person and object to object: how far I "reach" will depend on my arm length; how far my vision extends depends not just on my eyesight but upon what I'm trying to see. I can read a traffic light from 100 metres, but not a novel.

3) I wonder if time might be a better measure of proximity? That is, if we measure the time it would take to form an assemblage with a given object (there's a sketch of this after the list). Hence, an integral or contact object is instantaneously an assemblage: I am automatically assembled with it. For objects not in contact with me, proximity might be measured by the time it takes me to interact with them. For a kettle in arm's reach or Alexa, the time is a second or two: as good as instantaneous. For objects further away, we can either have a binary distinction ("not instantaneous"), or measure the time it takes (five seconds to cross the room; a minute to go next door; an hour to walk two miles away).

4) Perhaps it is the instantaneous/not-instantaneous distinction that is most useful, since this delineates close from remote, and this gap is what projection bridges. Whether for senses or actions, projection means transferring instantaneous senses and actions to somewhere they would not normally be possible, rather than having to take an additional action to get to the relevant place.
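
To make that concrete, here's a toy sketch of time-to-assemblage as the measure. The two-second cut-off and the example timings are pure invention on my part:

    INSTANTANEOUS_S = 2.0  # my arbitrary cut-off: "a second or two"

    def time_to_assemblage(seconds: float) -> str:
        """Classify proximity by how long it takes to form an
        assemblage with an object, not by where it sits in space."""
        if seconds == 0:
            return "integral/contact: already assembled"
        if seconds <= INSTANTANEOUS_S:
            return "as good as instantaneous (kettle in reach, Alexa)"
        return f"not instantaneous: {seconds:.0f}s of effort to close the gap"

    for label, t in [("worn watch", 0), ("kettle in reach", 2),
                     ("room next door", 60), ("two miles away", 3600)]:
        print(label, "->", time_to_assemblage(t))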

Maybe the mapping isn't that useful, then? Or maybe it's useful in mapping the links required to form an assemblage with a device? Perhaps the question is - why am I interested in this? Why do I want to map this out in the first place? I feel like there's something important in here, but I'm not sure.

Let's try a few examples. Ordinarily, if two people wish to engage in conversation, they need to be in proximity. Hence, Persons A and B, who share a room, can hold a conversation with each other, but not with Person C (who is remote from them - in another room, building, or country).

Add a telephone (mobile or otherwise) into the mix, however, and as long as it is within reach of both parties (and both have network, charge, etc.), speech across any distance becomes possible.

Now, there are some complications here. In this example Person B can converse with Person A (who is in proximity) and Person C (provided both B and C are in contact with their phones). Persons A and C, however, can't converse with each other. Clearly, this need not be the case: what if the telephone is set to speaker? Now, Person C is effectively "proximate" to both Person A and Person B - and no one needs to be physically holding the phone. Of course, Person A and B can see each other, and if anyone gets too far from the phone, their voice will no longer carry, etc.

A similar issue might be imagined in terms of shaking hands. A and B can shake hands, but only if they are within Reachable distance of each other. Being in earshot of each other isn't sufficient. A telephone won't help A, B or C shake hands with each other, no matter how good the speaker and microphones are.

There's more to this, as well. We need to differentiate different senses and actions. For example, the telephone carries audio information, but not visual information.  Skype or WhatsApp (among other videoconferencing apps) can carry both. They can't carry touch. 

Is it worth thinking of proximity in conductive terms? Conducting information, maybe? Are these assemblages really about "conducting" information? Conducting isn't necessarily the right term - clearly, air conducts sound, wires conduct electricity, but can we speak of electromagnetic radiation being "conducted"?  It hardly matters. For my purpose, these assemblages are analogous to forming a circuit: when the circuit is broken, sensing or actuation breaks down. 
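
Pushing that circuit analogy, here's a rough sketch of the telephone example as a graph of conduction links, where a circuit exists between two people if a path of conductors (air, handsets, the network) connects them. The scenario, the link names, and the choice to treat people as endpoints rather than conductors are all my own illustrative assumptions:

    from collections import defaultdict, deque

    # People are endpoints: they terminate circuits rather than
    # conducting them. Devices and shared air do the conducting.
    PEOPLE = {"A", "B", "C"}

    audio_links = [
        ("A", "B"),              # A and B share a room: air carries voice
        ("B", "phone_B"),        # the handset couples only to its holder
        ("phone_B", "phone_C"),  # the network
        ("phone_C", "C"),
    ]

    def circuit_exists(links, x, y):
        """True if a sensing/actuating circuit connects person x to y."""
        graph = defaultdict(set)
        for a, b in links:
            graph[a].add(b)
            graph[b].add(a)
        seen, queue = {x}, deque([x])
        while queue:
            node = queue.popleft()
            if node == y:
                return True
            if node in PEOPLE and node != x:
                continue  # a person is an endpoint, not a conductor
            for nxt in graph[node] - seen:
                seen.add(nxt)
                queue.append(nxt)
        return False

    print(circuit_exists(audio_links, "B", "C"))  # True: handset circuit
    print(circuit_exists(audio_links, "A", "C"))  # False: ends at B's ear
    audio_links.append(("A", "phone_B"))          # put the phone on speaker
    print(circuit_exists(audio_links, "A", "C"))  # True: C now "proximate"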

Now maybe that's what I mean by projecting the self - forming sensing or actuating circuits/assemblages with places where that otherwise wouldn't be possible: be that because they are remote, or virtual, or just out of reach of our everyday capabilities. That's an interesting thought, and worth pondering more: and for that reason, it seems as good a place as any to end this post.

Thursday, 28 September 2017

Month in Review: September 2017

September is always slightly white-knuckle, as the start of term hurtles into view. Lectures need to be ready, Web pages up-to-date, handouts printed, exams prepared - or you're in for a really hard time once teaching starts.

This year, the problem has been compounded by a big research grant getting approved in principle and needing detailed negotiation to ensure that it moves to approval in practice; a small research grant in the same sort of state; the launch of the N8 Robotics and Autonomous Systems Student Network down in Sheffield; and chairing a session for LUDI at the AAATE conference (also in Sheffield). All great things (and a nice break from teaching prep!), but all needing to be slotted into a busy time.

Still, it's done: lectures, handouts, exams, ready; modules launched; new and returning tutees welcomed; teaching underway. Which isn't to say that it's an easy ride from here, but the start of term is always a nice point - when you can draw breath, mop your brow and get down to actually teaching instead of just thinking about it. And the benefit of prepping over the summer is that research marches on, rather than coming to an abrupt halt.

Anyway, I promised highlights of the summer, so here they are:

1) Getting MagOne working. Or rather, undergraduate summer interns Jamie Mawhinney and Kieran Burley getting it working for grip and posture applications respectively. Application-specific calibration and housings need to be developed, but we've achieved proof of concept for both grip and posture, and the hardware and software for running off an Arduino Nano are in place. Low-cost three-axis force sensing, here we come!

2) The fully housed PSATs getting up and running... and getting prepared for prehension studies. Low-cost marker tracking, here we come!

I think that'll do for now. In the meantime... back to teaching!

Wednesday, 20 September 2017

Present in Absence: Projecting the Self

Last week saw me in Sheffield for the launch of the N8 Robotics and Autonomous Systems Student Network on behalf of Robotics@Leeds (it was great, thanks) and the AAATE conference on behalf of LUDI (also great, thanks!). There are probably blog posts in both, but I mixed in some of my Augmenting the Body/Self duties by catching up with Sheffield Robotics' Tony Prescott and Michael Szollosy, by picking up a MiRo for Stuart Murray to show off on behalf of the project at the Northern Network for Medical Humanities Research Congress, and using my extended commute to spend some time going through our thoughts from the summer workshop. I'll keep those under my hat, since they'll be going into a grant, but between all these things and presentations at AAATE about NAO and ZORA, and with articles appearing about VR at Leeds, I've been thinking a lot about the limits of the body and the self.

So, I thought I would work through them here, at least to get them straight. Bear in mind this is me thinking off the top of my head without a proper literature review, or any kind of philosophical expertise on self  or consciousness. It's a hot take from an engineer responding to the ideas swirling around me: feel free to correct me.

The limits of the body and Deleuzian revisitings of assemblages and rhizomes have been covered elsewhere on this blog. The skin, as I've noted before, is a pretty handy boundary for delimiting self and other, and the one that we probably instinctively default to. Questions of whether my clothes or equipment should be regarded as part of "me" might seem trivial. The answer is obviously no, since I can put these on and off in a way that I couldn't with any other part of me: even hair or nails, while easily shorn, are not rapidly replaced except by extensions or false nails. Yet, if we take ease of removal and replacement as indicative of the boundary of self, then (as Margrit Shildrick pointed out at our Augmenting the Body finale) what about the human microbiome? The countless bacteria that we lug around inside us? They aren't easily removed or replaced, though they can be - I can swallow antibiotics and probiotics, I guess. Yet, if I have an abscess, it's not easily removed, so is that part of "me"? And by extension, what about a tattoo, or a pacemaker, artificial hip, or insulin pump? I have none of these, so the question is perhaps facetious. But I do have a dental implant: an artificial tooth screwed in to replace the one knocked out on a school playground in the 80s. Is that "me"? Or does it have to be plugged in to my nervous system to be "me"? In which case, my surviving teeth would count, but not the false tooth - what about hair and nails? Do they count? They don't have nerves (do they? I'm getting outside my field here), even if they transmit mechanical signals back to the skin they're attached to. My tooth isn't like my clothing - I can't take it off and put it back in any more easily than my other teeth: it's screwed into my jaw. I can't swap it out for a "party tooth", and I don't need to take it out at night. I treat it exactly as I do my real teeth, and 99.99% of the time I'm not even conscious of it, despite it offering a block of absent sensation when I drink something hot or cold. Which feels really weird when I do stop to think about it, but after nearly three decades, that very rarely happens.

Of course, the answer is probably: "does it matter?" and/or "it depends". Never has the question of whether that tooth is or isn't "me" arisen, and whether our definition of "self" should extend to walking aids, hearing aids, or glasses will almost certainly depend on context. And for my purposes, I'm sure the extent of body and self matters in engineering for at least one reason: telepresence.

Telepresence crops up in three contexts that I can think of: teleoperation (for example, operating a robotic manipulator to clear up a nuclear reactor, or robotic surgery such as Da Vinci) - the ability to project skilled movement elsewhere; telepresence robots (to literally be present in absence - sending a teleoperated robot to attend a meeting on your behalf); and virtual reality (the sense of being somewhere remote - often somewhere not physically real).

So this got me thinking about different levels of proximity to the self that technology can exist at. Here's what I thought:

1) Integral: anything physically under the skin or attached to the skeleton; where some form of surgery is required to remove it. Pacemakers, cochlear implants, orthoses, dental implants, insulin pumps. I originally mooted "internal", but felt I needed something to differentiate this from devices - camera capsules for example - that are swallowed but only remain inside temporarily.

2) Contact: Anything attached to the exterior of the body: clothes, an Apple Watch, a Fitbit. Also prosthetics. I wonder if puppets fall under this category? Glove puppets, at least.

3) Reachable: Anything unattached that I can interact with only if I can get it into contact with me. A mobile phone, maybe - though voice control such as Siri would affect that. Remote control devices likewise - though I would argue that my TV remote requires me to physically touch it. It extends the device, not "me". 

4) Proximity: Siri and Alexa are interesting. Do they extend the device? Or me? I mean, if I speak to another person, I don't regard them as me - but I project my voice to reach them. I extend an auditory and visual (and olfactory!) presence around me. So a camera for gesture control, or a microphone for voice recognition, can be activated beyond arm's reach.

5) Remote: At this point, we're talking about things that are no longer in the immediate physical environment: out of sight and hearing. Perhaps another room, perhaps another city, perhaps halfway round the world - perhaps in a virtual environment. This is your fundamental telepresence or virtual reality.

It's not very well thought out: a first iteration rather than a well-founded theory, but I find it helpful in puzzling through a few things. One of these is as a simple way of noting the physical proximity of a piece of technology required for me to form an assemblage with it. An insulin pump or pacemaker must be integral; glasses and clothing must be in contact; a smartphone must be reachable; a screen or voice control must be proximate. I can't form an assemblage with anything remote except through an intermediary - for example, I can't talk to someone on the other side of the world, except by getting a phone within reach.

The other is that we can locate the range over which we project different senses (seeing, feeling, hearing something outside the range of our usual senses) and project action (pushing or gripping something we otherwise couldn't; projecting our voice further than usual), and also the relative location of the device in question. For example, a "feature phone" (as in a mobile phone that isn't a smartphone) requires touch - it needs to be in physical contact with me for me to operate it - but it allows me to hear and speak to someone miles away. Provided, of course, that they also have a phone of some sort.
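
Tabulating a few of those examples makes the two axes clearer: the proximity a device demands of me, versus the senses or actions it projects, and how far. The entries below are my own rough illustrations, not claims about real ranges:

    # Each device: (proximity I must have to it, what it projects, how far).
    devices = {
        "pacemaker":     ("integral",  "actuation (pacing the heart)", "internal"),
        "glasses":       ("contact",   "sharpened sight",              "local"),
        "feature phone": ("contact",   "hearing and speech",           "worldwide"),
        "TV remote":     ("reachable", "actuation (switching)",        "same room"),
        "Alexa":         ("proximate", "voice-triggered action",       "worldwide"),
    }

    for name, (needed, projects, reach) in devices.items():
        print(f"{name}: must be {needed} to me; projects {projects} ({reach})")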

To what extent am I projecting myself in using such a phone? Probably not much: I don't really regard myself as being in proximity with someone I call. What if I make a video call using Skype or Adobe Connect or FaceTime? Am I projecting myself then? What if I'm a surgeon using Da Vinci? Am I projecting myself into the patient? There's a whole bunch of questions about what constitutes "self" there, but I feel like the distinction between different types of distance is useful.

Anyway, I just thought I'd put it up there. It's a work in progress, and I'll have a stab at mapping out some examples to see if it's useful. As always, thoughts are welcome!

Tuesday, 29 August 2017

Month in Review: August 2017

It has been a busy and (fortunately!) productive summer. The corollary to doing much and getting little done (or at least, little finished!) is that at some point things begin to fall off the To Do list. I'm not at the stage of having everything done, but I'm certainly in the "closing" stage of my summer To Do list. Things that have slowly trooped on are coming to a close: MSc and Summer projects finished; dissertations and resits  marked; a review written; an exam and a conference paper almost finished; handouts printed... a big grant getting dangerously close to signed off; big chunks of analysis software completed; PSAT calibration underway. Still a lot of open tasks, but I'm getting there.

I gave my talk at the Superposition on Humans, Robots and Puppets (with Anzir Boodoo and Samit Chakrabarty): it was a really good event and really got me thinking for the Augmenting the Body project. I finished reviewing Elizabeth R. Petrick's Making Computers Accessible for Disability and Society: it's a history of the development of accessibility features in personal computers from roughly 1980 to 2000, rather than a "how to" guide. I've written a new marking scheme for a module, and been setting up an industry-linked project for our students.

In September, I'll mostly be gearing up for the start of term: we're less than four weeks away from the start of teaching now; one of those weeks is freshers' week, and another I'm mostly in Sheffield for the N8 robotics students network and AAATE 2017 (where I'll be chairing a LUDI session on inclusive play). I want to get my exam written before term starts: I've written three of the five questions. And I need to prepare dissertation topics.

Busy times! I'll try to take stock of exactly what I got done over the summer at the end of next month, as term begins...

Wednesday, 16 August 2017

Humaniteering - What, If Anything, Can Engineers Gain from Working with the Humanities and Sociology?

I'm working on a conference paper around breaking down barriers between disciplines, and (as always) I thought the blog seemed like a good place to think out loud on the subject before getting down to brass tacks in the paper itself. So bear with me: I'm literally thinking out loud here (if we understand "out loud" to mean the clack of my keyboard - my inner pedant feels compelled to point out!).

Anyway, I work with people from a variety of disciplines. I keep meaning to sit down and draw up a diagram of all the people of different disciplines that I've worked with in a research context over my twelve years in academia - either by writing a paper or a grant (successful or otherwise) or co-supervising a PhD student. Perhaps I will, but that's a little bit by-the-by. For now, let me just see if I can rattle off a quick list of my common collaborators outside engineering here at Leeds University:

English/Medical Humanities: Stuart Murray (Augmenting the Body), Amelia DeFalco (Augmenting the Body)
Ethics: Chris Megone (PhD supervision), Rob Lawlor (PhD supervision), Kevin Macnish (Tracking People)
Health Sciences: Justin Keen (Tracking People)
Law: Anthea Hucklesby (Tracking People)
Psychology: Mark Mon-Williams (too many links to count!), Richard Wilkie (ditto)
Rehabilitation Medicine: Bipin Bhakta (MyPAM, iPAM, PhD supervision), Rory O'Connor (MyPAM, PhD supervision), Nick Preston (MyPAM)
Sociology and Social Policy: Angharad Beckett (Together Through Play, LUDI)
Transport Studies: Bryan Matthews (WHISPER)

The list grows longer if you include the engineers, those who work outside Leeds Uni (Andrew Wilson and the cognitive archaeology crew being the most obvious) and the PhD students who have been involved in these various streams. The link to Psychology (motor learning, neuroscience), Health Sciences (health economics and governance), Rehabilitation Medicine (rehabilitating people) and Transport Studies (Assistive Technology for navigation) should be pretty obvious. At the end of the day, these represent the "professional customers" (as distinct from the end users - also an important group, but not one that can easily be captured as an academic discipline!) of the technology that we're building, and engaging with these disciplines is important if we want to be designing the right things, and verifying that our devices actually work (think of the V-model of systems engineering - we want to make sure we're converting needs into technical requirements properly, and properly verifying the end result). Ethics and Law might also seem obvious - we don't want unethical or illegal technology (that's a massive oversimplification, but engineering ethics and the challenge of keeping the law up-to-date with technological development is a big deal at the moment, and you can see why engineering researchers might want to get involved with ethicists to discuss ethical implications of what they do). Why, though, engage with people from English or Sociology, other than a monomaniacal desire to collect cross-disciplinary links? Where does Engineering cross over with these disciplines?

Caveat Lector
As ever, let's have the customary warning: I'm an engineer, and speak from an engineer's perspective (an academic, mechanical engineer's perspective at that), so I can only tell you about my perception of these disciplines. I may be mistaken; I'm certainly speaking from only a partial exposure to these disciplines. With that out of the way, let's move forwards.

An Engineering Imagination
Of course, the fact that I named my blog (fully four years ago!) after "The Sociological Imagination" by C. Wright Mills  perhaps suggests some rationale for this interest. In the very first post on this blog, I set out my stall by saying:
"Mills was interested in the relationship between individual and society, and noted that social science wasn't just a matter of getting better data, or analysing it more effectively. Measurements are filtered through a whole set of social norms, and individual assumptions and biases. They colour the way we look at the world, and are often deeply embedded in the methods that we used...  Certainly it applies to engineering: at a fundamental level, what engineers choose to devote their time and energy to (or sell their skills for)... It's not just about what we engineer, but about the way we engineer it: decisions made in developing products and systems have huge implications for their accessibility, use and consequences (bot/h intended and unintended)."

And I revisited it last year in my post on Who Are We Engineering For? noting the challenge of helping engineers to address four key questions:
  1. To what extent should engineers be held accountable for the "selective enabling" of the systems and technologies they devise?
  2. To what extent do engineers have a responsibility to ensure the continuation of the species by, for example, preventing asteroid strikes or ensuring that humanity is able to colonise other planets?
  3. What are the responsibilities of engineers in terms of steering human evolution, if the transhumanist view is correct?
  4. How do we prioritise which problems engineers spend their time solving? Market forces? Equity? Maximising probability of humanity surviving into the posthuman future?
And I concluded that:
"perhaps an Engineering Imagination is a useful tool - being able to look at a system being designed critically, from the outside, to view its relationship with the norms and culture and history it will be deployed in."
That, I think, is the key issue. There are technical things that can be learned from sociology - rigour in analysing qualitative data, for example - but there's something more significant. One of the problems in engineering is a focus on engineering science rather than engineering practice. Learning the technicalities of how to model the behaviour of the world without any thought to what that means in practice. The challenge is that it's easy to say that engineers should be more aware of the implications of their technology - the big question is how do we do that? How do you put a finger on the Engineering Imagination? What does it mean in practice? That, I think, is where the Sociology and Humanities come in.

The Tricky Business of Being Human
Reading up on the Posthuman (I finished Braidotti, by the by - more on that in the next month or so!) makes me a little cagey about using the term human, but it's a convenient shorthand and I don't want to get caught up in lengthy discussions around self-organising systems, the limits of the body, anthropocentrism and humanism. Anyway, the point about the Humanities and Sociology is that they deal with people and their positions within complex social relationships - that link between the personal and the broader "milieu", as the Sociological Imagination puts it. This applies in two ways in engineering: both in terms of stakeholders (users, most obviously, but they can be a multiplicity) and the engineers themselves. The stakeholders in an engineering project are neither "vitruvian" individuals independent of the world around them, nor an amorphous statistical mass of aggregate data. And the engineers in turn have tastes, backgrounds, personalities, histories, families, and find themselves enmeshed in the social processes of an organisation and project management process. They may not all be "engineers" either: a huge range of people are involved in product development, and even the boundaries of what is being developed can be porous. I don't think that's a controversial argument: I have yet to hear anyone make the argument that engineering is a pure, objective process that leads to a single unquestionably right answer. Most of the challenge in engineering is in mapping complex, messy real world problems into forms that you (as the engineer) have the capability to solve. The "fuzzy front end" and its "wicked problems" are well-recognised challenges. And the dynamic nature of engineering problems means that these don't just apply at the start of the process. You don't just characterise once and have done with it - you're perpetually having to loop back, adjust expectations, update requirements, work with what you have. It's like user-centred design - you don't just ask the user what they want and then go on and make it. You have to keep checking and course-correcting. Sometimes, people don't want what they say they want. Or a product takes so long to develop that it's solving a problem everyone used to have five years ago, but not any more.

This is like Donald Schön's Reflective Practitioner - constantly proposing moves, reflecting on the outcome and reframing the problem in light of the result. It's this process that I hope working with the Humanities and Sociology can help with in engineering terms. It's partly about having the concepts and frameworks to process this; partly about methodological tools that help incorporate that into the process. Engineers are people, with all the frailties and limits that implies - Michael Davis, in his essay "Explaining Wrongdoing", talks of microscopic vision (a concept that my one-time PhD student Helen Morley highlighted to me): that expertise encourages a narrow focus (knowing more and more about less and less...) at the expense of a broader view of consequences. This dovetails beautifully with the notion of selective enabling and design exclusion, but also with the Collingridge Dilemma: the difficulty of foreseeing the side effects of new technologies until it's too late.

Which isn't to say that we should be abandoning rational judgement and analysis - just that we need to make sure that we're framing problems correctly, and aware of the limits of what we bring to the analysis. I don't know how all this is going to help - that's one for the humaniteers (as I like to call them) and sociologists to answer.

Monday, 31 July 2017

Month in review: July 2017

I've shifted to monthly, rather than weekly, reviews now: that (hopefully!) allows me to hit my two-posts-a-month target, with one "non-review" post each month. Hence, today is the day to review July.

July is a funny month. I've mentioned before the problem of "doing much but getting nothing done", and July often exhibits this. You work furiously, but nothing of substance gets ticked off the To Do list: there's just a load of half-finished tasks. Which, while frustrating, isn't a real problem: I make a point of trying to run multiple tasks in parallel, so that teaching prep continues over the summer, freeing up time for more research in term time.

I've lately become a fan of the Pomodone app, which is a Web-based implementation of the Pomodoro technique. This means working on a task for 25 minutes at a time. The nice thing about this is that not only does it sync with Wunderlist, where I keep my To Do list, but it logs how much time you've spent on each task, so you can at least see progress. Granted, time spent isn't an exact proxy for progress on a task, but unless you want to micromanage your To Do list to the nth degree, it's at least a satisfying alternative to actually being able to tick the big tasks off.

So, what tasks have been underway this month? Well, I have been reviewing Elizabeth Petrick's Making Computers Accessible for Disability and Society (reading done; writing underway); I've prepared a presentation on Humans, Puppets and Robots for the Superposition to be delivered on the 2nd of August. I've been calibrating PSAT and looking at its development into reach-to-grasp movements; I've rejigged my reach-to-grasp model to accommodate time variations in dynamic behaviour; I'm rewriting a marking scheme for my Level 2 undergraduate module; I've been preparing my handouts for next year (I like to get them printed for the end of July, just to avoid the temptation to keep revising them as term approaches); I've been supervising two undergraduate students who are working on the MagOne force sensor for Grip purposes (first iteration due this week); preparing for the Tracking People network finale in November; developing the PACLab technology roadmap; attending graduations; attending the EPS Conference; supervising PhD students and an MSc student; prepping a paper for submission and working on a grant proposal.

Yeah, that'll keep me busy, alright! Thankfully, August will see things getting ticked off the list. Good fun!