Friday, 5 January 2018

2018: Year in Preview

Happy New Year! As is customary, I will start the year not with New Year's resolutions, but with a few goals that will guide me through the year. As always, a simple bulleted list, in no particular order:

On the Blog
* At least 24 posts - the same as last year.
* At least 2 posts per month - to avoid having gluts followed by long silences.
* At least 1 non-review post per month: this is challenging, but I found that the "tick-tock" model of one monthly review and one "other" post worked out pretty well. The other posts are likely to continue being a bit random, and probably will still crop up at the end of the month when time is running out, but it at least encourages me to get my thoughts down on various topics.

Research
* Deliver the SUITCEYES and APEX projects.
* Submit at least five grant applications as either PI or Co-I: this is a bit ambitious, since I've got several live projects to deliver on this year, but it doesn't hurt to try...
* Submit at least two *more* papers to high quality journals (resubmitting the ones already under review doesn't count).
* Get the MagONE force sensor incorporated into FATKAT.
* Get BIGKAT (the new generation of PSAT that incorporates prehensile as well as postural measures) up and running.
* Continue to develop the grip model to address feedback and corrections: this, having no direct funding attached to it, remains the poor cousin to other work.

Other
* Make some inventions: And get back into Leeds Hackspace while I'm at it. I haven't been for about eighteen months. 
* Formulate a reading list for the Engineering Imagination.
These are two that got dropped last year: let's see if I do any better this time...

This is going to be a demanding year. It certainly has the potential to be an exciting one...
Here goes!

Thursday, 28 December 2017

Year in Review: 2017

This will be my last post for 2017, and what better way to end the year than with my customary review of the year? Right back at the start of the year I set some goals, and I reviewed them halfway through. Let's see how I've done now that the working year is at its end...

On the Blog
* At least 24 posts - the same as last year. Done! This is Post 25, so even if we discount the cheeky "placeholder" post in April, I've hit this target.
* At least 2 posts per month - to avoid having gluts followed by long silences. I'll tick this one off as done, despite a few wobbles. I technically missed this in January (by a day!), only made it in April by cheating, and there are a few places where I ended up with a review and non-review post in quick succession (or had gaps of three weeks or more between posts), but I think I met the spirit of it. The "tick-tock" approach of doing a review and non-review post each month works well, since it's stopped me from just padding with short review posts, and encourages me to have regular deadlines for getting content out. Even if it does mean that it tends to be two posts at the end of the month!
* At least 4 non-review posts (since I managed 3 last year), to avoid the blog being nothing but a diary of how busy I am: Done! I managed three in the first half of the year, and five in the second, for a total of eight. 

Research
* Deliver the Tracking People and Augmenting the Body projects: Done!
* Submit at least five grant applications as either PI or Co-I: Done! I managed six in the end, and three of them got funded to boot, with one still waiting to be heard from. This will probably go down as the best success rate of my life, so I'll savour this moment before my success rate regresses to the mean!
* Submit at least two papers to high quality journals: Done, though both are still under review.
* Get the new iteration of FATKAT into experimental use: Done! PhD student Latif Azyze has been hard at work on this, and I've had a couple of undergraduate project students developing a new three-axis version.
* Get PSAT (the postural sway assessment tool) finished and field tested. Done!
* Adapt our grip model to address feedback and corrections: Not done, though progress has been made.

Other
* Get an "Engineering Imagination" discussion group up and running for postgraduate students in iDRO. Not done. I got a bit overtaken by events, and with the Robotics@Leeds and N8 Robotics and Autonomous Systems Student Networks getting up and running, I've opted to focus my energies there instead.

* Make some inventions. Not done. This has definitely been a weak spot this year. While grants have been successful, I haven't been managing to put a lot of time into my own making activities.

* Formulate a reading list for the Engineering Imagination. Not done. That big list of books is still sat there, and hasn't made its way into an actual list.

Unexpected Highlights
Of course, opportunities arise during the course of the year that I wasn't aware of back in January. These mostly fall under the "grant applications" headings, so they have sort of already been covered, but a few were particularly notable opportunities that came along:

SUITCEYES: I mentioned this in my last post, and you'll be hearing a lot about it over the next few years, I expect. It's been a huge part of this year, and the opportunity to join the consortium came right out of the blue last February. It wasn't even on the radar at the start of the year, so getting stuck into it will be really exciting. Of course, now there's just the thorny issue of delivering.

APEX - Engineering the Imagination: This was a call for proposals that came out of nowhere, and Stuart and I decided to have a go, if only to force ourselves to think through our ideas on augmenting the body in more detail. It's paid off handsomely: the process of developing an "empathy hand" is throwing up some really interesting questions, and to cap it all, we've been selected to exhibit at the British Academy's Summer Showcase next June. This one definitely needs a blog post in the new year!

Robots, Puppets and Humans: What's the Difference? This was another one that I've blogged about elsewhere, which really gave me the opportunity to think about my work from another angle. It was good fun, and contrasting the approaches of Samit, Anzir and myself was really interesting.

So, not a bad year, all-in-all. Join me next year when I'll be looking ahead to what I hope to achieve in 2018...

Friday, 22 December 2017

SUITCEYES

No, it's not a typo: it's an acronym: Smart, User-friendly, Interactive, Cognition-Enhancer, Yielding Extended Sensosphere. Let me explain.

One of the big features of this year (and by extension, the next three years as well!) was a successful bid to the EU's Horizon 2020 Funding Scheme for a 3-year project to explore the use of smart textiles to provide haptic communication and navigation aids for deafblind people. The project is worth €2.4 Million, and has a consortium of five academic partners and two industrial partners from seven countries. You can find more detail on the partners than I could possibly fit into this post at the project's official website.

This builds upon - among other things - the work I undertook with Brian Henson and Bryan Matthews on haptic navigation aids for the visually impaired in the Department for Transport-funded WHISPER project (remember WHISPER?). Whereas there we looked at barriers to navigation, and the potential for haptic aids to navigation, this project takes the whole approach to a much deeper level, bringing in expertise on Smart Textiles (the University of Borås, Sweden, who are co-ordinating the project), machine learning and object recognition (Centre for Research & Technology Hellas, Greece), psychophysics (Vrije Universiteit Amsterdam, Netherlands) and gamification (Offenburg University of Applied Sciences, Germany), as well as a major producer of adaptive technology (Harpo, Poland) and a producer of tactile books (the delightfully named Les Doigts Qui Rêvent). The project focuses on the broader issue of communication, beyond just the requirement for navigation, and emphasises the needs of deafblind individuals, rather than just the visually impaired.

The work at Leeds focuses on two of the work packages making up the project: engaging with deafblind people to explore their needs and ensure that the project remains focused on addressing these; and exploring the use of haptic signals to aid navigation. The latter goes beyond the simple use of distance sensors and vibration motors that we explored in WHISPER: we'll be looking at bringing in inertial and GPS measures to enrich the navigation information, and by bringing in work from the other partners, we'll be exploring more sophisticated haptic signals (and a more sophisticated interface than wristbands with vibration motors attached!), and the use of object recognition from a camera feed. 
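To make that concrete, here's a minimal sketch of the kind of mapping the simple WHISPER-style setup relies on - the closer the obstacle, the stronger the buzz. This is purely illustrative: the ranges, the linear ramp and the function names are my own invention for the sketch, not anything from the project itself.

```python
# Illustrative only: map a distance reading to a vibration intensity,
# the basic "closer obstacle = stronger buzz" logic of a haptic wristband.
# Sensor ranges and thresholds are invented for the sketch.

def vibration_intensity(distance_m: float,
                        max_range_m: float = 2.0,
                        min_range_m: float = 0.2) -> float:
    """Return a vibration duty cycle in [0, 1] for a given obstacle distance."""
    if distance_m >= max_range_m:
        return 0.0                      # nothing close enough to signal
    if distance_m <= min_range_m:
        return 1.0                      # obstacle effectively at contact range
    # Linear ramp between the two limits; a real system might prefer
    # a non-linear mapping, or discrete pulses rather than a duty cycle.
    return (max_range_m - distance_m) / (max_range_m - min_range_m)


if __name__ == "__main__":
    for d in (2.5, 1.5, 0.5, 0.1):
        print(f"{d:.1f} m -> duty cycle {vibration_intensity(d):.2f}")
```

The interesting work in SUITCEYES lies in going beyond this sort of one-sensor, one-motor mapping, by fusing in inertial, GPS and camera-derived information and delivering it through a far richer textile interface.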

We'll be kicking the project off with a symposium at Borås in January: From Touch to Cognition.

I can't wait to get started!

Thursday, 30 November 2017

Month in Review: November 2017

As you will notice, I came within a hair's (well, a day's!) breadth of missing my two-post target! Posting two days in a row isn't brilliant spacing, but never mind! Such is the nature of November: this November in particular.

This is one of those periods of "peak teach" I've discussed before. Projects are in full swing; early assignments are in for marking, later assignments are being finalised and set; exams are due (I wrote mine over the summer, but a late change in regs meant writing an extra question at short notice!); lectures and tutorials must be delivered; applicant days have begun. On top of that, we had an accreditation meeting this week. And no less than three seminars at the University have cropped up.

All good, but it all means a fine balancing act. A new grant due to start in January means finalising budgets, advertising jobs, and booking a trip to Sweden. More on that in due course.

Amongst all this, there have been three significant research events. The grand finale of the Tracking People AHRC network ran at the start of November - I wrote about my thoughts on this in the previous post, so I won't go further here.

On the Apex grant that Stuart Murray and I are working on, we've just had a copy of the Ada hand printed, which is really exciting. And my PhD student, Latif Azyze, has just produced some exciting results on handwriting forces.

Busy, busy, busy - but all good stuff. Roll on December, eh?

Wednesday, 29 November 2017

Tracking People: The Grand Finale

As you’ll know if you’ve been following me on Twitter, this month saw the culmination of the AHRC Network on Tracking People: a day-long workshop down in London, bringing together a range of industrialists, civil servants, academics and representatives of a range of third sector organisations. There were more than eighty attendees: familiar faces from the three workshops at Leeds, and a range of new faces (the rationale for holding it in London was that it made attendance for policymakers far easier – a move which I think paid off). Policy is a fascinating issue to me: it’s so far outside my usual experiences, so it was interesting to speak with representatives of the Centre for Applied Science and Technology, who advise the government on matters technical.
    The day was generally given over to discussion: opening with summaries from Anthea Hucklesby, Kevin Macnish and myself reflecting on insights from the previous three workshops and what these might mean for the future of “tracking”. You can read summaries from these workshops on the tracking website, and I’ll give a “big picture” overview of my thoughts from the network as a whole in a later post. For now, let me focus on the content of the presentations.
     The first session saw Jeff Hodgkinson (lately of South Wales Police) and Sara Murray of Buddi give practitioners’ perspectives. Jeff spoke of his experiences of electronic monitoring in criminal justice, and the need for more joined-up and “creative” thinking to get the most out of it: a challenge when you are pressed for time and resources. Sara Murray took a very different tack, focussing not on Buddi’s experiences, but on its upcoming applications in self-monitoring and “nudging” to address cravings and so improve health. Given Buddi’s experiences in location tracking for both criminal justice and healthcare applications, this was a little disappointing, but it was also a helpful reminder that “tracking” covers more than just electronic monitoring of location, and it opened a line of discussion where many similar issues arise.
    After lunch, there was a session on humane and proportionate tracking of individuals. Anita Dockley of the Howard League for Penal Reform expressed their concerns that electronic monitoring of offenders expands the “carceral space” to the home, and begins to encompass familial and social relationships. By contrast, Tom Sorell of Warwick University expressed the view that of all the intrusions that the state can make into your life, electronic monitoring is relatively mild (compared, say, to tapping your phone, or posting a watch on you). Our very own Amanda Keeling (of Leeds University’s Centre for Disability Studies) discussed legal issues related to the monitoring and detention of disabled people, including some important rulings that had set out the limits of how people could be monitored or detained for “their own good” on the basis of disabilities. Richard Powley of Age UK discussed the pros and cons of tracking technology, particularly for people with dementia, adapting a quote from Bishop Heber to suggest that “every (technology) pleases: only man is vile”. In other words, technology offers much potential for benefit – but also misuse.
    The final session saw Magali Ponsal of the Ministry of Justice and Mark Griffiths of G4S offer perspectives from the civil service and industry on the future of electronic monitoring, noting the incremental nature of changes to the underlying technology since its introduction (a contrast to the radical changes in consumer technology and self-tracking), and the significance of the human and policy systems that went around the technology. This led to some discussions about the business models around electronic monitoring, and the extent to which it is outsourced to the private sector in different countries.

    So, after four workshops, what are the lessons to be learned? I can only speak from my own point of view as an engineer – criminologists, social scientists, and ethicists (among others!) may well have different views – but here are my thoughts:
Technology for locating or recognising someone or something is pretty well-developed, and our capacity for doing it reliably and more efficiently will continue to improve. The drivers for location, computer vision and biometric monitoring in robotics applications will ensure that these developments continue irrespective of their value for the tracking of humans.  On the other hand, I don’t see that any game-changing technologies are likely to fundamentally alter the landscape.

Transdermal monitoring – of blood sugar, alcohol, and so forth – is likely to get more advanced, and become increasingly integrated into worn or carried devices. We’ll see more of this electronic monitoring within healthcare. Whether it’ll ever really take off beyond the quantified self crowd, I don’t know, but I daresay if people can be monitored more easily at home, that will be taken up as part of the wider telehealth drive.

The big issues are less to do with developing the technology itself (which, as noted above, is already pretty good and has plenty of drivers to advance) than with the human and sociotechnical issues of tracking. This is true for all forms of tracking, from electronic monitoring of offenders as an alternative to prison, to self-tracking of biometrics. Matters of who owns data (legally); who possesses data (practically); how it is prevented from being accessed by unauthorised parties; and who knows what data is held about them. The risks of “black box” machine learning algorithms facing inadequate critique, yet being used to make crucial judgements and decisions. Matters of consent and coercion, and the wider problem of foreseeing the societal risks of fast-moving technologies before they become widespread.

I don’t have any solutions to these: they are simply issues that we will need to grapple with if we hope to control technology (to paraphrase Geoffrey Vickers) rather than have it control us. Identifying these issues is, of course, a necessary first step towards grappling with them: and that is the purpose of the network. The next step is to begin grappling with them…

Tuesday, 31 October 2017

Month in Review: October 2017

If the end of September sees the pre-teaching rush easing off, the end of October sees teaching very much in full swing. There's been a lot going on, a lot of new things starting, but not a lot finished. Unless you count lectures and tutorials and project meetings delivered. I've taken to adding them to my task list, so that every lecture can be ticked off. Otherwise, you do a day of solid teaching and think: "I've got nothing done!". I'd hope the students - whose fees are paying for those tasks - don't see it that way, and it makes no sense to manage your time as if teaching were a drain that got in the way of the real work.

Anyway, I've got three undergraduate team projects for MEng/MDes students, and seven dissertation projects under way. I had industrial visitors in to sponsor an undergraduate project for our level 2 students.

Otherwise, three proposals have been on the go - one died in the internal sift - and I've been hard at work trying to map out the PACLab technology roadmap, so we can keep on top of the tech we need, particularly as VR is becoming a bigger part of our work.

Speaking of which, some of our work with Dubit (particularly driven by Faisal Mushtaq) on the health and safety of VR has now been published, and has attracted some attention from the media.

Also excitingly, the Apex project I'm doing with Stuart Murray on "Engineering the Imagination" has now been officially announced. We've known about it for a month or two, but had to keep it under wraps! Now we can announce to the world that we'll be doing some critical design of our own, exploring how engineers respond to cultural theory by designing a prosthetic hand to communicate empathy. Really exciting stuff.

November promises to be just as exciting: the grand finale of the Tracking People AHRC network is taking place on the 9th. I'm really looking forward to it: stay tuned!

Monday, 30 October 2017

Present in Absence, pt 2

I've been trying to think a bit more about this whole issue of proximity, and projecting the self that I addressed last month. I feel like there's something in it, but I haven't really figured it out yet, and I may well be retreading old ground. Still, I wanted to dedicate some time to thinking through Virtual Reality, Augmented Reality, Companion Robots and the self, and what better place to think aloud than on the blog? It is, after all, what it's here for. So, buckle in - once more I will be thinking off the top of my head and straying dangerously outside my discipline. Feel free to correct me.

I mooted that there might be five levels of proximity at which something can be found (slightly reworded, you'll note!):


1) Integral: Inside; built into you. Not removable without surgery. By definition, always in contact with you unless drastic action is taken. 

2) Contact: Anything attached to the exterior of the body. Attached, but detachable. Always in contact with you, but removable without permanent damage. 

3) Reachable: Not attached and normally not in contact with you, but easy to bring into contact.  Within arm's reach.

4) Proximity: Not in arm's reach, but within sight or sound. The same room - no interposing barriers.


5) Remote: Not in sight, or sound, barriers prevent interaction except through some third party or device.
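
For what it's worth, the taxonomy is simple enough to pin down as a little enumeration. This is just a hypothetical Python rendering to fix the ideas, not part of any actual tool:

```python
from enum import IntEnum

class Proximity(IntEnum):
    """Five levels of proximity, ordered from 'inside the body' outwards."""
    INTEGRAL = 1   # built in; not removable without surgery
    CONTACT = 2    # attached to the body, but detachable without damage
    REACHABLE = 3  # not attached, but within arm's reach
    PROXIMATE = 4  # within sight or sound; same room, no barriers
    REMOTE = 5     # out of sight and sound; needs a device or third party

# Example: a pacemaker is integral, a kettle merely reachable,
# and "closer" levels compare as smaller numbers.
assert Proximity.INTEGRAL < Proximity.REACHABLE
```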

Now, one thing that occurs to me is that this refers to spatial proximity. But is that the most relevant form of proximity when "projecting the self"? It sort of makes sense: projecting myself from "here" to "there" inevitably involves a spatial dimension. But it feels like there's a disconnect between integral/contact and reachable/proximity/remote. As if they deal with different types of proximity.

Spatially, at least, they make sense as part of the same scale. From "inside" to "next to" to "can be made next to through greater and greater effort" - reaching out an arm; walking across the room; walking out of the room and if necessary off over the horizon. "Projection" in a sense brings things closer along this scale. Reaching out my arm brings a reachable object into contact; walking across the room brings a proximate object into reach and then into contact. Would ingesting something then be the next level of projection? It doesn't seem quite right.

And yet, remote interfaces sort of follow this pattern - I can project my sense of sight not just through space but potentially (backwards) through time, using a camera and a connected display. A microphone and speaker (and amplifier, I'll grant!) will do the same for sound. Thanks to the internet, light and sound from halfway round the world can be projected to my eyes and ears. Arguably, I am the recipient of the projection (it is projected to me): would we regard this as projecting the senses?

This brings us to the question of immersion. If I watch TV, am I projecting myself or is it (to quote St Vincent) "just like a window"? It brings something distant closer, but it doesn't project "me" there. VR is different, because it cuts out some of the immediate senses. If I put on a VR headset, I now not only receive visual information from a remote (possibly virtual) environment, but I also lose visual information about my immediate environment. Headphones do the same for sound: haptic feedback the same (in theory!) for touch. That, I guess, is the projecting element: the extent to which I sacrifice information about my immediate environment for information about a remote environment. Projecting myself "there" rather than projecting "there" to "here".

So far, this has all discussed projecting senses from one place to another, but what about projecting actions? The partner to telepresence is teleoperation - the ability to perform an action in a remote space. Of course, the microphone and speaker example works in reverse - I can have a webchat with a colleague in Sweden almost as easily as speaking to them in the same room, our voices and images projected across the continent. In teleoperation, though, it feels like we tend to mean the projection of actuation: of force and movement. Of course, remote control has existed for a long time, and the idea of pressing a switch in one place and an action being performed elsewhere is hardly new.

Based on the ideas discussed by Andrew Wilson et al at the Cognitive Archaeology meet-up, it looks like humans are uniquely well-adapted to throwing things and this was obviously an important step in our development. For an animal with limited natural weapons or defences, the ability to hit something from a distance at which it can't easily hit back is a huge boon, and perhaps the earliest example of telepresence... 

Crossbows and bows caused such concern in the middle ages that they were banned for use against Christians: "29. We prohibit under anathema that murderous art of crossbowmen and archers, which is hateful to God, to be employed against Christians and Catholics from now on." The ability to kill from a distance without putting yourself at close risk was very much the drone strike of its day.  

Anyway, this is a little off the point: I'm just trying to demonstrate that remote action is nothing new. So how does this fit with our model? If we don't want to look at how close something is to us, but at how its proximity is shifted through technology, does the same model still work?

I mean, could we do away with "reachable"? Is reaching just a way of moving something from "proximity" to "contact"? Of course, then "proximity" would run out at the limit of reach. Whether something is across the room or across the world makes no difference once it's out of reach. This then raises the question: is walking a form of projection? For me, walking to something just out of reach is trivial, whereas walking to another room is more effort. There again, that effort will increase the further away something is. I can go and put my hand on an object a mile away; it just takes a lot more time and energy.

This makes me think a few things:

1) That the categories (integral to remote) classify where a device must be for one to operate it, but make less sense in mapping out "projection" of skills. For example, some devices must be implanted to work correctly (a pacemaker is no use in your pocket); some must be in contact (a heart rate monitor; a VR headset); most must be reachable to be useful - contact is required to operate them, but it need not be constant (a kettle, for example - I need to get within arm's reach to turn it on); then we get to voice activation (Alexa, Siri, etc). This is only about projection insofar as it determines how near I need to be to an object to form an assemblage with it.

2) That these will vary from person to person and object to object: how far I "reach" will depend on my arm length; how far my vision extends depends not just on my eyesight but upon what I'm trying to see. I can read a traffic light from 100 metres, but not a novel.

3) I wonder if time might be a better measure of proximity? That is, if we measure the time it would take to form an assemblage with a given object? Hence, an integral or contact object is instantaneously an assemblage: I am automatically assembled with them. For objects not in contact with me, proximity might be measured by the time it takes me to interact with them. For a kettle in arm's reach or Alexa, the time is a second or two: as good as instantaneous. For objects further away, we can either have a binary distinction ("not instantaneous"), or measure the time it takes (five seconds to cross the room; a minute to go next door; an hour to walk two miles away).

4) Perhaps it is the instantaneous/not-instantaneous distinction that is most useful, since this delineates close from remote, and this gap is what projection bridges. Whether my senses or actions, projection means transferring instantaneous senses and actions to somewhere they would not normally be possible, rather than having to take an additional action to get to the relevant place.
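
A throwaway sketch of that time-based classification, with entirely made-up thresholds and examples, might look something like this:

```python
def proximity_by_time(seconds_to_assemble: float,
                      instant_threshold_s: float = 2.0) -> str:
    """Classify proximity by how long it takes to form an assemblage
    with an object. The threshold is invented purely for illustration."""
    if seconds_to_assemble <= instant_threshold_s:
        return "instantaneous"     # integral, contact, or as good as (a kettle in reach, Alexa)
    return "not instantaneous"     # everything that projection would have to bridge


examples = {
    "pacemaker": 0.0,            # already part of the assemblage
    "kettle in reach": 1.0,      # a second to reach out and flick the switch
    "kettle next door": 60.0,
    "object a mile away": 1200.0,
}
for name, t in examples.items():
    print(f"{name}: {proximity_by_time(t)}")
```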

Maybe the mapping isn't that useful, then? Or maybe it's useful in mapping the links required to form an assemblage with a device? Perhaps the question is - why am I interested in this? Why do I want to map this out in the first place? I feel like there's something important in here, but I'm not sure.

Let's try a few examples. Ordinarily, if two people wish to engage in conversation, they would need to be in proximity. Hence, in the image below Persons A and B can hold a conversation with each other, but not with Person C (who is remote from them - in another room, building, or country).

Add a telephone (mobile or otherwise) into the mix, however, and as long as it is within reach of both parties (and both have a network, charge, etc.), speech across any distance becomes possible:



Now, there are some complications here. In this example Person B can converse with  Person A (who is in proximity) and Person C (provided both B and C are in contact with their phones). Persons A and C, however, can't converse with each other. Clearly, this need not be the case: what if the telephone is set to speaker? Now, Person C is effectively "proximate" to both Person A and Person B - and no one needs to be physically holding the phone. Of course, Person A and B can see each other, and if anyone gets too far from the phone, their voice will no longer carry, etc. 

A similar issue might be imagined in terms of shaking hands. A and B can shake hands, but only if they are within Reachable distance of each other. Being in earshot of each other isn't sufficient. A telephone won't help A, B or C shake hands with each other, no matter how good the speaker and microphones are.

There's more to this, as well. We need to distinguish between different senses and actions. For example, the telephone carries audio information, but not visual information. Skype or WhatsApp (among other videoconferencing apps) can carry both. They can't carry touch.

Is it worth thinking of proximity in conductive terms? Conducting information, maybe? Are these assemblages really about "conducting" information? Conducting isn't necessarily the right term - clearly, air conducts sound, wires conduct electricity, but can we speak of electromagnetic radiation being "conducted"?  It hardly matters. For my purpose, these assemblages are analogous to forming a circuit: when the circuit is broken, sensing or actuation breaks down. 
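
Pushing the circuit analogy a touch further: the speakerphone scenario above can be sketched as a graph whose links are tagged with the modalities they carry, with a "circuit" existing for a given sense only if an unbroken chain of links carries it. The little Python below is just my own toy rendering of that thought, using the A/B/C setup from the figures:

```python
from collections import defaultdict
from itertools import combinations

# Each link joins two nodes and carries a set of modalities.
# Persons A and B share a room (audio, visual and touch all "conducted");
# Person C is connected only via a phone set to speaker (audio only).
links = [
    ("A", "B", {"audio", "visual", "touch"}),   # same room
    ("A", "phone", {"audio"}),                  # speakerphone within earshot
    ("B", "phone", {"audio"}),
    ("phone", "C", {"audio"}),                  # the call itself
]

def connected(a: str, b: str, modality: str) -> bool:
    """True if an unbroken chain of links carrying `modality` joins a and b."""
    graph = defaultdict(set)
    for x, y, mods in links:
        if modality in mods:
            graph[x].add(y)
            graph[y].add(x)
    seen, stack = {a}, [a]
    while stack:
        node = stack.pop()
        if node == b:
            return True
        for nxt in graph[node] - seen:
            seen.add(nxt)
            stack.append(nxt)
    return False

# A-B, A-C and B-C can all converse (the audio circuit is complete),
# but only A and B can shake hands (no touch circuit reaches C).
for p, q in combinations("ABC", 2):
    print(f"{p}-{q}: converse={connected(p, q, 'audio')}, "
          f"shake hands={connected(p, q, 'touch')}")
```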

Now maybe that's what I mean by projecting the self - forming sensing or actuating circuits/assemblages with places that otherwise wouldn't be possible: be that because they are remote, or virtual, or just out of reach of our everyday capabilities. That's an interesting thought, and worth pondering more: and for that reason, it seems as good a place as any to end this post.