Thursday 28 December 2017

Year in Review: 2017

This will be my last post for 2017, and what better way to end the year than with my customary review of the year? Right back at the start of the year I set some goals, and I reviewed them halfway through. Let's see how I've done now that the working year is at its end...

On the Blog
* At least 24 posts - the same as last year. Done! This is Post 25, so even if we discount the cheeky "placeholder" post in April, I've hit this target.
* At least 2 posts per month - to avoid having gluts followed by long silences. I'll tick this one off as done, despite a few wobbles. I technically missed this in January (by a day!), only made it in April by cheating, and there are a few places where I ended up with a review and non-review post in quick succession (or had gaps of three weeks or more between posts), but I think I met the spirit of it. The "tick-tock" approach of doing a review and a non-review post each month works well: it's stopped me from just padding with short review posts, and gives me regular deadlines for getting content out. Even if it does mean that it tends to be two posts at the end of the month!
* At least 4 non-review posts (since I managed 3 last year), to avoid the blog being nothing but a diary of how busy I am: Done! I managed three in the first half of the year, and five in the second, for a total of eight. 

Research
* Deliver the Tracking People and Augmenting the Body projects: Done!
* Submit at least five grant applications as either PI or Co-I: Done! I managed six in the end, and three of them got funded to boot, with one still awaiting a decision. This will probably go down as the best success rate of my life, so I'll savour the moment before it regresses to the mean!
* Submit at least two papers to high quality journals: Done, though both are still under review.
* Get the new iteration of FATKAT into experimental use: Done! PhD student Latif Azyze has been hard at work on this, and I've had a couple of undergraduate project students developing a new three-axis version.
* Get PSAT (the postural sway assessment tool) finished and field tested. Done!
* Adapt our grip model to address feedback and corrections: Not done, though progress has been made.

Other
* Get an "Engineering Imagination" discussion group up and running for postgraduate students in iDRO. Not done. I got a bit overtaken by events, and with the Robotics@Leeds and N8 Robotics and Autonomous Systems Student Networks getting up and running, I've opted to focus my energies there instead.

* Make some inventions. Not done. This has definitely been a weak spot this year. While grants have been successful, I haven't managed to put much time into my own making activities.

* Formulate a reading list for the Engineering Imagination. Not done. That big list of books is still sat there, and hasn't made its way into an actual list.

Unexpected Highlights
Of course, opportunities arise during the year that I wasn't aware of back in January. These mostly fall under the "grant applications" heading, so they have sort of already been covered, but a few particularly notable ones deserve a mention:

SUITCEYES: I mentioned this in my last post, and you'll be hearing a lot about it over the next few years, I expect. It's been a huge part of this year, and the opportunity to join the consortium came right out of the blue last February. It wasn't even on the radar at the start of the year, so getting stuck into it will be really exciting. Of course, now there's just the thorny issue of delivering.

APEX - Engineering the Imagination: This was a call for proposals that came out of nowhere, and Stuart and I decided to have a go, if only to force us to think through our ideas on augmenting the body in more detail. It's paid off handsomely: the process of developing an "empathy hand" is throwing up some really interesting questions, and to cap it all, we've been selected to exhibit down at the British Academy's Summer Showcase next June. This one definitely needs a blog post in the new year!

Robots, Puppets and Humans: What's the Difference? This was another one that I've blogged about elsewhere, which really gave me the opportunity to think about my work from another angle. It was good fun, and contrasting the approaches of Samit, Anzir and myself was really interesting.

So, not a bad year, all-in-all. Join me next year when I'll be looking ahead to what I hope to achieve in 2018...

Friday 22 December 2017

SUITCEYES

No, it's not a typo: it's an acronym - Smart, User-friendly, Interactive, Tactual, Cognition-Enhancer, Yielding Extended Sensosphere. Let me explain.

One of the big features of this year (and by extension, the next three years as well!) was a successful bid to the EU's Horizon 2020 Funding Scheme for a 3-year project to explore the use of smart textiles to provide haptic communication and navigation aids for deafblind people. The project is worth €2.4 Million, and has a consortium of five academic partners and two industrial partners from seven countries. You can find more detail on the partners than I could possibly fit into this post at the project's official website.

This builds upon - among other things - the work I undertook with Brian Henson and Bryan Matthews on haptic navigation aids for the visually impaired in the Department for Transport-funded WHISPER project (remember WHISPER?). Whereas there we looked at barriers to navigation and the potential for haptic navigation aids, this project takes the whole approach to a much deeper level, bringing in expertise on smart textiles (the University of Borås, Sweden, who are co-ordinating the project), machine learning and object recognition (Centre for Research & Technology Hellas, Greece), psychophysics (Vrije Universiteit Amsterdam, Netherlands) and gamification (Offenburg University of Applied Sciences, Germany), along with a major producer of adaptive technology (Harpo, Poland) and a producer of tactile books (the delightfully named Les Doigts Qui Rêvent, France). The project focuses on the broader issue of communication, beyond just the requirement for navigation, and emphasises the needs of deafblind individuals, rather than just the visually impaired.

The work at Leeds focuses on two of the work packages making up the project: engaging with deafblind people to explore their needs and ensure that the project remains focused on addressing these; and exploring the use of haptic signals to aid navigation. The latter goes beyond the simple use of distance sensors and vibration motors that we explored in WHISPER: we'll be looking at bringing in inertial and GPS measures to enrich the navigation information, and by bringing in work from the other partners, we'll be exploring more sophisticated haptic signals (and a more sophisticated interface than wristbands with vibration motors attached!), and the use of object recognition from a camera feed. 
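
For a sense of the baseline we're starting from, the WHISPER-style approach boils down to mapping a distance reading onto a vibration intensity. Here's a very rough Python sketch of that idea - the sensor range and drive-level scaling are my own placeholder numbers, not project parameters:

    def vibration_level(distance_m, max_range_m=2.0, drive_max=255):
        """Map a distance reading to a vibration motor drive level.

        Closer obstacles give stronger vibration; anything beyond
        max_range_m gives none. All constants are illustrative.
        """
        if distance_m is None or distance_m >= max_range_m:
            return 0
        proximity = 1.0 - (distance_m / max_range_m)  # 0 (far) to 1 (touching)
        return int(round(proximity * drive_max))

    if __name__ == "__main__":
        for d in (0.2, 0.8, 1.5, 3.0):
            print(f"{d} m -> drive level {vibration_level(d)}")

The interesting part of SUITCEYES is everything that doesn't fit in a sketch like that: fusing in the inertial and GPS data, richer haptic patterns, and object recognition from the camera feed.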

We'll be kicking the project off with a symposium at Borås in January: From Touch to Cognition.

I can't wait to get started!

Thursday 30 November 2017

Month in Review: November 2017

As you will notice, I came within a hair's breadth (well, a day!) of missing my two-post target! Posting two days in a row isn't brilliant spacing, but never mind! Such is the nature of November: this November in particular.

This is one of those periods of "peak teach" I've discussed before. Projects are in full swing; early assignments are in for marking, later assignments are being finalised and set; exams are due (I wrote mine over the summer, but a late change in regs meant writing an extra question at short notice!); lectures and tutorials must be delivered; applicant days have begun. On top of that, we had an accreditation meeting this week. And no less than three seminars at the University have cropped up.

All good, but it all means a fine balancing act. A new grant due to start in January means finalising budgets, advertising jobs, and booking a trip to Sweden. More on that in due course.

Amongst all this, there have been three significant research events. The grand finale of the Tracking People AHRC network ran at the start of November - I wrote about my thoughts on this in the previous post, so I won't go further here.

On the Apex grant that Stuart Murray and I are working on, we've just had a copy of the Ada hand printed, which is really exciting. And my PhD student, Latif Azyze, has just got some exciting results on handwriting forces.

Busy, busy, busy - but all good stuff. Roll on December, eh?

Wednesday 29 November 2017

Tracking People: The Grand Finale

As you’ll know if you’ve been following me on Twitter, this month saw the culmination of the AHRC Network on Tracking People: a day-long workshop down in London, bringing together industrialists, civil servants, academics and representatives of a range of third sector organisations. There were more than eighty attendees: familiar faces from the three workshops at Leeds, and plenty of new faces (the rationale for holding it in London was that it made attendance for policymakers far easier – a move which I think paid off). Policy is a fascinating issue to me: it’s so far outside my usual experiences, so it was interesting to speak with representatives of the Centre for Applied Science and Technology, who advise the government on matters technical.
    The day was generally given over to discussion: opening with summaries from Anthea Hucklesby, Kevin Macnish and myself reflecting on insights from the previous three workshops and what these might mean for the future of “tracking”. You can read summaries from these workshops on the tracking website, and I’ll give a “big picture” overview of my thoughts from the network as a whole in a later post. For now, let me focus on the content of the presentations.
The first session saw Jeff Hodgkinson (lately of South Wales Police) and Sara Murray of Buddi give practitioners’ perspectives. Jeff spoke of his experiences of electronic monitoring in criminal justice, and the need for more joined-up and “creative” thinking to get the most out of it: a challenge when you are pressed for time and resources. Sara Murray took a very different tack, focussing not on Buddi’s experiences, but on its upcoming applications in self-monitoring and “nudging” to address cravings and so improve health. Given Buddi’s experience in location tracking for both criminal justice and healthcare applications, this was a little disappointing, but it was also a helpful reminder that “tracking” covers more than just electronic monitoring of location, and it opened a line of discussion where many similar issues arise.
After lunch, there was a session on humane and proportionate tracking of individuals. Anita Dockley of the Howard League for Penal Reform expressed their concerns that electronic monitoring of offenders expands the “carceral space” to the home, and begins to encompass familial and social relationships. By contrast, Tom Sorell of Warwick University expressed the view that of all the intrusions that the state can make into your life, electronic monitoring is relatively mild (compared, say, to tapping your phone, or posting a watch on you). Our very own Amanda Keeling (of Leeds University’s Centre for Disability Studies) discussed legal issues related to the monitoring and detention of disabled people, including some important rulings that had set out the limits of how people could be monitored or detained for “their own good” on the basis of disabilities. Richard Powley of Age UK discussed the pros and cons of tracking technology, particularly for people with dementia, adapting a quote from Bishop Heber to suggest that “every (technology) pleases: only man is vile”. In other words, technology offers much potential for benefit – but also misuse.
    The final session saw Magali Ponsal of the Ministry of Justice and Mark Griffiths of G4S offer perspectives from the civil service and industry on the future of electronic monitoring, noting the incremental nature of changes to the underlying technology since its introduction (a contrast to the radical changes in consumer technology and self-tracking), and the significance of the human and policy systems that went around the technology. This led to some discussions about the business models around electronic monitoring, and the extent to which it is outsourced to the private sector in different countries.

    So, after four workshops, what are the lessons to be learned? I can only speak from my own point of view as an engineer – criminologists, social scientists, and ethicists (among others!) may well have different views – but here are my thoughts:
Technology for locating or recognising someone or something is pretty well-developed, and our capacity for doing it reliably and more efficiently will continue to improve. The drivers for location, computer vision and biometric monitoring in robotics applications will ensure that these developments continue irrespective of their value for the tracking of humans.  On the other hand, I don’t see that any game-changing technologies are likely to fundamentally alter the landscape.

Transdermal monitoring – of blood sugar, alcohol, and so forth – is likely to get more advanced, and become increasingly integrated into worn or carried devices. We’ll see more of this electronic monitoring within healthcare. Whether it’ll ever really take off beyond the quantified self crowd, I don’t know, but I daresay if people can be monitored more easily at home, that will be taken up as part of the wider telehealth drive.

The big issues are less to do with developing the technology itself (which, as noted above, is already pretty good and has plenty of drivers to advance) than with the human and sociotechnical issues of tracking. This is true for all forms of tracking, from electronic monitoring of offenders as an alternative to prison, to self-tracking of biometrics. There are matters of who owns data (legally); who possesses data (practically); how it is prevented from being accessed by unauthorised parties; and who knows what data is held about them. There are the risks of “black box” machine learning algorithms facing inadequate critique, yet being used to make crucial judgements and decisions. And there are matters of consent and coercion, and the wider problem of foreseeing the societal risks of fast-moving technologies before they become widespread.

I don’t have any solutions to these: only the observation that they are issues we will need to grapple with if we hope to control technology (to paraphrase Geoffrey Vickers) rather than have it control us. Identifying these issues is a necessary first step towards grappling with them: that, of course, is the purpose of the network. The next step is to begin grappling with them…

Tuesday 31 October 2017

Month in Review: October 2017

If the end of September sees the pre-teaching rush easing off, the end of October sees teaching very much in full swing. There's been a lot going on, a lot of new things starting, but not a lot finished. Unless you count lectures and tutorials and project meetings delivered. I've taken to adding them to my task list, so that every lecture can be ticked off. Otherwise, you do a day of solid teaching and think: "I've got nothing done!". I'd hope the students - whose fees are paying for those tasks - don't see it that way, and it makes no sense to manage your time as if teaching were a drain that got in the way of the real work.

Anyway, I've got three undergraduate team projects for MEng/MDes students, and seven dissertation projects under way. I had industrial visitors in to sponsor an undergraduate project for our level 2 students.

Otherwise, three proposals have been on the go - one died in the internal sift - and I've been hard at work trying to map out the PACLab technology roadmap, so we can keep on top of the tech we need, particularly as VR is becoming a bigger part of our work.

Speaking of which, some of our work with Dubit (particularly driven by Faisal Mushtaq) on the health and safety of VR has now been published, and attracted some attention from the media.

Also excitingly, the Apex project I'm doing with Stuart Murray on "Engineering the Imagination" has now been officially announced. We've known about it for a month or two, but had to keep it under wraps! Now we can announce to the world that we'll be doing some critical design of our own, exploring how engineers respond to cultural theory by designing a prosthetic hand to communicate empathy. Really exciting stuff.

November promises to be just as exciting: the grand finale of the Tracking People AHRC network is taking place on the 9th. I'm really looking forward to it: stay tuned!

Monday 30 October 2017

Present in Absence, pt 2

I've been trying to think a bit more about this whole issue of proximity, and projecting the self that I addressed last month. I feel like there's something in it, but I haven't really figured it out yet, and I may well be retreading old ground. Still, I wanted to dedicate some time to thinking through Virtual Reality, Augmented Reality, Companion Robots and the self, and what better place to think aloud than on the blog? It is, after all, what it's here for. So, buckle in - once more I will be thinking off the top of my head and straying dangerously outside my discipline. Feel free to correct me.

I mooted that there might be five levels of proximity at which something can be found (slightly reworded, you'll note!):


1) Integral: Inside; built into you. Not removable without surgery. By definition, always in contact with you unless drastic action is taken. 

2) Contact: Anything attached to the exterior of the body. Attached, but detachable. Always in contact with you, but removable without permanent damage. 

3) Reachable: Not attached and normally not in contact with you, but easy to bring into contact.  Within arm's reach.

4) Proximity: Not in arm's reach, but within sight or sound. The same room - no interposing barriers.


5) Remote: Not in sight or sound; barriers prevent interaction except through some third party or device.

Now, one thing that occurs to me is that this refers to spatial proximity. But is that the most relevant form of proximity when "projecting the self"? It sort of makes sense: projecting myself from "here" to "there" inevitably involves a spatial dimension. But it feels like there's a disconnect between integral/worn and reachable/proximity/remote. As if they deal with different types of proximity.

Spatially, at least, they make sense as part of the same scale. From "inside" to "next to" to "can be made next to through greater and greater effort" - reaching out an arm; walking across the room; walking out of the room and if necessary off over the horizon. "Projection" in a sense brings things closer along this scale. Reaching out my arm brings a reachable object into contact; walking across the room brings a proximate object into reach and then into contact. Would ingesting something then be the next level of projection? It doesn't seem quite right.

And yet, remote interfaces sort of follow this pattern - I can project my sense of sight not just across space but potentially (backwards) through time, using a camera and a connected display. A microphone and speaker (and amplifier, I'll grant!) will do the same for sound. Thanks to the internet, light and sound from halfway round the world can be projected to my eyes and ears. Arguably, I am the recipient of the projection (it is projected to me): would we regard this as projecting the senses?

This brings us to the question of immersion. If I watch TV, am I projecting myself or is it (to quote St Vincent) "just like a window?" It brings something distant closer, but it doesn't project "me" there. VR is different, because it cuts out some of the immediate senses. If I put on a VR headset, I now not only receive visual information from a remote (possibly virtual) environment, but I also lose visual information about my immediate environment. Headphones do the same for sound: haptic feedback the same (in theory!) for touch. That, I guess, is the projecting element: the extent to which I sacrifice information about my immediate environment for information about a remote environment. Projecting myself "there" rather than projecting "there" to "here".

So far, this has all discussed projecting senses from one place to another, but what about projecting actions? The partner to telepresence is teleoperation - the ability to perform an action in a remote space. Of course, the microphone and speaker example works in reverse - I can have a webchat with a colleague in Sweden almost as easily as speaking to them in the same room, our voices and images projected across the continent. In teleoperation, though, it feels like we tend to mean the projection of actuation: of force and movement. Of course, remote control has existed for a long time, and the idea of pressing a switch in one place and an action being performed elsewhere is hardly new.

Based on the ideas discussed by Andrew Wilson et al at the Cognitive Archaeology meet-up, it looks like humans are uniquely well-adapted to throwing things and this was obviously an important step in our development. For an animal with limited natural weapons or defences, the ability to hit something from a distance at which it can't easily hit back is a huge boon, and perhaps the earliest example of telepresence... 

Crossbows and bows caused such concern in the middle ages that they were banned for use against Christians: "29. We prohibit under anathema that murderous art of crossbowmen and archers, which is hateful to God, to be employed against Christians and Catholics from now on." The ability to kill from a distance without putting yourself at close risk was very much the drone strike of its day.  

Anyway, this is a little off the point: I'm just trying to demonstrate that remote action is nothing new. So how does this fit with our model? If we don't want to look at how close something is to us, but at how its proximity is shifted through technology, does the same model still work?

I mean, could we do away with "reachable"? Is reaching just a way of moving something from "proximity" to "contact"? Of course, then "proximity" would run out at the limit of reach. Whether something is across the room or across the world makes no difference once it's out of reach. This then raises the question: is walking a form of projection? For me, walking to something just out of reach is trivial, whereas walking to another room is more effort. There again, that effort will increase the further away something is. I can go and put my hand on an object a mile away; it just takes a lot more time and energy.

This makes me think a few things:

1) That the categories (integral to remote) classify where a device must be for one to operate it, but make less sense in mapping out "projection" of skills. For example, some devices must be implanted to work correctly (a pacemaker is no use in your pocket); some must be in contact (a heart rate monitor; a VR headset); most must be reachable to be useful - contact is required to operate them, but it need not be constant (a kettle, for example - I need to get within arm's reach to turn it on); then we get to voice activation (Alexa, Siri, etc). This is only about projection insofar as it determines how near I need to be to an object to form an assemblage with it.

2) That these will vary from person to person and object to object: how far I "reach" will depend on my arm length; how far my vision extends depends not just on my eyesight but upon what I'm trying to see. I can read a traffic light from 100 metres, but not a novel.

3) I wonder if time might be a better measure of proximity? That is, if we measure the time it would take to form an assemblage with a given object? Hence, an integral or contact object is instantaneously an assemblage: I am automatically assembled with them. For objects not in contact with me, proximity might be measured by the time it takes me to interact with them. For a kettle in arm's reach or Alexa, the time is a second or two: as good as instantaneous. For objects further away, we can either have a binary distinction ("not instantaneous"), or measure the time it takes (five seconds to cross the room; a minute to go next door; an hour to walk two miles away).

4) Perhaps it is the instantaneous/not-instantaneous distinction that is most useful, since this delineates close from remote, and this gap is what projection bridges. Whether my senses or actions, projection means transferring instantaneous senses and actions to somewhere they would not normally be possible, rather than having to take an additional action to get to the relevant place.
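
To make that instantaneous/not-instantaneous distinction concrete, here's a throwaway Python sketch: tag each object with an estimated time to form an assemblage with it, and apply a threshold. The two-second cut-off is just my reading of the kettle/Alexa examples above, not a principled figure:

    from dataclasses import dataclass

    INSTANT_THRESHOLD_S = 2.0  # "a second or two": as good as instantaneous

    @dataclass
    class Thing:
        name: str
        time_to_assemble_s: float  # estimated time before I can sense/act through it

    def is_instantaneous(thing: Thing) -> bool:
        """True if forming an assemblage with this thing is effectively immediate."""
        return thing.time_to_assemble_s <= INSTANT_THRESHOLD_S

    if __name__ == "__main__":
        things = [
            Thing("pacemaker", 0.0),               # integral: always assembled
            Thing("glasses", 0.0),                 # contact: always assembled
            Thing("kettle in arm's reach", 1.5),
            Thing("kettle in the next room", 60.0),
            Thing("object a mile away", 1200.0),
        ]
        for t in things:
            label = "instantaneous" if is_instantaneous(t) else "not instantaneous"
            print(f"{t.name}: {label}")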

Maybe the mapping isn't that useful, then? Or maybe it's useful in mapping the links required to form an assemblage with a device? Perhaps the question is - why am I interested in this? Why do I want to map this out in the first place? I feel like there's something important in here, but I'm not sure.

Let's try a few examples. Ordinarily, if two people wish to engage in conversation, they would need to be in proximity. Hence, in the image below Persons A and B can hold a conversation with each other, but not with Person C (who is remote from them - in another room, building, or country).

Add a telephone (mobile or otherwise) into the mix, however, and as long as it is within reach of both parties (and both have a network, charge, etc.), speech across any distance becomes possible:



Now, there are some complications here. In this example Person B can converse with  Person A (who is in proximity) and Person C (provided both B and C are in contact with their phones). Persons A and C, however, can't converse with each other. Clearly, this need not be the case: what if the telephone is set to speaker? Now, Person C is effectively "proximate" to both Person A and Person B - and no one needs to be physically holding the phone. Of course, Person A and B can see each other, and if anyone gets too far from the phone, their voice will no longer carry, etc. 

A similar issue might be imagined in terms of shaking hands. A and B can shake hands, but only if they are within Reachable distance of each other. Being in earshot of each other isn't sufficient. A telephone won't help A, B or C shake hands with each other, no matter how good the speaker and microphones are.

There's more to this, as well. We need to distinguish between different senses and actions. For example, the telephone carries audio information, but not visual information. Skype or WhatsApp (among other videoconferencing apps) can carry both. Neither can carry touch.

Is it worth thinking of proximity in conductive terms? Conducting information, maybe? Are these assemblages really about "conducting" information? Conducting isn't necessarily the right term - clearly, air conducts sound, wires conduct electricity, but can we speak of electromagnetic radiation being "conducted"?  It hardly matters. For my purpose, these assemblages are analogous to forming a circuit: when the circuit is broken, sensing or actuation breaks down. 

Now maybe that's what I mean by projecting the self - forming sensing or actuating circuits/assemblages with places that otherwise wouldn't be possible: be that because they are remote, or virtual, or just out of reach of our everyday capabilities. That's an interesting thought, and worth pondering more: and for that reason, it seems as good a place as any to end this post.

Thursday 28 September 2017

Month in Review: September 2017

September is always slightly white-knuckle, as the start of term hurtles into view. Lectures need to be ready, Web pages up-to-date, handouts printed, exams prepared - or you're in for a really hard time once teaching starts.

This year, the problem has been compounded by a big research grant getting approved in principle and needing detailed negotiation to ensure that moves to approval in practice; a small research grant in the same sort of state; the launch of the N8 Robotics and Autonomous Systems Student Network down in Sheffield; and chairing a session for LUDI at the AAATE conference (also in Sheffield). All great things (and a nice break from teaching prep!), but all needing to be slotted into a busy time.

Still, it's done: lectures, handouts, exams, ready; modules launched; new and returning tutees welcomed; teaching underway. Which isn't to say that it's an easy ride from here, but the start of term is always a nice point - when you can draw breath, mop your brow and get down to actually teaching instead of just thinking about it. And the benefit of prepping over the summer is that research marches on, rather than coming to an abrupt halt.

Anyway, I promised highlights of the summer, so here they are:

1) Getting MagOne working. Or rather, undergraduate summer interns Jamie Mawhinney and Kieran Burley getting it working for grip and posture applications respectively. Application-specific calibration and housings still need to be developed (there's a rough sketch of what that calibration involves after this list), but we've achieved proof of concept for both grip and posture, and the hardware and software for running off an Arduino Nano are in place. Low-cost three-axis force sensing, here we come!

2) The fully housed PSATs getting up and running... and being prepared for prehension studies. Low-cost marker tracking, here we come!
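
For what it's worth, the application-specific calibration mentioned in item 1 amounts to something like the sketch below: raw readings from the three sensing channels are mapped to forces through a calibration matrix fitted against a reference load cell. The numbers here are made up purely for illustration - the real calibration is exactly the work still to be done:

    import numpy as np

    # Hypothetical 3x3 calibration matrix and zero-load offsets, as would be
    # fitted by loading the sensor against a reference load cell.
    CAL_MATRIX = np.array([
        [0.012, 0.001, 0.000],
        [0.000, 0.011, 0.001],
        [0.001, 0.000, 0.030],
    ])
    OFFSETS = np.array([512.0, 498.0, 505.0])  # raw counts at zero load

    def raw_to_force(raw_counts):
        """Convert three raw sensor channels (ADC counts) to [Fx, Fy, Fz] in newtons."""
        raw = np.asarray(raw_counts, dtype=float)
        return CAL_MATRIX @ (raw - OFFSETS)

    if __name__ == "__main__":
        print(raw_to_force([600, 510, 700]))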

I think that'll do for now. In the meantime... back to teaching!

Wednesday 20 September 2017

Present in Absence: Projecting the Self

Last week saw me in Sheffield for the launch of the N8 Robotics and Autonomous Systems Student Network on behalf of Robotics@Leeds (it was great, thanks) and the AAATE conference on behalf of LUDI (also great, thanks!). There are probably blog posts in both, but I mixed in some of my Augmenting the Body/Self duties by catching up with Sheffield Robotics' Tony Prescott and Michael Szollosy, by picking up a MiRo for Stuart Murray to show off on behalf of the project at the Northern Network for Medical Humanities Research Congress, and using my extended commute to spend some time going through our thoughts from the summer workshop. I'll keep those under my hat, since they'll be going into a grant, but between all these things and presentations at AAATE about NAO and ZORA, and with articles appearing about VR at Leeds, I've been thinking a lot about the limits of the body and the self.

So, I thought I would work through them here, at least to get them straight. Bear in mind this is me thinking off the top of my head without a proper literature review, or any kind of philosophical expertise on self  or consciousness. It's a hot take from an engineer responding to the ideas swirling around me: feel free to correct me.

The limits of the body and Deleuzian revisitings of assemblages and rhizomes have been covered elsewhere on this blog. The skin, as I've noted before, is a pretty handy boundary for delimiting self and other, and the one that we probably instinctively default to. Questions of whether my clothes or equipment should be regarded as part of "me" might seem trivial. The answer is obviously no, since I can put these on and off in a way that I couldn't with any other part of me: even hair or nails, while easily shorn, are not rapidly replaced except by extensions or false nails. Yet, if we take ease of removal and replacement as indicative of the boundary of self, then (as Margrit Shildrick pointed out at our Augmenting the Body finale) what about the human microbiome? The countless bacteria that we lug around inside us? They aren't easily removed or replaced, though they can be - I can swallow antibiotics and probiotics, I guess. Yet, if I have an abscess, it's not easily removed, so is that part of "me"? And by extension, what about a tattoo, or a pacemaker, artificial hip, or insulin pump? I have none of these, so the question is perhaps facetious. But I do have a dental implant: an artificial tooth screwed in to replace the one knocked out on a school playground in the 80s. Is that "me"? Or does it have to be plugged in to my nervous system to be "me"? In which case, my surviving teeth would count, but not the false tooth - what about hair and nails? Do they count? They don't have nerves (do they? I'm getting outside my field here), even if they transmit mechanical signals back to the skin they're attached to. My tooth isn't like my clothing - I can't take it off and put it back in any more easily than my other teeth: it's screwed into my jaw. I can't swap it out for a "party tooth", I don't need to take it out at night. I treat it exactly as I do my real teeth and 99.99% of the time I'm not even conscious of it, despite it offering a block absent of sensation when I drink something hot or cold. Which feels really weird when I do stop to think about it, but after nearly three decades, that very rarely happens.

Of course, the answer is probably: "does it matter?" and/or "it depends". Never has the question of whether that tooth is or isn't "me" arisen, and whether our definition of "self" should extend to walking aids, hearing aids, or glasses will almost certainly depend on context. And for my purposes, I'm sure the extent of body and self matters in engineering for at least one reason: telepresence.
Telepresence crops up in three contexts that I can think of: teleoperation (for example, operating a robotic manipulator to clear up a nuclear reactor, or robotic surgery such as Da Vinci) - the ability to project skilled movement elsewhere; telepresence robots (to literally be present in absence - sending a teleoperated robot to attend a meeting on your behalf); and virtual reality (the sense of being somewhere remote - often not physically real).

So this got me thinking about different levels of proximity to the self that technology can exist at. Here's what I thought:

1) Integral: anything physically under the skin or attached to the skeleton; where some form of surgery is required to remove it. Pacemakers, cochlear implants, orthoses, dental implants, insulin pumps. I originally mooted "internal", but felt I needed something to differentiate this from devices - camera capsules for example - that are swallowed but only remain inside temporarily.

2) Contact: Anything attached to the exterior of the body: clothes, an Apple Watch, a Fitbit. Also prosthetics. I wonder if puppets fall under this category? Glove puppets, at least.

3) Reachable: Anything unattached that I can interact with only if I can get it into contact with me. A mobile phone, maybe - though voice control such as Siri would affect that. Remote control devices likewise - though I would argue that my TV remote requires me to physically touch it. It extends the device, not "me". 

4) Proximity: Siri and Alexa are interesting. Do they extend the device? Or me? I mean, if I speak to another person, I don't regard them as me - but I project my voice to reach them. I extend an auditory and visual (and olfactory!) presence around me. So a camera for gesture control, or a microphone for voice recognition, can be activated beyond arm's reach.

5) Remote: At this point, we're talking about things that are no longer in the immediate physical environment: out of sight and hearing. Perhaps another room, perhaps another city, perhaps half way round the world - perhaps in a virtual environment. This is your fundamental telepresence or virtual reality.

In a handy image-based form:

It's not very well thought out: a first iteration rather than a well-founded theory, but I find it helpful in puzzling through a few things. One of these is as a simple way of noting the physical proximity of a piece of technology required for me to form an assemblage with it. An insulin pump or pacemaker must be integral; glasses and clothing must be in contact; a smartphone must be reachable; a screen or voice control must be proximate. I can't form an assemblage with anything remote except through an intermediary - for example, I can't talk to someone on the other side of the world except by getting a phone within reach.
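
Just to show how cleanly that maps, here's a toy version of the idea in Python - each device tagged with the furthest proximity level at which it can still form an assemblage with me. The device list and levels are mine, purely for illustration:

    from enum import IntEnum

    class Proximity(IntEnum):
        INTEGRAL = 1   # under the skin / attached to the skeleton
        CONTACT = 2    # worn on the body
        REACHABLE = 3  # within arm's reach
        PROXIMATE = 4  # within sight or sound
        REMOTE = 5     # out of sight and sound

    # The furthest away each device can be and still do anything for me.
    REQUIRED = {
        "insulin pump": Proximity.INTEGRAL,
        "glasses": Proximity.CONTACT,
        "smartphone": Proximity.REACHABLE,
        "voice assistant": Proximity.PROXIMATE,
    }

    def can_assemble(device: str, current: Proximity) -> bool:
        """Can I form an assemblage with the device, given how close it currently is?"""
        return current <= REQUIRED[device]

    if __name__ == "__main__":
        print(can_assemble("smartphone", Proximity.REACHABLE))   # True
        print(can_assemble("smartphone", Proximity.PROXIMATE))   # False: out of reach
        print(can_assemble("insulin pump", Proximity.CONTACT))   # False: no use in a pocket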

The other is that we can locate the range of projecting different senses (seeing, feeling, hearing something outside the range of our usual senses) and projecting action (pushing or gripping something we otherwise couldn't; projecting our voice further than usual), and also the relative location of the device in question. For example, a "feature phone" (as in a mobile phone that isn't a smartphone) requires touch - it needs to be in physical contact with me for me to operate it - but it allows me to hear and speak to someone miles away. Provided, of course, that they also have a phone of some sort.

To what extent am I projecting myself in using such a phone? Probably not much: I don't really regard myself as being in proximity with someone I call. What if I make a video call using Skype or Adobe Connect or FaceTime? Am I projecting myself then? What if I'm a surgeon using Da Vinci? Am I projecting myself into the patient? There's a whole bunch of questions about what constitutes "self" there, but I feel like the distinction between different types of distance is useful.

Anyway, I just thought I'd put it up there. It's a work in progress, and I'll have a stab at mapping out some examples to see if it's useful. As always, thoughts are welcome!

Tuesday 29 August 2017

Month in Review: August 2017

It has been a busy and (fortunately!) productive summer. The corollary to doing much and getting little done (or at least, little finished!) is that at some point things begin to fall off the To Do list. I'm not at the stage of having everything done, but I'm certainly in the "closing" stage of my summer To Do list. Things that have slowly trooped on are coming to a close: MSc and Summer projects finished; dissertations and resits  marked; a review written; an exam and a conference paper almost finished; handouts printed... a big grant getting dangerously close to signed off; big chunks of analysis software completed; PSAT calibration underway. Still a lot of open tasks, but I'm getting there.

I gave my talk at the Superposition on Humans, Robots and Puppets (with Anzir Boodoo and Samit Chakrabarty): it was a really good event and really got me thinking for the Augmenting the Body project. I finished reviewing Elizabeth R. Petrick's Making Computers Accessible for Disability and Society: it's a history of the development of accessibility features in personal computers from roughly 1980 to 2000, rather than a "how to" guide. I've written a new marking scheme for a module, and been setting up an industry-linked project for our students.

In September, I'll mostly be gearing up for the start of term: we're less than four weeks away from the start of teaching now; one of those weeks is freshers' week, and for another I'm mostly in Sheffield for the N8 Robotics students network and AAATE 2017 (where I'll be chairing a LUDI session on inclusive play). I want to get my exam written before term starts: I've written three of the five questions. And I need to prepare dissertation topics.

Busy times! I'll try to take stock of exactly what I got done over the summer at the end of next month, as term begins...

Wednesday 16 August 2017

Humaniteering - What, If Anything, Can Engineers Gain from Working with the Humanities and Sociology?

I'm working on a conference paper around breaking down barriers between disciplines, and (as always) I thought the blog seemed like a good place to think out loud on the subject before getting down to brass tacks in the paper itself. So bear with me: I'm literally thinking out loud here (if we understand "out loud" to mean the clack of my keyboard - my inner pedant feels compelled to point out!).

Anyway, I work with people from a variety of disciplines. I keep meaning to sit down and draw up a diagram of all the people of different disciplines that I've worked with in a research context over my twelve years in academia - either by writing a paper or a grant (successful or otherwise) or co-supervising a PhD student. Perhaps I will, but that's a little bit by-the-by. For now, let me just see if I can rattle off a quick list of my common collaborators outside engineering here at Leeds University:

English/Medical Humanities: Stuart Murray (Augmenting the Body), Amelia de Falco (Augmenting the Body)
Ethics: Chris Megone (PhD supervision), Rob Lawlor (PhD supervision), Kevin Macnish (Tracking People)
Health Sciences: Justin Keen (Tracking People)
Law: Anthea Hucklesby (Tracking People)
Psychology: Mark Mon-Williams (too many links to count!),  Richard Wilkie (ditto)
Rehabilitation Medicine: Bipin Bhakta (MyPAM, iPAM, PhD supervision), Rory O'Connor (MyPAM, PhD supervision), Nick Preston (MyPAM)
Sociology and Social Policy: Angharad Beckett (Together Through Play, LUDI)
Transport Studies: Bryan Matthews (WHISPER)

The list grows longer if you include the engineers, those who work outside Leeds Uni (Andrew Wilson and the cognitive archaeology crew being the most obvious) and the PhD students who have been involved in these various streams. The link to Psychology (motor learning, neuroscience), Health Sciences (health economics and governance), Rehabilitation Medicine (rehabilitating people) and Transport Studies (Assistive Technology for navigation) should be pretty obvious. At the end of the day, these represent the "professional customers" (as distinct from the end users - also an important group, but not one that can easily be captured as an academic discipline!) of the technology that we're building, and engaging with these disciplines is important if we want to be designing the right things, and verifying that our devices actually work (think of the V-model of systems engineering - we want to make sure we're converting needs into technical requirements properly, and properly verifying the end result). Ethics and Law might also seem obvious - we don't want unethical or illegal technology (that's a massive oversimplification, but engineering ethics and the challenge of keeping the law up-to-date with technological development is a big deal at the moment, and you can see why engineering researchers might want to get involved with ethicists to discuss ethical implications of what they do). Why, though, engage with people from English or Sociology, other than a monomaniacal desire to collect cross-disciplinary links? Where does Engineering cross over with these disciplines?

Caveat Lector
As ever, let's have the customary warning: I'm an engineer, and speak from an engineer's perspective (an academic, mechanical engineer's perspective at that), so I can only tell you about my perception of these disciplines. I may be mistaken; I'm certainly speaking from only a partial exposure to these disciplines. With that out of the way, let's move forwards.

An Engineering Imagination
Of course, the fact that I named my blog (fully four years ago!) after "The Sociological Imagination" by C. Wright Mills  perhaps suggests some rationale for this interest. In the very first post on this blog, I set out my stall by saying:
"Mills was interested in the relationship between individual and society, and noted that social science wasn't just a matter of getting better data, or analysing it more effectively. Measurements are filtered through a whole set of social norms, and individual assumptions and biases. They colour the way we look at the world, and are often deeply embedded in the methods that we used...  Certainly it applies to engineering: at a fundamental level, what engineers choose to devote their time and energy to (or sell their skills for)... It's not just about what we engineer, but about the way we engineer it: decisions made in developing products and systems have huge implications for their accessibility, use and consequences (bot/h intended and unintended)."

And I revisited it last year in my post on Who Are We Engineering For? noting the challenge of helping engineers to address four key questions:
  1. To what extent should engineers be held accountable for the "selective enabling" of the systems and technologies they devise?
  2. To what extent do engineers have a responsibility to ensure the continuation of the species by, for example, preventing asteroid strikes or ensuring that humanity is able to colonise other planets?
  3. What are the responsibilities of engineers in terms of steering human evolution, if the transhumanist view is correct?
  4. How do we prioritise which problems engineers spend their time solving? Market forces? Equity? Maximising probability of humanity surviving into the posthuman future?
And I concluded that:
"perhaps an Engineering Imagination is a useful tool - being able to look at a system being designed critically, from the outside, to view its relationship with the norms and culture and history it will be deployed in."
That, I think, is the key issue. There are technical things that can be learned from sociology - rigour in analysing qualitative data, for example - but there's something more significant. One of the problems in engineering is a focus on engineering science rather than engineering practice. Learning the technicalities of how to model the behaviour of the world without any thought to what that means in practice. The challenge is that it's easy to say that engineers should be more aware of the implications of their technology - the big question is how do we do that? How do you put a finger on the Engineering Imagination? What does it mean in practice? That, I think, is where the Sociology and Humanities come in.

The Tricky Business of Being Human
Reading up on the Posthuman (I finished Braidotti, by the by - more on that in the next month or so!) makes me a little cagey about using the term human, but it's a convenient shorthand and I don't want to get caught up in lengthy discussions around self-organising systems, the limits of the body, anthropocentrism and humanism. Anyway, the point about the Humanities and Sociology is that they deal with people and their positions within complex social relationships - that link between the personal and the broader "milieu", as the Sociological Imagination puts it. This applies in two ways in engineering: in terms of stakeholders (users, most obviously, but they can be a multiplicity) and in terms of the engineers themselves. The stakeholders in an engineering project are neither "vitruvian" individuals independent of the world around them, nor an amorphous statistical mass of aggregate data. And the same goes for the engineers - who in turn have tastes, backgrounds, personalities, histories, families, and find themselves enmeshed in the social processes of an organisation and its project management. They may not all be "engineers" either: a huge range of people are involved in product development, and even the boundaries of what is being developed can be porous. I don't think that's a controversial argument: I have yet to hear anyone claim that engineering is a pure, objective process that leads to a single unquestionably right answer. Most of the challenge in engineering lies in mapping complex, messy real-world problems into forms that you (as the engineer) have the capability to solve. The "fuzzy front end" and "wicked problems" are well-recognised challenges. And the dynamic nature of engineering problems means that these don't just apply at the start of the process. You don't just characterise once and have done with it - you're perpetually having to loop back, adjust expectations, update requirements, work with what you have. It's like user-centred design - you don't just ask the user what they want and then go and make it. You have to keep checking and course-correcting. Sometimes, people don't want what they say they want. Or a product takes so long to develop that it's solving a problem everyone used to have five years ago, but not any more.

This is like Donald Schön's Reflective Practitioner - constantly proposing moves, reflecting on the outcome and reframing the problem in light of the result. It's this process that I hope working with humanities scholars and sociologists can help with in engineering terms. It's partly about having the concepts and frameworks to process this; partly about methodological tools that help incorporate it into the process. Engineers are people, with all the frailties and limits that implies - Michael Davis, in his essay "Explaining Wrongdoing", talks of microscopic vision (a concept that my one-time PhD student Helen Morley highlighted to me): the idea that expertise encourages a narrow focus (knowing more and more about less and less...) at the expense of a broader view of consequences. This dovetails beautifully with the notion of selective enabling and design exclusion, and also with the Collingridge Dilemma: the difficulty of foreseeing the side effects of new technologies until it's too late.

Which isn't to say that we should be abandoning rational judgement and analysis - just that we need to make sure that we're framing problems correctly, and are aware of the limits of what we bring to the analysis. I don't know how all this is going to help - that's one for the humaniteers (as I like to call them) and sociologists to answer.

Monday 31 July 2017

Month in review: July 2017

I've shifted to monthly, rather than weekly reviews, now: that (hopefully!) allows me to hit my two-posts-a-month target, with one "non-review" post each month. Hence,  today is the day to review July.

July is a funny month. I've mentioned before the problem of "doing much but getting nothing done", and July often exhibits this. You work furiously, but nothing of substance gets ticked off the To Do list: there's just a load of half-finished tasks. Which, while frustrating, isn't a real problem: I make a point of trying to run multiple tasks in parallel, so that teaching prep continues over the summer, freeing up time for more research in term time.

I've lately become a fan of the Pomodone app, which is a Web-based implementation of the Pomodoro technique. This means working on a task for 25 minutes at a time. The nice thing about this is that not only does it sync with Wunderlist, where I keep my To Do list, but it logs how much time you've spent on each task, so you can at least see progress. Granted, time spent isn't an exact proxy for progress on a task, but unless you want to micromanage your To Do list to the nth degree, it's at least a satisfying alternative to actually being able to tick the big tasks off.

So, what tasks have been underway this month? Well, I have been reviewing Elizabeth Petrick's Making Computers Accessible for Disability and Society (reading done; writing underway); I've prepared a presentation on Humans, Puppets and Robots for the Superposition to be delivered on the 2nd of August. I've been calibrating PSAT and looking at its development into reach-to-grasp movements; I've rejigged my reach-to-grasp model to accommodate time variations in dynamic behaviour; I'm rewriting a marking scheme for my Level 2 undergraduate module; I've been preparing my handouts for next year (I like to get them printed for the end of July, just to avoid the temptation to keep revising them as term approaches); I've been supervising two undergraduate students who are working on the MagOne force sensor for Grip purposes (first iteration due this week); preparing for the Tracking People network finale in November; developing the PACLab technology roadmap; attending graduations; attending the EPS Conference; supervising PhD students and an MSc student; prepping a paper for submission and working on a grant proposal.

Yeah, that'll keep me busy, alright! Thankfully, August will see things getting ticked off the list. Good fun!

Tuesday 25 July 2017

Robots, Humans, Puppets: What's the Difference?

I've agreed to give a talk for the Superposition on the 2nd of August on the subject of "Humans, Puppets and Robots: What's the Difference?". This is part of their ASMbly talks and events, which bring together an Artist, a Scientist and a Maker to discuss a particular topic (you can get tickets via Eventbrite, if you so wish!). In this case, the Artist is the excellent Anzir Boodoo (puppeteer of Samuel L. Foxton), the Scientist is Samit Chakrabarty (who specialises in motor control via the spine)  and the Maker is... well, me. Unsurprisingly, Anzir will be talking Puppets, Samit will be talking Humans and it falls to me to talk Robots. As is my custom, I thought I'd use the blog as a handy place to work out my thoughts before putting the presentation together.

The main question that we're looking at is how Puppets, Humans and Robots are similar and how they are different: the clue's in the title of the talk. This is a really interesting question. I've often thought about the human vs robot link. It's something that crops up a lot in my line of work, especially when you're looking at how humans and robots interact and when the robot has to use force feedback to help guide human movement. Samit is particularly interesting in this area, because of his modelling of human motor control as an electrical circuit. The link between robots and puppets, though, has been particularly interesting to reflect on, as it ties in with some of my recent thoughts about the Tracking People project, and algorithms as a set of pre-made decisions. I mean, what is a computer program but a kind of time-delayed puppetry? By that token, a robot is just a specialised type of puppet: at least until Strong AI actually turns up.

I thought I'd break the talk down into four sections:

1) What is a Robot?
2) How do Robots Act?
3) How do Robots Sense?
4) Algorithms: Robots are Stupid

Let's take them in turn.

What is a Robot?

For all that we hear a lot about them, we don't really have a good definition of what constitutes a robot. I work in robotics, so I see a lot of robots, and I'm not sure I have a good sense of what the average person thinks of as a robot. iCub (seen here learning to recognise a chicken during our Augmenting the Body visit to Sheffield Robotics last year) probably comes pretty close to what I imagine most people think of as a robot:


A sort of mechanical person, though the requirement to look like a human probably isn't there - I mean, most people would recognise R2-D2 (full disclosure - R2-D2 remains what I consider the ideal helper robot; which may say as much about my age as its design) or more recently BB-8, as a robot just as much as C-3PO.  Perhaps the key feature is that it's a machine that can interact with its environment, and has a mind of its own? That's not a bad definition, really. The question is: how much interaction and how much autonomy are required for an item to become a robot? To all intents and purposes, the Star Wars droids are electomechanical people, with a personality and the ability to adapt to their environment.

The term robot originates from the Czech playwright Karel Čapek's play Rossum's Universal Robots (apparently from the Czech word for forced labour, robota). In this play, the robots are not electromechanical but biomechanical - though still assembled. This draws an interesting link to Samit's view of the human body, of course. Perhaps one day we will have robots using biological components: for the time being, at least, robots are electromechanical machines. Yet, there are lots of machines that do things, and we don't consider them robots. A washing machine, for example. A computer. What sets a robot apart?

Well, for starters we have movement and the ability to act upon the environment. A computer, for example, is pretty complex, but its moving parts (fans and disc drives, mostly) are fairly limited, and don't do much externally. It doesn't act upon its environment, beyond the need to pull in electricity and vent heat. So, we might take movement as being a key criterion for a robot. We might wish to specify complex movement - so a washing machine, for example, that just spins a drum wouldn't count. No, there needs to be some substantial interaction - movement, or gripping.

We can also differentiate an automaton from a robot - something that provides complex movements, but only in a pre-specified order. It carries on doing the same thing regardless of what happens around it. A wind-up toy, might provide complex movement, for example, but it wouldn't be a robot. We expect a robot to react and adapt to its environment in some way.

This brings up four handy conditions that we can talk through:

1) A robot is an artefact - it has been designed and constructed by humans;
2) A robot can act upon its environment - it possesses actuators of some form;
3) A robot can sense its environment - it has sensors of some form;
4) A robot can adapt its actions based upon what it senses from its environment - it has an algorithm of some form that allows it to adapt what its actuators do based upon its sensors.

The first of these doesn't require further discussion (except in so far as we might note that a puppet is also an artefact, whereas a human is not), but let's take a look at each of the others in turn.

Actuators - how does a robot act upon its environment?

Acting upon the environment implies movement - a motor of some form. A robot could also have loudspeakers to produce sound or LEDs to produce light, all of which can be intelligently controlled - but so can any computer or mobile phone. So I'll focus on the actuators that produce movement.

It's worth noting that an actuator needs two things: a power source, which it converts into mechanical power in the form of movement, and a control signal that tells it how much output to produce. Actuators can be linear (producing movement in a straight line) or rotary (spinning round in a circle). The power source is often electrical, but can be pneumatic (using compressed air) or hydraulic (using pressurised liquid).
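
To make that split concrete, here's a rough Python sketch, using an entirely made-up MotorDriver class rather than any real driver board: the supply provides the power, and all the control signal does is decide how much of it reaches the motor.

    class MotorDriver:
        """Hypothetical stand-in for a motor driver board: it takes a control
        signal (a duty cycle between 0 and 1) and decides how much of the
        supply's power reaches the motor."""

        def __init__(self, supply_voltage=12.0):
            self.supply_voltage = supply_voltage   # the power source
            self.duty_cycle = 0.0                  # the control signal

        def set_output(self, duty_cycle):
            # Clamp the control signal to the valid range and apply it.
            self.duty_cycle = max(0.0, min(1.0, duty_cycle))
            return self.duty_cycle * self.supply_voltage   # effective voltage at the motor

    motor = MotorDriver()
    print(motor.set_output(0.5))   # half power -> 6.0 V at the motor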

The mechanical output can then be adapted using all kinds of mechanisms - attached to wheels or propellers to provide propulsion; attached to a four-bar linkage to produce more complex motions, such as keeping grip surfaces parallel so that a robot can grasp objects; or attached to cables to drive more complex kinematic chains (for example, the Open Bionics ADA Hand).

Apart from the complex mechanism design, this is fairly straightforward (thanks, it is worth saying, to all the efforts of those who put the hard work into developing those actuators). The challenge lies in getting the right control signal. That's what differentiates robots from automata. Automata have actuators, but the control signal is pre-determined. In a robot, that control signal adapts to the environment. For that, we need the other two elements: sensors, and a decision-making algorithm to decide how the system should respond.

Sensors - how does a robot sense its environment?

So, a robot has to have some way of detecting its environment. Sensors can take a huge variety of forms, but since electrical signals are communicated as voltages, anything that produces a voltage or changes its resistance (which, thanks to the magic of the potential divider, can be turned into a change in voltage) can be measured electronically, and a huge array of sensors is available for this purpose. A few of the most obvious (there's a short worked sketch after this list):

Sense of Balance - Accelerometer/Gyroscope: An accelerometer gives a voltage proportional to linear acceleration along a given axis. Since gravity produces a downward acceleration, a stationary accelerometer can be used to detect orientation, though it will get confused if other accelerations are involved (for example, if the accelerometer is moved linearly). A gyroscope, on the other hand, measures angular velocity, so changes in orientation can be tracked from its output. With these two, the robot immediately has some sense of balance and inertia - akin, I guess, to the fluid in the inner ear.

Proprioception - Potentiometers: Linear and rotary potentiometers change their resistance as a function of linear or angular position, allowing the robot to detect the actual position of any joints to which they are attached (as opposed to where they are supposed to be). In this way, the robot can know when something has perturbed its movement (for example, one of its joints has been knocked, or has bumped into a wall). Encoders are a more advanced version of this.

Touch - Force-Sensing Resistors: As their name suggests, force-sensing resistors change their resistance based on the amount of force or pressure applied to them. This is useful for telling when an object or barrier has been encountered - but even a simple switch could do that. The benefit of a force-sensing resistor is that it gives some indication of how hard the object is being touched. That's important for grip applications, where too little force means the object will slip from the grasp, and too much will damage the object.

Temperature - Thermistors: A thermistor changes its resistance according to the temperature it experiences, providing a way of measuring temperature.

Light - Light Dependent Resistors: A light dependent resistor changes its resistance according to how much light reaches it. In this way, a robot can know whether it is in darkness or in light.

Distance - Ultrasonic/Infrared Rangefinders: Ultrasonic or infrared distance sensors return a voltage based on the reflection of a signal from an object in front of them. In this way, the robot can be given a sense of how much space is around it, albeit not of what is filling that space - enough, for example, to stop it from bumping into the objects around it.

Hearing - Microphones: Microphones are a bit more complex, but they produce a voltage based on the sound waves that reach them. This is the basis of telephones and recording mics, and can be used for simple applications (moving towards or away from a noise, for example) or more complex ones (speech recognition being the latest big thing for Google, Apple and Amazon).

Vision - Cameras: Computer vision is a big area, and one that is currently developing at a rapid pace. Object recognition is tricky, but can be done - face recognition in particular has become extremely well developed. In this way, a robot can recognise whether it is pointing towards a face, for example, or can be trained to keep a particular object in the centre of its vision.

There is a wealth of others (magnetometers to detect magnetic fields; GPS location tracking; EMG to trigger prosthetics from muscle signals), but these are the most common. Putting these together, a robot can gather quite complex information about its environment, and about the position of its constituent parts within it. The challenge, of course, is in making sense of this information. All the sensor provides is a voltage that tells you something about a property near that sensor.
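
As a flavour of what "making sense of a voltage" looks like, here's a minimal Python sketch of turning raw readings into physical quantities. The ADC resolution, supply voltage and potentiometer range are invented for illustration - swap in the numbers for whatever hardware you actually have.

    # Turning raw sensor readings into something meaningful.
    # Assumes a 10-bit ADC (0-1023), a 5 V supply and a 300-degree rotary
    # potentiometer - all made-up numbers for the sake of the example.

    ADC_MAX = 1023
    SUPPLY_VOLTS = 5.0
    POT_RANGE_DEGREES = 300.0

    def adc_to_volts(raw_reading):
        """The ADC only reports a fraction of the supply voltage."""
        return (raw_reading / ADC_MAX) * SUPPLY_VOLTS

    def pot_to_angle(raw_reading):
        """A rotary potentiometer in a potential divider gives a voltage that
        varies roughly linearly with shaft angle."""
        return (raw_reading / ADC_MAX) * POT_RANGE_DEGREES

    print(adc_to_volts(512))   # roughly 2.5 V
    print(pot_to_angle(512))   # roughly 150 degrees - the joint is near mid-travel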

You can get some basic stimulus-response type behaviour with simple circuitry - a potential divider that turns on a light when it gets dark outside, for example. The real challenge is in how to integrate all this information and respond to it - and that's the job of an algorithm.
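
The same dark-detector idea, expressed in software rather than circuitry, might look something like this (again a Python sketch, with a made-up threshold standing in for whatever voltage your own divider happens to produce in the dark):

    # Stimulus-response in its simplest form: if the light-dependent resistor's
    # divider voltage drops below a threshold, switch the lamp on.

    DARK_THRESHOLD_VOLTS = 1.0   # made-up value: below this, we call it "dark"

    def lamp_should_be_on(ldr_volts):
        """The whole 'decision': one comparison between sensor and threshold."""
        return ldr_volts < DARK_THRESHOLD_VOLTS

    print(lamp_should_be_on(0.6))   # True  -> dark outside, lamp on
    print(lamp_should_be_on(3.2))   # False -> plenty of light, lamp off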

Algorithms: Robots are Stupid

Although we hear a lot about artificial intelligence and the singularity, robots are actually stupid and literal-minded: they do exactly what you tell them, and nothing more. They don't improvise, they don't reinterpret, they don't imagine: they mechanically follow a flowchart of decisions that basically take the form "if the sensor says this, then the actuator should do that". I'm simplifying a lot there, but the basic principle stands. They can go through those flowcharts really quickly, if designed properly, and perform calculations in a fraction of the time it would take a human. They might even be able to tune the thresholds in the flowchart to "learn" which actuator responses best fit a given situation: but that "learning" process has to be built into the flowchart. By a human. The robot, itself, doesn't adapt.
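
Here's a toy sketch of what I mean, in Python (all the names, numbers and rules are invented purely for illustration): the "flowchart" is just a chain of comparisons a human wrote, and even the "learning" step is a rule the human wrote in advance.

    grip_threshold = 2.0   # newtons; an initial guess made by the designer

    def decide(grip_force_newtons):
        """The flowchart: if the sensor says this, the actuator does that."""
        if grip_force_newtons < grip_threshold:
            return "squeeze harder"
        elif grip_force_newtons < 2 * grip_threshold:
            return "hold steady"
        else:
            return "ease off"   # the fallback the designer had to foresee

    def tune_threshold(object_slipped):
        """The 'learning': nudge the threshold, exactly as instructed."""
        global grip_threshold
        if object_slipped:
            grip_threshold += 0.1

    print(decide(1.5))        # "squeeze harder"
    tune_threshold(True)      # the object slipped, so raise the threshold a touch
    print(grip_threshold)     # 2.1 - "learned", but only in the way it was told to

Note the final "ease off" branch: even the case where nothing else matches is a decision the designer had to make in advance.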

Now, AI research is advancing all the time, and one day we may have genuinely strong Artificial General Intelligence that can adapt itself to anything, or at least to a huge range of situations. Right now, even the best AI we have is specialised, and has to be designed. Machine learning means that the designer may not know exactly what thresholds, or what weights in a neural network, are being used to, say, recognise a given object in a photograph. But they had to design the process by which that neural network was tuned. As we increasingly depend on black-box libraries rather than writing everything from scratch, perhaps one day robots will be able to do some extremely impressive self-assembly of code. For now, robots learn and can change their behaviour - but only in the specific ways that they have been told to.

So, an algorithm is basically a set of pre-made decisions: the robot doesn't decide, the designer has already made the decisions, and then told the robot what to do in each situation. Robots only do what you tell them to. Granted, there can be a language barrier: sometimes you aren't telling the robot what you thought you were, and that's when bugs arise and you get unexpected behaviour. But that's not the robot experimenting or learning: it's the robot following, literally, what you told it to do. This also means that as a designer or programmer of robots, you need to have foreseen every eventuality. You need to have identified all the decisions that the robot will need to make, and what it should do in each case - including what to do when it can't work out what to do.

Of course, robots can have different degrees of autonomy. A typical quadcopter, for example, is not really autonomous. It makes lots of local decisions - how to adjust its motors to keep itself level, so that the user can focus on where they want it to go rather than on how to keep it in the air - but left to its own devices, it does nothing. By contrast, a self-driving car is required to have a much greater level of autonomy, and therefore has to cover a much broader range of eventualities.

Thus, there is always a slightly Wizard-of-Oz type situation: a human behind the robot. In this sense, robots are like puppets - there is always a puppeteer. It's just that the puppeteer has decided all the responses in advance, rather than making those decisions in real-time. What's left to the robot is to read its sensors and determine which of its pre-selected responses it's been asked to give.

There is a side issue here. I mentioned that robots can do calculations much faster than humans - but a given robot still has a finite capacity, set by its processor speed and memory. It can only run through the flowchart at a given speed. For a simple flowchart, that doesn't matter too much. As the flowchart gets more complex, and more sensors and actuators need to be managed, the rate at which the robot can work through it slows down. Just to complicate matters further, sensors and actuators don't necessarily respond at the same speed as the robot can process the flowchart. Even a robot with the fastest processor and masses of memory will be limited by the inertia of its actuators, or the speed at which its sensors can sample reliably.

One response to this is more local control: devolving control of some of the robot's sensors and actuators to local controllers. A good example of this is the servomotor, where a sensor is attached to the motor so that it knows its own position and speed, and will try to maintain the position or speed specified by a central controller. This is handy because it frees the designer from having to implement steps in their flowchart to provide this control, leaving capacity for other decisions. It also means that if something perturbs the actuator, it responds immediately, rather than waiting for the robot to work through to the relevant part of its flowchart.
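
As a rough illustration, the local loop inside a servo behaves something like this simple proportional controller (a Python sketch; the gain, angles and idealised response are invented, and a real servo has rather more going on):

    def servo_step(current_angle, target_angle, gain=0.2):
        """One tick of local control: move a fraction of the remaining error
        towards the commanded angle, without any help from the main flowchart."""
        error = target_angle - current_angle
        return current_angle + gain * error

    angle = 0.0                     # degrees; start at rest
    for _ in range(20):             # twenty local control ticks
        angle = servo_step(angle, target_angle=90.0)
    print(round(angle, 1))          # close to 90 degrees, with no central intervention

If something knocks the joint, the error simply grows again and the same little loop pulls it back - no central decision required.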

Humans, Puppets and Robots: What's the Difference?

Let's return to the motivating question, then. How is a robot similar to or different from a human or puppet?

There are some obvious similarities to humans, even if the robot is not itself humanoid. It has bones (in the form of its rigid members). It has actuators, which are effectively equivalent to muscles. It has sensors which respond to stimuli (analogous to the variety of receptors in the human body). It has a computer (a brain) which runs an algorithm to decide how to respond to given stimuli. Finally, it passes signals between sensors, actuators and computer electrically. The difference, I guess, is that a robot is an artefact, lacks the self-repairing capacities of a human body, and has a brain that lacks the flexibility of human thought: a human designer has to pre-make all the robot's decisions, whereas a human can make decisions for themselves in real time.

There are some obvious similarities between a robot and a puppet as well. Both are artefacts, and in both cases the decisions about how to act are taken by a human (in advance in the case of the robot, in real-time in the case of the puppet). Both lack the self-organising/self-repairing/automatically learning nature of a human.


                        Human            Puppet           Robot
Occurs…                 Naturally        Artificially     Artificially
Is…                     Biological       Mechanical       (Electro)Mechanical
Decisions are made by…  Self*            External human   External human
Decisions are made in…  Real time*       Real time        Advance
Sensors?                Yes              No               Yes
Actuators?              Yes              No               Yes
Learning occurs…        Automatically    Not at all       As dictated by programmer

* At least, conscious decisions are. There are lots of decisions that are, as I understand it, “pre-programmed”, so we might argue that, on this front, these decisions are made in advance, and query whether the “self” is really making them.

Earlier, I said that robots can perform calculations much faster than humans, but actually, now that I think about it, I don't know if that's true. Maybe their flowcharts are just a lot simpler, meaning they can work through the steps faster? Simpler, but less flexible. Perhaps Samit will enlighten us.

Also, is a puppet a robot with a human brain? We can't really talk about a puppet without an operator, so is a puppet an extension of the operator? A sort of prosthetic?

I don't know - but I'm looking forward to exploring these issues on the night!