Wednesday, 20 September 2017

Present in Absence: Projecting the Self

Last week saw me in Sheffield for the launch of the N8 Robotics and Autonomous Systems Student Network on behalf of Robotics@Leeds (it was great, thanks) and the AAATE conference on behalf of LUDI (also great, thanks!). There are probably blog posts in both, but I mixed in some of my Augmenting the Body/Self duties: catching up with Sheffield Robotics' Tony Prescott and Michael Szollosy, picking up a MiRo for Stuart Murray to show off on behalf of the project at the Northern Network for Medical Humanities Research Congress, and using my extended commute to spend some time going through our thoughts from the summer workshop. I'll keep those under my hat, since they'll be going into a grant, but between all these things, the presentations at AAATE about NAO and ZORA, and the articles appearing about VR at Leeds, I've been thinking a lot about the limits of the body and the self.

So, I thought I would work through them here, at least to get them straight. Bear in mind this is me thinking off the top of my head, without a proper literature review or any kind of philosophical expertise on the self or consciousness. It's a hot take from an engineer responding to the ideas swirling around me: feel free to correct me.

The limits of the body and Deleuzian revisitings of assemblages and rhizomes have been covered elsewhere on this blog. The skin, as I've noted before, is a pretty handy boundary for delimiting self and other, and the one that we probably instinctively default to. Questions of whether my clothes or equipment should be regarded as part of "me" might seem trivial. The answer is obviously no, since I can put these on and take them off in a way that I couldn't with any other part of me: even hair or nails, while easily shorn, are not rapidly replaced except by extensions or false nails. Yet, if we take ease of removal and replacement as indicative of the boundary of self, then (as Margrit Shildrick pointed out at our Augmenting the Body finale) what about the human microbiome? The countless bacteria that we lug around inside us? They aren't easily removed or replaced - though they can be, I suppose, with antibiotics and probiotics. And if I have an abscess, that isn't easily removed either, so is it part of "me"? By extension, what about a tattoo, a pacemaker, an artificial hip, or an insulin pump? I have none of these, so the question is perhaps facetious. But I do have a dental implant: an artificial tooth screwed in to replace the one knocked out on a school playground in the 80s. Is that "me"? Or does it have to be plugged in to my nervous system to be "me"? In which case, my surviving teeth would count, but not the false tooth - and what about hair and nails? Do they count? They don't have nerves (do they? I'm getting outside my field here), even if they transmit mechanical signals back to the skin they're attached to. My tooth isn't like my clothing - I can't take it out and put it back in any more easily than my other teeth: it's screwed into my jaw. I can't swap it out for a "party tooth", and I don't need to take it out at night. I treat it exactly as I do my real teeth, and 99.99% of the time I'm not even conscious of it, despite it being a block with no sensation when I drink something hot or cold. Which feels really weird when I do stop to think about it, but after nearly three decades, that very rarely happens.

Of course, the answer is probably "does it matter?" and/or "it depends". The question of whether that tooth is or isn't "me" has never arisen, and whether our definition of "self" should extend to walking aids, hearing aids, or glasses will almost certainly depend on context. For my purposes, though, I'm sure the extent of body and self matters in engineering for at least one reason: telepresence.

Telepresence crops up in three contexts that I can think of: teleoperation (for example, operating a robotic manipulator to clear up a nuclear reactor, or robotic surgery such as Da Vinci) - the ability to project skilled movement elsewhere; telepresence robots (to literally be present in absence - sending a teleoperated robot to attend a meeting on your behalf); and virtual reality (the sense of being somewhere remote - often somewhere not physically real).

So this got me thinking about different levels of proximity to the self that technology can exist at. Here's what I thought:

1) Integral: anything physically under the skin or attached to the skeleton, where some form of surgery is required to remove it. Pacemakers, cochlear implants, orthoses, dental implants, insulin pumps. I originally mooted "internal", but felt I needed something to differentiate this from devices - camera capsules, for example - that are swallowed but only remain inside temporarily.

2) Contact: Anything attached to the exterior of the body: clothes, an Apple Watch, a Fitbit. Also prosthetics. I wonder if puppets fall under this category? Glove puppets, at least.

3) Reachable: Anything unattached that I can only interact with if I can get it into contact with me. A mobile phone, maybe - though voice control such as Siri would affect that. Remote control devices likewise - though I would argue that my TV remote requires me to physically touch it: it extends the device, not "me".

4) Proximity: Siri and Alexa are interesting. Do they extend the device? Or me? If I speak to another person, I don't regard them as me - but I project my voice to reach them. I extend an auditory and visual (and olfactory!) presence around me. So a camera for gesture control, or a microphone for voice recognition, can be activated beyond arm's reach.

5) Remote: At this point, we're talking about things that are no longer in the immediate physical environment: out of sight and hearing. Perhaps another room, perhaps another city, perhaps halfway round the world - perhaps in a virtual environment. This is your fundamental telepresence or virtual reality.

In a handy image-based form:

It's not very well thought out - a first iteration rather than a well-founded theory - but I find it helpful in puzzling through a few things. One of these is as a simple way of noting how physically close a piece of technology has to be for me to form an assemblage with it. An insulin pump or pacemaker must be integral; glasses and clothing must be in contact; a smartphone must be reachable; a screen or voice control must be proximate. I can't form an assemblage with anything remote except through an intermediary - for example, I can't talk to someone on the other side of the world except by getting a phone within reach.

The other is that we can locate the range over which we project different senses (seeing, feeling, hearing something outside the range of our usual senses) and project action (pushing or gripping something we otherwise couldn't; projecting our voice further than usual), and also the relative location of the device in question. For example, a "feature phone" (a mobile phone that isn't a smartphone) requires touch - it needs to be in physical contact with me for me to operate it - but it allows me to hear and speak to someone miles away. Provided, of course, that they also have a phone of some sort.

To what extent am I projecting myself in using such a phone? Probably not much: I don't really regard myself as being in proximity with someone I call. What if I make a video call using Skype or Adobe Connect or FaceTime? Am I projecting myself then? What if I'm a surgeon using Da Vinci? Am I projecting myself into the patient? There's a whole bunch of questions about what constitutes "self" there, but I feel like the distinction between different types of distance is useful.

Anyway, I just thought I'd put it up there. It's a work in progress, and I'll have a stab at mapping out some examples to see if it's useful. As always, thoughts are welcome!

Tuesday, 29 August 2017

Month in Review: August 2017

It has been a busy and (fortunately!) productive summer. The corollary to doing much and getting little done (or at least, little finished!) is that at some point things begin to fall off the To Do list. I'm not at the stage of having everything done, but I'm certainly in the "closing" stage of my summer To Do list. Things that have slowly trooped on are coming to a close: MSc and summer projects finished; dissertations and resits marked; a review written; an exam and a conference paper almost finished; handouts printed... a big grant getting dangerously close to being signed off; big chunks of analysis software completed; PSAT calibration underway. Still a lot of open tasks, but I'm getting there.

I gave my talk at the Superposition on Humans, Robots and Puppets (with Anzir Boodoo and Samit Chakrabarty): it was a really good event and it got me thinking for the Augmenting the Body project. I finished reviewing Elizabeth R. Petrick's Making Computers Accessible for Disability and Society: the book is a history of the development of accessibility features in personal computers from roughly 1980 to 2000, rather than a "how to" guide. I've written a new marking scheme for a module, and been setting up an industry-linked project for our students.

In September, I'll mostly be gearing up for the start of term: we're less than four weeks away from the start of teaching now; one of those weeks is freshers' week, and another I'll mostly be in Sheffield for the N8 Robotics and Autonomous Systems Student Network launch and AAATE 2017 (where I'll be chairing a LUDI session on inclusive play). I want to get my exam written before term starts: I've written three of the five questions. And I need to prepare dissertation topics.

Busy times! I'll try to take stock of exactly what I got done over the summer at the end of next month, as term begins...

Wednesday, 16 August 2017

Humaniteering - What, If Anything, Can Engineers Gain from Working with the Humanities and Sociology?

I'm working on a conference paper about breaking down barriers between disciplines, and (as always) I thought the blog seemed like a good place to think out loud on the subject before getting down to brass tacks in the paper itself. So bear with me: I'm literally thinking out loud here (if, as my inner pedant feels compelled to point out, we understand "out loud" to mean the clack of my keyboard!).

Anyway, I work with people from a variety of disciplines. I keep meaning to sit down and draw up a diagram of all the people of different disciplines that I've worked with in a research context over my twelve years in academia - either by writing a paper or a grant (successful or otherwise) or co-supervising a PhD student. Perhaps I will, but that's a little bit by-the-by. For now, let me just see if I can rattle off a quick list of my common collaborators outside engineering here at Leeds University:

English/Medical Humanities: Stuart Murray (Augmenting the Body), Amelia de Falco (Augmenting the Body)
Ethics: Chris Megone (PhD supervision), Rob Lawlor (PhD supervision), Kevin Macnish (Tracking People)
Health Sciences: Justin Keen (Tracking People)
Law: Anthea Hucklesby (Tracking People)
Psychology: Mark Mon-Williams (too many links to count!), Richard Wilkie (ditto)
Rehabilitation Medicine: Bipin Bhakta (MyPAM, iPAM, PhD supervision), Rory O'Connor (MyPAM, PhD supervision), Nick Preston (MyPAM)
Sociology and Social Policy: Angharad Beckett (Together Through Play, LUDI)
Transport Studies: Bryan Matthews (WHISPER)

The list grows longer if you include the engineers, those who work outside Leeds Uni (Andrew Wilson and the cognitive archaeology crew being the most obvious) and the PhD students who have been involved in these various streams. The links to Psychology (motor learning, neuroscience), Health Sciences (health economics and governance), Rehabilitation Medicine (rehabilitating people) and Transport Studies (assistive technology for navigation) should be pretty obvious. At the end of the day, these represent the "professional customers" (as distinct from the end users - also an important group, but not one that can easily be captured as an academic discipline!) of the technology that we're building, and engaging with these disciplines is important if we want to be designing the right things and verifying that our devices actually work (think of the V-model of systems engineering - we want to make sure we're converting needs into technical requirements properly, and properly verifying the end result). Ethics and Law might also seem obvious - we don't want unethical or illegal technology (that's a massive oversimplification, but engineering ethics and the challenge of keeping the law up to date with technological development are a big deal at the moment, and you can see why engineering researchers might want to get involved with ethicists to discuss the ethical implications of what they do). Why, though, engage with people from English or Sociology, other than a monomaniacal desire to collect cross-disciplinary links? Where does Engineering cross over with these disciplines?

Caveat Lector
As ever, let's have the customary warning: I'm an engineer, and I speak from an engineer's perspective (an academic, mechanical engineer's perspective at that), so I can only tell you about my perception of these disciplines. I may be mistaken; I'm certainly speaking from only partial exposure to them. With that out of the way, let's move forwards.

An Engineering Imagination
Of course, the fact that I named my blog (fully four years ago!) after "The Sociological Imagination" by C. Wright Mills perhaps suggests some rationale for this interest. In the very first post on this blog, I set out my stall by saying:
"Mills was interested in the relationship between individual and society, and noted that social science wasn't just a matter of getting better data, or analysing it more effectively. Measurements are filtered through a whole set of social norms, and individual assumptions and biases. They colour the way we look at the world, and are often deeply embedded in the methods that we used... Certainly it applies to engineering: at a fundamental level, what engineers choose to devote their time and energy to (or sell their skills for)... It's not just about what we engineer, but about the way we engineer it: decisions made in developing products and systems have huge implications for their accessibility, use and consequences (both intended and unintended)."

And I revisited it last year in my post on Who Are We Engineering For?, noting the challenge of helping engineers to address four key questions:
  1. To what extent should engineers be held accountable for the "selective enabling" of the systems and technologies they devise?
  2. To what extent do engineers have a responsibility to ensure the continuation of the species by, for example, preventing asteroid strikes or ensuring that humanity is able to colonise other planets?
  3. What are the responsibilities of engineers in terms of steering human evolution, if the transhumanist view is correct?
  4. How do we prioritise which problems engineers spend their time solving? Market forces? Equity? Maximising probability of humanity surviving into the posthuman future?
And I concluded that:
"perhaps an Engineering Imagination is a useful tool - being able to look at a system being designed critically, from the outside, to view its relationship with the norms and culture and history it will be deployed in."
That, I think, is the key issue. There are technical things that can be learned from sociology - rigour in analysing qualitative data, for example - but there's something more significant. One of the problems in engineering is a focus on engineering science rather than engineering practice: learning the technicalities of how to model the behaviour of the world without any thought to what that means in practice. The challenge is that it's easy to say engineers should be more aware of the implications of their technology - the big question is how we do that. How do you put a finger on the Engineering Imagination? What does it mean in practice? That, I think, is where Sociology and the Humanities come in.

The Tricky Business of Being Human
Reading up on the Posthuman (I finished Braidotti, by the by - more on that in the next month or so!) makes me a little cagey about using the term human, but it's a convenient shorthand and I don't want to get caught up in lengthy discussions around self-organising systems, the limits of the body, anthropocentrism and humanism. Anyway, the point about the Humanities and Sociology is that they deal with people and their positions within complex social relationships - that link between the personal and the broader "milieu", as the Sociological Imagination puts it. This applies in two ways in engineering: to stakeholders (users, most obviously, but they can be a multiplicity) and to the engineers themselves. The stakeholders in an engineering project are neither idealised "Vitruvian" individuals independent of the world around them, nor an amorphous statistical mass of aggregate data. And the engineers in turn have tastes, backgrounds, personalities, histories, families, and find themselves enmeshed in the social processes of an organisation and a project management process. They may not all be "engineers" either: a huge range of people are involved in product development, and even the boundaries of what is being developed can be porous.

I don't think that's a controversial argument: I have yet to hear anyone make the case that engineering is a pure, objective process that leads to a single unquestionably right answer. Most of the challenge in engineering is in mapping complex, messy real world problems into forms that you (as the engineer) have the capability to solve. The "fuzzy front end" and "wicked problems" are well-recognised challenges. And the dynamic nature of engineering problems means that these don't just apply at the start of the process. You don't just characterise once and have done with it - you're perpetually having to loop back, adjust expectations, update requirements, work with what you have. It's like user centred design - you don't just ask the user what they want and then go off and make it. You have to keep checking and course-correcting. Sometimes people don't want what they say they want. Or a product takes so long to develop that it solves a problem everyone used to have five years ago, but not any more.

This is like Donald Schön's Reflective Practitioner - constantly proposing moves, reflecting on the outcome and reframing the problem in light of the result. It's this process that I hope working with the Humanities and Sociology can help with in engineering terms. It's partly about having the concepts and frameworks to process this; partly about methodological tools that help incorporate it into the process. Engineers are people, with all the frailties and limits that implies - Michael Davis, in his essay "Explaining Wrongdoing", talks of microscopic vision (a concept that my one-time PhD student Helen Morley highlighted to me): the idea that expertise encourages a narrow focus (knowing more and more about less and less...) at the expense of a broader view of consequences. This dovetails beautifully with the notion of selective enabling and design exclusion, but also with the Collingridge Dilemma: the difficulty of foreseeing the side effects of a new technology until it is so widely embedded that it's hard to change.

Which isn't to say that we should be abandoning rational judgement and analysis - just that we need to make sure we're framing problems correctly, and that we're aware of the limits of what we bring to the analysis. I don't know how all this is going to help - that's one for the humaniteers (as I like to call them) and the sociologists to answer.

Monday, 31 July 2017

Month in review: July 2017

I've shifted to monthly, rather than weekly, reviews now: that (hopefully!) allows me to hit my two-posts-a-month target, with one "non-review" post each month. Hence, today is the day to review July.

July is a funny month. I've mentioned before the problem of "doing much but getting nothing done", and July often exhibits this. You work furiously, but nothing of substance gets ticked off the To Do list: there's just a load of half-finished tasks. Which, while frustrating, isn't a real problem: I make a point of trying to run multiple tasks in parallel, so that teaching prep continues over the summer, freeing up time for more research in term time.

I've lately become a fan of the Pomodone app, a web-based implementation of the Pomodoro technique, which means working on a task for 25 minutes at a time. The nice thing about it is that not only does it sync with Wunderlist, where I keep my To Do list, but it also logs how much time you've spent on each task, so you can at least see progress. Granted, time spent isn't an exact proxy for progress on a task, but unless you want to micromanage your To Do list to the nth degree, it's at least a satisfying alternative to actually being able to tick the big tasks off.

So, what tasks have been underway this month? Well, I have been reviewing Elizabeth Petrick's Making Computers Accessible for Disability and Society (reading done; writing underway); I've prepared a presentation on Humans, Puppets and Robots for the Superposition, to be delivered on the 2nd of August. I've been calibrating PSAT and looking at extending it to reach-to-grasp movements; I've rejigged my reach-to-grasp model to accommodate time variations in dynamic behaviour; I'm rewriting a marking scheme for my Level 2 undergraduate module; I've been preparing my handouts for next year (I like to get them printed by the end of July, just to avoid the temptation to keep revising them as term approaches); I've been supervising two undergraduate students who are working on the MagOne force sensor for grip purposes (first iteration due this week); preparing for the Tracking People network finale in November; developing the PACLab technology roadmap; attending graduations; attending the EPS Conference; supervising PhD students and an MSc student; prepping a paper for submission; and working on a grant proposal.

Yeah, that'll keep me busy, alright! Thankfully, August will see things getting ticked off the list. Good fun!

Tuesday, 25 July 2017

Robots, Humans, Puppets: What's the Difference?

I've agreed to give a talk for the Superposition on the 2nd of August on the subject of "Humans, Puppets and Robots: What's the Difference?". This is part of their ASMbly talks and events, which bring together an Artist, a Scientist and a Maker to discuss a particular topic (you can get tickets via Eventbrite, if you so wish!). In this case, the Artist is the excellent Anzir Boodoo (puppeteer of Samuel L. Foxton), the Scientist is Samit Chakrabarty (who specialises in motor control via the spine) and the Maker is... well, me. Unsurprisingly, Anzir will be talking Puppets, Samit will be talking Humans, and it falls to me to talk Robots. As is my custom, I thought I'd use the blog as a handy place to work out my thoughts before putting the presentation together.

The main question that we're looking at is how Puppets, Humans and Robots are similar and how they are different: the clue's in the title of the talk. This is a really interesting question. I've often thought about the human vs robot link. It's something that crops up a lot in my line of work, especially when you're looking at how humans and robots interact and when the robot has to use force feedback to help guide human movement. Samit is particularly interesting in this area, because of his modelling of human motor control as an electrical circuit. The link between robots and puppets, though, has been particularly interesting to reflect on, as it ties in with some of my recent thoughts about the Tracking People project, and about algorithms as a set of pre-made decisions. I mean, what is a computer program but a kind of time-delayed puppetry? By that token, a robot is just a specialised type of puppet: at least until Strong AI actually turns up.

I thought I'd break the talk down into four sections:

1) What is a Robot?
2) How do Robots Act?
3) How do Robots Sense?
4) Algorithms: Robots are Stupid

Let's take them in turn.

What is a Robot?

For all that we hear a lot about them, we don't really have a good definition of what constitutes a robot. I work in robotics, so I see a lot of robots, but I'm not sure I have a good sense of what the average person thinks of as a robot. iCub (seen here learning to recognise a chicken during our Augmenting the Body visit to Sheffield Robotics last year) probably comes pretty close to what I imagine most people think of as a robot:


A sort of mechanical person, though the requirement to look like a human probably isn't there - I mean, most people would recognise R2-D2 (full disclosure - R2-D2 remains what I consider the ideal helper robot, which may say as much about my age as its design) or, more recently, BB-8 as a robot just as much as C-3PO. Perhaps the key feature is that it's a machine that can interact with its environment, and has a mind of its own? That's not a bad definition, really. The question is: how much interaction and how much autonomy are required for an item to become a robot? To all intents and purposes, the Star Wars droids are electromechanical people, with a personality and the ability to adapt to their environment.

The term robot originates from the Czech playwright Karel Čapek's play Rossum's Universal Robots (apparently from the Czech word for forced labour, robota). In this play, the robots are not electromechanical but biomechanical - though still assembled. This draws an interesting link to Samit's view of the human body, of course. Perhaps one day we will have robots using biological components; for the time being, at least, robots are electromechanical machines. Yet there are lots of machines that do things, and we don't consider them robots. A washing machine, for example. A computer. What sets a robot apart?

Well, for starters we have movement and the ability to act upon the environment. A computer, for example, is pretty complex, but its moving parts (fans and disc drives, mostly) are fairly limited, and don't do much externally. It doesn't act upon its environment, beyond the need to pull in electricity and vent heat. So we might take movement as a key criterion for a robot. We might wish to specify complex movement - so a washing machine that just spins a drum wouldn't count. No, there needs to be some substantial interaction - movement, or gripping.

We can also differentiate an automaton from a robot - something that produces complex movements, but only in a pre-specified order. It carries on doing the same thing regardless of what happens around it. A wind-up toy might provide complex movement, for example, but it wouldn't be a robot. We expect a robot to react and adapt to its environment in some way.

This brings up four handy conditions that we can talk through:

1) A robot is an artefact - it has been designed and constructed by humans;
2) A robot can act upon its environment - it possesses actuators of some form;
3) A robot can sense its environment - it has sensors of some form;
4) A robot can adapt its actions based upon what it senses from its environment - it has an algorithm of some form that allows it to adapt what its actuators do based upon its sensors.

The first of these doesn't require further discussion (except in so far as we might note that a puppet is also an artefact, whereas a human is not), but let's take a look at each of the others in turn.

Actuators  - how does a robot act upon its environment?

Actuators imply movement - a motor of some form. A robot could also have loudspeakers to produce sound, or LEDs to produce light, all of which can be intelligently controlled - but so can any computer or mobile phone. So I'll focus on actuators that produce movement.

It's worth noting that actuation has two ingredients: some power source, which the actuator converts into mechanical power in the form of movement; and some control signal that tells it how much output to produce. Actuators can be linear (producing movement in a straight line) or rotary (spinning round in a circle). The power source is often electricity, but can be pneumatic (using compressed air) or hydraulic (using liquid).
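To put some flesh on "a control signal that tells it how much output to produce", here's a minimal sketch in Python of a demanded speed being turned into a PWM duty cycle for a motor driver. The clamping and the simple linear mapping are illustrative assumptions on my part, not a description of any particular motor.

    # Minimal sketch: commanding a rotary actuator with a control signal.
    # The actuator's power comes from its supply; the "control signal" here is a
    # PWM duty cycle setting how much of that power becomes movement.
    # The linear speed-to-duty mapping is an illustrative assumption.

    def speed_to_duty_cycle(speed_fraction):
        """Map a demanded speed (0.0 = stopped, 1.0 = full speed) to a duty cycle in percent."""
        speed_fraction = max(0.0, min(1.0, speed_fraction))  # clamp out-of-range demands
        return speed_fraction * 100.0

    def command_motor(speed_fraction):
        duty = speed_to_duty_cycle(speed_fraction)
        print("Set PWM duty cycle to {:.0f}% on the motor driver".format(duty))

    command_motor(0.5)   # half speed
    command_motor(1.2)   # an out-of-range demand is clamped to full speed

The point is simply that the actuator itself is dumb: all the interesting work is in deciding what number to send it.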

The mechanical output can then be adapted using all kinds of mechanisms - attached to wheels or propellers to provide propulsion; attached to a four-bar linkage to get more complex motions, such as moving parallel grip surfaces so that a robot can grip; or attached to cables to drive more complex kinematic chains (for example, the Open Bionics ADA Hand).

Apart from the complex mechanism design, this is fairly straightforward (thanks, it is worth saying, to all the efforts of those who put the hard work into developing those actuators). The challenge lies in getting the right control signal. That's what differentiates robots from automata. Automata have actuators, but the control signal is pre-determined. In a robot, that control signal adapts to its environment. For that, we need the other two elements: sensors and a decision-making algorithm to decide how the system should respond.

Sensors - how does a robot sense its environment?

So, a robot has to have some way of detecting its environment. Sensors can take a huge variety of forms, but since electrical signals are communicated as voltages, anything that produces a voltage or changes its resistance (which, thanks to the magic of the potential divider, can be used to change a voltage) can be measured electronically, and a huge array of sensors are available for this purpose. A few of the most obvious:

Sense of Balance - Accelerometer/Gyroscope: An accelerometer gives a voltage proportional to linear acceleration along a given axis. Since gravity produces a downward acceleration, this can be used in a stationary object to detect orientation; it will get confused if other accelerations are involved (for example, if the accelerometer is moved linearly). A gyroscope, on the other hand, measures the rate of rotation, which can be integrated to track changes in orientation. With these two, the robot immediately has some sense of balance and inertia - I guess akin to the use of fluid in the inner ear.
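As a rough illustration of how gravity gives a stationary accelerometer a sense of orientation, here's a short sketch; the axis labels, sign conventions and the assumption that gravity is the only acceleration present are mine, for illustration only.

    import math

    def tilt_angles(ax, ay, az):
        """Estimate pitch and roll (in degrees) from accelerometer readings, assuming
        the only acceleration acting on the sensor is gravity (i.e. it is stationary)."""
        pitch = math.degrees(math.atan2(ax, math.sqrt(ay ** 2 + az ** 2)))
        roll = math.degrees(math.atan2(ay, az))
        return pitch, roll

    print(tilt_angles(0.0, 0.0, 1.0))   # lying flat: no pitch, no roll
    print(tilt_angles(0.5, 0.0, 0.87))  # tipped over: roughly 30 degrees of pitch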

Proprioception - Potentiometers: Linear and rotary potentiometers change their resistance as a function of linear or angular position, allowing the robot to detect the actual position of any joints to which they are attached (as opposed to where those joints are supposed to be). In this way, the robot can know when something has perturbed its movement (for example, one of its joints has been knocked, or has bumped into a wall). Encoders are a more advanced version of this.

Touch - Force Dependent Resistors: As their name suggests, Force Dependent Resistors change their resistance based on the amount of force or pressure they experience. This is useful for telling when an object or barrier has been encountered - but even a simple switch could do that. The benefit of a force dependent resistor is that it gives some indication of how hard the object is being touched. That's important for grip applications, where too little force means the object will slip from the grasp, and too much will damage the object.

Temperature - Thermistors: A thermistor changes its resistance according to the temperature applied to it, providing a way of measuring temperature.

Light - Light Dependent Resistors: A light dependent resistor changes its resistance according to how much light reaches it. In this way, a robot can know whether it is in darkness or light.

Distance - Ultrasonic/Infrared Distance Sensors: These return a voltage based on how long an ultrasonic or infrared signal takes to bounce off an object in front of them. In this way, the robot can be given a sense of how much space is around it, albeit not of what is filling that space, and can be equipped with sensors that stop it from bumping into the objects around it.

Hearing - Microphones: Microphones are a bit more complex, but they produce a voltage based on the soundwaves that arrive at them. This is the basis of telephones and recording mics, and can be used for simple applications (moving towards or away from a noise, for example) or more complex ones (speech recognition is the latest big thing for Google, Apple and Amazon).

Vision - Cameras: Computer vision is a big area, and one that is currently developing at a rapid pace. Object recognition is tricky, but can be done - face recognition has become extremely well developed. In this way, a robot can recognise whether it is pointing towards a face, for example, or can be trained to keep a particular object in the centre of its vision.

There are a wealth of others (magnetometers to detect magnetic fields; GPS location tracking; EMG to trigger prosthetics from muscle signals), but these are the most common. Putting them together, a robot can gather quite complex information about its environment, and about the position of its constituent parts within it. The challenge, of course, is in making sense of this information: all a sensor provides is a voltage that tells you something about a property near that sensor.

You can get some basic stimulus-response type behaviour with simple circuitry - a potential divider that turns on a light when it gets dark outside, for example. The real challenge is in how to integrate all this information, and respond to it in the form of an algorithm.
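Since the potential divider is doing a lot of quiet work in all of the above, here's a worked example of that light-sensing case: a light dependent resistor in series with a fixed resistor across a supply, with the measured voltage converted back into a resistance and then into a crude light/dark decision. The component values and the darkness threshold are made-up numbers for illustration.

    # Worked example of the potential divider: a light dependent resistor (LDR)
    # in series with a fixed resistor across a supply voltage.
    # Component values and the dark threshold are illustrative assumptions.

    V_SUPPLY = 5.0             # volts
    R_FIXED = 10_000.0         # ohms, fixed resistor in series with the LDR
    DARK_THRESHOLD = 50_000.0  # ohms; the LDR's resistance rises as light falls

    def divider_voltage(r_ldr):
        """Voltage measured across the fixed resistor: V = Vs * R_fixed / (R_ldr + R_fixed)."""
        return V_SUPPLY * R_FIXED / (r_ldr + R_FIXED)

    def ldr_resistance(v_measured):
        """Invert the divider equation to recover the LDR's resistance from the measured voltage."""
        return R_FIXED * (V_SUPPLY - v_measured) / v_measured

    def is_dark(v_measured):
        return ldr_resistance(v_measured) > DARK_THRESHOLD

    print(divider_voltage(r_ldr=5_000.0))  # bright: low LDR resistance, about 3.3 V measured
    print(is_dark(v_measured=0.5))         # 0.5 V implies about 90 kOhm, so True: turn the light on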

Algorithms: Robots are Stupid

Although we hear a lot about artificial intelligence and the singularity, robots are actually stupid and literal-minded: they do exactly what you tell them, and nothing more. They don't improvise, they don't reinterpret, they don't imagine: they mechanically follow a flowchart of decisions that basically take the form "if the sensor says this, then the actuator should do that". I'm simplifying a lot there, but the basic principle stands. They can go through those flowcharts really quickly, if designed properly, and perform calculations in a fraction of the time it would take a human; they might even be able to tune the thresholds in the flowchart to "learn" which actuator responses best fit a given situation - but that "learning" process has to be built into the flowchart. By a human. The robot, itself, doesn't adapt.

Now, AI research is advancing all the time, and one day we may have genuinely strong Artificial General Intelligence that can adapt itself to anything, or at least to a huge range of situations. Right now, even the best AI we have is specialised, and has to be designed. Machine learning means that the designer may not know exactly what thresholds, or what weights in a neural network, are being used to, say, recognise a given object in a photograph - but they had to design the process by which that neural network was tuned. As we increasingly depend on black-box libraries rather than writing everything from scratch, perhaps one day robots will be able to do some extremely impressive self-assembly of code. For now, robots learn and can change their behaviour - but only in the specific ways that they have been told to.

So, an algorithm is basically a set of pre-made decisions: the robot doesn't decide, the designer has already made the decisions in advance and told the robot what to do in each situation. Robots only do what you tell them to. Granted, there can be a language barrier: sometimes you aren't telling the robot what you thought you were, and that's when bugs arise and you get unexpected behaviour. But that's not the robot experimenting or learning: it's the robot following literally what you told it to do. This also means that as a designer or programmer of robots, you need to have foreseen every eventuality. You need to have identified all the decisions that the robot will need to make, and what to do in every eventuality - including what to do when it can't work out what to do.
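To make the "set of pre-made decisions" idea concrete, here's a minimal sketch of that kind of flowchart, reusing the grip-force example from the sensors section. The thresholds, the adjustment step and the fall-back behaviour are all invented for illustration; the point is that every branch, including the "I don't know what to do" branch, was written by a human in advance.

    # A robot's "decisions" as a pre-made lookup: read a sensor, pick a branch,
    # drive an actuator. All thresholds and responses below are illustrative.

    SLIP_THRESHOLD = 2.0    # newtons: below this, the object may slip
    CRUSH_THRESHOLD = 8.0   # newtons: above this, the object may be damaged
    GRIP_STEP = 0.5         # newtons of adjustment per cycle

    def decide_grip_adjustment(measured_force):
        """One pass through the flowchart: returns the change to apply to the grip force."""
        if measured_force is None:          # sensor failure: the designer must decide this case too
            return 0.0                      # hold still (and, in a real robot, raise an alarm)
        if measured_force < SLIP_THRESHOLD:
            return +GRIP_STEP               # squeeze harder
        if measured_force > CRUSH_THRESHOLD:
            return -GRIP_STEP               # ease off
        return 0.0                          # within the band: leave the grip alone

    for reading in [1.2, 5.0, 9.3, None]:
        print(reading, "->", decide_grip_adjustment(reading))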

Of course, robots can have different degrees of autonomy. For example, a quadcopter is not autonomous. It makes lots of local decisions - how to adjust its motors to keep itself level, so that the user can focus on where they want it to go, rather than how to keep it in the air - but left to its own devices, it does nothing. By contrast, a self-driving car is required to have a much greater level of autonomy, and therefore has to cover a much broader range of eventualities.

Thus, there is always a slightly Wizard-of-Oz situation: a human behind the robot. In this sense, robots are like puppets - there is always a puppeteer. It's just that the puppeteer has decided all the responses in advance, rather than making those decisions in real time. What's left to the robot is to read its sensors and determine which of its pre-selected responses it's been asked to give.

There is a side issue here. I mentioned that robots can do calculations much faster than humans - but a given robot still has a finite capacity, represented by its processor speed and memory. It can only run through the flowchart at a given speed. For a simple flowchart, that doesn't matter too much. As the flowchart gets more complex, and more sensors and actuators need to be managed, the rate at which the robot can work through it slows down. Just to complicate matters further, sensors and actuators don't necessarily respond at the same speed as the robot can process the flowchart. Even a robot with the fastest processor and masses of memory will be limited by the inertia of its actuators, or by the speed at which its sensors can sample reliably.

One response to this is more local control: devolving control of the robot's sensors and actuators to lower levels. A good example of this is the servomotor, where a sensor is attached to the motor so that it knows its position and speed, and will try to maintain the position or speed specified by a central controller. This is handy because it frees the designer from having to implement steps in their flowchart to provide this control, which frees up capacity for other decisions; it also means that if something happens to perturb the actuator, it responds immediately, rather than waiting for the robot to work through to the relevant part of its flowchart.
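Heavily simplified, that local loop might look something like the sketch below: a proportional controller that keeps nudging the motor towards whatever position the central controller last asked for, independently of the main flowchart. The gain, time step and idealised motor response are toy assumptions, not a real servo's internals.

    # Toy model of a servomotor's local control loop: the central controller just
    # sets `target`; the servo's own loop keeps chasing it. The gain, time step and
    # idealised motor response are assumptions for illustration only.

    GAIN = 4.0   # proportional gain
    DT = 0.02    # seconds per control cycle (a 50 Hz loop)

    def servo_step(position, target):
        """One cycle of the local loop: move in proportion to the position error."""
        error = target - position
        velocity = GAIN * error          # proportional control
        return position + velocity * DT  # idealised motor: commanded velocity becomes movement

    position = 0.0
    target = 90.0  # degrees, set once by the central controller
    for cycle in range(50):
        position = servo_step(position, target)

    print(round(position, 1))  # close to 90 after a second of local corrections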

Humans, Puppets and Robots: What's the Difference?

Let's return to the motivating question, then. How is a robot similar to or different from a human or puppet?

There are some obvious similarities to humans, even if the robot is not itself humanoid. It has bones (in the form of its rigid members). It has actuators, which are effectively equivalent to muscles. It has sensors which respond to stimuli (analogous to the variety of receptors in the human body). It has a computer (a brain, of sorts) which runs an algorithm to decide how to respond to given stimuli. Finally, it communicates between sensors, actuators and computer through electrical signals. All of these are similar to a human. The difference, I guess, is that a robot is an artefact, lacks the self-repairing capacities of a human body, and its brain lacks the flexibility of human thought: a human has to pre-make all of the robot's decisions, whereas a human can make their own decisions in real time.

There are some obvious similarities between a robot and a puppet as well. Both are artefacts, and in both cases the decisions about how to act are taken by a human (in advance in the case of the robot, in real-time in the case of the puppet). Both lack the self-organising/self-repairing/automatically learning nature of a human.


                          Human            Puppet            Robot
Occurs…                   Naturally        Artificially      Artificially
Is…                       Biological       Mechanical        (Electro)Mechanical
Decisions are made by…    Self*            External human    External human
Decisions are made in…    Real time*       Real time         Advance
Sensors?                  Yes              No                Yes
Actuators?                Yes              No                Yes
Learning occurs…          Automatically    Not at all        As dictated by programmer

* At least, conscious decisions are. There are lots of decisions that are, as I understand it, "pre-programmed", so we might argue that on this front these decisions are also made in advance, and query whether the "self" is really making them.

Earlier, I said that robots can perform calculations much faster than humans, but actually, now that I think about it, I don't know if that's true. Maybe their flowcharts are just a lot simpler, meaning they can work through the steps faster? Simpler, but less flexible. Perhaps Samit will enlighten us.

Also, is a puppet a robot with a human brain? We can't really talk about a puppet without an operator, so is a puppet an extension of the operator? A sort of prosthetic?

I don't know - but I'm looking forward to exploring these issues on the night!

Thursday, 29 June 2017

2017.5: Mid-Year Review

It's halfway through the year, and this seems as good a time as any to take stock of where I've got up to as regards the goals I've set myself for 2017.

Let's take them in turn...

On the Blog 

* At least 24 posts - the same as last year: This is technically post 13, although I did have a slightly naughty "placeholder" post in April. Even so, that'd still make me on course for 24 posts this year if I keep up my current rate.

* At least 2 posts per month:
 Well, I failed that in the first month, though I have at least managed to average 2 posts a month, and 2 is my median and mode for posts per month!

* At least 4 non-review posts:
I've made three so far, with reflections on Deleuze and Guattari, Braidotti's The Posthuman (at least, the first half!), and some design challenges in tracking people.

Research
* Deliver the Tracking People and Augmenting the Body projects: Basically, done!
* Submit at least five grant applications as either PI or Co-I: I'm on three at the moment, with at least two more in the pipeline, so I should be good to go here.
* Submit at least two papers to high quality journals. Ouch. Haven't done this one yet. Need to actually start writing things up over the summer!
* Get the new iteration of FATKAT into experimental use: Done - or at least doing. My PhD student Latif is preparing to get some experiments underway, and there are plans to look at using it to instrument experimental tools.
* Get PSAT (the postural sway assessment tool) finished and field tested: Done! Two PSAT units have been in the field, and used to conduct over 280 trials now, with more planned. Hurrah!
* Adapt our grip model to address feedback and corrections: This is for the summer, I guess!

Other
* Get an "Engineering Imagination" discussion group up and running for postgraduate students in iDRO. 
* Make some inventions.
* Formulate a reading list for the Engineering Imagination.
Yeah, none of these have happened. I've got a big list of potential books to put together on my Engineering Imagination list, so perhaps this will be a blog post for the second half of the year.

So, What Next?
Well, over the summer, aside from all the usual admin and prep for next year, I have a few concrete things to do, particularly on the CPD front!

1) Get started with AnyBody: I've been talking about this with Claire Brockett for ages - linking up kinetic measures to postural data from the biomechanics simulator AnyBody - so I've finally taken the plunge and got it installed on my PC in anticipation of potential undergraduate projects. I'll be working through the tutorials this summer.

2) Learn I2C programming: I need this for the MagOne force sensor projects I'm supervising this summer (since the MagOne uses I2C to communicate). I understand the theory behind I2C communication, but I've never actually used it in anger. Time to change that!

3) Programme a Neural Network: Again, I "know" what a neural network is, and I know (in theory) how they work. What I've never done is actually worked through the process of implementing one, so time permitting, I'll be trying to get a simple one up and running on some example projects.

4) Get PSAT up and running: Two PSATs are in the field, which is great, but there are three more awaiting finalisation, PAT testing and calibration. That'll certainly keep me going.

Let's see how it goes, eh?

Saturday, 24 June 2017

4 Weeks Commencing the 23rd of May

This is always a busy time of year. Dissertations have to be read, vivas conducted, coursework marked, and there can be no (or very little) kicking the can down the road: marks must be ready for the external examiner to sign off at the final exam board. Almost every other deadline has some flexibility, but this one is absolute. So one way or another, the end of May and most of June is pedal to the metal. Such are the rhythms of the academic year.

In amongst this, as you will know if you follow me on Twitter, we have had the grand finale of the Augmenting the Body Sadler Seminar Series, with Margrit Shildrick returning for the second time in just under twelve months, this time talking about the human microbiome; and the third and final seminar for the Tracking People network, at least prior to the conference on the 9th of November which represents its culmination. Not to mention the Cognitive Archaeology meet-up I mentioned in my last post, and a Robotics at Leeds Away Day.

Plus, PSAT has now survived PAT testing, and two units have been out in schools this last week. We can officially claim to have a fleet! And two undergraduate students have started work with me for the summer, investigating applications for Pete Culmer et al's MagOne sensor in FATKAT and seating. I'm really excited about the potential there.

So what I'm saying is that it's been even busier than usual this year, albeit also more exciting (in a good way). And for the second year running, someone decided to throw a major vote into the mix. Never a dull moment, eh?