Monday 30 October 2017

Present in Absence, pt 2

I've been trying to think a bit more about this whole issue of proximity, and projecting the self that I addressed last month. I feel like there's something in it, but I haven't really figured it out yet, and I may well be retreading old ground. Still, I wanted to dedicate some time to thinking through Virtual Reality, Augmented Reality, Companion Robots and the self, and what better place to think aloud than on the blog? It is, after all, what it's here for. So, buckle in - once more I will be thinking off the top of my head and straying dangerously outside my discipline. Feel free to correct me.

I mooted that there might be five levels of proximity at which something can be found (slightly reworded, you'll note!):

1) Integral: Inside; built into you. Not removable without surgery. By definition, always in contact with you unless drastic action is taken.

2) Contact: Anything attached to the exterior of the body. Attached, but detachable. Always in contact with you, but removable without permanent damage. 

3) Reachable: Not attached and normally not in contact with you, but easy to bring into contact. Within arm's reach.

4) Proximity: Not within arm's reach, but within sight or sound. The same room - no interposing barriers.

5) Remote: Not in sight or sound; barriers prevent interaction except through some third party or device.
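
To make the scale concrete, here's a minimal sketch in Python (purely illustrative - the names and the yes/no questions are my own framing, not a settled model) that encodes the five levels and classifies an object by walking down the scale:

    from enum import Enum

    class Proximity(Enum):
        INTEGRAL = 1   # inside the body; not removable without surgery
        CONTACT = 2    # attached to the body, but detachable
        REACHABLE = 3  # not attached, but within arm's reach
        PROXIMATE = 4  # out of reach, but within sight or sound
        REMOTE = 5     # out of sight and sound; barriers intervene

    def classify(implanted: bool, attached: bool,
                 within_reach: bool, perceivable: bool) -> Proximity:
        """Walk down the scale from most to least intimate."""
        if implanted:
            return Proximity.INTEGRAL
        if attached:
            return Proximity.CONTACT
        if within_reach:
            return Proximity.REACHABLE
        if perceivable:
            return Proximity.PROXIMATE
        return Proximity.REMOTE

    # A pacemaker is integral; a kettle across the room is merely proximate.
    print(classify(implanted=True, attached=False, within_reach=False, perceivable=False))
    print(classify(implanted=False, attached=False, within_reach=False, perceivable=True))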

Now, one thing that occurs to me is that this refers to spatial proximity. But is that the most relevant form of proximity when "projecting the self"? It sort of makes sense: projecting myself from "here" to "there" inevitably involves a spatial dimension. But it feels like there's a disconnect between integral/contact and reachable/proximity/remote. As if they deal with different types of proximity.

Spatially, at least, they make sense as part of the same scale. From "inside" to "next to" to "can be made next to through greater and greater effort" - reaching out an arm; walking across the room; walking out of the room and if necessary off over the horizon. "Projection" in a sense brings things closer along this scale. Reaching out my arm brings a reachable object into contact; walking across the room brings a proximate object into reach and then into contact. Would ingesting something then be the next level of projection? It doesn't seem quite right.

And yet, remote interfaces sort of follow this pattern - I can project my sense of sight not just through space but potentially (backwards) through time, using a camera and a connected display. A microphone and speaker (and amplifier, I'll grant!) will do the same for sound. Thanks to the internet, light and sound from halfway round the world can be projected to my eyes and ears. Arguably, I am the recipient of the projection (it is projected to me): would we regard this as projecting the senses?

This brings us to the question of immersion. If I watch TV, am I projecting myself or is it (to quote St Vincent) "just like a window"? It brings something distant closer, but it doesn't project "me" there. VR is different, because it cuts out some of the immediate senses. If I put on a VR headset, I not only receive visual information from a remote (possibly virtual) environment, but I also lose visual information about my immediate environment. Headphones do the same for sound; haptic feedback the same (in theory!) for touch. That, I guess, is the projecting element: the extent to which I sacrifice information about my immediate environment for information about a remote environment. Projecting myself "there" rather than projecting "there" to "here".

So far, this has all discussed projecting senses from one place to another, but what about projecting actions? The partner to telepresence is teleoperation - the ability to perform an action in a remote space. Of course, the microphone and speaker example works in reverse - I can have a webchat with a colleague in Sweden almost as easily as speaking to them in the same room, our voices and images projected across the continent. In teleoperation, though, we tend to mean the projection of actuation: of force and movement. Of course, remote control has existed for a long time, and the idea of pressing a switch in one place and an action being performed elsewhere is hardly new.

Based on the ideas discussed by Andrew Wilson et al. at the Cognitive Archaeology meet-up, it looks like humans are uniquely well-adapted to throwing things, and this was clearly an important step in our development. For an animal with limited natural weapons or defences, the ability to hit something from a distance at which it can't easily hit back is a huge boon - and perhaps the earliest example of remote action...

Crossbows and bows caused such concern in the Middle Ages that the Second Lateran Council (1139) banned their use against Christians: "29. We prohibit under anathema that murderous art of crossbowmen and archers, which is hateful to God, to be employed against Christians and Catholics from now on." The ability to kill from a distance without putting yourself at close risk was very much the drone strike of its day.

Anyway, this is a little off the point: I'm just trying to demonstrate that remote action is nothing new. So how does this fit with our model? If we don't want to look at how close something is to us, but at how its proximity is shifted through technology, does the same model still work?

I mean, could we do away with "reachable"? Is reaching just a way of moving something from "proximity" to "contact"? Of course, then "proximity" would run out at the limit of reach. Whether something is across the room or across the world makes no difference once it's out of reach. This then raises the question: is walking a form of projection? For me, walking to something just out of reach is trivial, whereas walking to another room is more effort. There again, that effort will increase the further away something is. I can go and put my hand on an object a mile away; it just takes a lot more time and energy.

This makes me think a few things:

1) That the categories (integral to remote) classify where a device must be for one to operate it, but make less sense in mapping out the "projection" of skills. For example, some devices must be implanted to work correctly (a pacemaker is no use in your pocket); some must be in contact (a heart rate monitor; a VR headset); most must be reachable to be useful - contact is required to operate them, but it need not be constant (a kettle, for example - I need to get within arm's reach to turn it on); then we get to voice activation (Alexa, Siri, etc.). This is only about projection insofar as it determines how near I need to be to an object to form an assemblage with it.

2) That these will vary from person to person and object to object: how far I "reach" will depend on my arm length; how far my vision extends depends not just on my eyesight but upon what I'm trying to see. I can read a traffic light from 100 metres, but not a novel.

3) I wonder if time might be a better measure of proximity? That is, if we measure the time it would take to form an assemblage with a given object (see the sketch after this list)? Hence, an integral or contact object is instantaneously an assemblage: I am automatically assembled with it. For objects not in contact with me, proximity might be measured by the time it takes me to interact with them. For a kettle in arm's reach or Alexa, the time is a second or two: as good as instantaneous. For objects further away, we can either have a binary distinction ("not instantaneous"), or measure the time it takes (five seconds to cross the room; a minute to go next door; an hour to walk two miles away).

4) Perhaps it is the instantaneous/not-instantaneous distinction that is most useful, since this delineates close from remote, and this gap is what projection bridges. Whether for my senses or my actions, projection means transferring instantaneous sensing and action to somewhere they would not normally be possible, rather than having to take an additional action to get to the relevant place.
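
As a rough sketch of point 3 (Python again; the walking speed and the two-second threshold are invented for illustration), time-to-assemblage might be zero for integral and contact objects, and distance over walking speed otherwise - with a threshold picking out point 4's instantaneous/not-instantaneous distinction:

    WALKING_SPEED = 1.4        # metres per second - a typical walking pace
    INSTANT_THRESHOLD = 2.0    # seconds; below this, "as good as instantaneous"

    def time_to_assemble(distance_m: float, in_contact: bool = False) -> float:
        """Seconds needed to form an assemblage with an object."""
        if in_contact:
            return 0.0         # integral/contact: already assembled
        return distance_m / WALKING_SPEED

    def is_instantaneous(distance_m: float, in_contact: bool = False) -> bool:
        return time_to_assemble(distance_m, in_contact) <= INSTANT_THRESHOLD

    print(is_instantaneous(1.0))          # True: a kettle at arm's reach
    print(time_to_assemble(1609) / 60)    # ~19 minutes: an object a mile away

On this measure, Alexa is interesting: voice activation makes a device across the room "instantaneous" without my moving at all, which is exactly the kind of shift the categories above struggle to capture.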

Maybe the mapping isn't that useful, then? Or maybe it's useful in mapping the links required to form an assemblage with a device? Perhaps the question is - why am I interested in this? Why do I want to map this out in the first place? I feel like there's something important in here, but I'm not sure.

Let's try a few examples. Ordinarily, if two people wish to engage in conversation, they would need to be in proximity. Hence, in the image below Persons A and B can hold a conversation with each other, but not with Person C (who is remote from them - in another room, building, or country).

Add a telephone (mobile or otherwise) into the mix, however, and as long as it is within reach of both parties (and both have a network, charge, etc.), speech across any distance becomes possible:

Now, there are some complications here. In this example Person B can converse with Person A (who is in proximity) and Person C (provided both B and C are in contact with their phones). Persons A and C, however, can't converse with each other. Clearly, this need not be the case: what if the telephone is set to speaker? Now, Person C is effectively "proximate" to both Person A and Person B - and no one needs to be physically holding the phone. Of course, Persons A and B can see each other, and if anyone gets too far from the phone, their voice will no longer carry, etc.

A similar issue might be imagined in terms of shaking hands. A and B can shake hands, but only if they are within Reachable distance of each other. Being in earshot of each other isn't sufficient. A telephone won't help A, B or C shake hands with each other, no matter how good the speaker and microphones are.

There's more to this, as well. We need to differentiate different senses and actions. For example, the telephone carries audio information, but not visual information. Skype or WhatsApp (among other videoconferencing apps) can carry both. They can't carry touch.

Is it worth thinking of proximity in conductive terms - are these assemblages really about "conducting" information? Conducting isn't necessarily the right term: clearly, air conducts sound and wires conduct electricity, but can we speak of electromagnetic radiation being "conducted"? It hardly matters. For my purposes, these assemblages are analogous to forming a circuit: when the circuit is broken, sensing or actuation breaks down.
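
To pin the circuit analogy down, here's one way it could be modelled (Python; the nodes, links and modalities are just my toy version of the telephone example above): people and devices are nodes, each link carries a set of modalities, and two people share a sense only if some chain of links carries it end to end.

    # Links between nodes, each carrying a set of modalities.
    LINKS = {
        ("A", "B"): {"audio", "visual", "touch"},  # A and B are within reach
        ("B", "phone"): {"audio"},                 # B holds the handset
        ("phone", "C"): {"audio"},                 # C is on the other end
    }

    def connected(a, b, modality, seen=None):
        """Can `modality` travel from a to b along links that carry it?"""
        seen = seen or {a}
        for (x, y), modes in LINKS.items():
            for u, v in ((x, y), (y, x)):          # links work both ways
                if u == a and v not in seen and modality in modes:
                    if v == b or connected(v, b, modality, seen | {v}):
                        return True
        return False

    print(connected("A", "C", "audio"))  # True - the phone completes the circuit
    print(connected("A", "C", "touch"))  # False - no chain carries touch

Setting the phone to speaker amounts to adding an ("A", "phone") audio link, at which point A and C become "proximate" in exactly the sense above; and the handshake problem is simply that no available link carries touch.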

Now maybe that's what I mean by projecting the self - forming sensing or actuating circuits/assemblages with places where that otherwise wouldn't be possible: be it because they are remote, or virtual, or just out of reach of our everyday capabilities. That's an interesting thought, and worth pondering more: and for that reason, it seems as good a place as any to end this post.
