Tuesday 1 August 2017

The Lack in Machines - reflections on a philosophical panic

Only you and I
It means nothing to me
This means nothing to me
Oh, Vienna
- Ultravox, ‘Vienna’

1. Background

‘The feeling of an unbridgeable gulf between consciousness and brain process: how come this plays no role in reflections of ordinary life?’

- Wittgenstein, ‘Philosophical Investigations’ §412 (1963)


As has been widely reported, Facebook recently shut down an AI which was beginning to develop what has been called its ‘own language’. This led many to conclude that the eschatological horror of sentient AI was upon us, mankind’s imminent destruction at hand. This was wrong; but why?
Bad luck, Donaghy, machines are coming for your job


The spasmodic response has since been moderated by some intelligent commentary from Tom McKay, which established the real reason the machine was turned off. The experiment went wrong not because it was apocalyptic but because it was not meeting its goals. The idea was to create a negotiating machine which could replace a human negotiator. Bad news for buyers. Whilst this was initially successful, when the AI was set to talk to itself, it produced what looks like near-gibberish.


Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .

Creepy, then, but not threatening. This might frighten us, but we are not, if we believe Mr McKay, in danger from this AI.


Now, most people never think about AI, or about questions like the point at which something moves from processing to being. Indeed, in the sixty or so years since Wittgenstein asked the question at the top of this page, it has been asked regularly only by people doomed to do so, which is to say philosophers. Periodically, however, moments of philosophical panic emerge in which everyone gets stuck in. Well, misery loves company. I am going to consider why the panic occurred. Why do we find what the machines are saying creepy? Why does it panic us?


2. Getting the Categories Right - Singular or Plural?


Firstly, is it right to talk about what we are reading here as a transcript of a conversation between two entities? Put another way, is this ‘an AI’ or ‘two AIs’?


From reading what the developers say, it seems to be clearly the former. The machines are in fact one AI, much as I can set up a videogame to play against itself. There are not ‘really’ two brains, because they are running the same script, albeit (perhaps) with different settings. Now, it might be different if I developed two completely different AIs and set them against each other, but that has not happened here. So, given that their purpose (to divide the objects) is the same, and that they are processing the problem in the same way, I think we can dispense with loose talk of ‘Bob and Alice’. We do not imagine that ‘Mario’ and ‘Luigi’ are in competition for the hand of the Princess, after all. They are the same sprite with different colour hats. Although the machines appear to be in competition, they do not really need the objects being negotiated; indeed, there are no objects. What they need is a solution, an equilibrium. So does my calculator.
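For the unconvinced, here is a minimal sketch of the set-up as I understand it, in Python (the class, names and numbers are my own invention, not Facebook’s actual code): one negotiator script, written once, instantiated twice with different private valuations, grinding out a split of notional items.

from itertools import product

# Notional pool of items: nothing real is at stake in the 'negotiation'.
ITEMS = {"books": 2, "hats": 1, "balls": 3}

class Negotiator:
    """One script; instances differ only in their private valuations."""
    def __init__(self, values):
        self.values = values  # points awarded per item type

    def score(self, share):
        return sum(self.values[item] * n for item, n in share.items())

def settle(a, b):
    """Brute-force the split that best balances the books between two
    instances of the same class: equilibrium-seeking, not rivalry."""
    names = list(ITEMS)
    best, best_gap = None, float("inf")
    for counts in product(*(range(ITEMS[n] + 1) for n in names)):
        share_a = dict(zip(names, counts))
        share_b = {n: ITEMS[n] - share_a[n] for n in names}
        gap = abs(a.score(share_a) - b.score(share_b))
        if gap < best_gap:
            best, best_gap = (share_a, share_b), gap
    return best

bob = Negotiator({"books": 1, "hats": 3, "balls": 1})    # same class...
alice = Negotiator({"books": 2, "hats": 1, "balls": 2})  # ...different settings
print(settle(bob, alice))

Nothing in this is a rivalry between two beings; it is one routine balancing its own books, and no ball or book exists anywhere in the process.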


Thus a better analogy would perhaps be with a single body allocating resources internally so as to meet certain requirements, e.g. to ensure that the heart has enough energy to beat enough times to circulate the oxygen, and so on. What is being sought is a balancing of the books, in a way that only appears to us to be competitive because that is the metaphor through which we are viewing it. In fact, these calculations are just elaborated versions of set problems; the machines merely use picture language to represent them. But self-talk is a strange thing, and this raises the question: why should something’s being singular prevent it from being conscious? After all, you are yourself conscious, but you are also an individual. This is a fair question and deserves answering.


3. The Private Language Argument and the Uncanny



The question drives at what happens when someone thinks to themselves.


In the passage from the ‘Philosophical Investigations’ known as the ‘Private Language Argument’, Wittgenstein asks us to imagine explorers coming across a tribe whose people constantly spoke aloud to themselves when working alone. This, he says, would let the observer predict their decisions over what they were to do, to ‘see’ those decisions being worked through. Is this not what has happened here, even if the AI is singular rather than plural?


Facebook’s computer scientists were in exactly this position. They were able to watch the (single) AI moving towards equilibrium. What if the language used words in a novel way, though, as began to happen in the experiment? Then the standard of correctness would, says Wittgenstein, become just that ‘whatever is going to seem correct to me is correct. And that only means that here we can’t talk about correct’ (PI § 258). ‘Whatever works’ would be the only rule. Thus, he goes on to say, it really makes no sense to speak of ‘inner dialogue’, because such dialogue would lack all regulation. It is only when standards of correctness are applied, when we externalise the impressions we have formed and submit them to the ‘rules’ of a particular language game played with other beings, that our language becomes real.
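The point about ‘whatever works’ has a concrete counterpart in how such systems are trained. The following toy sketch (entirely my own, not the experiment’s code) shows a reward that depends only on the outcome an utterance secures; with no term rewarding recognisable English, a repetitive near-gibberish message that ‘works’ is exactly as correct as a grammatical one.

def reward(message, items_gained):
    """Toy reward: only the negotiated outcome counts.
    There is no penalty for drifting away from English."""
    return items_gained

candidates = ["i would like the balls", "balls to me to me to me to me"]
# Suppose, in the toy protocol, both messages secure three balls:
print([reward(m, 3) for m in candidates])  # [3, 3]: 'whatever works'

This is, reportedly, roughly what happened: the agents were never rewarded for staying within English, and so their shorthand drifted.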


Of course, I'm as freaked out as anyone by all this.
Of course, in this particular case the language used horrified the rule-following community of you and me. It seemed, in a strictly uncanny way, that the AI had come to life. This feeling of uncanny horror has long been observed. In 1906 the German psychiatrist Ernst Jentsch wrote ‘that a particularly favourable condition for awakening uncanny sensations is created when there is intellectual uncertainty whether an object is alive or not’. This is often used to explain the success of modern horror films about dolls come to life, such as ‘Annabelle’ (2014). We feel a deep dread when we cannot work out whether something is animate or not. This is what happened here.


4. Desire and Motive



Why should this ‘uncertainty’ emerge in this case? The answer has to do with desire and motivation.


We tend to define living beings as motivated. That is to say, they follow wants, desires, needs and so on. Wittgenstein is not the first writer to make this a problem for thought; indeed, his fellow Viennese Sigmund Freud thought and wrote of little else. In the late Wittgenstein, however, we are drawn repeatedly to the question of how language may not have the meaning it seems to have, which is relevant in this case. Wittgenstein’s method involves showing that wishes are very particular types of experience. They lead us to act, and we can furthermore develop wishes without objects, or only loosely related to them (PI § 437): a wish only ‘seems to know what will or would satisfy it’ (ibid., my emphasis) even if its object is absent or impossible. Wishes are thus remarkable facts of our existences, existences that can come to be dominated by feelings of non-satisfaction which take on a reality of their own for us.


The source of the confusion is best understood with reference to what Wittgenstein called ‘language games’ - games such as the AI here seems to play, although without criteria for ‘correctness’, and thus not games proper. To help us understand this, we can think of one specific type of language game: the play of children. Children at play may set up a shop and ‘negotiate’ over the stock. Perhaps it is a greengrocer’s; there they may ask for non-existent apples. A similar thing may happen on stage. In any case, the words,

‘“I’d like an apple” does not mean: I believe an apple will quell my feeling of non-satisfaction. This [the latter] utterance is an expression not of a wish but of non-satisfaction.’ - (PI § 440)

The child does not believe the ‘apple’ is linked to her happiness; her real wish, perhaps, is to continue the game.

The machines can be given the language of normal human life, including normal human wishing, in order to attempt to solve a type of equation whereby objects need to be divided according to rules. That is surely not very frightening. It does not mean they will then begin to wish at all, let alone with the complexity of which a three-year-old is capable. After all, other types of machines may be given names; boats are. Only magical thinking leads us to believe that the name brings with it a matching personality and matching desires. The real question is: what do the machines ‘want’? Nothing. They are slaves to their programming. Therefore they can have no dominant feeling of non-satisfaction such as we think we perceive in the script above.
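A last sketch (again hypothetical: the functions and phrasing are mine, not the experiment’s) makes the point about borrowed vocabulary. The ‘wanting’ words can be a mere rendering template applied to the optimiser’s output; swap the template and the ‘desire’ evaporates, while the underlying state is untouched.

def propose(share):
    """Render an allocation proposal in borrowed human 'wanting' language."""
    kept = [item for item, n in share.items() if n > 0]
    if not kept:
        return "you can have everything else"
    return "i want " + " and ".join(kept)

def propose_dryly(share):
    """The same internal state rendered without the vocabulary of desire."""
    return "allocation: " + ", ".join(f"{item}={n}" for item, n in share.items())

state = {"books": 2, "hats": 0, "balls": 1}
print(propose(state))        # i want books and balls  (sounds motivated)
print(propose_dryly(state))  # allocation: books=2, hats=0, balls=1  (plainly not)

The first rendering unsettles us; the second could not frighten anyone; the state beneath them is identical.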


Then why are we frightened by that dialogue? People did not freak out over the AIs when they were translating Spanish or calculating stock returns, when a different kind of equilibrium was being sought. I believe the answer may simply be found in the vocabulary used. Specifically, what is uncanny about that dialogue is that it appears to contain the germs of wanting and of lack. This gives rise to uncertainty over the status of the AI as ‘living’. To take the most commonly quoted, and most chilling, example:


Bob: i can i i everything else
Alice: balls have zero to me

The similarities with common, everyday expressions of negotiating language are a function of the AI’s original purpose, which was to pass a Turing test with human negotiators. Remnants of desire, the everyday name for non-satisfaction, thus lie in the script because they were necessary for communicating with us. ‘i can i i everything else’ and ‘balls have zero to me’, as statements, are too close to ‘I can take everything else’ and ‘balls mean nothing to me’. This is intolerable to us not because the AI borrows and adapts human symbol systems but because of which symbols it has kept. We cannot tolerate it because it appears that the machines have begun to want.

5. Conclusion


The machines have not, in fact, begun to want. This is a mistake in our perception of a set of symbols, a mistake caused by the AI’s adoption of the human language of desire, itself a remnant of an earlier stage of the experiment. In fact, the AI sought equilibrium because that is part of its code. It was slaved to the task. It remains an open question whether forcing something to act as if it wants things is enough to make that so, though scepticism in this regard seems sensible. However, the panic here has more to do with our psychology, specifically with what we find uncanny, than with any real and present threat.


References



Freud, S. (1919). The uncanny.
Griffin, A. (2017). Facebook robots shut down after they talk to each other in language only they understand. The Independent. Retrieved 2 August 2017, from http://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html
McKay, T. (2017). No, Facebook Did Not Panic and Shut Down an AI Program That Was Getting Dangerously Smart. Gizmodo.com. Retrieved 2 August 2017, from http://gizmodo.com/no-facebook-did-not-panic-and-shut-down-an-ai-program-1797414922
Wittgenstein, L. (1963). Philosophical investigations. Oxford: B. Blackwell.