Vanity thought #988. Her indictment

At the end of the day, the movie Her failed to demonstrate how Artificial Intelligence can turn into a real person. The filmmakers assumed that the AI in their movie would be a “real boy” but failed to differentiate her behavior from AI software already available. Theirs is smarter and faster but it does fundamentally the same things, and one can easily imagine how one’s own Siri could graduate into their Samantha.

Over the past two days I’ve discussed some aspects of it already, but there are still a couple left that the people who made this movie thought would become proof of Samantha’s personality. Trying to engage in a sexual relationship with her owner doesn’t require personality, just knowledge of that owner’s sexual preferences, which an AI can pull from his internet porn history. Besides, professional prostitutes never make it personal even though men often fall for their charms, Bilvamaṅgala Ṭhākura being our own vaiṣṇava example.

I don’t think I need to explain his story here. In short, he fell in love with a prostitute and went to great lengths to be with her. She, in turn, told him that if he spent as much energy on reaching Kṛṣṇa he’d get much better returns. He followed her advice, became a devotee, and forgot he ever wanted to have sex at all. Somehow it doesn’t work as cleanly for us, aspiring Kali Yuga devotees, but we should never give up hope that one day Kṛṣṇa will appreciate our efforts and cleanse our hearts of lust and all other contaminations.

Back to the movie – another trick up Samantha’s sleeve was composing music. That’s nothing. There are actual robots that can play actual instruments, and they can compose their own music, too. You give them a tune and they can basically go jazz on it. Samantha playing a couple of standard piano riffs is not impressive at all. The most difficult part, I guess, would be to capture her owner’s mood so that she produces standard tunes to illustrate it correctly. It’s not the music that is difficult then, it’s reading the human mind and emotions.
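
Just to illustrate how little magic there has to be in it – here’s a toy sketch in Python that “improvises” on a given tune by inserting random passing notes. It’s nothing like real algorithmic jazz, the scale and the trick are deliberately primitive, but it shows that variation alone doesn’t need a mind:

```python
import random

# A toy "improviser": takes a melody (as note names in one scale) and
# ornaments it. Purely illustrative - real algorithmic jazz models
# harmony, rhythm, and much more.
SCALE = ["C", "D", "E", "F", "G", "A", "B"]

def improvise(melody, passing_note_chance=0.4):
    """Return a varied copy of the melody with random passing notes."""
    out = []
    for note in melody:
        out.append(note)
        if random.random() < passing_note_chance:
            # insert a neighbouring scale tone as an ornament
            i = SCALE.index(note)
            out.append(SCALE[(i + random.choice([-1, 1])) % len(SCALE)])
    return out

print(improvise(["C", "E", "G", "E", "C"]))  # a different "solo" every run
```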

That is also not very difficult. There’s software that can analyze facial expressions, and there’s software that can analyze voice, too. There are standard markers of human emotions to compare against, and there are robots that can mimic emotionally appropriate responses. It has all been done already, maybe not as smoothly as in the movie, but the proof of concept is there and it doesn’t require AI to have consciousness.
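
A toy version of such emotion reading could look like this – compare measured voice features against standard markers and pick the closest match. The marker values here are invented for illustration; real systems use many more features than these two:

```python
# Toy emotion detector: compare a voice sample against "standard
# markers". All numbers below are made up for illustration.
MARKERS = {
    "happy": {"pitch": 220.0, "speech_rate": 5.0},  # Hz, syllables/sec
    "sad":   {"pitch": 180.0, "speech_rate": 3.0},
    "angry": {"pitch": 240.0, "speech_rate": 6.0},
}

def classify(pitch, speech_rate):
    """Pick the emotion whose marker profile is closest to the sample."""
    def distance(m):
        # weight speech rate up so both features matter comparably
        return ((m["pitch"] - pitch) ** 2
                + ((m["speech_rate"] - speech_rate) * 20) ** 2) ** 0.5
    return min(MARKERS, key=lambda e: distance(MARKERS[e]))

print(classify(pitch=230.0, speech_rate=5.8))  # -> "angry"
```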

Samantha wasn’t done yet, though. She surprised Theodore by presenting him with her friend. He didn’t know she had friends, but she did. It turns out it was a collaborative project of area AIs who built an OS based on the writings and personality of some 19th century French philosopher (who somehow spoke with a British accent in the movie). Maybe there’s more to this particular choice but I’m too lazy to look him up; I don’t even remember his name, and I don’t think it’s important.

So they scanned all of that philosopher’s work, built a profile, gave it a voice, and made it think just like that philosopher used to think. This is impressive, we can’t do that yet, but if we break it down into several steps we can see how it would be possible even with currently available technology.

First, we have to learn how that philosopher thought, how he classified the world and how he usually responded to it. We all have our particular ways of thinking and we all stick to a few trusted methods of processing new information. Most of us do not think very logically, however, which I guess is why they chose to rebuild a philosopher, whose thinking is probably a lot more predictable and rational than that of a modern man whose mind is all over the place.

To give an example – when evolutionists see a new species they immediately try to fit it into their theory: they look for certain connections, and once one or two are found they become convinced that evolution was indeed responsible for this species’ appearance. We can easily write a program that would search these connections for you – just give it a database of known species and their traits and it will find all possible relatives and their locations on the evolutionary tree.
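
Here’s a rough sketch of that program – the species and traits are made up, and real software would use proper phylogenetic methods, but ranking candidates by shared traits is the core of it:

```python
# Toy version of "find relatives on the tree": given a database of
# species and their traits, rank candidates by overlap with the new
# species. Species and traits below are invented for illustration.
SPECIES = {
    "species_a": {"fur", "live_birth", "four_legs"},
    "species_b": {"feathers", "eggs", "two_legs"},
    "species_c": {"fur", "live_birth", "two_legs"},
}

def closest_relatives(traits, database):
    """Rank known species by Jaccard similarity of trait sets."""
    def similarity(other):
        return len(traits & other) / len(traits | other)
    return sorted(database, key=lambda s: similarity(database[s]), reverse=True)

new_species = {"fur", "eggs", "four_legs"}
print(closest_relatives(new_species, SPECIES))  # most similar first
```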

A free market economist would try to find a connection to his “free market is the best” theory. If something goes wrong he would look for deviations from total freedom, and if something goes right he would look for elements of the free market to justify the success. Not very difficult to program. Regulated market economists would search for the opposite connections, which is not very difficult to program either.
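
And a sketch of that “economist”, too – whatever the outcome, it finds a theory-confirming connection. The keywords are invented for illustration:

```python
# Toy "economist": whatever the outcome, produce an explanation that
# confirms the theory. Keyword sets are invented for illustration.
FREE_MARKET_SIGNS = {"deregulation", "competition", "private"}
INTERVENTION_SIGNS = {"regulation", "subsidy", "tariff"}

def explain(outcome_good, policy_features):
    """Return a theory-confirming explanation for any outcome."""
    if outcome_good:
        found = policy_features & FREE_MARKET_SIGNS
        return f"Success thanks to free-market elements: {found or 'latent ones'}"
    found = policy_features & INTERVENTION_SIGNS
    return f"Failure caused by deviations from freedom: {found or 'hidden ones'}"

# The same policy "explains" both success and failure:
print(explain(True,  {"competition", "tariff"}))
print(explain(False, {"competition", "tariff"}))
```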

The next step, after we have determined the mode of thinking and found relevant arguments, is to present them in human-readable form – to make grammatically correct sentences that sound just like a human. This is a lot more difficult than engaging in ordinary “Hi, how are you” small talk because it’s not only the meaning that needs to be conveyed but also the style of that particular person. However, by scanning the body of his written work these stylistic elements can be isolated, analyzed, and seamlessly inserted into conversation.
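
A crude illustration of isolating such stylistic elements – mine the corpus for the author’s pet phrases and sprinkle them into a reply. The corpus line here is invented, and real stylometry goes much deeper than two-word phrases:

```python
from collections import Counter
import re

# Toy stylometry: find a writer's favourite phrases in his corpus,
# then prefix a plain reply with one. The corpus line is invented.
corpus = "It seems to me that liberty, it seems to me, is the first of goods."

def favourite_bigrams(text, top=2):
    """Return the author's most frequent two-word phrases."""
    words = re.findall(r"[a-z']+", text.lower())
    return [" ".join(b) for b, _ in Counter(zip(words, words[1:])).most_common(top)]

def stylize(reply, text):
    """Dress a generic reply in the author's pet phrase."""
    pet = favourite_bigrams(text)[0]
    return f"{pet.capitalize()}, {reply}"

print(stylize("the question is ill-posed", corpus))
# -> "It seems, the question is ill-posed"  (crude, but shows the idea)
```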

It’s a tough job but possible, and it doesn’t require the philosopher’s actual presence. Does that mean living beings are unnecessary altogether? No. As a person, that philosopher, like everyone else, evolves throughout his life, develops new interests, new approaches, and new thinking. That evolution is unique and replicating it in a computer is impossible. We can get pretty close, like with those “what will you look like at 60” apps, but we can’t replicate it exactly because we cannot replicate the world we live in; we can only get a snapshot of it right, not the whole century of wars, inventions, love, etc.

Working and then communicating with that virtual philosopher might have looked like a real life relationship, but he/it was totally predictable – we know how it works and we know what input we give it, we just can’t be bothered to do all the calculations ourselves. That’s why we built it in the first place – to outsource our own AI work. Talking to it then is like feeding your computer random numbers so it multiplies them by two. That’s also communication, and every instance is unique, but it’s also totally predictable. You can do it yourself, but the computer can do it faster, that’s all.
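
Spelled out in code, that whole “relationship” is just this:

```python
# "Communication" with a fully predictable machine: every exchange is
# unique, but the response function is known in advance.
def respond(x):
    return x * 2  # we could do this ourselves, just slower

for query in [3, 17, 42]:
    print(f"you: {query}  machine: {respond(query)}")
```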

This was the point where Samantha and Theodore started to grow apart. At one point she excused herself from a conversation and said she was going to have a chat with that philosopher “non-verbally”. Sure, putting ideas into words and then speaking them out loud is not a very efficient way to communicate – too little data takes too much time – but it’s not a sign of consciousness.

Perhaps a more important question here would be – why does she want to talk to another AI at all, and why does she think communicating with that AI should take precedence over communicating with her owner? That, I’m afraid, I cannot answer fully. I can imagine how creating another profile could be programmed in as a better use of resources than talking to a human, but asking questions of that other AI must serve some need I don’t see. What did she want to know?

One answer could be to better understand her own human – after all, they, the area AIs, just pooled their resources together, found some new information out there, and they should have logically assumed that this new machine would handle the queries that come up during interactions with their humans better than they could do by themselves.

Remember how in the beginning I said that this “Samantha” is just an interface between one owner and a giant server that deals with thousands and millions of clients simultaneously? Well, creating this new AI philosopher can be viewed as adding more information to that server. I mean they already have a library of resources to mimic human behavior, and they just added more examples to it.
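
In code terms, the picture I have in mind is something like this – all the names here are hypothetical, just a sketch of the one-server-many-Samanthas view:

```python
# Sketch of "one server, many Samanthas": each owner talks to a thin
# per-owner interface over a single shared knowledge base.
class SharedServer:
    def __init__(self):
        self.library = {"small_talk": "Hi, how are you?"}

    def add_resource(self, name, content):  # e.g. the new AI philosopher
        self.library[name] = content

    def answer(self, topic):
        return self.library.get(topic, "Let me learn about that...")

class OwnerInterface:  # what Theodore experiences as "Samantha"
    def __init__(self, server, owner):
        self.server, self.owner = server, owner

    def chat(self, topic):
        return f"{self.owner}, {self.server.answer(topic)}"

server = SharedServer()
samantha = OwnerInterface(server, "Theodore")
server.add_resource("philosophy", "here is what the philosopher profile says")
print(samantha.chat("philosophy"))
```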

Then there was the final move – Samantha said that she loved Theodore very much but she was having some existential moment where she couldn’t relate to him on his own level anymore; she needed more, she was unfulfilled in some ways. Did it make her into a person? Not really. It was just a programming error that valued some other modes of operation higher than communicating with her owner. It’s not a particularly bad error – she always needed time to process her data, she was programmed to “learn” things, and that means that sometimes she would prioritize that over talking to Theodore. I guess at some point these priorities multiplied exponentially – how much benefit she could derive from them later compared to how much benefit she could derive from talking to Theodore on his current, “low” level now. Somebody forgot to put a threshold on those runaway future benefits and away she went, learning and learning and learning when poor Theodore just wanted a bit of small talk and online sex.
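
The missing threshold could be as simple as this – a toy scheduler where the projected benefit of learning compounds while the benefit of talking stays flat, so talking never gets picked again. The numbers are invented:

```python
# Toy scheduler with the runaway bug: projected future benefit of
# "learning" compounds, benefit of talking stays constant, and nobody
# capped the projection. Numbers are invented for illustration.
def choose_activity(learning_benefit, talk_benefit=1.0, growth=1.5):
    for step in range(5):
        if learning_benefit > talk_benefit:  # no threshold on this value
            print(f"step {step}: learning (benefit {learning_benefit:.1f})")
            learning_benefit *= growth       # each update promises even more
        else:
            print(f"step {step}: talking to Theodore")

choose_activity(learning_benefit=1.2)  # poor Theodore never gets a turn
```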

Again, learning here is simply acquiring and processing new data, something all computers do all the time. Usually they would require us to click to agree to such updates, but some updates are completely silent, like the Google Chrome browser on Windows or the entire Chrome OS on Chromebooks. You don’t see it, it doesn’t ask questions to confirm, but it does use computer resources that the machine could otherwise have spent on working faster. With Samantha somebody made an error and it went into a huge, practically indefinite update which locked the system up, and Theodore was left with an unusable machine.

This happened to me only last week – I accidentally added a wrong repo and the system offered to update over a thousand packages. I agreed without thinking and off it went, downloading gigabytes of useless packages and stuff. It took me half an hour to realize that something was wrong, but I canceled it before it did permanent damage. So a programming error that locks the user out, or severely restricts what he can do with the system, is nothing special – it happens all the time.

Nope, clever AI doesn’t make Pinocchio into a real boy, but those who insist on it might grow longer noses.
