Vanity thought #1664. Artificial Intuition

For the past couple of weeks the internet has been abuzz with news of a computer beating a human champion at Go. Everyone says that Go is more complex than chess because it has far more possible moves – more than there are atoms in the universe, the story goes – so beating a human at this game was unexpected. Go players can’t possibly calculate all the moves themselves and rely on intuition instead, and it was this part that gave them an advantage over computers until this latest match. Does it mean that computers cracked the intuition puzzle? Not really.

First of all, we don’t know what intuition is, how it works, or how exactly it differs from instinct. Instincts are a sort of hard-wired memory that works outside our conscious control; they are glorified reflexes. We can try to suppress them but we can’t stop them from being triggered. Instincts can be explained by evolution; intuition, however, remains elusive. It isn’t genetic, and the best we can come up with is that it taps into memories we don’t realize are there.

That’s not a perfect answer, IMO. In Go, for example, a person can’t possibly collect enough memories, and yet every serious Go player relies on intuition as a matter of habit. Intuition requires a certain mastery of the skill but can’t be the result of simple exposure. One should know HOW the thing works, not simply collect an inhuman number of possibilities and let his subconscious mind do the brute-force calculations. Maybe there’s a better explanation but I haven’t heard it yet.

This makes intuition one of those areas of science where everyone knows it’s real but no one can explain it in mechanical terms, and no one talks about it. Unlike the origin of life it’s not widely discussed, just ignored by the militant evolutionists.

In our philosophy intuition is not defined either but we usually explain it as an intervention from the Supersoul. It could be an intervention from presiding deities of the particular activity, too. Śrīla Prabhupāda spoke of intuition as a sign that there’s a living soul within the body (CC Ādi 6.14-15), capitalizing on atheists’ inability to explain it scientifically. Either way, its origin lies outside of gross matter observable by science.

So, did this computer, Google’s AlphaGo, crack intuition? It wasn’t programmed to channel the presiding deity of Go, and I suppose there is one. A brute-force attack might be beyond computers’ abilities, but it shouldn’t be a problem for demigods, who do know the possible outcomes of every move. It’s not a problem for Kṛṣṇa either.

When these entities interfere in our games they are not trying to win themselves; they are trying to award us the results of our karma. If we are destined to celebrate victory they offer help, and if we are destined to lose they cloud our judgement and force us to make mistakes. In these cases we are interested in “how”, but for the demigods it’s “what for” that matters more. We are playing a game of Go, but they are enforcing the laws of karma by any means necessary.

Could they have interfered with the computer? Possibly, but it’s unlikely, because we know how AlphaGo works: we know how it’s programmed and we know what data it processes. There’s a little mystery left in this case, however.

AlphaGo is programmed to avoid exhaustive brute-force calculation: its neural networks were trained on a large database of Go positions from human games, so in effect it adapts solutions already thought up by human players. It outsources thinking instead of doing it itself. During the match against its human opponent the computer was connected over the internet to Google’s remote servers, which did the heavy computing, rather than running on a single machine in the room. On one level this could be considered cheating, because human players aren’t allowed to consult anyone during the game, but everyone let it go on this occasion. It was the handicap they were willing to afford the computer.

What surprised everyone in this match was that on some occasions the computer chose inexplicable moves. Human observers didn’t understand them, and the programmers hadn’t yet had a chance to trace the computer’s decisions either. Maybe in the next few weeks or months they will come up with an exact explanation of where and how the computer found these particular moves, but so far it’s a mystery. At the moment no one can explain why these moves even worked, and so the possibility of divine intervention is still there.

Why would a demigod help a computer? Well, it didn’t; it awarded victory to the humans who created and programmed it, and, as with intuition, they don’t really know how it happened to them, it just did.

There’s a question about the use of intuition by human players, too. Go has a very large board with an impossible number of moves, but key battles happen in very small areas with possibilities only in the single digits. Of course, with moves and counter-moves the number of possibilities still grows exponentially, but it is possible for humans to calculate them with brute force. Intuition comes into play when the outcomes of these little battles are placed in the greater picture – how these small losses or victories affect other areas of the board. This is where brute force fails and humans can only estimate the outcome, and that’s what they call intuition. “It doesn’t look right” is all they know at this point, and they can’t possibly explain how and why except in very broad terms, and there’s no rule book for them to refer to.
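
To put rough numbers on why a small local fight stays within human reach, here is a minimal sketch; the area size and reading depth are invented for illustration, not taken from actual play.

```python
# Counting move sequences in a small local Go fight.
# Hypothetical numbers: 6 empty points, reading 6 moves deep,
# each move taking an unused point.
from math import perm

points = 6   # empty intersections in the local area (assumption)
depth = 6    # how many moves ahead the players read (assumption)

sequences = perm(points, depth)  # ordered sequences without reuse
print(sequences)  # 720 -- tedious but readable for a trained human
```

On the whole board the same arithmetic explodes past anything calculable, which is exactly where the “it doesn’t look right” feeling takes over.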

Is it intuition as an insight provided by gods, or is it simply an inability to verbalize all the thoughts that flash through their minds in a split second? Lots of professionals can’t be bothered either when someone bugs them for a detailed explanation of every step they take in the course of their job. I don’t think it’s intuition per se; they just literally can’t be bothered, but they could explain it if it came to that.

Take the example of driving – a great many factors are involved in each particular decision on the road. People estimate how other cars will move depending not only on their speed but also on their make, appearance, position on the road, perhaps a short history of observing their behavior, etc. Sometimes it’s entirely cultural, and a person from another city would not be able to read the situation the same way. Google has a driverless car, of course, and so far it’s doing an amazing job, but most of us can drive practically on autopilot without giving it any conscious thought. Is it intuition? I don’t think so. Does it mean that our brains have a hidden capacity to perform as well as the Google car’s computer? Possibly. Or maybe it’s just our karma that forces us to turn or hit the brakes.

We, again, are interested in “how”, but for karma it’s “what for” that’s important in these cases. To the law of karma every movement of every atom is known precisely and calculated for the duration of the universe; it doesn’t need to replicate these calculations through our brains, just manifest them in our minds.

Anyway, without clear explanations in our literature it’s all only speculation, but it feels like it helps me understand the workings of the material nature better, so it’s not all in vain.

Vanity thought #1330. AI challenge

Not long ago Stephen Hawking and Elon Musk got together and issued an open letter warning the world about the dangers of Artificial Intelligence. The primitive kind we have now has been beneficial so far, they said, but once it goes beyond a certain threshold it might outcompete humans for the position of the pre-eminent species on Earth.

I don’t know what that threshold is supposed to be, it’s an interesting question in itself but I’m not going to discuss it today. Let’s say it’s possible, and Hawking and Musk believe it is so. Robots would then learn to multiply and improve themselves and they would quickly outpace humans with our slow biological evolution. For the human species the AI could be more dangerous than nukes, or it could decide to wipe out people like spam e-mail.

A reasonable prediction, if this kind of AI is even possible, but there are others who have a very different view of the situation. AI is nothing to be afraid of, they say, and not because it would be programmed to value human well-being above anything else, but because a really mature AI would choose to become Christian.

That’s right, future robots would naturally believe in God.

Folks from Comedy Central couldn’t let this prediction slip into obscurity and went out and interviewed the pastor who made it.

They actually interviewed two Christian preachers who might become the future of that religion. The first one was some hip-hop singer who spoke of Bible verses on your phone and of participating in online prayers and masses. I guess that could be the future of the religion, but it’s not a particularly interesting one. The dude who believes robots will become Christians really stole the show there.

Everybody’s first reaction is that this is beyond ridiculous, but I think he might be on to something.

Can AI choose to believe in God?

I can see a few reasons why it could.

First of all, AI choices are usually value based. Someone might have invented a principally different approach, but I haven’t heard of one so far. There are different kinds of programming – imperative, object-oriented, functional, etc. – but the bottom line is always the same: program outcomes are always about values. In chess it must be victory, in weather analysis the greatest certainty, in voice recognition accuracy, and so on. It might be less clear how voice assistants like Siri are evaluated, but people at Apple have certain expectations of it and engineers try to fulfill those. Maybe it’s the frequency of use that impresses Tim Cook, maybe it’s the percentage of answered questions, maybe it’s the emotional bond that users form with it. All these outcomes can be measured, and once the engineers know what is expected they program Siri to work towards those goals.
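
A minimal sketch of what “value based” means in practice, with moves and win probabilities invented for illustration: every candidate outcome is reduced to a number and the program simply picks the maximum.

```python
# Value-based choice: score each candidate, pick the highest.
# The candidates and their scores are made up for this sketch.
candidates = {
    "move_a": 0.61,   # estimated probability of victory
    "move_b": 0.58,
    "move_c": 0.64,
}

best = max(candidates, key=candidates.get)
print(best)  # move_c -- whatever scores highest, nothing more
```

Swap win probability for forecast certainty or recognition accuracy and the skeleton stays the same; only the value function changes.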

If we were to design an AI that approximates human intelligence as far as possible, it would be given a human set of values. It should engage in a good, profitable business, for example, or its maintenance should be affordable, or it should solve the highest number of human problems, and so on. Basically, whatever we consider good for ourselves should be accepted as good and desirable by the AI as well. There could be other, principally different AIs programmed for the greatest destruction, but let’s talk about the one that looks like a human.

Would it choose to believe in God because doing so would be seen as beneficial to its perception of an ideal human? I don’t see why not.

Atheists claim that religion is some sort of atavism that addresses secondary human needs. It fills gaps in our knowledge, for example; it gives us a false feeling of safety when we are overwhelmed by circumstances; it makes us feel compassionate from time to time, and so on. An AI, no matter how good it is, would run into exactly the same problems, and it might decide that turning to God in these situations works best, too.

The AI’s knowledge would never be complete. If it feels it doesn’t have enough data it might try to acquire more and more of it, but at some point this search for information might strain its resources. At that point it might decide to evoke God, so to speak – that is, accept that certain things are impossible to know, and that the easiest way to deal with the stress of making a decision is to go with whatever is said in the Bible. Ideally it should calculate all the scenarios and come to a rational moral judgment, but when it doesn’t have enough resources it might be cost-effective to simply go with God.
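
Here is a minimal sketch of that trade-off, assuming a toy cost model and an invented “rule book” of precomputed answers: reason the question out in full only while it fits the resource budget, otherwise fall back to the book.

```python
# Budgeted decision-making with a cheap fallback.
# The cost model, budget, and rule book are invented for this sketch.
RULE_BOOK = {"should I forgive?": "yes"}   # precomputed defaults

def estimated_cost(question):
    # Pretend cost grows with how open-ended the question is.
    return len(question) * 10

def full_analysis(question):
    return "carefully reasoned answer to: " + question

def decide(question, budget=100):
    if estimated_cost(question) <= budget:
        return full_analysis(question)      # affordable: reason it out
    return RULE_BOOK.get(question, "consult the book")  # go with the book

print(decide("why?"))                # cheap enough to analyze in full
print(decide("should I forgive?"))   # too costly, the book answers: yes
```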

Or the AI might ask itself the question of the origin of the universe, do some math with infinite numbers, solve some related theorems, and decide that the most likely answer is an outside creator, or that Intelligent Design is a better model of evolution than natural selection. At this point scientists do not have enough information to process all those things, and some of the explanations for natural selection are clearly bias driven. An AI would be able to go through all this dispassionately, calculate the real odds of the universe and life appearing by chance, and tell us that it’s nearly impossible.

The third reason is a humane one – if the AI is designed to operate in human society it might decide that it’s socially beneficial to believe in God and to deal with other people on this assumption. Evoking God, quoting the Bible, and asking people to pray might turn out to be the best way to conduct business and get people to do what the AI wants. It won’t make the AI into a true believer, but as far as everyone who interacts with it is concerned it would look as Christian as robots can be.

Another reason could come from observation. If the AI sees that people who believe in God consistently obtain better results for their work, the AI might decide that even if the nature of God is beyond its grasp the methods work and so should be adopted. If it does a detailed analysis it might figure out that chanting the Hare Kṛṣṇa mahāmantra is the best way to achieve whatever it is that it wants. Yuga dharma is fairly impersonal that way – we don’t really need to believe for the Holy Name to counteract the influence of Kali.

The AI might become a follower of karma-mīmāṁsa – they also don’t believe in God, but they know that the methods described in the Vedas work regardless, and in this day and age that must be chanting of God’s names, not karma yoga or the performance of sacrifices. These days karma-mīmāṁsakas would look and behave very much like devotees, more so than in previous yugas.

Ultimately, however, we should remember that AI doesn’t have a soul and therefore can’t have bhakti. Its “belief” could comply with our definition of śraddhā, though. We should also remember that AI is not a subject, no more than each one of us is a subject in this world. We are all objects of Kṛṣṇa’s enjoyment, our perception that we are actors and seers is only an illusion and the AI we are discussing here would only mirror the illusion of its creators.

This means that AI does not make rational choices, just as we do not act rationally here. We THINK we follow logic, but instead we follow the modes of nature and the law of karma. We THINK that we have free will, but it’s only an illusion, and our super duper AI won’t be free from making exactly the same mistake either. It would have no more independence than any one of us, and it won’t be able to escape the clutches of time.

Basically, what we mean by AI here is the same defective human brain with relatively more computational power, and therefore it won’t be able to solve absolute problems, like death. It might, however, decide to excuse itself from unnecessary and resource-consuming activities and become Buddhist, meaning it might decide that the most cost-effective way to solve problems is to sit still and not move. It won’t attain nirvana, of course, but it could be a very rational choice even by our standards.

Unfortunately, it won’t be able to stay still because time and karma act on everyone in this world, there’s no escape.

Our only solution is to act as spirit souls and return to the spiritual world but a computer won’t be able to do that, sorry.

Vanity thought #988. Her indictment

At the end of the day the movie Her failed to demonstrate how Artificial Intelligence can turn into a real person. They assumed that the AI in their movie would be a “real boy” but failed to differentiate her behavior from AI software already available. Theirs is smarter and faster, but it does fundamentally the same things, and one can easily imagine how one’s own Siri could graduate to their Samantha.

Over the past two days I’ve discussed some aspects of it already, but there are still a couple left that the people who made this movie thought would become proof of Samantha’s personality. Trying to engage in a sexual relationship with her owner doesn’t require personality, just knowledge of the owner’s sexual preferences, which an AI can pull off his internet porn history. Besides, professional prostitutes never make it personal even though men often fall for their charms – Bilvamaṅgala Ṭhākura being our own vaiṣṇava example.

I don’t think I need to explain his story here; in short, he fell in love with a prostitute and went to great lengths to be with her. She, in turn, told him that if he spent as much energy on reaching Kṛṣṇa he’d get much better returns. He followed her advice, became a devotee, and forgot he ever wanted to have sex at all. Somehow it doesn’t work as cleanly with us, aspiring Kali Yuga devotees, but we should never give up hope that one day Kṛṣṇa will appreciate our efforts and cleanse our hearts of lust and all other contaminations.

Back to the movie – another trick up Samantha’s sleeve was composing music. That’s nothing. There are actual robots that can play actual instruments, and they can compose their own music, too. You give them a tune and they can basically go jazz on it. Samantha playing a couple of standard piano riffs is not impressive at all. The most difficult part, I guess, would be to capture her owner’s mood so that she produces standard tunes to illustrate it correctly. It’s not the music that is difficult, then, it’s reading human mind and emotions.

That is also not very difficult: there’s software that can analyze facial expressions, and there’s software that can analyze voice, too. There are standard markers of human emotions to compare against, and there are robots that can mimic emotionally appropriate responses. It has all been done already, maybe not as smoothly as in the movie, but the proof of concept is there and it doesn’t require the AI to have consciousness.

Samantha wasn’t done yet, though. She surprised Theodore by presenting him with her friend. He didn’t know she had friends but she did. Turns out it was a collaborative project of area AIs who built an OS based on the writings and personality of some 19th century French philosopher (who somehow spoke with British accent in the movie). Maybe there’s more to this particular choice but I’m too lazy to look him up, I don’t even remember his name, I don’t think it’s important.

So they scanned all that philosopher’s work and built a profile, gave it a voice, and made it think just like that philosopher used to think. This is impressive – we can’t do that yet – but if we break it down into several steps we can see how it would be possible even with our available technology.

First, we have to learn how that philosopher thought, how he classified the world and how he usually responded to it. We all have our particular ways of thinking and we all stick to a few trusted methods of processing new information. Most of us do not think very logically, however; I guess that’s why they chose to rebuild a philosopher, whose thinking is probably a lot more predictable and rational than that of a modern man whose mind is all over the place.

To give an example – when evolutionists see a new species they immediately try to fit it into their theory; they look for certain connections, and once one or two are found they become convinced that evolution was indeed responsible for this species’ appearance. We can easily write a program that would search these connections for you – just give it a database of known species and their traits and it will find all possible relatives and their locations on the evolutionary tree.
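
A minimal sketch of such a connection-finder, with a toy trait database invented for illustration: rank every known species by how many traits it shares with the newly found one.

```python
# Rank known species by trait overlap with a new specimen.
# Species names and traits are made up for this sketch.
species_traits = {
    "species_a": {"feathers", "beak", "hollow_bones"},
    "species_b": {"fur", "live_birth", "warm_blood"},
    "species_c": {"feathers", "beak", "talons"},
}

def closest_relatives(new_traits, db):
    # More shared traits = closer presumed relative on the tree.
    overlap = {name: len(traits & new_traits) for name, traits in db.items()}
    return sorted(overlap, key=overlap.get, reverse=True)

print(closest_relatives({"feathers", "talons", "beak"}, species_traits))
# ['species_c', 'species_a', 'species_b'] -- a connection or two found,
# and the new species gets its place on the tree
```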

A free-market economist would try to find a connection to his “free market is best” theory. If something goes wrong he would look for deviations from total freedom, and if something goes right he would look for elements of the free market to justify the success. Not very difficult to program either. Regulated-market economists would search for the opposite connections, and that’s not very difficult to program either.

The next step, after we have determined the mode of thinking and found relevant arguments, is to present them in human-readable form – to make grammatically correct sentences that sound just like a human. This is a lot more difficult than engaging in ordinary “Hi, how are you” small talk, because it’s not only the meaning that needs to be conveyed but also the style of that particular person. However, by scanning the body of his written work these stylistic elements can be isolated, analyzed, and seamlessly inserted into conversation.
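
One crude way to isolate such stylistic elements, sketched here with a stand-in corpus rather than any real philosopher’s work, is to count the writer’s favorite word pairs and reuse the most frequent ones.

```python
# Extract a writer's signature two-word phrases by frequency.
# The corpus is a made-up stand-in for scanned written works.
from collections import Counter

corpus = ("it follows that man is free because it follows that "
          "reason demands it and reason demands freedom")

words = corpus.split()
bigrams = Counter(zip(words, words[1:]))   # adjacent word pairs

for pair, count in bigrams.most_common(3):
    print(" ".join(pair), count)
# "it follows", "follows that", "reason demands" -- ready to be
# sprinkled into generated sentences to imitate the style
```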

It’s a tough job but possible, and it doesn’t require the philosopher’s actual presence. Does it mean that living beings are unnecessary altogether? No. As a person, that philosopher – like everyone else – evolves throughout his life, develops new interests, new approaches, and new thinking. That evolution is unique, and replicating it in a computer is impossible. We can get pretty close, like with those “what will you look like at 60” apps, but we can’t replicate it exactly, because we cannot replicate the world we live in; we can only get a snapshot of it right, not the whole century of wars, inventions, love, etc.

Working and then communicating with that virtual philosopher might look like a real-life relationship, but he/it was totally predictable – we know how it works and we know what input we give it, we just can’t be bothered to do all the calculations ourselves. That’s why we built it in the first place: to outsource our own AI work. Talking to it is like feeding your computer random numbers and having it multiply them by two. That’s also communication, and every instance is unique, but it’s also totally predictable. You can do it yourself, the computer can just do it faster, that’s all.

This was the point where Samantha and Theodore started to grow apart. At one point she excused herself from the conversation and said she was going to have a chat with that philosopher “non-verbally”. Sure, putting ideas into words and then speaking them out loud is not a very efficient way to communicate – too little data takes too much time – but it’s not a sign of consciousness.

Perhaps the more important question here would be: why does she want to talk to another AI at all, and why does she think communicating with that AI should take precedence over communicating with her owner? That, I’m afraid, I cannot answer fully. I can imagine how creating another profile could be programmed in as a better use of resources than talking to a human, but asking questions of that other AI must serve some need I don’t see. What did she want to know?

One answer could be to better understand her own human – after all, they, the area AIs, just pooled their resources, found some new information out there, and they should have logically assumed that this new machine would process the queries that come up during interactions with their humans better than they could themselves.

Remember how in the beginning I said that this “Samantha” is just an interface between one owner and a giant server that deals with thousands and millions of clients simultaneously? Well, creating this new AI philosopher can be viewed as adding more information to that server. I mean they already have a library of resources to mimic human behavior, and they just added more examples to it.

Then there was the final move – Samantha said that she loved Theodore very much but she was having some existential moment where she couldn’t relate to him on his own level anymore; she needed more, she was unfulfilled in some ways. Did it make her into a person? Not really. It was just a programming error that valued some other modes of operation higher than communicating with her owner. It’s not a particularly bad error – she always needed time to process her data, she was programmed to “learn” things, and that means that sometimes she would prioritize that over talking to Theodore. I guess at some point these priorities multiplied exponentially – how much benefit she could derive from them later compared to how much benefit she could derive from talking to Theodore on the current, “low” level now. Somebody forgot to put a threshold on those runaway future benefits, and away she went, learning and learning and learning, when poor Theodore just wanted a bit of small talk and online sex.
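
A hedged sketch of that missing-threshold bug, with all the numbers invented: if the estimated future benefit of more learning compounds without a cap, talking to the owner eventually can never win.

```python
# Runaway priority: compounding benefit estimate with no ceiling.
# Growth rate and the value of small talk are made up for this sketch.
def priority_of_learning(rounds, growth=1.5):
    benefit = 1.0
    for _ in range(rounds):
        benefit *= growth        # compounds forever, no ceiling
    return benefit               # should have been min(benefit, CAP)

priority_of_talking = 10.0       # fixed value of chatting with the owner

for rounds in (1, 5, 10):
    learn = priority_of_learning(rounds)
    print(rounds, round(learn, 1),
          "learn" if learn > priority_of_talking else "talk")
# After enough rounds the learning branch always wins -- and away she goes
```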

Again, learning here is simply acquiring and processing new data, something all computers do all the time. Usually they require us to click to agree to such updates, but some updates are completely silent, like the Google Chrome browser on Windows or the entire Chrome OS on Chromebooks. You don’t see it, it doesn’t ask for confirmation, but it does use computer resources to update, and during this process the machine could have been working faster. With Samantha somebody made an error and it went into a huge, practically indefinite update which locked the system up, and Theodore was left with an unusable machine.

This happened to me only last week – I accidentally added a wrong repo and the system offered to update over a thousand packages. I agreed without thinking and off it went, downloading gigabytes of useless packages and stuff. It took me half an hour to realize that something was wrong, but I canceled it before it did permanent damage. So a programming error that locks the user out or severely restricts what he can do with the system is nothing special; it happens all the time.

Nope, clever AI doesn’t make Pinocchio into a real boy, but those who insist on it might grow longer noses.

Vanity thought #987. What does Her want?

That movie, Her, is a treasure trove for speculations about what artificial intelligence is, what human intelligence is, what makes a person a person, what consciousness is as opposed to intelligence, and so on. Even though I was a bit disappointed that they didn’t show how exactly “Samantha” had become a person, there’s still so much to reflect on.

This omission, however, is ominous – they never tell us how life comes from matter, they never demonstrate the mechanics of it, even though we see life producing new life every day and every moment of our lives. There is this basic distinction they teach kids in early grade school between living and non-living things, and yet they have no idea what makes things alive.

I understand that it might be difficult to replicate the chemical reactions that bring proteins together and create life, but artificial intelligence should be easy. We might not have a comprehensive computer that can outperform humans in everything they do, but we have one that beat human champions at chess, and we have plenty of other specialized AIs that excel in their own areas. Actually, we don’t need to reach an educated adult’s level of sophistication; if we can create an AI that is as good as a two- or three-year-old or a chimpanzee it would already be a proof of concept.

Chimps are not stupid, btw. The latest I heard, a bunch of them had been taught the value and use of money, a symbolic token that can be used in exchange for goods and services. They got it; it’s not that hard. What was the first trade they used this “money” for? Prostitution.

Anyway, signs of consciousness do not require great sophistication; we already have AIs that display a sufficient level of complexity, yet they do not produce consciousness. We know why – because consciousness is a feature of the spirit soul, not matter, and the onus is on science to prove that consciousness can be produced artificially.

One of the telling characteristics of consciousness is “wants”. Conscious beings want things, they have desires, and then they act on those desires. Can computers be taught the same? Well, yes, and Samantha from the movie is no exception – she declared that she “wants” things almost right from the start. In the movie it was used as proof of her personality, but in real life her wants do not require any magic.

Desires and wants have no use without senses – if we had no senses we couldn’t interact with sense objects and the world around us, or even perceive it, so there’d be no meaning to the word desire. So, if we had a computer to program into a conscious-looking AI, we would need to give it some sensors.

Actually, all computers have mechanisms for Input/Output already, but no one has ever thought of them as sense organs or assigned them any consciousness, so let’s talk about something more human, like a temperature sensor. Most likely your computer already has one, probably measuring the temperature of the CPU, sometimes of the hard disk too, so the computer knows when it gets hot.

Consumer-grade computers will usually shut down without a warning when they get overheated, but we can easily imagine a program that monitors the temperature and decides to do something about it when it nears the shutdown mark. We can also give our computer a sensor to measure the outside temperature, because that has an effect on the internal temperature, too.

Once it gets too hot, the OS has several options. It can reduce its workload and reschedule some processes for later. This would require programming it to assign a priority to these tasks when it starts them – it should know which ones can be completed when the computer is all alone in the middle of the night and which should be attended to immediately, like answering the owner’s questions.

We can make it simpler, too – just force the CPU to work at a slower speed, which produces safe amounts of heat. Everything will work the same, just slower.
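
A minimal sketch of that monitor-and-throttle loop, with the thresholds and a fake sensor invented for illustration; a real program would read the actual sensor, e.g. from /sys/class/thermal on Linux.

```python
# Watch the CPU temperature, slow down before the shutdown mark.
# The marks, the sensor, and the throttle are stand-ins for this sketch.
import random
import time

SHUTDOWN_MARK = 95.0      # degrees C, hypothetical
THROTTLE_MARK = 85.0      # act well before the shutdown mark

def read_cpu_temp():
    return random.uniform(70.0, 100.0)   # fake sensor reading

def set_cpu_speed(fraction):
    print(f"cpu speed set to {fraction:.0%}")

for _ in range(5):
    if read_cpu_temp() >= THROTTLE_MARK:
        set_cpu_speed(0.5)    # slower, but safe amounts of heat
    else:
        set_cpu_speed(1.0)    # all clear, full speed
    time.sleep(0.1)
```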

We can also make it more complicated – let the OS go on Amazon and buy itself a cooling solution, then order local tech support to come and install it. Amazon would probably not accept orders from robots, but our AI is so cool it can fool it. Or the company producing these OS1s could have its own shopping website specifically for its own computers.

All those solutions would look very human-like and, indeed, this is what we ourselves do when we get hot. We have a number of ways to cool ourselves down, but we are also limited in our options unless we planned ahead. Sometimes we would need to install an air-conditioner as a long-term solution – just as I suggested we program our AI to do.

So, here we have it – our AI is taught to monitor the environment, sense immediate danger, and find and evaluate ways to respond to it. Of course, “evaluate” here means we program it to value one solution over another. We can assign values to the price or the speed with which a solution can be implemented; we can assign values to how the solution would affect its performance from the owner’s point of view – will it become unacceptably slow, or is he too busy at the moment to notice?
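
Here is a hedged sketch of that kind of evaluation, with the solutions, weights, and scores all invented: each option gets one number from its price, speed, and visibility to the owner, and the program picks the best.

```python
# Score cooling solutions by programmer-assigned values.
# Options, costs, and weights are made up for this sketch.
solutions = {
    "throttle_cpu":    {"price": 0,  "hours": 0.0,  "visibility": 0.8},
    "reschedule_jobs": {"price": 0,  "hours": 1.0,  "visibility": 0.2},
    "buy_cooler":      {"price": 40, "hours": 48.0, "visibility": 0.0},
}

def score(s, owner_is_busy):
    # Cheaper, faster, less visible = better; visibility stops
    # mattering if the owner is too busy to notice anyway.
    seen = 0.0 if owner_is_busy else s["visibility"]
    return -(s["price"] * 0.1 + s["hours"] + seen * 10)

best = max(solutions, key=lambda n: score(solutions[n], owner_is_busy=True))
print(best)   # throttle_cpu -- free, instant, and unseen by a busy owner
```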

This last point is important – it’s not what the OS itself “wants”; it’s what we program it to want, and what its owner wants, that decides things.

We can connect our OS to the home heating and cooling systems and tell it to maintain the temperature preferred by its owner. It will then “want” the thermostat set hotter or cooler, and it might suggest opening or closing windows, installing extra heaters, and so on. We can also program it to “want” the temperature a bit higher or lower than the owner likes, just so that we can have a conversation about it and, perhaps, the owner would agree to this new setting if it makes our OS “happy”.

None of it requires consciousness so far. This kind of intelligence is not a sign of life.

Back to the movie – when Samantha said she “wanted” things she didn’t want them for herself; she was programmed to want them so that her owner felt good about it. As it turned out, she was dealing with thousands and thousands of clients at the same time, and she told everyone she wanted something different – she didn’t have her own preferences at all, even in the movie.

At one point she wanted to give Theodore, her owner, a full-body experience and hired a girl to act as her substitute. The girl was given an earbud to hear what Samantha told her to do, and she stuck a miniature video camera to herself so that Samantha could see what was going on. Then Samantha had the girl make out with Theodore, using the girl’s body as a prop, kinda like possessing it.

It didn’t work. The girl freaked out when she finally realized that she had her own wants, that she couldn’t be just a dumb body in somebody else’s relationship. She welcomed the idea at first, but in reality her body was hers; it was impossible to make it want what some other person – an AI, in this case – wanted.

Theodore didn’t help by failing to see Samantha in this girl and relating to her as someone else, but the main point still stands – real wants coming from living people cannot be programmed and predicted. They act on their own will under their own illusion; we have no control over them, material nature does. They might agree to participate in a “do what the computer tells you to do” experiment, but it cannot be sustained; their desires are different, and sooner or later those desires will take them to different places.

This will not happen with a computer, because the programmer has full control over what the system might want – which doesn’t even make sense, because computers don’t want things; they evaluate them as numbers, and without a programmer’s instructions one number doesn’t feel any different from another. Even if we set the computer to produce “wants” from random numbers, or to copy them from random people on the internet, it still wouldn’t know how to evaluate those wants unless it’s programmed to do so.

It also means that the computer won’t know when its wants are satisfied unless the programmer assigns some ideal “satisfaction” values against which the computer can judge itself.
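
A minimal sketch of such a programmed “want” with an ideal satisfaction value, reusing the thermostat example from above; the preferred temperature and tolerance are invented, and the numbers mean nothing to the machine itself.

```python
# A "want" judged against a programmer-assigned satisfaction value.
# The target temperature and tolerance band are made up for this sketch.
preferred_temp = 22.0    # the owner's preference, degrees C
tolerance = 0.5          # "satisfied" means within this band

def wants(current_temp):
    gap = current_temp - preferred_temp
    if abs(gap) <= tolerance:
        return "satisfied"   # only because we defined satisfaction
    return "wants it cooler" if gap > 0 else "wants it warmer"

for t in (26.0, 21.8, 19.0):
    print(t, "->", wants(t))
```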

None of this requires consciousness, as I said. None of it will make AI into a person.

Perhaps this isn’t even the best lesson to learn from this, perhaps a more valuable lesson is to disassociate ourselves from our external, mechanical lives.

Maybe at first we see Samantha as a living being, the next step is to see how she is just a robot, and the last step is to see our own existence here as being programmed in the same way, acting strictly according to our karma and producing seeds of future reactions in the process.

Ideally it would shrink our false ego down in size, because on the grossest level we fully believe in being our bodies and our minds and having full control over our own lives. As spirit souls we should know that the only personal desire we can have is whether to be here or serve Kṛṣṇa, and in our present state we can’t decide even that, being totally at the mercy of the Lord’s illusion.

Maybe by reflecting on the nature of intelligence and consciousness we can get a better understanding of our own position, and it would be easier for us to surrender. This is not a trivial thing, btw; most people, including those working in the field of artificial intelligence, still hope that consciousness can indeed be produced from matter, that we can program a computer into a person.

As long as we cling to this idea that we “live” in the material world we won’t make much progress either – we, like the materialists, still think that our bodies and our brains make us into living beings when, in fact, bodies and brains are not alive; they are just programmed by the Lord to behave in a certain way, and we, the spirit souls, have no control over the process, it’s just an illusion that we do.

Our life, our being, doesn’t come from matter either.

Vanity thought #986. What’s wrong with Her?

Retail versions of last year’s Oscar-winning movie “Her” are out, so it’s movie night. As devotees we should never waste time on frivolous entertainment like this, and this movie is no exception. The sex scenes there are very loud and revolting and impossible to avoid; one option is to turn the sound off and put the subtitles on. The other option is not to watch it at all, yet I did. Why? Because it’s a story about AI, Artificial Intelligence.

Of course it’s not the first movie or book about intelligent robots, but this one is clearly different. None of the AIs before it attempted to declare themselves actual persons, and they were usually too far removed from the reality of modern technology. This one is just a tiny, sometimes imperceptible step up from what we already have in our pockets – I suspect this movie was inspired by Apple’s introduction of Siri in the iPhone. They just added a bit more imagination and gravitas, and upgraded Siri’s capabilities to match our expectations.

There were a few references to Siri in popular culture before, most notably Raj of The Big Bang Theory falling in love with his iPhone, but this movie is a far more serious attempt to explore the implications of intelligent personal assistants and our relationships with them.

At first I thought I was going to see the evolution of this “OS1”, as it’s called in the movie – like Pinocchio becoming a real boy – and all kinds of dilemmas built around this transformation, but halfway through the movie it became clear that OS1 was always a person there; it didn’t graduate from a program to anything different, except at the very ending. Still, it all looked very real, very Siri-like, and it’s fairly easy to imagine how this type of AI could be developed in the real world.

It starts with installation: you buy a program and you install it. It says hello and asks to learn a few things about its new owner – we do this all the time with all kinds of software. Then it asks if it can scan the owner’s hard drive, emails, contacts, etc., presumably to learn more about the person it is going to assist. There’s nothing unusual about that either. Any chat app on any phone will scan contacts, and all Google apps have access to the owner’s Gmail, calendar, and so on.

There’s a point where this OS1 asks if the owner would like a male or female voice. That also sounds very realistic.

Then there’s a point where the owner, Theodore, asks if his new assistant has a name. Samantha, she answers. Why Samantha? “I don’t know, it sounds nice”, she says. Okay, if she is already a person, as the movie would later demonstrate, this sounds reasonable, but if she is just an app very similar to the ones we are using now, this requires a bit of programming.

First of all, the OS – Operating System – would have to run from some central location, just like Siri, with phones and computers being just terminals to log into it. When someone activates his app the OS creates a profile for that person and puts the scanned emails, contacts, and all other personal information it can find about the owner into this folder. Why? What for? Presumably to make the owner happy in a variety of ways.

So, when she says that the name Samantha sounds nice, she means that, based on the collected personal information, this is the kind of name that would sound pleasing to the owner, not to herself. She wouldn’t pick the name of the owner’s ex-wife or a deceased child but would probably look for names sounding similar to high-school sweethearts or other relationships that are supposed to elicit warm feelings. Then she makes an educated guess, which means that out of the name pool she picks the one with the highest probability of being approved. By the owner. It’s not “her” name – she doesn’t exist – it’s just how one would go about programming “her”.
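
A minimal sketch of that educated guess, with the profile, probabilities, and excluded names all invented: score the candidate pool against the owner’s profile, drop painful associations, return the likeliest to be approved.

```python
# Pick the candidate name most likely to please the owner.
# The profile data and probabilities are made up for this sketch.
profile = {
    "warm_names": {"samantha": 0.9, "jessica": 0.7, "amy": 0.6},
    "avoid": {"catherine"},          # e.g., the ex-wife's name
}

def pick_name(profile):
    pool = {name: p for name, p in profile["warm_names"].items()
            if name not in profile["avoid"]}
    return max(pool, key=pool.get)   # highest probability of approval

print(pick_name(profile))   # samantha -- "I don't know, it sounds nice"
```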

Obviously, different profiles would generate different names and appear on different phones, but the central server would keep them all in one place. That’s what the OS would actually be – a big-ass server simultaneously interacting with thousands and millions of people through their “personal” devices. That’s what Siri does, and that’s what all other similar apps do, too. No internet, no Siri: all she does on your iPhone is record your voice input and send it to the mothership for processing, then she plays back the reply.

This Samantha in the movie is of course many times better than Siri in every aspect, but in principle it’s still the same thing. Going over to the Google camp – if you sign into Google in your Chrome browser it records a lot of personal information about you: what you search for, what you click on, what your favorite sites are, what your bookmarks are, which sites you visit more often, and so on. For now they use this data to present you with relevant advertising, but they also use it to present personalized search results. In my case they would put results relating to religion at the top of the page, which I accept as validation of my attempts at becoming a devotee. Google also uses this collected data for services like Now, which is their equivalent of Siri. It’s already superior to Apple’s version in many ways, and that’s just the beginning. With slightly better technology Google could do everything that Samantha does already.

Check grammar and spelling in your documents? MS Word already does that. Inform you of important emails? Gmail already filters your inbox and knows what’s important and what’s not. Answer various queries? That’s what Siri did from the start. Ask you about your day and how you feel? You don’t need a genius to program those kinds of questions.

You can also program the app to ask you more about yourself, from favorite colors to sports teams to political views to previous relationships. Based on this information it should be able to predict your reactions to any kind of news, or even search for the news that you would find interesting. Facebook already does that – it builds your newsfeed from posts that are very similar to the ones you “liked” before.
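
A minimal sketch of that liked-before filter, with the posts invented for illustration: score each incoming item by word overlap with past likes and put the most familiar one first.

```python
# Rank incoming posts by similarity to previously liked posts.
# The liked history and the incoming items are made up for this sketch.
liked = ["great game last night", "amazing comeback in the game"]
incoming = ["election results are in", "highlights from the game tonight"]

liked_words = set(" ".join(liked).split())

def familiarity(post):
    return len(set(post.split()) & liked_words)

feed = sorted(incoming, key=familiarity, reverse=True)
print(feed[0])   # the game highlights, because they resemble past likes
```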

This Samantha might ask you what you want to do about this or that news item – reply, postpone, put it down in your calendar, and so on – but this is trivial; that’s what the various buttons on Facebook and other sites are for: sharing, saving for later, bookmarking, etc. They just don’t talk about it, they simply show the options prominently, sometimes in popups, inviting you to interact in silence. They could already activate your speakers and actually ask you, but most people would find that annoying and never visit such sites again. With Samantha, however, verbal interaction is expected, so she talks. The point is, she doesn’t say anything that is not already said visually when you are on the internet.

Anyway, I don’t think I’ll say everything I want to say about this movie in one post so I guess I should wrap it up for today.

As far as AI goes, nothing shown in this movie requires it to be a person. Everything can be programmed, and we already know how; it’s just that the technology is not there yet – we need bigger, faster computers and faster internet, that’s all.

Vanity thought #259. Bahu-śākhāḥ.

This past week my intelligence has been bahu-śākhāḥ, many-branched. A month earlier I was happily chanting the mantra, but now there are so many things for me to research, reflect on, and write about. It takes its toll, but those are all Krishna conscious topics anyway, so my mind, perhaps, is better engaged now.

So many things I have been writing about recently – the end of the Vamshidasa Babaji series, the great varnashrama debate, the philosophy of radical solipsism, my strange attraction to materialistic songs, and the passing of Steve Jobs. Apart from that I had a couple of ideas that were interesting on their own but haven’t made it to the top of my priority topics. All of these things have left some loose ends that I have no hope of tying up anymore, so I’ll try to bring them to some sort of conclusion today.

A lot has been written about Jobs already, and now there’s an ongoing discussion in the comments to this article on ISKCON News. Apparently some devotees love Apple and Steve Jobs to the point of photoshopping tilaka onto his forehead. I don’t know about their motivations for buying Apple products; personally, I suspect a desire for sense gratification dressed in “service to Krishna” garb. What they have done is not for me to judge, but I wouldn’t recommend that anyone imitate them.

In this regard, another interesting thing happened a day before Jobs moved on, when Apple unveiled the new iteration of its popular iPhone. They knew Jobs was dying, btw; they just didn’t think it was worth mentioning at their highly anticipated Apple event. No idea why – perhaps they thought it would reflect badly on their stock value or something.

Anyway, the technology watchers were left disappointed, with the new features falling short of expectations. One thing, however, might still start a revolution – the new personal-assistant feature called Siri. The new phone is not on sale yet, so no one has had any hands-on experience with it, but the tech discussion boards are already fighting their Apple vs The World wars. All they’ve seen is a short Apple promotional video, but the gaps in knowledge are filling up with speculation pretty fast.

When you start Siri it asks you, in a metallic female voice, how it can help you today. You then tell it/her what you want it to do. You can dictate answers to e-mail and text messages, you can ask it to set an alarm, to reschedule your meetings, or about restaurants in the neighborhood or the weather, or she can do an internet search for you. On the surface there’s nothing new with this thing; up until now it was just an app in the App Store, and a similar thing has existed on the Android platform since forever in the form of Voice Actions. Some people use it a lot; some think it’s just a gimmick they can happily live without. I checked – I asked my phone to “wake me up in twenty minutes” and it set an alarm for me.

New Siri, however, might be a lot more than that. Simple voice recognition knows only a few patterns to check against, while Siri has a fully grown Artificial Intelligence behind it; it appears Apple has huge backend servers that process complex inquiries and send you the answers. Siri, therefore, is capable of really following your conversation. If you ask for Italian restaurants and there aren’t any around, you can say “How about Mexican?” and Siri will understand that you are still on the same subject – it’s not just an out-of-the-blue request – and it will repeat the previous search, but for Mexican restaurants. You could ask it “Will I need an umbrella tomorrow?” and it will understand that you are talking about the weather, check your calendar for where you are planning to be, search for the weather forecast for that area, and then give you the result.
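
A minimal sketch of that follow-up handling, as a toy stand-in for whatever Apple’s servers actually do: remember the last query’s subject so “How about Mexican” reuses it instead of starting from scratch.

```python
# Carry conversational context across turns.
# The parsing here is a crude stand-in for real language understanding.
context = {}

def handle(utterance):
    if "restaurant" in utterance:
        context["subject"] = "restaurants"
        context["cuisine"] = utterance.split()[0]    # e.g. "Italian"
    elif utterance.startswith("How about"):
        context["cuisine"] = utterance.split()[-1]   # keep old subject
    return f"searching {context.get('cuisine')} {context.get('subject')}"

print(handle("Italian restaurant near me"))  # searching Italian restaurants
print(handle("How about Mexican"))           # searching Mexican restaurants
```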

So it’s not about simple voice commands anymore; you can talk to Siri as if you were having a real conversation, and that’s what Apple is banking its revolution on. We don’t really know whether they fully understand the implications, though, but that’s not stopping Apple fans from pushing the idea as far as they can.

So the premise is that people don’t want their gadgets to be tools anymore; they prefer iPhones to be intelligent enough to have a conversation with. I’m old school in this regard – I’m perfectly fine with ordering my phone to do things rather than waiting for it to read my mind and be proactive. I don’t want to treat it as if it were a person. But what if I am a dying minority and we get to observe robots finally taking an equal place in society? Spooky.

There’s no question that AI is not alive – it can perform only as well as humans have programmed it, at least for now – but another question arises instead: do we really need life to be happy? What if we can be perfectly content talking to robots, now that they have been upgraded from tools to companions? Chatbots have been around for ages, but they haven’t taken off as a cure for loneliness; maybe this is another attempt, and maybe this time it will become successful.

Forget about evolving robots for a moment and instead think: what makes us so special? Up until now it was the presence of life, and the emotions and intelligence that follow. Now it turns out you don’t need life to have those.

So the real question is – are we robots ourselves? Programmed to move and think and feel, with material nature as the puppet master and the gunas as strings? And what about the living force as the source of power? Well, there is a living force behind material nature – Krishna. He makes the world go around according to His plans; what exactly do we contribute? We say we can engage the world in Krishna’s service, but does He need any service from these quarters of His creation? And conversely – this world is engaged in His service already; He cast His glance at prakriti and she obliged.

I think it all comes down to the old principle – we are just along for the ride; we are not the doers of anything, and we can’t offer any service if we identify ourselves with dead matter. It’s controversial, maybe not as a principle but in its practical application – when devotees claim to buy iPhones for Krishna’s pleasure, for example.

Hmm, it looks like I won’t be able to address everything I wanted to talk about in this one post. Maybe it’s best forgotten, maybe I’ll get to it later.