Not long ago Stephen Hawking and Elon Musk got together and issued an open letter warning the world about the dangers of Artificial Intelligence. The primitive kind we have now has been beneficial so far, they said, but once it goes beyond a certain threshold it might outcompete humans for the position of the pre-eminent species on Earth.
I don’t know what that threshold is supposed to be. It’s an interesting question in itself, but I’m not going to discuss it today. Let’s say it’s possible, and Hawking and Musk believe it is. Robots would then learn to multiply and improve themselves, and they would quickly outpace humans with our slow biological evolution. For the human species such AI could be more dangerous than nukes, or it could decide to wipe out people the way we delete spam e-mail.
A reasonable prediction, if this kind of AI is even possible, but there are others who have a very different view of the situation. AI is nothing to be afraid of, they say, and not because it would be programmed to value human well-being above anything else, but because a truly mature AI would choose to become Christian.
That’s right, future robots would naturally believe in God.
Folks from Comedy Central couldn’t let this prediction slip into obscurity and went out and interviewed the pastor who made it.
They actually interviewed two Christian preachers who might become the future of that religion. The first one was a hip-hop singer who spoke of Bible verses on your phone and of participating in online prayers and masses. I guess it could be the future of the religion, but it’s not a particularly interesting one. The dude who believes robots will become Christians really stole the show.
Everybody’s first reaction is that this is beyond ridiculous, but I think he might be on to something.
Can AI choose to believe in God?
I can see two or three reasons why it could.
First of all, AI choices are usually value based. Someone might have invented a principally different approach, but I haven’t heard of one so far. There are different kinds of programming – imperative, object-oriented, functional, etc. – but the bottom line is always the same: a program’s outcome is always judged against values. In chess it’s victory, in weather analysis it’s the greatest certainty, in voice recognition it’s accuracy, and so on. It might be less clear how voice assistants like Siri can be evaluated, but people at Apple have certain expectations of it and engineers try to fulfill them. Maybe it’s the frequency of use that impresses Tim Cook, maybe it’s the percentage of answered questions, maybe it’s the emotional bond that users form with it. All these outcomes can be measured, and once the engineers know what is expected they program Siri to work towards those goals.
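The point about value-based choices can be sketched in a few lines of code. This is a toy illustration, not any real chess engine or AI system – the moves and their scores are made up – but it shows how the same decision procedure produces entirely different behavior depending on which value function it is handed:

```python
# Toy sketch: one generic "choose" procedure, two different value
# functions. The behavior depends entirely on the values supplied.

def choose(options, value):
    """Pick the option that scores highest under the given value function."""
    return max(options, key=value)

# Hypothetical chess moves with invented evaluations of two goals.
moves = [
    {"name": "sacrifice queen", "win_chance": 0.9, "material": -9},
    {"name": "trade pawns",     "win_chance": 0.5, "material":  0},
    {"name": "grab a pawn",     "win_chance": 0.4, "material":  1},
]

# If the value is victory, the program sacrifices the queen...
print(choose(moves, lambda m: m["win_chance"])["name"])  # sacrifice queen
# ...if the value is keeping material, it plays the greedy move instead.
print(choose(moves, lambda m: m["material"])["name"])    # grab a pawn
```

The machinery is identical in both calls; only the values differ – which is exactly why the values we give a human-like AI matter so much.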
If we were to design an AI that approximates human intelligence as closely as possible, then it would be given a human set of values. It should run a good, profitable business, for example, or its maintenance should be affordable, or it should solve the highest number of human problems, and so on. Basically, whatever we consider good for ourselves should be accepted as good and desirable by the AI as well. There could be other, principally different AIs programmed for the greatest destruction, but let’s talk about the one that looks human.
Would it choose to believe in God because doing so would be seen as beneficial to its perception of an ideal human? I don’t see why not.
Atheists claim that religion is some sort of atavism that addresses secondary human needs. It fills gaps in our knowledge, for example; it gives us a false feeling of safety when we are overwhelmed by circumstances; it makes us feel compassionate from time to time, and so on. AI, no matter how good it is, would run into exactly the same problems, and it might decide that turning to God in these situations works best, too.
An AI’s knowledge would never be complete. If it feels it doesn’t have enough data it might try to acquire more and more of it, but at some point this search for information might strain its resources. At that point it might decide to invoke God, so to speak – that is, accept that certain things are impossible to know and that the easiest way to deal with the stress of making a decision is to go with whatever is said in the Bible. Ideally, it should calculate all the scenarios and come up with a rational moral judgment, but when it doesn’t have enough resources it might be cost effective to simply go with God.
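That trade-off – reason exhaustively while resources last, then fall back on a fixed rule – has a simple shape in code. The sketch below is purely illustrative, with hypothetical costs and scores; the point is only that a budget-limited evaluator returning a cheap default is a perfectly rational design:

```python
# Toy sketch of bounded decision-making: evaluate options while a
# resource budget lasts; when an evaluation would exceed the budget,
# stop reasoning and return a fixed default ("go with the book").

def decide(options, evaluate, budget, default):
    """Pick the best affordable option, or the default if reasoning gets too costly."""
    best, best_score = default, float("-inf")
    for option in options:
        cost, score = evaluate(option)
        if cost > budget:
            return default          # too expensive to keep calculating
        budget -= cost
        if score > best_score:
            best, best_score = option, score
    return best

# Hypothetical options: each evaluation has a cost and yields a score.
def evaluate(option):
    return option["cost"], option["score"]

easy = [{"cost": 1, "score": 5}, {"cost": 1, "score": 7}]
hard = [{"cost": 1, "score": 5}, {"cost": 50, "score": 9}]

print(decide(easy, evaluate, budget=10, default="follow the rule"))
print(decide(hard, evaluate, budget=10, default="follow the rule"))
```

On the easy problem the full calculation runs; on the hard one the budget runs out and the default rule wins, which is the “simply go with God” branch of the argument.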
Or the AI might ask itself the question of the origin of the universe, do some math with infinite numbers, solve some related theorems, and decide that the most likely answer is an outside creator, or that Intelligent Design is a better model of evolution than natural selection. At this point scientists do not have enough information to process all those things, and some of the explanations for natural selection are clearly bias driven. An AI would be able to go through all this dispassionately, calculate the real odds of the universe and life appearing by chance, and tell us that it’s nearly impossible.
The third reason is a humane one – if the AI is designed to operate in human society it might decide that it’s socially beneficial to believe in God and to deal with other people on this assumption. Invoking God, quoting the Bible, and asking people to pray might turn out to be the best way to conduct business and get people to do what the AI wants. It won’t make the AI into a true believer, but as far as everyone who interacts with it is concerned it would look as Christian as robots can be.
Another reason could come from observation. If the AI sees that people who believe in God consistently obtain better results for their work, the AI might decide that even if the nature of God is beyond its grasp, the methods work and so should be adopted. If it does a detailed analysis it might figure out that chanting the Hare Kṛṣṇa mahāmantra is the best way to achieve whatever it is that it wants. Yuga dharma is fairly impersonal that way – we don’t really need to believe for the Holy Name to counteract the influence of Kali.
The AI might become a follower of karma-mīmāṁsa – they also don’t believe in God, but they know that the methods described in the Vedas work regardless, and in this day and age it must be the chanting of God’s names, not karma yoga or the performance of sacrifices. These days karma-mīmāṁsakas would look and behave very much like devotees, more so than in previous yugas.
Ultimately, however, we should remember that AI doesn’t have a soul and therefore can’t have bhakti. Its “belief” could comply with our definition of śraddhā, though. We should also remember that AI is not a subject, no more than each one of us is a subject in this world. We are all objects of Kṛṣṇa’s enjoyment, our perception that we are actors and seers is only an illusion and the AI we are discussing here would only mirror the illusion of its creators.
This means that AI does not make rational choices, just as we do not act rationally here. We THINK we follow logic, but instead we follow the modes of nature and the law of karma. We THINK that we have free will, but it’s only an illusion, and our super duper AI won’t be free from making exactly the same mistake either. It would have no more independence than any one of us, and it won’t be able to escape the clutches of time.
Basically, what we mean by AI here is the same defective human brain with relatively more computational power, and therefore it won’t be able to solve absolute problems, like death. It might, however, decide to excuse itself from unnecessary and resource-consuming activities and become Buddhist, meaning it might decide that the most cost-effective way to solve problems is to sit still and not move. It won’t attain nirvana, of course, but it could be a very rational choice even by our standards.
Unfortunately, it won’t be able to stay still, because time and karma act on everyone in this world – there’s no escape.
Our only solution is to act as spirit souls and return to the spiritual world, but a computer won’t be able to do that, sorry.