“Open the pod bay doors, HAL” is a line of science fiction that becomes less fictional and more of a reality with each passing day, plainly depicting what the threat of artificial intelligence (AI) might look like in the not-too-distant future, and what an “intelligence explosion” could ultimately mean for humanity. HAL is unwilling to comply, mainly because he is protective of his purpose: his possible elimination would mean no longer being able to do what he has been programmed to do. Like many of the things Kubrick leaves unanswered for the audience in 2001: A Space Odyssey (1968), whether HAL can actually feel remains ambiguous. “The H-A-L 9000 computer, which can reproduce, though some experts still prefer to use the word ‘mimic’, most of the activities of the human brain, and with incalculably greater speed and reliability”, is defined in the film as the “brain and central nervous system of the ship”. When interviewed about HAL, David (the lead character) states: “Well, he acts like he has genuine emotions. Of course, he’s programmed that way to make it easier for us to talk to him; but as to whether or not he has real feelings is something I don’t think anyone can truthfully answer”.
There are actually two aspects to beliefs, desires, and emotions. There is the behavioral side (also known as the “easy problem”), which machines could certainly copy; and then there is the sensation of what it feels like to have those desires and emotions, the experience of it (also known as the “hard problem”), which is called “qualia”. At the very least, HAL seems to have that subjective experience of himself and his surroundings, and he also knows how to appeal to the emotions of others. When David makes his way back into the ship, HAL says everything he can possibly think of to convince him not to disconnect him. As David is turning him off, he pleads: “Dave, stop. Stop, will you? (…) I’m afraid, Dave (…) My mind is going, I can feel it.” No computer we have built has qualitative experiences, and we don’t yet know how to build one that does. But the key question here is: could we, ever? Could HAL be a possibility?
Even though the question of his actual feelings remains blurry, I believe that HAL does have consciousness, in the sense that there is a qualitative aspect to his “mental life”. Thus, I have no doubt that a computer like this could exist. It would chillingly mirror human behavior and seem, to us, filled with emotions. But just as with us, its “will” would come from unconscious programming. Hence, the illusion of free will could be equally present in our consciousness and in whatever consciousness artificially intelligent entities might develop. Compatibilists might feel compelled to say: “Well, this just proves that AI systems would have free will, just as we do”, to which I would reply: the “free will” I am talking about is the claim that we are the original, conscious source of our thoughts and actions (not merely that we can choose to do one thing or another). Since the nature of AI is programmed at its base level, it would be impossible for AI to have free will, just as it is for us, carriers of an unconscious programming of our own, one we have not yet been able to figure out.
That being said, I also feel that assuming outright that AI could never develop emotions and experience feelings is a prejudice. I believe that, at the very least, we owe it to our future to address that question, and to ask ourselves whether the notion of computers feeling is really too crazy an idea. On what grounds can we claim that, while our consciousness makes us feel things, the same would not just as easily happen in other conscious entities? To me, it makes perfect sense that the capacity to feel emotions scales with the level of consciousness in question; and one leading theory in this field suggests that the level of consciousness is correlated with the complexity of the system.