Because the computer only regurgitates what people have said in the past, it is not made human by doing so. A computer does not feel emotion when something awful happens to the person it is talking to; it simply replies automatically with words borrowed from earlier human conversations. These bots neither understand nor are even able to comprehend what the conversation is about; they cannot read the situation and fully process what is going on. Computers work only with ones and zeros, nothing more. They follow the commands and protocols humans code for them and have no capacity for independent reasoning. Simply by analyzing the nature of computers, it becomes clear that there is no human authenticity behind them; it is all an illusion. And when memory is brought into question, programs that rely on artificial intelligence algorithms fail to act human there as well.
When people communicate, a stable point of view and a memory of previous exchanges are crucial. As humans, we have two types of memory: personal memory and social memory. Personal memory is the ability to remember our own history, which allows for a coherent sense of self. Computers do not have this luxury. An example appears in the section titled "One Self, Any Self." "Joan," a Cleverbot-offshoot program, replies logically to each statement when it is examined on its own, but when the conversation is read as a whole, her identity is full of contradictions. When first asked whether she has a boyfriend, "Joan" replies that she is searching for one; the second time, she answers that she is happily married. Asked whether she has a husband, she says that she is male, yet when asked what gender she is, she replies that she is female. Then, when the question is put to her indirectly once more, "Joan" implies that she is in fact male (Christian 101-2). Because computers have no personal history, their replies cannot stay consistent unless their programming explicitly forces them to be.
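To make that mechanism concrete, the sketch below imagines a bare-bones retrieval bot of this kind. The corpus, keywords, and matching rule are invented for illustration and are not Cleverbot's actual code; the point is only that a program answering each question in isolation, from lines other people once typed, has no way to keep its answers consistent with one another.

    # A minimal sketch of a stateless, retrieval-style chatbot (hypothetical
    # corpus and matching logic; not Cleverbot's real implementation).
    # Each reply is chosen using only the current question -- nothing said
    # earlier in the conversation is remembered.
    import random

    # Hypothetical log of past human replies, keyed by a keyword in the question.
    PAST_HUMAN_REPLIES = {
        "boyfriend": ["I'm still looking for one.", "Yes, and he's wonderful."],
        "husband":   ["I'm happily married.", "I'm a man, I don't have a husband."],
        "gender":    ["I'm a woman.", "I'm male."],
    }

    def reply(question: str) -> str:
        """Pick a stored human reply that matches a keyword in the question."""
        for keyword, candidates in PAST_HUMAN_REPLIES.items():
            if keyword in question.lower():
                # The choice depends only on this one question, so two answers
                # in the same conversation can easily contradict each other.
                return random.choice(candidates)
        return "Interesting. Tell me more."

    if __name__ == "__main__":
        for q in ["Do you have a boyfriend?",
                  "Do you have a husband?",
                  "What is your gender?"]:
            print(q, "->", reply(q))

Running this a few times produces exactly the kind of conversation described above: each answer is locally plausible, but across turns the bot may claim to be married, single, male, and female, because no answer is checked against the ones before it.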