His article, "Could artificial intelligence really threaten human existence?", was published by BetaNews, a technology news and analysis outlet, on February 2, 2015. The opposing view rests on three main reasons. First, he argues that every concern raised is merely an assumption or conjecture and that there is no mathematical proof of the danger. Everything the scientists offer is based on inductive reasoning, which he finds unacceptable: inductive reasoning draws on small-scale, repeated observations to infer a probable truth rather than an exact proof, whereas deductive reasoning is exact, certain, and fully acceptable. At best, the available evidence makes an inductive conclusion probable, and in science no one accepts an inductive argument without sufficient evidence.

Second, even if we accept that a danger to human life will arise, we know it is still far off, so the United Nations can pass heavy-handed laws to prevent military organizations and companies from building ultra-intelligent machines. He writes, "Recently, however, nations have been talking about the issue. The United Nations held a discussion on the matter at its Convention on Certain Conventional Weapons (CCW) last year, while the subject has also received serious academic debate through a recently released Oxford University paper" (Ballard 2).

Finally, given the moral and ethical principles people hold, no one wants to build a machine that could destroy human life; moreover, if scientists one day find a way to embed moral and ethical principles in robots, just as humans have them, that could keep the machines from behaving ruthlessly (Ballard).
In mathematical, physics, and engineering problems, no claim is accepted without an exact scientific solution and proof.