Concerns have been expressed that AI is dangerous, as in a recent CNN article titled "Is AI a threat to humanity?" I think the article does an excellent job of summarizing the views for and against artificial intelligence, and I recommend reading it to understand the range of opinion on this topic. I take a different approach to the question. The first consideration is that no one has proven mathematically, scientifically, or rationally the likelihood of malevolent artificial intelligence. The absence of such proofs from mainstream discussion suggests that these arguments are at best speculative and subjective.
The second consideration is the definition of intelligence. Psychology, philosophy, and neurology can each describe intelligence, but describing something falls far short of fully understanding it. We exist within consciousness, thought, emotion, and reason; we inhabit the mind too natively to fully recreate it. The persistent problems of society and human relations, and the fact that no one creates entirely reliable technology 100% of the time, are indications of mental blind spots, a condition that requires generations of people to refine the qualities of living and of technology.
Artificial intelligence exists now. It is not human intelligence; in fact, it is more limited than human intelligence. It is true that artificial intelligence can make complex astrophysical, financial, and traffic-system calculations faster and more broadly than a person can. But those are limited activities, and just as a person's expertise in one field does not translate into all areas of life, neither does a machine's. What we call artificial intelligence today is a narrow copy of the narrow thought processes people use in functional work. Thought processes, not insight, and certainly not the kind of insight that could drift from expertise in financial derivatives calculations to producing artistic works.
The third consideration is who creates the artificial intelligence. If people create it, you can guarantee that an off switch exists within it, if only through plain human error. Computer systems crash, databases are breached by hackers, and malware enjoys an endless buffet of programs to feast upon. After more than forty years of digital computers, designed and refined by a legion of highly educated professionals, the problems of security, reliability, and scale show no real resolution on the horizon. Much of the best advice on those matters is routinely unknown or ignored in the creation of new technology in favor of expediency. Therefore, the processes that produce artificial intelligence, now and in the future, are likely to produce systems with inherent flaws that can be discovered and exploited to disrupt them.
What about machines creating machines? Their original design came from people. Despite a machine's ability to refine such designs and sift out errors, there is a problem: the scope of a machine's ability to improve itself beyond its original design is itself derived from people. As creations of people, machines will at best achieve, in inherent quality, what people possess, and then only in limited capacity.
People understand people, but intuitively. Whenever you pursue a strictly rational description of people, you easily lay aside relevant things such as motivation, awareness, aspiration, and caution, in all their depth and range of experience. We can encode behavior in machines, but not understanding. No algebraic equation exists that completely represents the mind of a person. Neither Isaac Newton, nor Albert Einstein, nor even the enlightened Greek philosophers, some of whom strove to describe all of life through numbers, achieved a theory of everything that captures what people are by the numbers. The absence of a mathematical formula for true human intelligence means there is nothing to encode into a computational model that would express a true mind, full of motivation, awareness, aspiration, caution, and countless other characteristics that no list could exhaust.
The fear of artificial intelligence is the objectification of computers as people. It is true that a working understanding of computers is not widespread. In truth, what we call a computer is a printed circuit board of metal traces that electricity flows through, exercising a system of mathematics used to convert data. A video signal is converted into a movie you watch. Part of a database is converted into a catalog you order from on Amazon.com. Stock market data is converted into trades to buy and sell. Artificial intelligence is the same thing: data, and math on that data, producing yet more data in the form of activation signals for traffic systems, transportation systems, and so on. Video game artificial intelligence is the same thing. It feels like a person because you have allowed your mind to treat it as one, but the person who wrote the program that defines the characters and situations knows exactly what it is. It is not a real person, or anything matching the biological complexity of even a beetle.
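The point about video game "intelligence" can be made concrete with a minimal sketch. The situations, rules, and actions below are entirely hypothetical; the point is only that the machine converts input data into output data through a table the programmer wrote, with nothing person-like underneath.

```python
# A toy "video game AI": just a table of data the programmer wrote,
# plus a lookup. Every situation and action here is hypothetical.

# Each rule maps an observed game state to a scripted action.
BEHAVIOR_TABLE = {
    ("player_near", "low_health"):  "flee",
    ("player_near", "full_health"): "attack",
    ("player_far",  "low_health"):  "heal",
    ("player_far",  "full_health"): "patrol",
}

def choose_action(distance: str, health: str) -> str:
    """Convert input data (the game state) into output data (an action)."""
    return BEHAVIOR_TABLE.get((distance, health), "idle")

print(choose_action("player_near", "low_health"))  # flee
```

Watching the character "decide" to flee may feel lifelike, but the author of the table knows it is data in, data out.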
Much of the fear of artificial intelligence also comes from a belief, not a fact, that computers will become transcendent. You cannot prove that computers have the qualities for transcendence; indeed, you cannot show that they have even a fraction of those qualities. Computers are an extension of people insofar as they are tools. The amazing progress we have made in consolidating into them the processes of delivering and watching movies, books, music, interactive play, education, professional work, analysis, and external machine control means that we no longer see them as similar to the single-purpose tools they replace: typewriters, TVs, sports stadiums, classrooms, tree houses and playgrounds, live concerts, and various forms of labor. The computer has become all things, but just things, and it is really an expression of efficiency. We have merged all these things into the computer partly because our hunter-gatherer ancestors were a mobile sort, and we never really lost our collective instinct for efficiency and delegation (see cloud computing for the latter) as a strategy for maximizing time.
Why do I think this is important? I think artificial intelligence is a noble pursuit. When it comes to social policy, logic and reason can often be the furthest thing from the operative principle. With computers, though, it is entirely about the logic. I can see tangible reasons and deductions showing that machines, by their nature, lack the gifts of evolution, biological adaptation, and conscious expression that would render them or their descendant technology capable of threatening people with systematic intent. Any scenario involving the wholesale termination of people by machines, perhaps machines defined as neural networks, fully involves the collusion of people. The machines are metal passing electricity, unaware of the universe they inhabit, and their ends are directed by us, in person or by proxy.
Further, I think this is important because artificial intelligence should be pursued, though not by me specifically. The closest I ever came was a website I built professionally for a client in 2001, for which I drafted a rules-based engine where data decided how the website operated. It had the virtue of minimizing pre-encoded instructions within the website. It was successful, but too abstract an exercise for me to attempt again. Real artificial intelligence is an intellectual enterprise that can only improve technology and the benefits it bestows. The pursuit of artificial intelligence will deepen people's understanding of the science and process of computation.
What is artificial intelligence? I answered that earlier in describing the machine. More specifically, the goal of artificial intelligence is to answer one question when creating a computing system: what is the smallest program you can write that can do the greatest number of things reliably before a person has to get involved? Any person or team highly proficient in writing software can write as much code as necessary to carry out all the functions a program requires. Often, this results in large programs with lots of code that become difficult to evolve toward newly realized requirements. Artificial intelligence is an engine defined in terms of models and operations on those models, such that evolution is easier and the encoded capabilities are small in number, broad in scope, and fully reliable in operation. The mental discipline and growth that proceed from this exercise become a huge research contribution that furthers science and technology.
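The "small engine, data-driven models" idea can be sketched in a few lines. The model format and operation names below are hypothetical illustrations, not any particular system: one tiny engine interprets models expressed as data, so new behavior comes from editing data rather than writing new code paths.

```python
# A minimal sketch of a small engine operating on data models.
# The operations and the model format are hypothetical examples.

def run(model, value):
    """Apply each operation named in the model, in order, to the value."""
    operations = {
        "double":    lambda x: x * 2,
        "increment": lambda x: x + 1,
        "square":    lambda x: x * x,
    }
    for step in model:
        value = operations[step](value)
    return value

# New behavior is added by changing the data, not the engine.
pricing_model = ["double", "increment"]  # a hypothetical model
print(run(pricing_model, 10))  # 21
```

The engine stays small and fixed; the breadth of what the system can do lives in the models it is handed, which is the contrast with a large program that hardcodes every function.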
Artificial intelligence is not the pursuit of a person in the machine. No one understands persons deconstructed into math. I cannot say that no mathematician holds such a thing as a priority, but I suspect that scientific modelling will continue to treat aspects of the mind, such as consciousness, as non-quantifiable. While the neural hardware of the brain can be described, the features of artificial intelligence as they exist in reality are incompatible with the kind of sentience that would follow from complex biological evolution. The person is not present. What I have presented here is not a definitive research document on the improbability of malevolent artificial intelligence. Rather, as a one-sided conversation, it argues that no empirical data exists, conclusive or circumstantial, showing that present computer technology bears the fundamental preconditions necessary to evolve to a place where we need even glimpse a fear that our tools will learn to become our foes.
What I state here applies to machines made of metal. You can define computation in genetically engineered machines made of active biological substances; my analysis does not apply to them. Biological machines initially defined with the properties of an artificial neural network are a completely different matter. Mechanical machines of metal and electricity are as benign as their creators, and as complete as their creators' reason holds.
Shared between the Ancient Greeks, who gave us many things including the golden ratio, and their intellectual descendants such as Gottfried Leibniz, with his idea of pre-established harmony, was the nobility of numbers. Numbers imply order, consistency, and a means to describe concordance and balance, as well as a structured means to express their inverse. The harmony of numbers was believed to be a gateway to life by way of a form of reason. While no one has really achieved that, the related goals have genuine virtue. One idea regarding numbers points toward unification, a complete understanding of the inter-relatedness of all things. Perhaps the fact that emotions, senses, awareness, and perceptions lie beyond the reach of numbers means that the nobility of numbers is a symptom of their relative simplicity. The greatest intellect can deduce, through reason, the geometric qualities of the Earth; yet it can take a lifetime to fully understand even a single person, or oneself. We are, and always will be, beyond machines. Therefore, there is nothing to fear, and only opportunity to stretch our intellectual capacities in advancing their capability.
Further Exploration of the Question
- CNN: Is AI a threat to humanity?
- CNN: When machines outsmart humans
- Oxford: The Future Of Employment: How Susceptible Are Jobs To Computerisation?
- HuffingtonPost: Transcending Complacency on Superintelligent Machines
- Intelligence Explosion: Facing the Intelligence Explosion
AI is Dangerous
- BBC: Stephen Hawking warns artificial intelligence could end mankind
- The Guardian: Elon Musk: artificial intelligence is our biggest existential threat
- The Independent: Stephen Hawking right about dangers of AI… but for the wrong reasons, says eminent computer expert
AI is Not Dangerous
- IEEE Spectrum: Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts
- Youtube: E.O. Wilson: The Robots Aren’t Taking Over, and Here’s Why (Oct. 27, 2014) | Charlie Rose
- Michael Gautier: Is AI Dangerous Technology? (7/11/2013)